Dear Elon, You're Wrong
by BARRETT AMES
In the early 1800s, George Stephenson gave mankind a gift: the steam locomotive. The machine, unlike the horses and other beasts of burden to which most were accustomed at the time, was of immense power and speed. Traveling at a blistering 15 mph on its first day, the steam locomotive far outpaced the 3 to 4 mph of a horse-drawn wagon. The first riders were awestruck, speechless, and nauseated but—defying predictions by fearmongers—none of them died. In fact, industrialists were so enthused by the idea that in the next year the British Parliament received applications to lay more than 2,000 miles of track (a figure that doubled to more than 4,000 miles the year after). Yet even as this new technology was permeating Great Britain and quickly carrying it to the vanguard of the industrial age, experts in various fields asserted that if a man were to travel faster than 20 mph, he would surely perish by asphyxiation. Or that if a cow saw a train go by at that speed, the milk in its udders would curdle—rendering it useless. Humorous? Perhaps, but such fatalist visions of what new technology will bring are eerily similar to reactions today regarding robotics and Artificial Intelligence (A.I.).
Fear of technology isn’t new. Socrates was averse to the concept of writing, while Conrad Gessner, a Swiss scientist, issued dire warnings against the harm that would be brought about by the printing press. Even as the steam locomotive was inspiring naysayers, the Luddites were “protesting against newly developed labor-economizing technologies” such as power looms—going so far as to destroy machines and commit murder. And what about robots and A.I.? In my experience, most of the fear they inspire can be attributed to one of two main causes: lack of knowledge, or a perceived attack on identity or livelihood. But are current concerns founded?
First: robotics. There’s no way to put this nicely, but, right now, robots suck. If you were lucky enough to be at the DARPA Robotics Challenge, then you know exactly what I’m talking about. If you weren’t, here’s the rundown: DARPA, the Defense Department’s advanced research wing, sponsored a competition that brought together 23 of the world’s best robotics labs to make robots better. DARPA wanted to do this mainly because it realized just how bad robots are. During the Fukushima nuclear disaster in Japan, multiple robots were sent into the facility in an attempt to shut down the system, and they all failed. Many couldn’t get over the rubble, the few that got over the rubble couldn’t get past the stairs, and the one that made it up the stairs…well, it got wrapped around a pole at the top. This all occurred in Japan, which many in the robotics world consider to be at the forefront of robotics and associated technologies. Thus, DARPA challenged anyone smart enough and brave enough to make robots better. The hope was to push the field of robotics forward, but instead of being inspired and encouraged, we got funny robo-fails. Robots are definitely better than they used to be, and the community is definitely moving forward, but there’s nothing to fear here, and certainly nothing sentient. Instead of fearing a mythical dystopian future, we should be looking forward to the day our elderly parents can be assisted by robot maids—allowing them to enjoy independence much later in their lives. Or imagine if we had robots with enough dexterity and mobility to have been useful in mitigating that disaster. Actually, forget disasters: robots that could simply handle nuclear material safely in general. Wouldn’t that be nice?
While robotics is still very much in its early stages of development, A.I. has had some early successes that have greatly boosted its public profile. However, the general public doesn’t have a thorough understanding of how limited some of these successes are. For example, when Deep Blue beat Kasparov in 1997, the world was astonished. Many believed that we were right around the corner from real, honest-to-goodness Artificial Intelligence. The Hollywood treatment—movies like Terminator, Bicentennial Man, and A.I. Artificial Intelligence—furthered this perception. Nearly twenty years later, I, along with every other disappointed ’90s child, am still waiting. As a community, computer science has thus far been able to identify and implement algorithms for certain tasks that are very difficult for humans (like playing chess) but relatively trivial for a computer to execute. This makes A.I. seem immensely powerful—and in certain ways it is: it has shown time and time again that computers are great tools that humans can use to augment their thinking and creativity. But generalized, efficient, creative intelligence has not been created, because, thus far, we still don’t even know how humans do it—much less how to invent it for robots. We have only an inkling of how brains work. Neural nets, for example, seem to be a feasible way forward because they are (at least) a rough approximation of the human brain and have demonstrated some generalizable traits. Google’s Deepmind research branch recently published an Atari-playing neural network that could perform at or above the human level on reaction-type games. However, it couldn’t handle games that were strategic in nature. In essence, A.I. is at the point where we have a framework we can use to describe intelligence, and in single instances we can recreate that intelligence. But we haven’t gone much further. The current state of A.I. is tantamount to having the Cartesian coordinate system that Descartes gave us, but no Newtonian laws of physics. The Newton of A.I. could be walking amongst us now, materialize in two decades, or take two centuries. This is an inherent quality of exploring the unknown: there’s no map, and thus time estimates are notoriously bad. Remember how flying cars should have been a thing by now? It’s safe to say that A.I. in the immediate future will first help humans reach higher levels of operator efficiency, and perhaps act as a tool to help humans create solutions to specific problems (driverless cars, for instance). And since we don’t even fully understand our own sentience, computers won’t do anything they aren’t programmed to do, much less be sentient (or achieve sentience) themselves (Skynet is as implausible as Pacific Rim).
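To make concrete how narrow that “single-instance” intelligence is, here is a minimal sketch of the reinforcement-learning idea behind Deepmind’s Atari result, stripped down to tabular Q-learning on a toy five-state corridor (the environment, constants, and names here are my own illustrative inventions, not Deepmind’s code; their agent replaced the table with a deep convolutional network, per Mnih et al.). The agent learns, through trial and error, that “move right” earns reward—and that is all it learns. It cannot play chess, plan strategically, or do anything outside this corridor:

```python
import random

# Illustrative only: tabular Q-learning on a 5-state corridor.
# States 0..4; reaching state 4 yields reward 1 and ends the episode.
N_STATES = 5
ACTIONS = [-1, +1]                    # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Move along the corridor; reward 1 only at the right end."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy: mostly exploit the table, occasionally explore
            if rng.random() < EPSILON:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: Q[(s, act)])
            nxt, r, done = step(s, a)
            # Q-learning update: nudge Q toward reward + discounted best future value
            best_next = max(Q[(nxt, act)] for act in ACTIONS)
            Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
            s = nxt

train()
# The greedy policy the agent converges to: "move right" in every state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)}
```

The update rule is the same one driving the Atari agent; the point is that nothing about the learned table generalizes beyond the one task it was trained on.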
Robots and A.I. are just like any tool that humanity has created thus far: they can be used for good or bad. However, it is worth noting that the positive uses for robotics and A.I. far outweigh the possible negative ones. To evaluate this claim, let’s assume for a second that robotics and A.I. move forward as fast as many of the worry-warts say they will. We’d be living in a world where all of the dirty, dull, and dangerous tasks have been taken over by robots (much like power looms took over the tedious, slow labor of textile workers during the Industrial Revolution—was that even a huge loss?). Also, studies on the causes of disputes and conflicts assert that most strife is the result of economic tension... in other words, when people are happy and comfortable, they are less likely to cause trouble. As robots fulfill these menial tasks, our society becomes more productive, and we are able to meet more needs than our current system can. And as more of the basic needs of society are met, more people will begin to move up Maslow’s Hierarchy of Needs—realizing higher callings and answering to creativity rather than necessity (thereby beginning a virtuous cycle of further increased productivity). It’s easy to imagine a dystopian robo-controlled state; the objective, and likely, reality, however, is not only more nuanced but also less absurd. Robotics and A.I. are more likely to bring about a period of new enlightenment than one of subjugation.
Of course, at first there will be anxiety associated with losing the dirty, dull, and dangerous jobs—but this will pass as people find more fulfilling ways to live their lives. These new lives could include scientific pursuits, adventure and exploration, or new personal goals. Activities such as these are uniquely human and thus will have increased relative value as robots reduce the cost of fulfilling basic needs. Back when we needed to fetch water from a well, it was probably a lengthy ordeal. Now we have indoor plumbing, and most don’t think twice about getting water from the tap. Less than two centuries ago, cross-continental travel was cumbersome and limited to the very wealthy. Now, many more can travel back and forth between time zones rather inexpensively. The reduced cost of fulfilling basic needs will leave many with more discretionary spending, and therefore create increased demand for creative endeavors and service jobs. Increased industrial efficiency allowed fewer people to work in agriculture; as we continue to push technology, more service jobs are only natural. And fulfilled creative expression is correlated with greater meaning in life and decreased violence. In essence, by adopting robots and A.I. (and embracing their functionality), we reduce the very thing that we fear they will enable.
There’s a bright future for robotics and A.I., but it’s entirely up to us to allow it—and to use it correctly. And let’s be clear here: human beings are without equal. The dexterity of our hands is unparalleled, our intelligence takes us to ever greater levels of complexity, introspection, and achievement, and our mastery over the environment has not abated. And yet, there are 12 million people performing manufacturing jobs in the U.S.! It is excellent that these people have jobs and are feeding their families... but they’re not performing tasks that are maximally, or in most cases even minimally, human. So while we waste all of this human potential assembling iPhones, making door locks, and sewing shoes, why don’t we begin to live without fear that robots will become us? Isn’t it worse if we become robots?
Bell, Vaughan. “Don’t Touch That Dial! A History of Media Technology Scares, from the Printing Press to Facebook.” Slate.com, 2010. Web. 15 Aug. 2015.
Bilton, Nick. I Live in the Future & Here’s How It Works. New York: Crown Business, 2010. Print.
Bls.gov. “Industries at a Glance: Manufacturing: NAICS 31-33.” N.p., 2015. Web. 19 Aug. 2015.
D’Orazio, Dante. “High-Tech Robots Falling Down Is the Funniest Thing You’ll See This Weekend.” The Verge, 2015. Web. 17 Aug. 2015.
Huang, Chiungjung. “Internet Use and Psychological Well-Being: A Meta-Analysis.” Cyberpsychology, Behavior, and Social Networking (2009): 100722182519069. Web. 15 Aug. 2015.
Maiese, Michelle. “Causes of Disputes and Conflicts.” Beyond Intractability. Eds. Guy Burgess and Heidi Burgess. Conflict Research Consortium, University of Colorado, Boulder. Posted: October 2003.
Malchiodi, Cathy. “Creative Art Therapy: Brain-Wise Approaches to Violence.” Psychology Today, 2015. Web. 19 Aug. 2015.
Mnih, Volodymyr, et al. “Human-Level Control through Deep Reinforcement Learning.” Nature 518.7540 (2015): 529-533. Web.
Units.miamioh.edu. “Plato, from the Phaedrus.” N.p., 2015. Web. 10 Aug. 2015.
Barrett Ames can be reached at firstname.lastname@example.org.