Neural Networks: Myths and Realities

by BARRETT AMES

Last week the AlphaGo agent defeated Lee Sedol, the world champion of Go. The news was met with much adulation, and in the Artificial Intelligence (A.I.) community in particular it was important: Go had long been considered the next big challenge, because the vast number of moves available at any one time makes the ancient Chinese game one of the most difficult for an A.I. to master at a human level. Many believed that the methods Deep Blue used to beat Chess Grandmaster Garry Kasparov in 1997 could be applied to Go. Yet, because of the number of opening moves in Go (361) compared to the meager number in Chess (20), the same methods cannot be applied. The number of positions to consider grows as a power of the number of possible moves: for chess to look ten moves ahead for each player, there are roughly 20^20 (about 10^26) positions to analyze. The same look-ahead in Go requires analyzing roughly 361^20, or about 10^51, positions, some ten trillion trillion times as many. To place it in a different perspective: a regular game of Go, not an especially long one, has more possible continuations than there are atoms in the known universe. That such an analysis succeeded against the world champion of Go is thus a huge step for A.I. Furthermore, recent videos by groups like Boston Dynamics have brought the hype surrounding robotics and A.I. to new, frenzied heights. However, a closer inspection of both will show that much of the hype surrounding AlphaGo, and robotics in general, remains precisely that.
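
For readers who want to sanity-check those numbers, here is a quick back-of-the-envelope calculation in Python. The branching factors are simply the opening-move counts cited above, and the atom count is a commonly quoted rough estimate; none of this comes from AlphaGo itself.

```python
# Rough, order-of-magnitude arithmetic behind the numbers in the text.
CHESS_BRANCHING = 20
GO_BRANCHING = 361
PLIES = 20          # ten moves ahead for each player

chess_positions = CHESS_BRANCHING ** PLIES      # ~10^26
go_positions = GO_BRANCHING ** PLIES            # ~10^51

def order_of_magnitude(n):
    """Exponent of the largest power of ten not exceeding n."""
    return len(str(n)) - 1

print("chess, 20 plies: ~10^%d positions" % order_of_magnitude(chess_positions))
print("go,    20 plies: ~10^%d positions" % order_of_magnitude(go_positions))
print("ratio:           ~10^%d" % order_of_magnitude(go_positions // chess_positions))

# A full game of Go runs to roughly 150 moves or more; even this very crude
# estimate (the real branching factor shrinks as the board fills) dwarfs the
# ~10^80 atoms usually estimated for the known universe.
full_game_positions = GO_BRANCHING ** 150
print("full game (very rough): ~10^%d possibilities" % order_of_magnitude(full_game_positions))
```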

Let’s first take a look at AlphaGo and see what separates fact from fiction. The AlphaGo agent is based on Deep Learning, a form of neural networks that has received a great deal of attention for succeeding at tasks that were previously unsolved. More specifically, AlphaGo uses a form of reinforcement learning, a framework that allows computers to learn from the actions they take. The agent learned to play Go by playing millions of matches against itself, as well as by learning from publicly available records of expert games. These deep learning methods make AlphaGo capable of handling previously impenetrable problems. In particular, the neural networks it uses can digest very large quantities of input data. For example, given a large repository of labeled examples of what to do (say, a million images with all of the objects in them properly labeled), a neural network can learn a function that maps parts of an image to labels. In other words, if there’s an orange fuzzy part of the image with ears and adorable eyes, the neural network will learn to supply the label “kitten.” The neural network can do this (at least theoretically) because it is a universal function approximator. The best way to think of function approximation is to make a bunch of dots on a piece of paper and then draw a curve through them. That curve is exactly what a neural network is attempting to produce; it is just attempting to do so in a space of far more dimensions than the 2D drawing you just made.
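
To make the “dots and a curve” picture concrete, here is a minimal sketch of function approximation with a tiny one-hidden-layer network, using nothing but numpy. It has nothing to do with AlphaGo’s actual networks; it simply fits a curve through noisy one-dimensional points by gradient descent.

```python
# A tiny neural network "drawing a curve through dots": fit noisy samples
# of an unknown 1-D function with one hidden layer of tanh units.
import numpy as np

rng = np.random.default_rng(0)

# The "dots on paper": noisy samples of an unknown function (here, sine).
x = np.linspace(-3, 3, 60).reshape(-1, 1)
y = np.sin(x) + 0.1 * rng.standard_normal(x.shape)

hidden = 16
W1 = 0.5 * rng.standard_normal((1, hidden))
b1 = np.zeros(hidden)
W2 = 0.5 * rng.standard_normal((hidden, 1))
b2 = np.zeros(1)

lr = 0.05
n = len(x)
for step in range(5000):
    # Forward pass: inputs -> hidden activations -> predicted curve.
    h = np.tanh(x @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y                       # how far the curve is from the dots

    # Backward pass: gradients of the mean squared error for each weight.
    dW2 = h.T @ err / n
    db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)
    dW1 = x.T @ dh / n
    db1 = dh.mean(axis=0)

    # Gradient-descent update: nudge the curve toward the dots.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final mean squared error:", float((err ** 2).mean()))
```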

Now, having a universal function approximator is an important tool, but it hardly deserves the hype that many have given it. A universal function approximator is not a human brain. In fact, the very name “neural network” is more exaggeration than reality. The name comes from the 1940s, when Warren McCulloch and Walter Pitts first developed the artificial neural network. At the time, there was a rather limited understanding of how the brain worked, and it was within this limited framework that they found their inspiration. Back then it was believed that the computational power of the brain came from neurons being connected to each other, and that each neuron was not capable of much individual computation. This idea is typically referred to as connectionism.

Unfortunately, inspiration from connectionism has contributed significantly to one particularly bad line of reasoning about the so-called “Singularity.” It has frequently been argued, based on this connectionist model, that a neural network will be able to simulate the human brain in the near future. From there it quickly follows that the robots will take over, or that Johnny Depp will become a benevolent internet dictator. However, we now know significantly more about the human brain and how individual neurons work. This new knowledge has shed light on just how much computation the brain does, and it points to a vastly more complicated brain. For one thing, we now know that each neuron is capable of complex computation on its own, and that neurons are connected at least ten times as densely as previously believed. This means that human brains are somewhere between 100 and 1,000 times more powerful than experts had estimated. At the same time, we have begun to hit an upper limit on how quickly we can increase the computational power of a computer.

In addition to this argument about computational power, there is another key difference between brains and computers. The brain is a living thing that responds to stimuli by restructuring itself, and it is thus infinitely more adaptable than any computer hardware. Certainly this flexibility could be simulated in a computer, but doing so would require another whole level not only of computational power, but of understanding of the inner workings of the brain. One discrepancy in ability is believed to be at least partially due to a difference in perspective. Perspective is very important when considering the capabilities of any agent. The way an entity perceives the world is known to the Germans as its Merkwelt, which turns out to be a very useful concept in this discussion. Our brain and our senses, that is, our Merkwelt, are particularly well suited to performing the tasks we perform on a daily basis. As humans, we tend to believe that the (neural network) agents we observe are acting with the same Merkwelt as our own. The particular belief that if a robot can do something more difficult than anything I can do, it can also do all the obviously simpler things I can do, is a projection of the human’s Merkwelt onto the robot. In reality, the robot has very different sensors and computational processing routines, and is thus well suited to performing certain tasks that a human being is not, and vice versa. It is safe to say that there is a significant difference between a robot’s Merkwelt and that of the human attempting to reason about its capabilities, and there will be for some time to come, if not forever.

The other myth that pervades neural networks is that they’re easy. In many articles, the narrative goes something like, “...then we let the agent learn, and out came amazing behavior X.” While in some sense this is true, it often hides the vast amount of work required to get one of these methods working. It can take months or years to prepare the data in the proper form for a neural network. In addition, finding the correct neural network architecture (how the neurons are connected to each other) has proven to be not only important, but also quite difficult to get right. Furthermore, each individual neuron in the network needs to have its function assigned, and then a learning method needs to be chosen. All of these steps are computationally intense and require ever larger amounts of data, as well as the rarely mentioned but frequently performed magic of parameter tuning. Parameter tuning is the process a researcher engages in while attempting to get a learning method to work properly; the researcher may tweak one value and test its impact for months before the correct combination of neural architecture, neuron function, and learning method is discovered. All in all, it should be quite clear that neural networks are far from what any reasonable person would call easy.
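
To make the tuning loop concrete, here is a rough sketch of the kind of search a researcher grinds through over architecture, neuron function, and learning rate. The settings and the train_and_evaluate function are hypothetical placeholders, not anyone’s actual pipeline; in practice each evaluation can take hours or days.

```python
# A sketch of brute-force parameter tuning: try combinations of architecture,
# activation function, and learning rate, and keep the best-scoring one.
from itertools import product

architectures  = [(64,), (128, 64), (256, 128, 64)]   # hidden-layer sizes
activations    = ["relu", "tanh"]                      # per-neuron function
learning_rates = [1e-2, 1e-3, 1e-4]

def train_and_evaluate(architecture, activation, learning_rate):
    """Hypothetical stand-in for an actual training run. Here it just returns
    a made-up score so the skeleton runs end to end; a real version would
    train a network with these settings and report validation accuracy."""
    return -abs(len(architecture) - 2) - abs(learning_rate - 1e-3)

best_score, best_setting = float("-inf"), None
for arch, act, lr in product(architectures, activations, learning_rates):
    score = train_and_evaluate(arch, act, lr)
    if score > best_score:
        best_score, best_setting = score, (arch, act, lr)

print("best setting found:", best_setting, "with score", best_score)
```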

Yet, for the sake of argument, let’s assume that neural networks are both easy and even more powerful than they are. I argue that they would remain more myth than reality even then. To make this point, let’s take a look at game four of the five-game match between Lee Sedol and AlphaGo, the one game Lee Sedol actually won. In game four, AlphaGo started out strong, but at move 79 the agent made a mistake. Humans make mistakes all the time, but when we make mistakes we generally don’t go completely off the rails. When neural networks make mistakes, they can begin to behave in arbitrarily bad ways. Lee Sedol was able to capitalize on this behavior and win the game. The episode is illustrative because it brings to light a common failure that many artificial intelligence algorithms exhibit when they are brought into a real environment.

It is this type of behavior that prevents neural networks from controlling cars, because performing poorly after a mistake becomes not just an embarrassment but a matter of life and death. Unlike humans, neural networks are extremely poor at extrapolating. If the agent has never been exposed to situations that provide context for a new problem, the neural network can perform much worse than a random action would. And it is infeasible to expose a neural network to even a small but representative set of examples for a particular real-world task, because neural networks require thousands, or hundreds of thousands, of examples to tune their performance. So the next time someone shows you how to do something and says, “it’s kind of like doing X, with a bit of Y mixed in,” realize that you are performing a form of learning that neural networks are currently incapable of. While these inherent limitations begin to clarify the weaknesses of neural networks, it is worth taking a look at a few other factors prevalent in our A.I. mania.
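
Before moving on to those factors, here is a toy illustration of the extrapolation problem just described, assuming scikit-learn is available. A small network is fit to sin(x) on the interval from -3 to 3 and then asked about points well outside that range; the specific numbers are illustrative only, not from any published result.

```python
# Fit a small network on a narrow range, then query it far outside that range.
# Inside the training range the predictions track sin(x); far outside they do not.
import numpy as np
from sklearn.neural_network import MLPRegressor

x_train = np.linspace(-3, 3, 200).reshape(-1, 1)
y_train = np.sin(x_train).ravel()

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
net.fit(x_train, y_train)

# The first two points lie inside the training range; the last two do not.
x_test = np.array([[0.0], [2.5], [6.0], [12.0]])
for xi, yi in zip(x_test.ravel(), net.predict(x_test)):
    print(f"x = {xi:5.1f}   predicted = {yi:7.3f}   true sin(x) = {np.sin(xi):7.3f}")
```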

One particularly pernicious error that I find myself frequently guilty of is confusion about the practicality, or robustness, of a particular new algorithm. The problem is especially bad when there is a video demonstrating the latest result. Take, for example, the Boston Dynamics video of ATLAS 2.0 walking around. It truly is an astounding video: the robot performs well on ice, in snow, and under dynamic loads. These are all very difficult things to master, so any evidence that someone has mastered them makes me excited. After this latest video, however, I had a bit more insight, thanks to my recent work with ATLAS in the DARPA Robotics Challenge Finals. The videos of the first ATLAS were equally astounding when they came out: the robot was hit by flying weights and walked over obstacles on a treadmill. Yet once ATLAS was unboxed and we began to put it through its paces, we quickly found the limits of its capability, and it did not appear capable of anything nearly as complicated as the videos had demonstrated. This brings us to a key point about demonstration videos: the robot, or A.I., only had to perform that action once while the camera was rolling. Such videos are made to impress, not to showcase limitations. While the researchers may have been able to get ATLAS to do all of those wonderful things once, the behaviors certainly were not robust enough to handle the variations that occur in the real world, or even between the labs of two different researchers!

Another common error that researchers make is in describing how a particular agent works. For example, a robot with cameras will be described as “seeing” the world through its cameras, and a robot standing still, computing its next action, is often described as “thinking.” Yet the robot is neither seeing nor thinking; it is simply easiest to describe it that way, because these are concepts everyone is familiar with. The description is misleading, though, because those who are not in the loop on the technology begin to interpret what they have been told in ways that are inconsistent with the real capabilities of the system. For example, after being told that a robot is thinking, it is natural for a person to believe the robot is capable of more general thought than it actually is. A flawed but seemingly logical conclusion of this line of thought is that since a robot can perform one task better than a human, it can perform all tasks better than a human, when in reality it cannot.

The last common error the hype machine makes concerns its judgment of the difficulty of a task. Chess and Go are great examples here. It can take a lifetime for a human being to become a chess master, or to learn and apply the principles of logic, both of which are prototypical examples of abstract symbolic worlds. The symbolic and the abstract, however, are exactly the worlds in which computational agents excel. So when we see a robot perform a task that is difficult for us, we quite frequently conclude that because the robot can perform this hard task, it must also be able to perform the easy tasks we perform all the time. In reality it is quite the opposite. Computers, robots, and A.I. all perform well in symbolic, clearly defined worlds. A robot on an assembly line is a great example: it works in a tightly controlled world where symbols (the car parts in its plan) and reality (the car parts in front of it) match up, and so the robot performs admirably. But once an agent begins to work in the real world, where things are fuzzy and not quite so black and white, it loses these obvious advantages. In other words, while the robot may be able to beat your pants off at chess or solve a complicated logical proof in under a second, it is unable to perform a task as simple as walking down a sidewalk in any major city.

Given the lack of adequate understanding of the human brain, the fundamental differences between living brains and A.I. systems, and exaggerated claims that do not match the current capabilities of our technology, it is clear that many limitations go unstated in the media’s A.I. hype. Since I am a researcher in this field, it might at first seem puzzling that I would worry about all this attention, which is sure to bring investment. For my own part, I point out that in most cases every boom is followed by a bust; in the A.I. community, the last big bust is referred to as the A.I. winter. For almost a decade in the 1970s, very little progress was made, and that was largely due to the mania that had preceded it a decade earlier. Thus, in an attempt to smooth out these peaks and ensure that A.I. can make slow, steady progress, it is in my interest to make sure that the general public both understands and appreciates the limitations of the latest robot magic or intelligent game agent.

Lastly, if there is anything to take away from this writing, it’s that neural networks are a very powerful and important tool for solving a specific type of difficult problem, but they are not human brains. The human brain as a computer has been a popular metaphor for the past 60 years, but on closer analysis it becomes clear that human brains are not computers. Ultimately, instead of fearing replacement by these wonderful machines, let’s invest all of this misplaced energy in devising ways to work with them and to better utilize their strengths.
