Artificial Intelligence: The Brain-Computer Controversy June 2010
At a conference in late 2009, IBM announced that it had used a supercomputer to simulate a brain with complexity comparable to a cat's. The project simulated 1 billion neurons sharing 10 trillion interconnections (synapses). But even with an abundant 144 terabytes of storage and some 147,000 microprocessors, the half-petaflop computing grid ran the cat-brain model 83 times slower than a real cat's brain. (A petaflop is a million billion, or 10^15, floating-point operations per second.) The simulation runs on Dawn, a grid of IBM Blue Gene supercomputers that consumes a million watts of electricity and occupies a large data center. In comparison, the human brain contains at least tens of billions of neurons and operates on some 20 watts of power supplied by human nutrition. IBM and a number of research organizations collaborating on the simulation received funding from DARPA's Synapse project, which aims "to discover, demonstrate, and deliver algorithms of the brain via a combination of (computational) neuroscience, supercomputing, and nanotechnology." IBM's Dharmendra Modha characterized the effort as "reverse engineering the brain."
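To put those figures in rough perspective, the following back-of-envelope Python calculation uses only the numbers quoted above. The derived quantities (bytes per synapse, operations per synapse) are crude illustrative estimates, not values IBM reported, and the half-petaflop figure is treated as sustained throughput for simplicity.

    # Rough arithmetic based only on the figures quoted above; these are
    # illustrative estimates, not IBM's published methodology.
    neurons = 1e9            # simulated neurons
    synapses = 10e12         # simulated synaptic interconnections
    storage_bytes = 144e12   # 144 terabytes of storage
    flops = 0.5e15           # half a petaflop, treated here as sustained
    slowdown = 83            # simulation ran 83 times slower than real time
    power_watts = 1e6        # roughly a megawatt for the data center
    brain_watts = 20         # the human brain's power budget, per the article

    print(f"Synapses per neuron: {synapses / neurons:,.0f}")
    print(f"Storage per synapse: {storage_bytes / synapses:.1f} bytes")
    print(f"Operations per synapse per simulated second: "
          f"{flops * slowdown / synapses:,.0f}")
    print(f"Power ratio, data center to human brain: {power_watts / brain_watts:,.0f}x")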
Criticism followed a few days after IBM's announcement. Neuroscientist Henry Markram, director of the rival Blue Brain project led by Switzerland's Ecole Polytechnique Fédérale de Lausanne, which also uses a Blue Gene supercomputer, called IBM's announcement "mass deception," "unethical," "a hoax and a PR stunt," and "light years away from a cat brain, not even close to an ant's brain." Markram spent 15 years collaborating with colleagues to map the structure of the rat neocortex. In an open letter published on many Internet sites, he asserted that IBM's simulation relies on an obsolete and rudimentary approach to artificial neural networks and misuses his own team's findings. He also pointed out that simulating neurons inside a computer "has nothing to do with reverse engineering." Markram provided further detail in a comment he posted on the IEEE Spectrum website: "Prof. Eugene Izhikevich...in 2005 has run a simulation with 100 billion such points," implying that, relative to prior art, the IBM achievement merely applied a more costly computing resource to run a simpler model. Reports indicate that Markram's Blue Brain project collaborated with IBM in 2006, but Markram says he "cut all neuroscience collaboration with IBM" in 2007 because of IBM's claims of "mouse-scale simulations" at that time. After Markram's recent criticism, IBM defended the integrity of the Synapse project and implied that the criticism was unfair to IBM's collaborators, including several top universities and the Lawrence Livermore National Laboratory.
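The "points" Markram refers to are simple point-neuron models of the kind Izhikevich popularized. As an illustration of that class of model (and only an illustration; it is not the code either project runs), here is a minimal Python sketch of Izhikevich's two-variable spiking neuron, using his standard "regular spiking" parameters and a constant test input.

    # Izhikevich point-neuron model (Izhikevich, 2003), forward-Euler integration:
    #   v' = 0.04*v^2 + 5*v + 140 - u + I
    #   u' = a*(b*v - u)
    #   when v reaches 30 mV, a spike is recorded and v <- c, u <- u + d
    a, b, c, d = 0.02, 0.2, -65.0, 8.0   # "regular spiking" parameters
    v, u = -65.0, b * -65.0              # membrane potential and recovery variable
    I = 10.0                             # constant input current (test value)
    dt = 0.5                             # time step in milliseconds

    spike_times = []
    for step in range(2000):             # 1,000 ms of simulated time
        if v >= 30.0:                    # spike threshold reached: reset the neuron
            spike_times.append(step * dt)
            v, u = c, u + d
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)

    print(f"{len(spike_times)} spikes in one second of simulated time")

Large-scale simulations like those described above integrate hundreds of millions of such units, coupled through synapse models, on every time step.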
Various news media reported that IBM had produced a working "cat brain" or implied a similar breakthrough; such reports were certainly exaggerated and without merit. Significantly, best-practice brain simulations require leading-edge understanding of both computer science and neuroscience. Very few people are experts in both fields, and hardly anyone is equipped to evaluate the competing claims of rival brain-computer researchers. In fact, some skeptics have criticized Henry Markram for his own claim, at a 2009 TED (Technology, Entertainment, Design) conference, that "It is not impossible to build a brain. And we can do it in ten years." One criticism of both the Synapse and the Blue Brain projects is that each team expects its model of how neurons interconnect to scale up from rat- and cat-size brains to human-size brains, but such scalability is not proven. Human brains may be not only larger but fundamentally more complex than those of other mammals. (Conversely, the brain of a mammal such as a dolphin, which is larger than a human's, may be less complex.) Moreover, neural simulation models arise from what we can measure and what we conjecture about brain function. We may be unable to confirm that these models are correct until a supercomputer produces a result with speed and function comparable to those of a real brain.
Of course, the idea of an artificial thinking machine is attractive to many users and researchers. And considering humans' excellent ability to reason about commonsense topics and to recognize faces and the meanings of texts, stakeholders are clearly motivated to build electronic systems that function as well as a human brain at these and other tasks. Toward that end, researchers hope that neural simulations will yield a technical approach that miniaturizes an electronic brain more rapidly than Moore's law alone would allow. If the brain is simply a kind of machine, and if an artificial machine can in principle reproduce any desired brain function, then researchers should one day be able to build a device that does what a brain does, is no larger than a human brain, and operates at the same low power level.
If we take IBM's benchmarks at face value and assume Moore's law will continue for another ten years, we will not have a portable human-brain computer in that time. Instead, we may have something like a cat-brain computer that runs in real time (instead of 83 times slower) yet still requires a supercomputing center and a megawatt of power. Thus, stakeholders such as DARPA hope that brain-computer research will lead to a hardware architecture that is much more efficient than the standard model of computer operation (which computer scientists call a stored-program or von Neumann architecture). A typical human brain does seem to outperform the standard model of computer operation at recognition tasks and commonsense reasoning, but obviously not at, for example, numerical calculation and graphics rendering.
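A short Python sketch makes that gap concrete. It assumes, loosely, that usable performance doubles roughly every two years (a simplification of Moore's law) and compares that gain with the separate gaps already named in this article; the gaps interact, so they should be read as orders of magnitude rather than a forecast.

    # Illustrative comparison of a decade of Moore's-law gains with the gaps
    # cited in the article. Doubling every two years is an assumption.
    years = 10
    moore_gain = 2 ** (years / 2.0)      # roughly 32x over ten years

    speed_gap = 83                       # cat-brain model vs. real time
    scale_gap = 20e9 / 1e9               # "tens of billions" of neurons (here 20
                                         # billion) vs. 1 billion simulated
    power_gap = 1e6 / 20                 # a megawatt vs. the brain's ~20 watts

    print(f"Expected Moore's-law gain over {years} years: ~{moore_gain:.0f}x")
    print(f"Speed gap alone: {speed_gap}x")
    print(f"Scale gap alone: ~{scale_gap:.0f}x")
    print(f"Power gap alone: {power_gap:,.0f}x")

Any one of those gaps rivals or dwarfs the expected decade of improvement, which is why DARPA and others are looking for a different architecture rather than relying on process scaling alone.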
Can researchers find and model a hardware design that will outperform grids and clouds of microprocessors? Each of the billions of neurons in a brain may have less computational capability than a microprocessor, but for the types of problems that humans excel at, a large collection of simulated neurons may have an advantage over an array of microprocessors of similar silicon area and complexity. If so, future computers might benefit from next-generation neural networks that simulate the structures of biological brains. Compared with existing computers, such a brain computer might provide "fundamentally different capabilities found in biological brains" (in the words of DARPA program manager Todd Hylton).
Markram expressed some additional hopes for the results of brain-computer research. With a working computer model of a brain, we could model brain diseases (what happens when a collection of neurons malfunctions) and explore how we might treat such disorders. A working model could also help neuroscientists who study how we perceive and, presumably, construct a model of reality, and could reveal how well the simulated brain's "reality" corresponds with the human brain's. Markram even suggests that understanding how the brain works could lead to insight about the nature of physical reality itself, perhaps demonstrating how mind emerges from matter or leading to new hypotheses about the material world.
Part of the effort to build a brainlike computer will involve a period of machine learning. Real brains don't perform remarkable feats in a jar; they develop in a body that interacts with and learns from an environment. Brain simulations therefore include a model of neuroplasticity: the simulation's structure changes as it learns about a real or artificial environment. One research team, at the Neurosciences Institute in San Diego, California, applied this principle of embodied learning by placing a brain simulation in a robot. That brain computer modified connections among its artificial neurons as the robot wandered around and learned to navigate its environment. Perhaps a brain computer can learn to navigate and perform other tasks by operating in a virtual world rather than the physical world. IBM researcher Paul Maglio hopes to place the company's brain simulator in virtual environments based on the video game Unreal Tournament and on maps and photographs of Mars; the simulated brain may "roam around" in the body of a virtual robot to learn and modify its artificial-neural connections.
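What "modifying connections among artificial neurons" can look like in code is shown in the minimal Python sketch below, which uses a generic Hebbian rule: strengthen connections between co-active neurons and let unused ones decay. It is purely illustrative and is not the learning rule of the Neurosciences Institute robot, IBM's simulator, or Blue Brain; in a real embodied system, the input would come from a robot body or a virtual world rather than random numbers.

    import numpy as np

    # Generic, illustrative plasticity: strengthen connections between neurons
    # that are active together and let unused connections decay. Not the rule
    # used by any project named in this article.
    rng = np.random.default_rng(0)
    n = 50                                    # a toy network of rate-coded neurons
    weights = rng.normal(0.0, 0.1, (n, n))    # connection strengths ("synapses")
    learning_rate, decay = 0.01, 0.001

    def step(activity, weights):
        """Propagate activity through the network, then apply a Hebbian update."""
        new_activity = np.tanh(weights @ activity)       # simple rate dynamics
        hebbian = np.outer(new_activity, activity)       # co-activity term
        weights = (1.0 - decay) * weights + learning_rate * hebbian
        return new_activity, weights

    activity = rng.random(n)      # stand-in for sensory input from an environment
    for _ in range(100):          # a robot body or virtual world would drive this loop
        activity, weights = step(activity, weights)

    print(f"Mean connection strength after learning: {np.abs(weights).mean():.3f}")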
Markram sees a key scientific purpose in having the Blue Brain simulation encounter a real or simulated environment. If a simulated brain can build a model of the environment it encounters, it would support one of the theories of how the brain works: that the developing human brain constructs its model of the world rather than relying chiefly on a preestablished one. (Conversely, cognitive scientists have found that the ability to learn language is essentially prewired at birth.) If so, Markram asserts, a supercomputing simulation may show that this model of the brain has the propensity (in his words, the "substance") to perform that environmental-modeling process and may illustrate how the brain then "projects this version of the universe like a bubble all around us."
The tension between the Synapse and the Blue Brain projects is not the only controversy in computational neuroscience. Researchers in applied mathematics, economics, and neuromedicine have their own roadmaps, progress reports, and approaches to modeling the brain, which can differ from those of the AI community at large. And a great many ad hoc proposals for achieving humanlike intelligence have come from independent researchers, ranging from the well-known Ray Kurzweil, Jeff Hawkins, and Stephen Wolfram to obscure software developers promoting pet theories. Even within the AI community, not every researcher favors the "big-science" approach of applying supercomputing resources to what may prove to be immature models of brain structure and function. If a simple and efficient neural-simulation circuit is feasible after all, such a conceptual breakthrough might emerge from programming a relatively simple computing cluster rather than a supercomputer.