Figure 1. Cortical Surface Area of Spaun (Semantic Pointer Architecture Unified Network)

An approximation of the size of the neural components in Spaun (shown in red) vs the surface area of the brain (very conservatively assuming neurons at 1 million/cm² of cortex). Subcortical neurons are shown in blue, plotted at the same scale for illustrative purposes.

Figure 2. Functional and Anatomical Architecture of Spaun (Semantic Pointer Architecture Unified Network)

A, Functional architecture of Spaun. The working memory, visual input, and motor output components represent hierarchies that compress or decompress neural representations between different representational spaces. The action-selection component chooses which action to execute given the current state of the rest of the system. The 5 internal subsystems, from left to right, are used to (1) map visual inputs to conceptual representations, (2) induce relationships between representations, (3) associate input with reward, (4) map conceptual representations to motor actions, and (5) map motor actions to specific patterns of movement. B, Corresponding neuroanatomical architecture, with matching colors and line styles indicating corresponding components. AIT indicates anterior inferotemporal cortex; D1 and D2, dopamine receptors; DLPFC, dorsolateral prefrontal cortex; GPe, globus pallidus externus; GPi, globus pallidus internus; IT, inferotemporal cortex; M1, primary motor cortex; OFC, orbitofrontal cortex; PM, premotor cortex; PPC, posterior parietal cortex; SMA, supplementary motor area; SNc, substantia nigra pars compacta; SNr, substantia nigra pars reticulata; STN, subthalamic nucleus; Str, striatum; v, ventral; V1, primary visual cortex; V2, secondary visual cortex; V4, extrastriate visual cortex; VLPFC, ventrolateral prefrontal cortex; and VTA, ventral tegmental area. Reproduced with permission from Eliasmith et al.4

Figure 3. Recordings From Spaun (Semantic Pointer Architecture Unified Network) Performing the Serial Recall Task

Spaun's experimental environment. The internal processing of the model is shown in the thought bubbles as the spiking activity of a component overlaid with the information represented by those spikes. The brain surface itself shows spatially arranged firing rates. A, Spaun observes and processes inputs. B, Spaun moves its arm to draw a response. C, The neural spiking activity of various components of the model (detailed in Figure 2). In this particular trial, Spaun makes a mistake, forgetting the digit in the middle of the list (just as human subjects often do). The spiking activity in the dorsolateral prefrontal cortex (DLPFC) and the corresponding decoded information reveal the cause of this mistake, showing the representation of the 8's position decaying. Other abbreviations are defined in Figure 2.

Clinical Implications of Basic Neuroscience Research
October 2013

Modeling Brain Function: Current Developments and Future Prospects

Author Affiliations
  • 1Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, Ontario, Canada
JAMA Neurol. 2013;70(10):1325-1329. doi:10.1001/jamaneurol.2013.3835
Abstract

We discuss work aimed at building functional models of the whole brain implemented in large-scale simulations of millions of individual neurons. Recent developments in this area demonstrate that such models can explain a variety of behavioral, neurophysiological, and neuroanatomical data. We argue that these models hold the potential to expand our understanding of the brain by connecting these levels of analysis in new and informative ways. However, current modeling efforts fall short of the target of whole-brain modeling. Consequently, we discuss different avenues of research that continue to progress toward that distant, but achievable, goal.

Large-scale neural models are an exciting new tool with which to study and understand the brain. By large-scale neural model, we mean models with more than 1 million action potential–generating (ie, spiking) neurons. In the past 2 decades, simulation of biophysically based models has become possible at this scale, although this number of cells still pales in comparison with the human cortex. The ultimate purpose of these models is to understand how single-neuron activity relates to high-level brain function and behavior. Unfortunately, most current models of this kind provide no link to observable animal or human behavior.1-3

However, late last year Eliasmith et al4 proposed a mechanistic functional model of the brain that uses 2.5 million spiking neurons and performs 8 different tasks. Mechanistic means that the model realizes neural circuits that perform specific information-processing operations. These models are in contrast to statistical models that match input and output data without seeking to explain how the transformation from input to output occurs in neurons. Functional means that the mechanisms specified recreate some of the interesting abilities of the brain, such as the ability to see, to act, to reason, and to remember. These models currently fall short of the power and scale of the human brain (Figure 1). However, functional mechanistic models have the potential to significantly advance our understanding of the brain because they can be used to test hypotheses about the organization of biological circuits, to mimic and analyze the effects of brain disorders or treatments, and to provide causal explanations linking neural and behavioral data.

In this report, we review recent work that seeks to expand functional mechanistic modeling to encompass the whole brain. We then discuss possible future directions for this field, highlighting the limitations of existing work and the potential to improve, expand, and make use of these kinds of models in the continuing effort to understand the brain.

Spaun

We refer to the large-scale neural model we have recently developed as Spaun (Semantic Pointer Architecture Unified Network). It is an example of a model implementing the more general Semantic Pointer Architecture.5 Spaun consists of 2.5 million leaky integrate-and-fire model neurons, each of which is simulated concurrently on a supercomputer. The physiological and tuning properties of the cells are statistically matched to the various anatomical areas included in the model. About 20 anatomical areas are accounted for (of the approximately 1000 typically identified6), organized, and connected to reflect the brain’s known anatomy (Figure 2). Four types of neurotransmitters are included in the model (γ-aminobutyric acid, α-amino-3-hydroxy-5-methylisoxazole-4-propionic acid, N-methyl-d-aspartate, and dopamine), and their known time constants and synaptic effects are simulated. This degree of biological detail is comparable to that of other large-scale models,2 although some other work has included more detailed models of single cells.1
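To make the single-cell level concrete, the sketch below (Python with NumPy) simulates one leaky integrate-and-fire neuron of the kind used throughout Spaun. It is a minimal illustration only: the parameter values (membrane time constant, refractory period, threshold) are generic assumptions, not the values statistically fitted to particular anatomical areas in the model.

```python
import numpy as np

def simulate_lif(current, dt=0.001, tau_rc=0.02, tau_ref=0.002, v_th=1.0):
    """Simulate one leaky integrate-and-fire neuron driven by an input current.

    current: array of input current values, one per time step of length dt.
    Returns an array with 1.0 at the time steps where the neuron spikes.
    """
    v = 0.0              # membrane voltage (normalized units)
    refractory = 0.0     # time remaining in the refractory period
    spikes = np.zeros_like(current, dtype=float)
    for i, J in enumerate(current):
        if refractory > 0.0:
            refractory -= dt
            continue
        v += dt / tau_rc * (J - v)   # leaky integration of the input
        if v >= v_th:                # threshold crossing: emit a spike,
            spikes[i] = 1.0          # reset, and enter the refractory period
            v = 0.0
            refractory = tau_ref
    return spikes

# Example: one second of constant suprathreshold input.
spike_train = simulate_lif(np.full(1000, 1.5))
print("spike count in 1 s:", int(spike_train.sum()))
```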

Spaun’s functional abilities make it unique among large-scale neural models. Spaun receives input from the environment through its single eye, which is shown images of handwritten or typed digits and letters, and manipulates the environment by moving a physically modeled arm, which has mass, length, inertia, and so on. Spaun uses these natural interfaces in combination with internal cognitive processes to perceive visual input, remember information, reason using that information, and generate motor output (writing out numbers or letters). Spaun uses these abilities to perform 8 different tasks, ranging from perceptual-motor tasks (recreating the appearance of a perceived digit) to reinforcement learning (in a gambling task) to languagelike inductive reasoning (completing abstract patterns in observed sequences of digits). These tasks can be performed in any order, they are all executed by the same model, and no changes are made to the model between tasks. We will not discuss the 8 tasks here; we recommend watching the videos at http://nengo.ca/build-a-brain/spaunvideos for more detailed demonstrations of performance.

Our purpose in building such models is to understand brain function. As a result, we must demonstrate that such models are working in a brainlike manner. In the case of Spaun, we have compared the performance of the model to human and animal data in several ways. The model aligns with natural brain function along many metrics; for example, the model and the brain share (1) dynamics of firing rate changes in the striatum during the gambling task, (2) error rates as a function of position when reporting digits in a memorized list, (3) coefficient of variation of interspike intervals, (4) reaction time mean and variance as a function of sequence length in a counting task, (5) accuracy rates of recognizing unfamiliar handwritten digits, and (6) success rates when solving induction tasks similar to those found on the Raven’s Progressive Matrices7 (a standard test of human intelligence), among other measures. These comparisons, demonstrating the range and quality of matches between the model and real neural systems, make it plausible to suggest that Spaun is capturing some central aspects of neural organization and function.
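One of the simpler comparisons in that list, the coefficient of variation (CV) of interspike intervals, can be computed in exactly the same way from model spikes and from recorded spikes. The sketch below shows the standard calculation on hypothetical spike-time data; it is not Spaun's analysis code.

```python
import numpy as np

def isi_cv(spike_times):
    """Coefficient of variation of interspike intervals:
    the standard deviation of the intervals divided by their mean."""
    intervals = np.diff(np.sort(spike_times))
    return intervals.std() / intervals.mean()

# Hypothetical spike times (in seconds): Poisson-like firing has a CV
# near 1, while perfectly regular firing has a CV near 0.
rng = np.random.default_rng(0)
irregular = np.cumsum(rng.exponential(0.05, size=200))
regular = np.arange(0.0, 10.0, 0.05)
print("irregular CV:", round(isi_cv(irregular), 2))
print("regular CV:  ", round(isi_cv(regular), 2))
```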

One of the key benefits of this type of model is that it provides a comprehensive description of processes ranging from high-level behavior down to the level of single neurons. For example, at the highest level of the serial recall task, the model specifies a given input-output mapping (given a list of digits, the list is output in the same order). However, we are not limited to examining the model at that abstract level by, for example, noting its match to human error rates. We can look within the model to see how the functional components work together and process information to perform the necessary computations (Figure 3A and B). We can further map those functional components onto neuroanatomical ones (Figure 2B). In addition, we can examine the details of those components because they are implemented by specific neural circuits, allowing us to characterize the flow and processing of information at the level of neural ensembles. Furthermore, within any of those ensembles, we can record and analyze, for example, the dendritic inputs, the membrane voltage, or the spiking behavior of single neurons (Figure 3C). This process allows us to demonstrate the model’s match to electrophysiological data, such as the frequency spectrum shifts of single-neuron spike responses in monkeys performing a similar working memory task.
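This kind of multilevel inspection is also available in Nengo, the neural simulation software from our group (hosted at nengo.ca). The minimal sketch below builds a toy one-dimensional ensemble, not a Spaun component, and probes both its raw spiking activity and the value decoded from those spikes; exact argument names may differ slightly between Nengo versions.

```python
import numpy as np
import nengo

with nengo.Network() as model:
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))  # scalar input signal
    ens = nengo.Ensemble(n_neurons=100, dimensions=1)   # population of LIF neurons
    nengo.Connection(stim, ens)

    spike_probe = nengo.Probe(ens.neurons)              # raw spiking activity
    value_probe = nengo.Probe(ens, synapse=0.01)        # value decoded from the spikes

with nengo.Simulator(model) as sim:
    sim.run(1.0)

print("spike raster shape:", sim.data[spike_probe].shape)  # (time steps, neurons)
print("decoded value at t = 1 s:", sim.data[value_probe][-1])
```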

This fully mechanistic explanation allows the model to address interesting questions about the connections within and between these levels of analysis. For example, how do druglike perturbations to postsynaptic current dynamics affect the model’s ability to remember lists of digits? How does information encoding impairment (eg, damage to the anterior inferotemporal cortex) affect the model’s ability to reproduce an observed digit? We are still at the early stages of exploring such questions, and many more questions remain to be asked. We have made Spaun publicly available so that other researchers may ask their own questions and test the model as they see fit (http://models.nengo.ca/spaun).

Limitations

Spaun represents only a very early step toward the goal of building brain-scale functional models. Although it is large scale by today’s standards, at 2.5 million neurons Spaun is about 4 orders of magnitude smaller than the brain. This difference is illustrated in Figure 1, which shows the relative scale of Spaun compared with the surface area of the human brain, making the modest size of the cortex included in the model readily apparent. Furthermore, some aspects of neural function are entirely unaccounted for, such as the role of glial cells and neuromodulatory regulation. One of the main reasons for these limitations is the computational resources needed to simulate such a model; at present, 2.5 hours of simulation time are needed to simulate 1 second of Spaun’s behavior, placing practical restrictions on the addition of more complexity.
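To put that cost in perspective, 2.5 hours of computation per second of behavior corresponds to a slowdown of (2.5 × 3600 s) / 1 s = 9000, that is, the simulation runs roughly 9000 times slower than real time.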

Given the limited biological scale, we should not be surprised that the functional capacity of the model also falls well short of that of the brain. Performing 8 tasks in a single model is unique for computational modeling of this sort but does not begin to approach the behavioral repertoire of simple animals, not to mention humans. In addition, the 8 tasks of Spaun are all prespecified; the model cannot be instructed to perform a new task, nor can it learn one on its own.

Future Work

Large-scale mechanistic neural models present many exciting possibilities for understanding the brain. A model such as Spaun allows us to take an interventionist approach to characterizing neural function, manipulating or interfering with different aspects of the model and examining the result. For example, we are currently exploring the effects of aging, examining how neurophysiological changes associated with age (eg, neuron loss or connectivity decreases) affect behavioral performance when we recreate them in the model. The detailed neuroanatomical mapping (Figure 2) also allows us to explore the functional impact of creating lesions in specific brain regions, mimicking the effects of strokes or other brain damage. These explorations are not limited to negative effects; for example, we can increase the working memory capacity of the model and examine how that affects performance across the 8 tasks. These questions are not new, but by performing these investigations using a mechanistic model, we can more easily draw causal—rather than correlational—conclusions because we are directly manipulating the variable of interest (which is often not possible in a real brain). In addition, such models allow for the investigation of systemic interactions across many levels of intervention (eg, direct brain stimulation, drug-based therapies, or behavioral interventions).
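As a toy illustration of such an intervention (not Spaun's actual code), the sketch below silences a random fraction of neurons in a small hypothetical population that encodes a scalar value and measures how the decoded output degrades. The tuning curves and decoding weights are stand-in assumptions chosen only to make the lesion effect visible.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical population: 200 neurons encode a scalar x through random
# rectified-linear tuning curves, and least-squares decoders read x back out.
n_neurons = 200
x = np.linspace(-1, 1, 101)
gains = rng.uniform(0.5, 2.0, size=(n_neurons, 1))
biases = rng.uniform(-1.0, 1.0, size=(n_neurons, 1))
rates = np.maximum(0.0, gains * x + biases)            # firing rates, (neurons, stimuli)
decoders = np.linalg.lstsq(rates.T, x, rcond=None)[0]  # decoding weights

def lesion_error(fraction):
    """Silence a random fraction of the neurons and report the RMS decoding error."""
    alive = rng.random(n_neurons) > fraction
    estimate = (rates * alive[:, None]).T @ decoders
    return np.sqrt(np.mean((estimate - x) ** 2))

for fraction in (0.0, 0.25, 0.5):
    print(f"lesion {int(fraction * 100):2d}%: RMS error = {lesion_error(fraction):.3f}")
```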

Given the limited behavioral repertoire of current models, we expect much future work to be directed at expanding such models’ functional capabilities. For example, we are interested in improving their behavioral flexibility, that is, giving models the ability to dynamically operate in new ways rather than having tasks prespecified. Our research in this area is evolving in 2 related directions. The first direction gives models the ability to follow task instructions. The model is given a description of a task (eg, add together the next 2 numbers you see, then subtract the third) and then performs that task. The second direction allows models to learn new tasks based on reward. In this case, the model is not given instructions on how to complete the task, only feedback on whether it performed the task correctly or incorrectly. The model uses that information to gradually learn to perform the new task successfully. These capabilities will allow models to develop larger and more fluid behavioral repertoires, helping to capture these sophisticated aspects of human behavior.
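As a schematic of the second direction, the sketch below uses a simple reward-prediction-error update on a two-choice task: the model receives only reward feedback, never instructions, and its action-value estimates gradually converge on the better choice. The task, learning rate, and exploration scheme are illustrative assumptions, not the learning mechanism used in Spaun.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy task: two possible actions; action 1 is rewarded more often than action 0.
reward_prob = np.array([0.2, 0.8])
values = np.zeros(2)      # learned estimate of each action's value
alpha = 0.1               # learning rate
epsilon = 0.1             # probability of trying a random action (exploration)

for trial in range(500):
    if rng.random() < epsilon:
        action = int(rng.integers(2))        # explore
    else:
        action = int(np.argmax(values))      # exploit the current estimates
    reward = float(rng.random() < reward_prob[action])
    # Reward-prediction-error update: feedback alone drives the learning.
    values[action] += alpha * (reward - values[action])

print("learned action values:", np.round(values, 2))  # approaches [0.2, 0.8]
```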

As mentioned above, one of the limitations on existing models is simulation speed. This limitation restricts the development of larger, more complex models and prevents Spaun from interacting in real time with its environment. These constraints have driven our interest in neuromorphic hardware—custom computing hardware designed to simulate millions or billions of neurons in real time. We are collaborating with other research groups8,9 to combine ideas from theoretical neuroscience with the power of neuromorphic hardware.

In summary, our recent Spaun model, while making significant advances in connecting our understanding of basic biological components to sophisticated behavior, is just a first step. Spaun provides tantalizing hints of the utility of large-scale mechanistic models for neuroscience, psychology, and artificial intelligence. In the coming years, we expect to see the potential of these models become more widely exploited, enhancing and expanding our basic understanding of the brain and helping in the diagnosis and treatment of a wide variety of brain disorders.

Section Editor: Hassan M. Fathallah-Shaykh, MD, PhD
Article Information

Accepted for Publication: June 12, 2013.

Corresponding Author: Chris Eliasmith, PhD, PEng, Centre for Theoretical Neuroscience, University of Waterloo, 200 University Ave West, Waterloo, ON N2L 3G1, Canada (celiasmith@uwaterloo.ca).

Published Online: August 26, 2013. doi:10.1001/jamaneurol.2013.3835.

Author Contributions: Study concept and design: All authors.

Analysis and interpretation of data: All authors.

Drafting of the manuscript: All authors.

Critical revision of the manuscript for important intellectual content: All authors.

Obtained funding: Eliasmith.

Administrative, technical, and material support: All authors.

Study supervision: Eliasmith.

Conflict of Interest Disclosures: None reported.

Funding/Support: This study was supported by funding from the Natural Sciences and Engineering Research Council of Canada, Canada Research Chairs, The Canadian Foundation for Innovation, and Ontario Innovation Trust.

Additional Information: Portions of the work described herein are covered under Canadian Patent Application 2798529 and US Provisional Patent Application 61/733771.

References
1.
Markram H. The blue brain project. Nat Rev Neurosci. 2006;7(2):153-160.
2.
Ananthanarayanan R, Modha DS. Anatomy of a cortical simulator. In: Proceedings of the 2007 ACM/IEEE Conference on Supercomputing. New York, NY: ACM Press; 2007.
3.
Izhikevich EM, Edelman GM. Large-scale model of mammalian thalamocortical systems. Proc Natl Acad Sci U S A. 2008;105(9):3593-3598.
4.
Eliasmith C, Stewart TC, Choo X, et al. A large-scale model of the functioning brain. Science. 2012;338(6111):1202-1205.
5.
Eliasmith C. How to Build a Brain: A Neural Architecture for Biological Cognition. Oxford, England: Oxford University Press; 2013.
6.
Hagmann P, Cammoun L, Gigandet X, et al. Mapping the structural core of human cerebral cortex. PLoS Biol. 2008;6(7):e159. doi:10.1371/journal.pbio.0060159.
7.
Raven J, Raven JC, Court JH. Manual for Raven’s Progressive Matrices and Vocabulary Scales. Rev ed. San Antonio, TX: Harcourt Assessment; 2004.
8.
Merolla PA, Arthur JV, Shi BE, Boahen KA. Expandable networks for neuromorphic chips. IEEE Trans Circuits Syst I. 2007;54(2):301-311. doi:10.1109/TCSI.2006.887474.
9.
Khan MM, Lester DR, Plana LA, et al. SpiNNaker: mapping neural networks onto a massively-parallel chip multiprocessor. In: Proceedings of the 2008 International Joint Conference on Neural Networks. 2008:2849-2856.