Using Nanoscale Technologies to Understand and Replicate the Human Brain

HOW DOES THE BRAIN COMPUTE? Can we emulate the brain to create supercomputers far beyond what currently exists? And will we one day have tools small enough to manipulate individual neurons -- and if so, what might be the impact of this new technology on neuroscience?

Recently, three renowned researchers – one neuroscientist and two nanoscientists – discussed how their once-separate disciplines are now converging to understand how the brain works at its most basic cellular level, and the extraordinary advances this merger promises for fields ranging from computer technology to health.

Neuroscientist and KIBM Co-Director Nicholas Spitzer leads a discussion on the intersection of neuroscience and nanoscience with two nanoscience pioneers – Stanford University's Kwabena Boahen and Harvard University's Hongkun Park.
  • Nicholas Spitzer (Host), Professor of Neurobiology and Co-Director of the Kavli Institute for Brain and Mind at the University of California, San Diego, has pursued groundbreaking studies into the activity and development of neurons and neuronal networks for more than four decades.
  • Kwabena Boahen, Associate Professor of Bioengineering at Stanford University, is using silicon integrated circuits to emulate the way neurons compute, bridging electronics and computer science with neurobiology and medicine. At Stanford, his research group is developing "Neurogrid," a hardware platform that will emulate the cortex’s inner workings.
  • Hongkun Park, Professor of Chemistry and of Physics at Harvard University, is known for his work in developing computing technology modeled after the human brain and nervous system. Park is pushing the frontiers of nanotechnology by developing devices capable of probing and manipulating individual neurons.

NICHOLAS SPITZER: Hongkun and Kwabena, thanks for joining me for this conversation about the intersection of neuroscience and nanoscience. Here in La Jolla [California] we have a joke – that the most rapidly growing area of physics is neuroscience. But it actually makes sense, because a lot of exciting discoveries have been made at the interface of fields that haven’t been in really close contact before. Certainly from the neuroscience side this interface between nanoscience and neuroscience is very attractive, because neuroscience wants to apply tools from nanoscience. I hope that this is a two-way street and that the nanoscientists are glad of the applications of their work to neuroscience.

Hongkun, I’ll start with you. How did you become interested in neuroscience? This is different from your background in chemistry and physics. What drew you toward this field?

HONGKUN PARK: It started about five years ago, as a part of what you might call a “mid-career crisis.” At the time I had just been promoted, and I was looking to do something quite different from what I was doing. I still have a program in physics and nanoscience – about half of my group is working on that – but I missed being on a steep learning curve, and in order to revive that feeling, I wanted to learn something new. It seemed like neuroscience was the perfect thing. A lot of interesting and exciting developments were happening, but it also seemed that we could contribute by providing new tools that could perturb and probe complex neuronal networks. So that’s when I started getting into this particular field.

Nicholas Spitzer, Professor of Neurobiology and Co-Director of the Kavli Institute for Brain and Mind at the University of California, San Diego (Courtesy: N. Spitzer)

NS: Kwabena, how about you? What got you excited about neuroscience?

KWABENA BOAHEN: I was one of those people who started out with an interest in engineering from an early age. When I was young I liked to take apart stuff and build things. I also hated biology – I couldn’t memorize to save my life. I got my first computer as a teenager, and I went to a library and figured out how it worked and I was kind of really turned off. I thought computers were very “brute force” and I thought there must be a more elegant way to compute. I didn’t really know anything about the brain, because of my lack of biology. I actually discovered more about how brains work when I was an undergrad. This was in the late ‘80s when there was a lot of hoopla about neural networks, which were mathematical abstractions about how the brain works. And so that’s how I got into it. As I learned more about biology I discovered that it had very elegant ways of computing, and I got deeper and deeper into neuroscience.

NS: As I understand it, Kwabena, you want to bridge experiment, theory and computation by building what you call “an affordable supercomputer” that not only works like a brain but also helps us to understand how the brain works. How are those goals going to complement one another?

KB: As I mentioned, when I first learned about computers I thought they were very “brute force.” When I was an undergrad I learned about how much energy the brain used and how much computation it did with that energy, and it was orders and orders of magnitude -- six orders of magnitude or more -- more efficient than a computer. So this is what got me interested. I said -- hey, why don’t we just design chips that are based on these neural circuits and neural systems. That has culminated in the “Neurogrid” project that we’re doing right now, where we build models of the components of neurons and synapses, and various parts of the brain, directly with transistors, and do it very efficiently. This allows us to make the kind of calculation that it would normally take a supercomputer to do, but with only a few chips. Instead of using megawatts of power we use just one watt. We’re trying to make it easier and more affordable to do large-scale simulations, on the order of a million neurons.

Kwabena Boahen, Associate Professor of Bioengineering at Stanford University (Credit: Michael Halaas)

NS: Kwabena, you’re doing this by working with what are called “neuromorphic” chips that were pioneered by Carver Mead. But when Mead developed them, he assembled silicon neurons in a hard-wired manner, and you have really broken that open with soft-wiring of the neurons on a chip. Can you talk a little bit about that?

KB: I did my Ph.D. with Carver Mead at Caltech, and at the time he was working on the silicon retina. The way that was built on a chip was to hard-wire these transistors together to match the connectivity between the neurons in the retina, and also essentially to pre-design how the individual neurons behave. In order to use these neuromorphic chips as a programmable platform for doing simulations, we wanted to make it possible to reconfigure the connections as well as to simulate different neurons with the same circuit. And so we’ve been able to come up with a technique called soft-wiring, which works similarly to the Internet. By giving each neuron an address, we can send a spike from any neuron to any other neuron, just like you can send email from one computer to any other computer based just on its IP address, with no direct physical connection between your computer and that computer. By using this same approach we can actually make these connections configurable, and we call them “soft-wired.” For the neuronal properties themselves, we built a circuit that solves the Hodgkin-Huxley equations that are used to describe how any type of ion channel behaves. Once I have those circuits I can model those equations and therefore I can model any type of ion channel.
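The address-event idea Boahen describes can be sketched in a few lines: connectivity lives in a lookup table keyed by neuron address rather than in physical wires, so the network can be rewired just by editing the table. This is an illustrative toy, not Neurogrid's actual implementation; all names in it are hypothetical.

```python
# Toy "soft-wiring": a spike carries only its source address, and a
# routing table (like Internet routing by IP address) decides which
# targets receive it. Rewiring means editing the table, not the chip.

class SoftWiredNetwork:
    def __init__(self):
        self.routing_table = {}   # source address -> list of target addresses
        self.spike_counts = {}    # target address -> number of spikes received

    def connect(self, src, dst):
        """Create a connection by adding an entry to the routing table."""
        self.routing_table.setdefault(src, []).append(dst)

    def fire(self, src):
        """Emit a spike from `src`; the router fans it out to all targets."""
        for dst in self.routing_table.get(src, []):
            self.spike_counts[dst] = self.spike_counts.get(dst, 0) + 1

net = SoftWiredNetwork()
net.connect(1, 2)
net.connect(1, 3)   # one source, two targets: fan-out lives in the table
net.fire(1)
print(net.spike_counts)   # {2: 1, 3: 1}
```

The same `fire` call would reach a completely different set of neurons after a few `connect` edits, which is the sense in which the connections are configurable rather than hard-wired.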

NS: And my understanding is that with your Neurogrid supercomputer you are modeling a million neurons at this point. Am I correct?

KB: Yes. We are modeling a million neurons connected by something like 6 billion connections.

Hongkun Park, Professor of Chemistry and of Physics at Harvard University (Courtesy: H. Park)

NS: That’s really quite amazing. I’m going to come back to you in a moment with further questions about what we’re learning from the Neurogrid project. But let me turn to Hongkun. You developed remarkable vertical nanowire arrays that have contributed to our understanding of the design principles of cellular networks. How were you inspired to invent this technology? What was the background that led you to develop this?

HP: A lot of people are working on nanostructure-cell interactions, and there have been some studies showing that vertical nanostructures can support cell growth. When I first saw that, I thought, “That cannot be true… How can a cell be happy on top of those vertical needles?” So we tried to see whether neurons and other cells could be supported on these vertical silicon nanowires that we grow with various techniques, and started to think about what we could do with those needles. We soon found, amazingly enough, that when these cells -- whether they were neurons, stem cells or what have you -- are put on top of these needles, they actually like to be poked by these needles. Apparently they are not bothered by them – they function normally, and continue to divide and differentiate. So what we are trying to do now is to use that unique interface between vertical nanowires and living cells, cellular networks or even tissues to poke, perturb, and probe them in a cell-specific fashion.

NS: This is really quite remarkable. What are the dimensions and densities of the nanowires?

HP: Typically, we use nanowires that are ~100 nm in diameter. Their dimensions can vary quite a bit. For, say, perturbation experiments, we use 1.5- or 2-micron-long nanowires, but these can be longer. One of the reasons we have been using 2-micron-long nanowires is that longer nanowires pierce cultured cells. Since these nanowires are prepared using standard semiconductor processing techniques, we can prepare, within, say, an hour or so, 6-inch wafers full of nanowires with varying densities, varying lengths and varying diameters.

Scanning electron microscope image (false color) of a rat hippocampal neuron on a bed of vertical silicon nanowires. Nanowires penetrate the cell membrane without affecting cell viability, and can be used to efficiently deliver a wide variety of molecules into the cell's cytoplasm. (Courtesy: H. Park)

NS: It’s really impressive to see the way you not only record electrical and biochemical activity, but also introduce molecular probes using coated nanowires. Our audience will remember that the development of a technique for recording the electrical activity of spinal cord cells and neurons – the patch clamp technique -- was such a valuable tool that it led to the award of a Nobel Prize a number of years ago. So one is always very interested in new tools for this kind of study.

Kwabena, let me come back to you. One recognizes that the ultimate test of understanding of a process is the ability to reconstruct it. This is of course precisely what you’re doing -- to reconstruct the behavior of the nervous system. I want to ask what you’re learning from these remarkable emulations of the nervous system.

KB: One of the main things we want to use this system to model is the feedback between cortical areas. In the visual system alone, there are about three dozen representations of the visual world. These are called cortical areas. And there’s a massive amount of feedback. Pretty much every area that talks to another area gets feedback from it. About half the connections are feedback connections. Feedback is a real problem to deal with because you have to sort of break the loop to control the input that’s going to a particular area so that you can do a virtual experiment. And then you somehow have to put that loop back together to try to get the system to operate in the way that it’s supposed to. The solution goes back to Alan Hodgkin and Andrew Huxley, who figured out how action potentials are generated. They used the voltage clamp technique to fix the voltage of the neuron and measure the current carried by all the various ion channels. So they broke the loop; they didn’t allow the neuron to spike. But then they demonstrated in a model – that was one of the first computational neuroscience models --that when they put these currents together they could generate a spike mathematically by simulating the equations that they had derived from their experiments.

Basically we wanted to do the same thing at the system level, by characterizing what each cortical area does, and then hooking them together with the feedback in the model to try to understand top-down effects like attention. We are basically at the stage where we are dealing with single layers of neurons modeled on a single chip. We haven’t yet hooked multiple chips together in various layers of the different cortical areas. At the single-layer level, we’ve been able to model brain rhythms. One of these that is important for attention is called the gamma rhythm. This is in the 40 hertz range. It’s nested with a slower rhythm called the theta rhythm. We’ve been able to reproduce these nested rhythms in the model we’ve programmed the chip to run.
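The Hodgkin-Huxley story Boahen recounts -- measure each ionic current separately under clamp, then recombine them in a model and watch a spike emerge -- can be reproduced with the textbook equations in a short simulation. The sketch below uses the standard squid-axon parameters and simple forward-Euler integration; it is a minimal illustration, not code from the Neurogrid project.

```python
import math

# Classic Hodgkin-Huxley membrane model (standard squid-axon parameters).
# Units: mV, ms, uA/cm^2, mS/cm^2, uF/cm^2.
C_M = 1.0
G_NA, G_K, G_L = 120.0, 36.0, 0.3
E_NA, E_K, E_L = 50.0, -77.0, -54.4

def rates(v):
    """Voltage-dependent opening/closing rates for the n, m, h gates."""
    a_n = 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
    b_n = 0.125 * math.exp(-(v + 65.0) / 80.0)
    a_m = 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
    b_m = 4.0 * math.exp(-(v + 65.0) / 18.0)
    a_h = 0.07 * math.exp(-(v + 65.0) / 20.0)
    b_h = 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
    return a_n, b_n, a_m, b_m, a_h, b_h

def simulate(i_inj=10.0, t_stop=50.0, dt=0.01):
    """Forward-Euler integration; returns the membrane-voltage trace."""
    v = -65.0
    a_n, b_n, a_m, b_m, a_h, b_h = rates(v)
    n = a_n / (a_n + b_n)   # start each gate at its resting steady state
    m = a_m / (a_m + b_m)
    h = a_h / (a_h + b_h)
    trace = []
    for _ in range(int(t_stop / dt)):
        a_n, b_n, a_m, b_m, a_h, b_h = rates(v)
        # The three currents Hodgkin and Huxley characterized under clamp...
        i_na = G_NA * m**3 * h * (v - E_NA)
        i_k = G_K * n**4 * (v - E_K)
        i_l = G_L * (v - E_L)
        # ...recombined here, with the voltage loop closed again.
        v += dt * (i_inj - i_na - i_k - i_l) / C_M
        n += dt * (a_n * (1.0 - n) - b_n * n)
        m += dt * (a_m * (1.0 - m) - b_m * m)
        h += dt * (a_h * (1.0 - h) - b_h * h)
        trace.append(v)
    return trace

trace = simulate()
spikes = sum(1 for a, b in zip(trace, trace[1:]) if a < 0.0 <= b)
print(f"spikes in 50 ms: {spikes}, peak V: {max(trace):.1f} mV")
```

With a steady injected current the recombined equations fire repetitively, which is the "put the loop back together" step: the clamped measurements were made with spiking prevented, yet the assembled model spikes on its own.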

NS: Hongkun, let me turn to you and ask what are some of the things that you and your colleagues are learning from these remarkable vertical nanowires, either in recordings or in the perturbation experiments that you’re doing?

Sixteen of these 162 mm^2 chips will be assembled on a printed circuit board to build Neurogrid, the first hardware system to simulate over one million neurons in real-time while consuming less than 10 Watts. (Credit: Rodrigo Alvare)

HP: What we have shown so far is that we can indeed introduce a variety of different biochemicals into neurons, neuronal networks and now even tissues using these nanowires in a spatially selected fashion. With our recording tools we record from multiple neurons -- our initial goal is 16 -- and correlate their activities with the connectivity of these neurons, which we map optically. We are also collaborating with Sebastian Seung and Russ Tedrake at MIT, who will model the activities of the neurons that we record and come up with a model that can explain the behavior of these neurons and can be used to predict a control scheme.

NS: Let me ask about the targeting of specific cells in the network, Hongkun. In principle, one might be able to apply the concept of system identification that is used in other fields, such as engineering, to identify different layers in the circuit. I wonder if you can do that with the vertical nanowires. Are they addressable? Can one address individual vertical nanowires to provide identification, for example, of the neurons in your cultures?

HP: Yes, we are certainly aiming to do that. With our chip, which can electrically as well as chemically interface with neurons, we can individually address individual nanowires or nanowire bundles that are penetrating the cells so that we can record from individual cells. We can also couple that with microfluidics to administer neuromodulators, hormones and other molecules and see how site-selective administration of the chemicals modifies neural activity. One of the things we want to test is the model that Sebastian Seung came up with, the hedonistic neuron model, where in order to generate, for example, memory, you require not only persistent electrical activity but also chemical rewards such as dopamine. I think that our experimental platform is well suited to test these models.

NS: Kwabena, let me come back to you. Earlier in our conversation you pointed out the dramatic difference in the power requirements for the brain on the one hand, and something like a large computer, a classical computer, such as the IBM Blue Gene computer on the other hand. It is striking that the human brain requires only 10 watts whereas the Blue Gene requires a megawatt. What is the way in which the brain is achieving this wonderful energetic economy?

KB: Well, we don’t know, do we? This is really a very interdisciplinary question, because neuroscientists who study the brain usually don’t measure or try to understand what makes the brain efficient. At the same time, engineers and physicists who know about energy and things like that don’t study actual neurons. If you come and you tell me that you’ve figured out how the brain works, the first thing I’m going to ask you is, “How does it do it with 10 watts?” But if you do some calculations based on 10 watts, you can get an idea of its style of computation. Knowing how much current a single ion channel passes when it’s open, you can calculate how many ion channels can be open at the same time. When you divide that by the number of neurons in the brain, it turns out that only about a hundred to a thousand ion channels per neuron can be open at one time. And that’s an amazingly small number.

Confocal microscope section of rat hippocampal neurons on a bed of vertical silicon nanowires. Fluorescently labeled peptides were delivered into these cells using nanowires. (Courtesy: Hongkun Park)

Given that a neuron has 10,000 synapses, and something on the order of 10,000 ion channels are required to generate an action potential, a neuron has to be operating most of the time with most of its synapses inactive and most of its ion channels closed. This style of computation, where you’ve got a hundred stochastic elements opening and closing randomly, is going to be very noisy and probabilistic. So we know we have to find a style of computation that works in that regime.
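Boahen's back-of-envelope calculation can be written out explicitly. The specific figures below -- a ~1 pA single-channel current, a ~100 mV driving force, ~10^11 neurons -- are assumed textbook values for illustration, not numbers stated in the conversation.

```python
# Rough version of the 10-watt argument: how many ion channels can be
# open at once if the whole brain runs on ~10 W?
brain_power_w = 10.0
single_channel_current_a = 1e-12   # ~1 pA through one open channel (assumed)
driving_voltage_v = 0.1            # ~100 mV driving force (assumed)
neurons = 1e11                     # ~100 billion neurons (assumed)

# Power dissipated by one open channel, then the total budget split
# across all neurons.
power_per_channel_w = single_channel_current_a * driving_voltage_v
open_channels_total = brain_power_w / power_per_channel_w
open_channels_per_neuron = open_channels_total / neurons
print(f"~{open_channels_per_neuron:.0f} channels open per neuron")
```

With these assumed values the budget works out to roughly a thousand open channels per neuron, consistent with the hundred-to-a-thousand range Boahen cites, and tiny compared with the ~10,000 channels involved in a single action potential.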

NS: I remember reading about the model retina that you developed. Were there additional things that came out of the modeling – insights, perhaps, into how that part of the nervous system works so effectively?

KB: Two things. One has to do with how you deal with this kind of heterogeneity or variability between neurons. Normally people describe the retina as a kind of input-output device. And they say there’s a difference of Gaussians operation that’s performed on the input to generate the output. So it’s sort of a black box model. If you take that model and implement it on a chip, you end up with a very crummy device. But if you go and look at the circuitry, what the individual cells are doing, and how they are connected, and if you base your chip on that design, you get something that’s very robust to the heterogeneity of the devices on the chip. So this is saying something about how you translate that function into a circuit, and the kinds of constraints the circuit is solving in addition to performing that function.

So that’s more on the engineering side of things. On the neuroscience side of things, going through that process of translating this function into a circuit, we come up with specific predictions of what identified cell types in the retina are doing. Those predictions can be tested to show we can assign specific functions to a specific cell type. This is another one of the things that came out of that modeling work.

NS: One thing that I wanted to ask both of you about – and I will start with you, Kwabena – is the various commercial applications of the work that you are doing. For example, I remember reading with fascination a few years ago a book by Jeff Hawkins, the inventor of the Palm Pilot, called “On Intelligence.” He described an effort that he is making trying to use neuronal architectures to build computers that he would like to patent and bring to the marketplace. Do you see an opportunity in this domain?

KB: I built my first neuromorphic chip – an associative memory – when I was an undergrad. I published a paper on it in 1989. So I’ve been in this business for 20 years. The picture has gotten more complex since then. This is part of the motivation to build Neurogrid. We have to advance our fundamental scientific understanding before we can turn this thing into a technology. To help accelerate that process, we said, well, the kinds of things we learned from the biology, in terms of modeling things like the retina and so on, we can turn into a computer that works like the brain and that will be much more efficient at doing these simulations, which would make them more affordable. And then we could have enough computational power to really do something.

We really have to understand how the computation arises, all the way down to the ion channel level, to understand how it’s done efficiently. These multi-scale simulations require enormous amounts of computation.

NS: Hongkun, let me throw this ball to you. Do you see, either in the near future or down the road, applications beyond the scientific understanding of the technology that you’ve developed and are continuing to develop?

HP: My group is primarily interested in fundamental neuroscience, but some have shown interest in utilizing our technology beyond these studies. The things that we have demonstrated – such as the fact that the vertical nanowires can deliver any biological effector to any cell type in a spatially selected fashion – have drawn interest from many different people, and we have been working with stem cell institutes and others to demonstrate the unique utility of this particular platform. For example, one thing that we are doing is to try to differentiate individual pluripotent stem cells into a particular cell type with a shotgun approach. We can simultaneously introduce many effectors, say micro-RNAs and nuclear factors, into the same cell by simply co-depositing these molecules onto the nanowires, and we can do this in a massively parallel arrayed fashion.

Epifluorescence image of HeLa S3 cells grown on top of a two-molecule microarray. The array, printed on the nanowire substrate with a 400 micron pitch, consists of siRNAs targeted against the intermediate filament vimentin (pink) and a nuclear histone H1 protein (green). (Courtesy: Hongkun Park)

So in terms of parallel bio-assay, I think the technologies we are developing can have an impact, although the actual impact remains to be seen. And it turns out that the chip we are developing can be small enough that it might be able to be implanted into live animals. We are working together with electrical engineers to demonstrate the feasibility of such an approach. As an example, we are exploring, with some electronics companies, how to develop the backside CMOS circuitry so that you can record signals remotely. Once we’re able to do that we will have an implantable chip that can interface with neurons and other types of cellular networks.

NS: That’s a great vision of the future. Hongkun -- and Kwabena also -- I’m now going to pose some more general questions. Let me ask Hongkun: What are one or two of the big conceptual challenges that face you in developing these tools and perhaps testing them as well? What are the conceptual hurdles that you have to overcome in this work?

HP: I’m not so sure you can call this a conceptual hurdle, but one thing that struck me as a physicist about the biology of neuronal networks is that we seem to lack a framework for how to really think about the problem. In my opinion, that stems, at least partially, from the fact that we lack the appropriate tools. Let’s take an example: say I give you a small piece of rat brain and then tell you that it must have been very, very important because the rat died when I took it out. Can we discern the function of that particular neuronal circuitry? Currently we don’t know how to answer these types of questions. From my perspective, the reason we cannot do so is that we lack the tools that can perturb the system in a specific fashion and then record the signals globally, the type of tools that engineers use for system identification. So the one thing that I am trying to do while I’m learning this new, fascinating field is to try to contribute in that direction, that is, by developing such tools.

NS: Kwabena, I want to ask you the same question. What conceptual challenges are the biggest problems for you and your colleagues?

KB: I think that the biggest one is the way in which engineers are trained to get precision from individual components. If they have a chip and all the different devices on the chip are behaving differently, they’re going to try to make every device identical before they move to the next step of doing something with it. I think that that is a big stumbling block. Exposing engineers to a little more biology would help them see how the brain works, that it works despite all this heterogeneity, and that it’s able to get precision at the system level from imprecise components at the device level. The way that technology is going right now, as transistors are getting down to the nanoscale, we are getting a lot of variability between the transistors – eventually they will get so small that electrons are going to be flowing down the channel single-file, and when an electron gets trapped, the current is going to turn off and is going to turn on stochastically, just like an ion channel. At that point your digital logic is going to fail and yet you still have to try to compute. Through these neurobiologic approaches, if we can figure out how the brain is doing it, we will be able to come up with a solution for the next generation of technology.

NS: Let me throw another question to the two of you. I’ll start with you, Kwabena and then come to you, Hongkun. This is about how you interact with neuroscientists. Your research concerns are very closely related to those of neuroscientists. How do you interact with them?

KB: My lab is in the “Bio-X” center at Stanford, and the whole point of “Bio-X” is that X stands for anything. All these biologists and engineers and physicists and mathematicians and all these guys are cheek by jowl together in the same place. I’m part of the “neuro” cluster, which consists of six different “neuro” people – including myself, Krishna Shenoy, who studies the motor cortex, and Tirin Moore, who studies attention. Tirin and I have a collaboration with a student who is recording from multiple layers of cortex at the same time so that we can constrain our model by looking at activities of different layers of cortex. I’m also collaborating with Eric Knudsen – he’s in the building next door. He studies the optic tectum but he’s also interested in attention – basically how you direct your gaze to different things that come up or to interesting targets. The tectum is a little brain by itself. It gets sensory input at the superficial layers and has motor output from the deep layers, and it closes the loop. We are able to record from the various layers in brain slices and we can also do behavioral experiments with chicks. And the tectum is more accessible than the cortex because the cells we are particularly interested in are in separate nuclei. That’s another area of collaboration – we share a student who’s modeling the tectum and doing experiments in vitro and in vivo.

NS: That’s great. I’ve certainly heard a lot about Bio-X. It’s a really nice incubator for bringing people together and encouraging collaboration. Hongkun, how is this working for you? How do you keep in close touch with neuroscientists interested in similar problems?

HP: I’m certainly blessed by the wonderful colleagues and the wonderful collaborators that I have. As I said, I really knew nothing about neurobiology, and without them I could not have started this endeavor. When I first started, I learned a lot from Sebastian Seung, Venkatesh Murthy and Markus Meister, who are all close by. Venky Murthy and Markus Meister are colleagues at Harvard who are affiliated with the Center for Brain Science, which I am part of. Sebastian Seung at MIT taught me how to think about neural networks and what the important questions are. I also learned from Clay Reid at Harvard Medical School about the power of imaging tools. All these interactions helped me greatly in terms of identifying the problems that they care about. I think collaboration is the crucial part of this particular endeavor and I know that, without them, I would not be where I am.

NS: My last question is this: If you could be granted instantly the answer to a question – let’s imagine that it was from an authority that had the ability to do this – what would the question be? I’ll start with you, Kwabena.

KB: The question would be, “How come when I look out there I see a single world when there are three dozen representations of the world inside my brain?”

NS: That’s a very interesting question, because of course the inputs that come in are disparate, and the way in which they are fused to give us a single perspective of reality is a fascinating problem. Hongkun, what is the question you would most like to have answered?

HP: We start from a single cell, and then become a fully functional biological organism with complex organs such as the brain. I’d love to know how this wonderful “self assembly” happens.

NS: That’s another wonderful question. Gentlemen, this has been a real treat for me. I have enjoyed it tremendously. I look forward to seeing this interface between neuroscience and nanoscience develop further, and I know the two of you will be right on the cutting edge there, pushing this forward. Thanks very much.

KB & HP: And thank you very much.

(The teleconference was held on January 19, 2010.)