The Future of Neurotechnology: A Roundtable Discussion

by Lindsay Borthwick

Four Kavli neuroscientists reflect on the major obstacles in brain research today and the remarkable new technologies that may soon overcome them.

The brain remains a mystery largely because scientists lack the tools to study it. That is beginning to change, however, due to rapid progress in the development of so-called "neurotechnologies" that are helping them make better measurements of the brain. These new tools could lead to an era of discovery in neuroscience as profound as those that followed the launch of the Hubble Space Telescope in astronomy and the invention of DNA sequencers in genetics.

But to get there, radical new ideas are needed to overcome the longstanding technical challenges that have limited scientists’ efforts to study the brain, such as the ability to record the activity of the vast networks of neurons that mediate the way we think, feel and remember.

Fortunately, scientists are redoubling their efforts to tackle those challenges, spurred in part by President Obama’s BRAIN (Brain Research for Advancing Innovative Neurotechnologies) Initiative. For example, two Kavli neuroscience institutes, at New York's Columbia University and the University of California San Diego, have launched research centers focused on the development of neurotechnologies. And Columbia’s NeuroTechnology Center annually hosts a Kavli Futures Symposium that unites some of the nation’s leading experts in neuroscience, nanoscience, genetics, engineering and computer science, each of whom is working to help unravel the brain’s complexity.

The Kavli Foundation sat down with some of Columbia's neurotechnologists to discuss the remarkable new tools that are poised to transform the science of the brain.

The participants were:

  • Rafael Yuste – professor of biological sciences and neuroscience at Columbia University, director of the NeuroTechnology Center and co-director of the Kavli Institute for Brain Science. Yuste is a world leader in the development of optical methods for brain research.
  • Liam Paninski – professor of statistics at Columbia University in New York, co-director of the NeuroTechnology Center and of the Grossman Center for the Statistics of the Mind. Using statistics, he is studying how information is encoded in the brain.
  • Darcy Peterka – research scientist at Columbia University and director of technologies at the NeuroTechnology Center. Peterka is working on developing novel methods for imaging and controlling activity in the brain.
  • Ken Shepard – professor of electrical engineering and biomedical engineering at Columbia University and co-director of the NeuroTechnology Center. His research is focused on combining components of biological and electronic systems to create bioelectronic devices.

The following is an edited transcript of a roundtable discussion. The participants have been provided the opportunity to amend or edit their remarks.

THE KAVLI FOUNDATION (TKF): “New directions in science are launched by new tools much more often than by new concepts.” So said Cori Bargmann, who spearheaded the advisory panel for the BRAIN Initiative, during her kick-off presentation at the Symposium. Do you agree?

Rafael Yuste: I do. In fact, we used that exact quote, from the physicist Freeman Dyson, in a white paper we wrote for the Brain Activity Map project, which evolved into the BRAIN Initiative.

Normally, people think that a revolution in science is as simple as having a bright new idea. But if you dig deeper, most of the major revolutions have happened because of new tools. Much of the work we heard about over the past two days was about new methods, and once we as a community develop new methods, the next generation of scientists will be able to see things no one has seen before.

Liam Paninski: There is a long history of theoretical and computational ideas in neuroscience that have percolated for years, even decades, but they have been waiting for the tools to come along to test them out. And that’s what’s really exciting about where the field is today.

TKF: Can you give me an example?

Paninski: Sure. I saw a talk by a neuroscientist the other day who has done some beautiful work on understanding the motion detection system of the fly: essentially, how a fly figures out which way it’s going. Theories about this have been around since the 1950s, but it’s only in the past year that people have actually been able to test these theories in detail, by mapping the brain circuits involved in detecting motion.
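
The 1950s-era theories of fly motion detection that Paninski alludes to are usually associated with the Hassenstein-Reichardt correlator. The sketch below is a minimal, illustrative version of that idea only; the stimulus, delay, and filter choices are assumptions for demonstration, not the model from the talk he describes.

```python
# A minimal sketch of a Hassenstein-Reichardt-style motion detector: correlate
# each photoreceptor's delayed signal with its neighbor's current signal and
# subtract the mirror-symmetric arm. All parameters here are illustrative.
import numpy as np

def reichardt_correlator(left, right, delay):
    """Direction-selective output from two neighboring photoreceptor signals.

    Positive output indicates motion from 'left' toward 'right'; negative
    output indicates the opposite direction. 'delay' is a lag in samples,
    standing in for the low-pass filter of the full model.
    """
    left_delayed = np.roll(left, delay)
    right_delayed = np.roll(right, delay)
    return np.mean(left_delayed * right - right_delayed * left)

# Example: a sinusoidal grating drifting past two receptors separated in space.
t = np.linspace(0, 1, 1000)
phase_shift = 0.3  # spatial offset between the two receptors, in radians
stimulus_left = np.sin(2 * np.pi * 5 * t)
stimulus_right = np.sin(2 * np.pi * 5 * t - phase_shift)  # reaches 'right' later

print(reichardt_correlator(stimulus_left, stimulus_right, delay=10))  # positive
print(reichardt_correlator(stimulus_right, stimulus_left, delay=10))  # negative
```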

There are also a handful of theories about how information propagates through neural circuits or how memories are encoded in the structure of neural networks that we’re now able to test due to new brain research tools.

Rafael Yuste
Rafael Yuste is developing optical methods for studying brain circuits in the cerebral cortex.

Yuste: Today, Sebastian Seung, a computational neuroscientist at Princeton, gave a similar example for direction selectivity in the retina of mammals. He argued that it took 50 years for people to figure this out, and that the critical advances came with the introduction of new techniques. So that’s a very clear example of how with new tools we’re beginning to solve these long-standing questions in neuroscience.

Darcy Peterka: I think in some ways, however, the distinction between tools and ideas depends on your perspective. The things that become tools for neuroscientists are sometimes fundamental discoveries in other fields such as chemistry or physics. People may not have realized at first the value of these discoveries outside of those fields, but the merger of ideas across disciplines often creates opportunities to apply fundamental discoveries in new ways.

TKF: Rafa, in your wrap-up, you called the Kavli Futures Symposium “a dazzling feast of exciting ideas and new data.” What did you hear that you’re feasting on?

Yuste: I was very excited by things that I’d never seen before, like the deployable electronics that Charles Lieber, a chemist at Harvard, is working on. He’s embedding nanoscale electrical recording devices in a flexible material that can be injected into the brain. I thought it was just a spectacular example of a nanotool that could transform our ability to record the activity of networks of neurons.

In terms of new imaging tools, I’d never seen the type of microscopy that the physicist Jerome Mertz, from Boston University, was showing: phase-contrast microscopy in vivo. He has transformed a relatively simple microscope, the kind that most of us used in school, into a tool to look at thick tissue in vivo, including brain tissue. It was like a sip of fresh water.

On the computational side, I thought Konrad Kording’s work on neural connectivity was very refreshing. Kording is the neuroscientist at Northwestern University who showed that by using mathematics to analyze the connections between nerve cells in the worm C. elegans, a widely used model organism, you can distinguish the different cell types that make up its nervous system. I’ve worked on that problem myself, but I never looked at it from the angle he proposed.

Overall, I felt a little bit like a kid in a candy store where all the candy was new!

Liam Paninski
Liam Paninski is using statistics to study the electrical signals that carry information in the brain.

Paninski: The talk by George Church, who helped to kick-start the Human Genome Project and the Brain Activity Map Project with Rafa, was just a wonderland of exciting new things. He’s obviously done some radical science in his career, but the technique he talked about – FISSEQ, for fluorescent in situ RNA sequencing – was really exciting. It’s a way of looking at all the genes that are expressed, or turned on, in living cells. It has all kinds of applications in neuroscience. If he gets the technique working reliably, it will be huge.

Peterka: Jerome Mertz also introduced us to a technology that is really interesting because it brings together two fields – optical communication and biological imaging – that haven’t been combined very powerfully before. He has developed an incredibly thin, flexible microscope that can be inserted deep into the brain. To get it working, he had to figure out how to transmit a lot of spatial information, carried by light through an optical fiber, from one end of the fiber to the other without degrading the image. The telecommunications industry has already solved this problem in cell phones and he has adapted the solution for optical imaging.

Ken Shepard: What stood out for me is the continued scaling of technologies designed to make electrical recordings of brain activity. We’re seeing the development of higher and higher electrode counts, which lets us record from more and more cells.

TKF: Ken, as you just pointed out, one of the major themes of the symposium was finding ways to observe the activity of more neurons – a goal that is shared by the BRAIN Initiative. Michael Roukes, from the Kavli Nanoscience Institute at the California Institute of Technology, has lamented that existing tools for making electrical recordings can only monitor a couple hundred neurons at once. Where is that technology moving?

Ken Shepard
Ken Shepard has a BRAIN Initiative grant to build nanoscale sensors for brain mapping.

Shepard: One of the issues is that solid-state electronics and the brain have different form factors. One of them is hard and flat; the other is round and squishy. The challenge is to reconcile those two things to make tools that are as non-invasive as possible. The less invasive they are, the less tissue damage they cause and the longer you can leave them in the brain.

There are two ways of doing this: One is to try to make the solid-state stuff as small as possible, so tool developers are trying to make the shanks that contain the electrodes and are inserted into the brain very thin. Tim Harris, director of applied physics at Janelia Research Campus, part of the Howard Hughes Medical Institute, said yesterday that you’d better make them no more than 10 microns thick – that’s 10 millionths of a meter – if you can. The second way is to make the electronics flexible, as Charles Lieber is doing. The idea is that if the device is more conformal, it will be more acceptable to the tissue.

As we saw yesterday, nanotechnologists are moving both of these approaches forward and trying to scale them up to record simultaneously from more neurons.

TKF: But there is a limit to the number of neurons that can be recorded electrically, isn’t there? I think Michael Roukes argued that the limit is 100,000 neurons, after which neuroscience will need a new paradigm.

Shepard: Yes. One of the problems with electrical recording, which I think Michael explained really nicely, is proximity. You have to get the electrodes very close to the neurons that you’re trying to record from, which means that if you're trying to record from a lot of cells you need an incredible density of electrodes. Beyond 100,000 neurons, it’s just not practical.

So what can we use instead? Michael argued that optical tools could take over from there. In fact, I’m working with him on a tool we call “integrated neurophotonics.” We received one of the first BRAIN Initiative grants to develop it. Basically, we’re aiming to put the elements of an imaging system – emitter pixel and detector pixel arrays – in the brain. We’ll still be sticking probes in the brain but they’ll be much smaller and therefore less invasive. And because they’ll detect light rather than electrical signals, they don’t require the same proximity. We think that 25 probes will be enough to record the simultaneous activity of 100,000 neurons.

Paninski: If you can solve the computational problem of demixing the signals.

Shepard: Absolutely. I saw you light up when Michael was showing all that stuff. It’s going to be an incredible computational problem.
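
To give a sense of the demixing problem Paninski raises: each detector sees a weighted mixture of many neurons' signals, and the analysis has to recover the individual sources. The toy sketch below uses non-negative matrix factorization, one common approach to this kind of data; the simulated neuron counts, mixing weights, and noise levels are assumptions for illustration, not the integrated-neurophotonics pipeline itself.

```python
# Toy demixing example: recover per-neuron activity traces from detector
# recordings that each mix many neurons together. All values are simulated.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_neurons, n_detectors, n_timepoints = 5, 20, 2000

# Sparse, non-negative activity traces (one row per neuron) and a non-negative
# mixing matrix (how strongly each detector sees each neuron).
activity = rng.random((n_neurons, n_timepoints)) * (rng.random((n_neurons, n_timepoints)) > 0.95)
mixing = rng.random((n_detectors, n_neurons))

# Detector recordings = mixtures of the neural signals plus a little noise.
recordings = mixing @ activity + 0.01 * rng.random((n_detectors, n_timepoints))

# Factor the recordings back into estimated mixing weights and activity traces.
model = NMF(n_components=n_neurons, init="nndsvda", max_iter=500)
est_mixing = model.fit_transform(recordings)   # detectors x components
est_activity = model.components_               # components x timepoints

print(est_mixing.shape, est_activity.shape)    # (20, 5) (5, 2000)
```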

TKF: The other big challenge in neurotechnology is the problem of depth. Even the best optical tools we have can’t see more than about a millimeter into the brain. Why is that?

Peterka: The problem is that a beam of light doesn’t travel very far in brain tissue without being scattered out of focus. People are working to overcome this by developing ways to see through opaque materials, but the devices they’ve developed are still too slow to be of practical use to neuroscientists.

Paninski: Astronomers have developed techniques to solve this scattering problem that correct the images taken by ground-based telescopes for atmospheric disturbances. They call this adaptive optics and there’s lots of interest in using these same techniques in biology. But the research is still in the early stages.

Darcy Peterka
Darcy Peterka is director of technologies for the NeuroTechnology Center.

Peterka: I would say there are two types of adaptive optics. There’s traditional adaptive optics, from astronomy. For example, imagine looking through a Coke bottle. The image you see is distorted, but you can still make it out. Now imagine that you’re looking through an eggshell or a piece of paper. You would see light but no form or structure. That’s closer to the problem neuroscientists face when trying to image the brain. Until recently, people considered the problem too difficult to solve. But in the last couple of years, some researchers have found ways to focus light scattered by a slice of chicken breast. They’ve also imaged through eggshell and a mouse ear. It’s pretty remarkable.

Yuste: Essentially, there are enough pieces in place that we can actually imagine solving a problem that seemed impossible just two or three years ago. And this is due to the interaction of completely disparate fields: physicists working in optics, engineers building very fast modulators of light and computer scientists developing mathematical approaches to reconstructing images and cancelling out aberrations. So the solution is not here, but the path toward it is starting to be clear.
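
The focusing-through-scattering-media results Peterka describes are commonly achieved by iterative wavefront shaping: step through the phase of each controllable segment of the input beam and keep whatever setting brightens the target spot. The sketch below is a toy version of that procedure under stated assumptions; the random "transmission matrix" stands in for real scattering tissue, and the segment and phase-step counts are arbitrary.

```python
# Toy wavefront-shaping sketch: sequentially optimize the phase of each input
# segment to maximize intensity at a single target spot behind a scatterer.
import numpy as np

rng = np.random.default_rng(1)
n_segments = 64                  # controllable regions of the input wavefront
n_phase_steps = 16               # candidate phase values tried per segment

# Model the scattering medium as random complex weights connecting each input
# segment to one target output spot (an illustrative assumption).
transmission = (rng.normal(size=n_segments) + 1j * rng.normal(size=n_segments)) / np.sqrt(2 * n_segments)

phases = np.zeros(n_segments)    # current phase mask

def target_intensity(phases):
    field = np.sum(np.exp(1j * phases) * transmission)
    return np.abs(field) ** 2

print("before:", target_intensity(phases))

# For each segment, try a set of phases and keep the one that brightens the focus.
candidates = np.linspace(0, 2 * np.pi, n_phase_steps, endpoint=False)
for seg in range(n_segments):
    best_phase, best_val = phases[seg], target_intensity(phases)
    for p in candidates:
        phases[seg] = p
        val = target_intensity(phases)
        if val > best_val:
            best_phase, best_val = p, val
    phases[seg] = best_phase

print("after:", target_intensity(phases))   # a much brighter focus at the target
```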

TKF: The third challenge – and the third focus of the symposium – is computation, which Janelia's Tim Harris has underlined. He has talked about how difficult it is to handle the data coming from an electrode with just a few hundred channels. Are experimental neuroscientists running ahead of those who are thinking about how to handle the data and what it all means?

Paninski: I think that’s a huge bottleneck. There are massive datasets becoming available, and the people who build the computational tools are catching up, but there needs to be a lot more investment and focus in that area. We saw the same thing in systems biology and in genomics, right? First the data came, and then people started figuring out how to deal with them. We’re at the first stage now in neuroscience, and I think we’re just beginning to build up the computational and statistical infrastructure we need.

Peterka: Another hindrance to the dissemination and analysis of the data is a lack of standardization. Geneticists figured out a way to store and share DNA sequence data, but in neuroscience there is still very little standardization.

Paninski: That’ll come eventually. I don’t think that’s the major roadblock. What I see as lacking right now are students and post-docs who are fluent in both languages: computation and neuroscience.

TKF: Liam, do you think the catch-up will just happen in time, or do there need to be incentives in place to move things along?

Paninski: The objective is in place, and as neuroscientists generate more and more data, they are becoming more and more desperate to work with computational scientists. And that brings more funding into the computational realm. But on the other hand, I’m starting to lose trainees to Google and Facebook, which need people who can analyze big data.

Yuste: One of the most popular majors in college is computer science. I think that will be good for neurotechnology because we’ll have students who learned how to code when they were in middle school or high school. They’ll be completely fluent by the time they get to the lab, and I think they’ll lead the synthesis between computer science and neuroscience that has to happen.

TKF: At the symposium, we heard a lot about new efforts to identify the different types of cells that make up the brain. I think most people would be surprised to learn that we don’t really have a good handle on that. Why is there a renewed focus on this?

Yuste: Neuroscientists worked a lot on this issue of cell types in the past, and it reminds me of an old idea from Georg Hegel, the German philosopher, who argued that history progresses in an iterative way. He called that the dialectic method. You end up circling back to a problem but at a higher level, like a spiral.

With the problem of how many cell types there are in the brain, we’re sort of going back to the beginning of neuroscience, except we’re doing it in a more quantitative way. Neuroanatomists working 100 years ago identified many cell types, but we don’t have numbers associated with them. Now, we can visit this question anew with the full power of mathematics and computer science. We’ll probably confirm what we already know and swing up this Hegelian spiral to another level in which we’ll discover new things that people didn’t see before because they didn’t have these computational tools.

The tool issue is an important one because the only difference between us and the 19th-century neuroanatomists is that we have better tools, which give us more complete data about the brain. We are not smarter than they were.

Paninski: These cell types are serving as footholds to deeper questions about brain function. Sure, if I hand you piles and piles of data about different cells, computation can help you answer certain questions, such as what does it mean to be a different cell type? How many different cell types are there? What are these cell types useful for? But to me, cell type is just a starting point, a tool that allows you to do more interesting research, rather than the end goal.
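
One simple version of the quantitative cell-typing the speakers describe is to cluster cells by measured features and ask how many groups the data support. The sketch below does this with k-means and a silhouette score on simulated features; the feature choices and the three underlying "types" are assumptions for illustration only.

```python
# Toy quantitative cell-typing: cluster simulated cells by two features and
# score partitions with different numbers of clusters. All data are simulated.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(2)

# 300 cells drawn from three hypothetical types, each described by two features
# (say, spike width in ms and firing rate in Hz), plus measurement noise.
centers = np.array([[0.3, 5.0], [0.8, 20.0], [0.5, 50.0]])
features = np.vstack([c + rng.normal(scale=[0.05, 3.0], size=(100, 2)) for c in centers])

# Try a range of cluster counts and score each partition.
for k in range(2, 6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features)
    print(k, round(silhouette_score(features, labels), 3))
# The best silhouette score suggests how many putative cell types these features support.
```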

TKF: The circuits that traffic information through the brain have been even more of a mystery than cell types. Are we starting to glean some patterns in the way that brains are organized or how circuits operate?

Yuste: There was a talk in this meeting, by Chris Harvey, a neuroscientist from Harvard, that touched on a model for how neural circuits operate called the attractor model. It’s still debated whether it applies to brain circuits or not, but if it does, this is the kind of model that would apply widely to neural circuits in pretty much any animal. Still, it’s very difficult to test whether the attractor model is true or not because doing so would require the acquisition of data from every neuron in a circuit and the ability to manipulate the activity of these neurons. That’s not something we can do right now.
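
For readers unfamiliar with attractor models: the textbook example is a Hopfield-style network in which stored activity patterns act as attractors, so a corrupted pattern relaxes back to the stored one. The sketch below is that standard toy, included only to illustrate the concept; it is not the specific model from Chris Harvey's talk, and its sizes and update rule are conventional assumptions.

```python
# Minimal Hopfield-style attractor network: store patterns with a Hebbian rule,
# then let a noisy state relax back to the nearest stored pattern.
import numpy as np

rng = np.random.default_rng(3)
n_neurons, n_patterns = 100, 3

# Store random binary (+1/-1) patterns.
patterns = rng.choice([-1, 1], size=(n_patterns, n_neurons))
weights = (patterns.T @ patterns) / n_neurons
np.fill_diagonal(weights, 0)

# Start from a noisy version of the first pattern (20% of units flipped).
state = patterns[0].copy()
flip = rng.choice(n_neurons, size=20, replace=False)
state[flip] *= -1

# Asynchronous updates: each neuron aligns with its summed input.
for _ in range(5):
    for i in rng.permutation(n_neurons):
        state[i] = 1 if weights[i] @ state >= 0 else -1

print("overlap with stored pattern:", (state == patterns[0]).mean())  # usually 1.0
```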

Flexible optical fiber
Using a single, flexible optical fiber, physicist Jerome Mertz has imaged white light objects such as the letters "BU." A tool such as this one may help neuroscientists to see deep inside the brain.

Paninski: You can count on one hand the neural circuits we understand. So I think it’s just too early right now to really make any conclusions about whether circuits in the retina actually look like those in the cortex, for example. Maybe we will be able to in a couple more years as some of these new methods for monitoring and manipulating large numbers of neurons come online.

TKF: What about human applications of neurotechnology? How closely connected are the tools for basic neuroscience research and those aimed at treating brain disorders such as Parkinson’s or paralysis?

Peterka: In general, most of the neurotechnologies being used in humans are a little bit bigger than those being used in the lab and lag behind them because of the approval process. But some multielectrode arrays, such as those that John Donoghue implants in people with paralysis to restore mobility, are pretty similar to what people are using in cutting-edge neuroscience labs to study rats or primates.

Yuste: Donoghue’s laboratory has both nanoscientists who are building these cutting-edge tools and a team that works with human patients. So there are places where these technologies are being rapidly developed or adopted to treat brain disorders or to restore lost function.

Paninski: At the moment, I think there are about 20 technologies that can interact with the different parts of the brain in specific medical contexts. John talked about cochlear implants for assisting with hearing loss, deep brain stimulation for Parkinson’s disease and retinal implants for blindness, and in all of these cases there are related basic science questions that people are working hard to tackle. For example, to understand what deep brain stimulation is doing, you really need to understand subcortical circuits. So in some cases medicine is driving basic research that probably wouldn’t be done if it wasn’t for the potential health impact.

I started in John’s lab when he was just getting into multielectrode recording. That’s what set me on the path toward statistics, because it was very clear that you needed good statistical models of neural activity to develop useful neural prosthetics.
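
As a rough illustration of the statistical-modeling problem behind neural prosthetics that Paninski describes: the decoder must map recorded spike counts to an intended movement. The sketch below fits a plain linear decoder to simulated, cosine-tuned spike counts; it stands in for the richer models (Kalman filters, point-process GLMs) used in practice, and every number in it is a simulated assumption.

```python
# Toy neural decoding: predict 2-D intended velocity from simulated spike counts.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
n_trials, n_neurons = 500, 40

# Simulate intended hand velocities and cosine-tuned, Poisson-noisy spike counts.
velocity = rng.normal(size=(n_trials, 2))
preferred = rng.normal(size=(n_neurons, 2))        # each neuron's preferred direction
rates = np.exp(0.5 + velocity @ preferred.T)       # tuning curves
spikes = rng.poisson(rates)                        # noisy spike counts

# Fit a decoder on the first half of the trials, test on the second half.
decoder = LinearRegression().fit(spikes[:250], velocity[:250])
predicted = decoder.predict(spikes[250:])
print("decoding R^2:", round(decoder.score(spikes[250:], velocity[250:]), 3))
```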
