Extreme Machines: What Science Needs from Computers
Computers are the workhorses of science. Without their power to crunch numbers, control instruments, turn raw data into intelligible patterns or pictures, and test theories with simulations, most of what we now know about ourselves and the universe might still be a mystery. But computing technology as we know it is reaching its limits, even as scientists look forward to new discoveries that are possible only with a great leap in computing power.
How can computing make that leap? That was the subject of the Kavli Futures Symposium, “Real Problems for Imagined Computers,” a meeting of minds between leading computer experts and scientists in disciplines such as cosmology and neuroscience that use computers to process huge quantities of data. “Computing is something important that we have in common,” says lead organizer Roger Blandford, head of the Kavli Institute for Particle Astrophysics and Cosmology (KIPAC) at Stanford University.
A Rich New Dialogue
The Kavli Futures Symposia give small groups of leading scientists an opportunity to envision new directions in the fields of astrophysics, nanoscience, neuroscience and theoretical physics. Each symposium is a uniquely focused and creative retreat; a key to the series is gathering scientists in a setting where they can freely exchange ideas while imagining the future of their disciplines together.
Held over three days in the resort town of Muelle, Costa Rica, in January 2009, this was the second in the Futures series. Like the first, held in 2007 in Ilulissat, Greenland, on the interaction of biology and nanoscience, it brought different disciplines together to spur networking and sketch out a tentative road map for future thought and research.
Blandford and the symposium’s other leaders, Larry Abbott of Columbia University and Michael Roukes of the California Institute of Technology, say the conference was a success in surprising ways. Conceived as mostly an intellectual one-way street, with the users of computing looking for answers from the computer specialists, it turned into something much richer: a dialogue in which both sides learned as well as taught.
“I think everybody came in with a fair level of ignorance,” says Blandford. The computer users – astrophysicists, nanoscientists and neuroscientists – had their eyes opened to the real nature (and limitations) of computing power. “I wasn’t aware of all the computer issues that we faced,” says Abbott, a professor of neuroscience at Columbia and a member of the university’s Kavli Institute for Brain Science. Abbott says he learned more about the many challenges associated with the shift to parallelism – an architecture in which multiple processors work simultaneously on a problem to boost computer power. Afterwards, he “went and bought a more parallel computer than I currently had.”
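In practice, parallelism simply means splitting a job into pieces that several processors can work on at the same time. A minimal sketch in Python (using the standard multiprocessing module; the workload here is an arbitrary stand-in, not anything discussed at the symposium) shows the pattern:

```python
# Toy illustration of parallelism: split one big calculation across CPU cores.
# The workload is arbitrary; the point is that independent chunks can be
# handled simultaneously by separate processes.
from multiprocessing import Pool

def process_chunk(chunk):
    """Stand-in for a costly per-chunk computation."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::4] for i in range(4)]   # four interleaved slices

    with Pool(processes=4) as pool:           # four workers run at once
        partial_sums = pool.map(process_chunk, chunks)

    print(sum(partial_sums))                  # combine the partial answers
```

Real scientific codes rarely split this cleanly, of course; the pieces must constantly exchange data, which is where the harder problems described below begin.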
Roukes, who co-directs Caltech’s Kavli Nanoscience Institute, says the symposium may have provided a reality check for scientists who were underestimating their future computing needs and overestimating the supply of computing power. “There are many disconnected areas of research that are extrapolating where they want to be tomorrow based on the computing power that they assume will be available to them,” says Roukes. He says the symposium helped point the way toward “a commonality of approach to assess realistically what we need.”
The computer scientists got an education as well. “Arguably the most interesting connection was the one that was made between some of the computer scientists and the neuroscientists,” says Blandford. “It got the computer scientists to ask what really goes on in the brain … A lot of them hadn’t been exposed to this.” Abbott says they got “some idea of how neurosystems do [computation] differently,” and this “was intriguing to everyone.”
Computing that Meets the Demands of Science
The roster of 22 participants (plus MIT astrophysicist Edward Farhi, joining by teleconference) represented three of the major fields studied by the 15 Kavli Institutes – neuroscience, nanoscience and astrophysics – as well as some of the most advanced work in computer science and engineering. Several of the participants had backgrounds that bridged the gap between computer users and computer scientists. There was Tim Cornwell, for instance, a software designer for the Australia Telescope National Facility. Stanford’s Tom Abel brought expertise in both astrophysics and supercomputing. Jacek Becla, also from Stanford, covered astrophysics and information systems. Several neuroscientists, including Abbott, Terry Sejnowski of San Diego’s Salk Institute and Xiao-Jing Wang of Yale, billed their specialty as “neurocomputation.”
As these labels suggest, computer science has already made inroads into sciences that study the workings of the human brain and the physical laws that govern the universe. Fittingly, these are fields that make heavy demands on computers. They need plenty of computing power for simulations of the early universe or of neural circuits involving millions of neurons and synapses. Modern telescopes also create enormous data processing requirements. Blandford says the 8.4-meter Large Synoptic Survey Telescope (LSST) proposed for construction in Chile will produce three petabytes of data (more than 3 million gigabytes) in its first three years of operation.
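Taking that quoted figure at face value, a quick back-of-the-envelope calculation (decimal units and round-the-clock operation assumed) gives a sense of the sustained data rate such a survey implies:

```python
# Rough data rate implied by the quoted LSST figure of three petabytes
# over the first three years. Assumes decimal units (1 PB = 1e15 bytes)
# and continuous operation; a real survey cadence would be burstier.
total_bytes = 3e15
seconds = 3 * 365 * 24 * 3600

rate_mb_per_s = total_bytes / seconds / 1e6
print(f"Average sustained rate: ~{rate_mb_per_s:.0f} MB/s")   # roughly 32 MB/s
```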
Nanoscientists are computer users as well, but their most crucial role at the intersection of computing and sciences may be as developers of new, ever smaller physical structures for computing – such as single-atom memory and single-electron logic gates. “Nanoscience represents the next phase of miniaturization of electronic hardware that can be used for computation purposes,” says Roukes.
Such devices may one day be the basis for so-called “extreme” computers – a thousand times more powerful, by one definition, than today’s machines – but not before some daunting problems are solved. One of these is the challenge of just “moving data around,” as Abbott puts it. Even if new processors were a thousand times faster, computers as designed today could not take advantage of that speed. Moving data to and from the processors at such a blinding pace would consume so much electricity that the machine would need a power plant next door.
One of the symposium participants, Stanford University computer scientist Bill Dally, summed up this conundrum with a slogan: “FLOPS are free; free the FLOPS.” As Roukes explains, the speed of processors (measured in floating-point operations per second, or FLOPS) is more than adequate. They can work so fast that FLOPS are virtually free. “The problem is that they are not capable of operating to their capacity because they are starved for local access to memory,” he says. In effect, the FLOPS are confined by an architecture that needs too much energy to send, receive and store data.
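A rough, purely illustrative calculation shows why the arithmetic itself is the cheap part. The energy figures below are assumed ballpark values, not numbers presented at the symposium, but they convey the scale of the problem for a machine a thousand times faster than today’s:

```python
# Illustrative back-of-the-envelope: arithmetic energy vs. data-movement energy
# for a hypothetical exaflop-scale ("extreme") machine. The per-operation and
# per-byte energy costs are assumed ballpark values, not measured figures.

flops_per_second = 1e18          # one exaflop/s, ~1000x a petaflop machine
joules_per_flop = 1e-10          # assumed: ~100 picojoules per operation
joules_per_byte_moved = 1e-9     # assumed: ~1 nanojoule to move a byte off-chip
bytes_moved_per_flop = 1.0       # assumed: one operand fetched per operation

compute_watts = flops_per_second * joules_per_flop
movement_watts = flops_per_second * bytes_moved_per_flop * joules_per_byte_moved

print(f"Arithmetic alone: {compute_watts / 1e6:.0f} megawatts")   # ~100 MW
print(f"Data movement:    {movement_watts / 1e6:.0f} megawatts")  # ~1000 MW
```

Under these assumptions, shuttling the data costs roughly a gigawatt, an order of magnitude more than the arithmetic itself and indeed power-plant territory.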
One way around this roadblock is software that makes better use of the available hardware. “The computer scientists were in fact quite hardnosed about this,” says Blandford. “They are very involved in integrating the hardware and the software.” The future lies in finesse – enabled by advances in software – rather than raw power.
The Human Brain and Computer Architecture
Another route, not open today but maybe available in the longer run, is to change computer architecture. And the presence of neuroscientists at the symposium naturally brought up the architecture of the human brain. No artificial computer does so much with so little energy – 15 watts – as the brain does. The brain’s tasks are different from the work that scientists demand of computers, and it’s very slow at number-crunching. But it was a fascinating subject for the computer experts.
The brain’s architecture is radically different from the standard computer model, with memory dispersed amid the millions of neuronal processors (most likely in the connections between neurons) rather than salted away in a dedicated space. In concept, Abbott says, this resembles an architecture called processor-in-memory (PIM), once proposed for computers but abandoned in favor of the standard design that separates memory from the central processing unit. To mimic the brain, he says, “you would have to arrange processor and memory on every chip so that the processor and data it needs are in proximity.”
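A toy accounting exercise, entirely illustrative and with made-up counts, hints at why such a layout matters: in a conventional design every operand must cross the boundary between the processor and a distant memory, while in a PIM-style design most accesses stay local.

```python
# Toy comparison of notional "remote" data transfers in a conventional
# shared-memory layout vs. a processor-in-memory (PIM) style layout.
# Purely illustrative; the counts are made up to show the scaling, not measured.

N_PROCESSORS = 1000
ITEMS_PER_PROCESSOR = 1000

# Conventional layout: every operand crosses the processor-memory boundary.
remote_conventional = N_PROCESSORS * ITEMS_PER_PROCESSOR

# PIM-style layout: operands sit beside the processor that uses them;
# only the final partial results travel any distance to be combined.
remote_pim = N_PROCESSORS

print(f"Conventional layout: {remote_conventional:,} remote transfers")
print(f"PIM-style layout:    {remote_pim:,} remote transfers")
```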
Two of the discussions at the symposium approached this issue from the neuroscience side. Astrid Prinz of Emory University, a specialist in biocomputation, discussed the modeling of neurons and networks. Yale’s Xiao-Jing Wang spoke on cortical dynamics. There was plenty to think about in contemplating the brain as a computing model. But modeling anything after the brain is problematic because there are so many unsolved mysteries about the brain itself. There is no “standard model” of neuroscience, comparable to what exists in astrophysics, for example. Scientists are still working their way toward a shared view of how the brain works at the basic biochemical and electrical level. This makes neuroscience an exciting field, but it also means scientists have yet to decode the secret of brain computing so that it can be translated into new technology.
Roukes says getting to the right question “is sometimes 90 percent of the battle” in science. To him and others, the symposium seemed to confirm that the question it started with – how shall science get the computing power it needs? – was indeed the right one, even if it has no clear answer yet. The symposium also highlighted the need for collaboration between communities that historically have done their work more or less separately. Sejnowski of the Salk Institute and Microsoft software architect Blaise Agüera y Arcas showed how scientists can bridge the divide between academia and commercial applications. The conference also showed how knowledge needs to flow across the gap between the computer users and the computer scientists. The computer scientists need to know exactly what the users require (not just any new computer will do). They can also pick up potentially fruitful ideas from the users’ own research.
The extreme machines of the future may differ radically from the computers of today, and what shape they take is not yet known. But that was precisely the point of the symposium. As Blandford points out, it was meant to start a process of thought and invention, not just to move computer design a step forward. “We took the charge to try and be visionary,” he says, “not to extrapolate from what we’re doing at the moment.”