Pasadena, Calif. — Getting a handful of electronic components to mimic the way a garden slug learns to avoid bitter food may not sound like much of an achievement. But if the theories of John J. Hopfield, professor of biophysics at the California Institute of Technology, are right, it is an important milestone on the road toward making computers that can handle information in a manner more like people than like glorified adding machines. Although his work is still at an early stage, it holds promise of providing a basis for computers that efficiently recognize patterns such as faces and speech, reconcile conflicting information, and find optimal solutions to complex problems beyond the reach of today's machines.
His work is also providing a different approach to understanding how the brain works, and thus has attracted the attention of a number of scientists and engineers around the country.
Since 1980, Dr. Hopfield has been trying to determine what characteristics of thinking and memory, if any, arise from the fundamental structure of the neural networks that make up the brain. He was fascinated by the fact that although scientists understand how an individual nerve cell works, what happens in the brain when a person simply looks across the room is not yet well understood.
From his background in physics, Hopfield knew that when a large number of small components are combined into a big system, whole new collective properties frequently emerge. So he set out to discover what characteristics biological neural networks exhibit simply because they consist of a large number of interconnected components.
First, he looked at biological systems and began asking what they seemed to do spontaneously. There is a general problem that occurs all through biology: the way memories are stored and recalled.
In an ordinary computer, memory is stored in a specific location. If you want to read it out, you go to a particular place on a tape, a diskette, or in a particular computer chip. To find it, you need the right ``address''.
Biological memory, on the other hand, is associative. It seems to be made up of a lot of different chunks, somehow tied together. Give a person someone's name, and it evokes not only an image of that individual, but also his profession, memories of past meetings, etc.
``A lot of what passes for intelligence is associative memory,'' Hopfield says. ``A lot of our power to work with new situations arises from the ability to bring to bear all similar situations we have experienced before.''
From a ``wiring'' perspective, one fundamental difference between today's digital circuits and biological ones is connectivity. Electrical engineers do everything they can to make sure that there are a minimum number of connections between components. In biological systems, on the other hand, each nerve cell makes hundreds of connections with other cells.
To see if aspects of associative memory result from this basic structure, Hopfield developed a mathematical caricature of a nerve cell and programmed it into a computer. These ``are not realistic models of neurons, but they catch the basic spirit,'' he explains. Next, in a computer simulation, he began connecting these ``nerves'' in various ways to study how they behave.
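A minimal sketch of such a network, in modern terms, fits in a few lines. The pattern sizes, values, and update scheme below are illustrative choices, not Hopfield's actual simulation; what matters is that memories are superimposed on one set of connection strengths, and a partial or corrupted cue settles back onto the nearest stored memory:

```python
import numpy as np

def store(patterns):
    """Hebbian storage: every memory is superimposed on one weight matrix."""
    n = patterns.shape[1]
    w = patterns.T @ patterns / n
    np.fill_diagonal(w, 0)          # no model neuron connects to itself
    return w

def recall(w, state):
    """Repeated sweeps: each model 'neuron' aligns with its local field."""
    s = state.copy()
    while True:
        changed = False
        for i in range(len(s)):
            new = 1 if w[i] @ s >= 0 else -1
            if new != s[i]:
                s[i], changed = new, True
        if not changed:
            return s

# Two 16-"neuron" memories of +1/-1 values
p1 = np.array([1] * 8 + [-1] * 8)
p2 = np.array([1, -1] * 8)
w = store(np.array([p1, p2]))

# A cue with three wrong bits settles back onto the stored memory
cue = p1.copy()
cue[[0, 5, 10]] *= -1
print(np.array_equal(recall(w, cue), p1))   # True
```

This is the associative behavior described above: the network is given a fragment of a memory, not an address, and the whole memory is evoked.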
Hopfield discovered that such networks exhibit a number of the basic characteristics of biological systems. For example, each memory is spread over a large part of the system. When such networks were built out of electronic components, he found that none of the memories were lost when portions of the circuit were destroyed, but all got ``a little bad.''
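That robustness is easy to see in simulation. In the sketch below — sizes, random seed, and damage level are arbitrary choices for illustration, not figures from Hopfield's experiments — 40 percent of the connections are severed, yet a stored memory is still recovered from a noisy cue:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100

# Three random +1/-1 memories superimposed on one weight matrix
patterns = rng.choice([-1, 1], size=(3, n))
w = patterns.T @ patterns / n
np.fill_diagonal(w, 0)

def settle(w, s, sweeps=5):
    """A few full sweeps of neuron-by-neuron updates."""
    s = s.copy()
    for _ in range(sweeps):
        for i in range(n):
            s[i] = 1 if w[i] @ s >= 0 else -1
    return s

# Sever 40 percent of the connections, symmetrically
kill = np.triu(rng.random((n, n)) < 0.4, 1)
damaged = w.copy()
damaged[kill | kill.T] = 0.0

# Flip roughly 20 percent of one memory's bits, then let the damaged net settle
cue = patterns[0] * rng.choice([1, 1, 1, 1, -1], size=n)
overlap = settle(damaged, cue) @ patterns[0] / n
print(overlap)
```

Because each memory is spread across all the connections, destroying a large fraction of them degrades every memory a little rather than erasing any one of them; the recovered state still overlaps the stored pattern almost perfectly.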
This ``failsoft'' characteristic has attracted the attention of engineers at the Jet Propulsion Laboratory (JPL), because it could reduce one of the major problems with spacecraft computer systems: radiation damage. The JPL engineers have constructed, and are testing, a system of 100 effective neurons built from resistors and amplifiers.
The way Hopfield's circuits make decisions is also unusual. Normal digital circuits reduce decision-making to a long series of small steps, each made extremely fast. Associative memory circuits, on the other hand, settle comparatively slowly toward a single global decision. Such a decision-making process, it turns out, is particularly well suited to optimization problems: problems in which, given a set of rules and data, the best of many possible solutions is sought.
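This settling can be made precise with an ``energy'' that every update can only lower; the stable state the network slides into is its decision. A small sketch — the symmetric weights here are random, chosen purely to illustrate the mechanism, not to encode any particular problem:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20

# Any symmetric weight matrix with zero diagonal defines an energy landscape
w = rng.normal(size=(n, n))
w = (w + w.T) / 2
np.fill_diagonal(w, 0)

def energy(w, s):
    return -0.5 * s @ w @ s

s = rng.choice([-1, 1], size=n).astype(float)
energies = [energy(w, s)]
for _ in range(300):
    i = rng.integers(n)                      # update one neuron at a time
    s[i] = 1 if w[i] @ s >= 0 else -1
    energies.append(energy(w, s))

# Each single-neuron update can only lower the energy, so the network
# slides downhill into a locally optimal state: a decision reached globally.
print(all(b <= a + 1e-9 for a, b in zip(energies, energies[1:])))   # True
```

Encoding the rules and data of a real optimization problem into the weights, so that low-energy states correspond to good solutions, is the step that turns this settling behavior into a problem solver.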
Associative memory circuits have a number of interesting characteristics. For example, they do not work in a strictly logical fashion. In fact, if you make them more logical, they perform badly, Hopfield has found. Also, when memories are put into such a circuit, the information combines in such a way that extra stable points are created; these act something like a rudimentary form of inspiration, the scientist says. Further, the networks appear to suppress some information. It is as if they deliberately forget little bits of information and this actually seems to improve their performance.
The garden slug is among the simplest animals to demonstrate the full range of Pavlovian conditioning. It learns to avoid bitter tastes. If quinine is put in an apple and the slug eats the apple, from then on it will avoid apple. If it is then fed apple and banana together, it learns to avoid the banana as well. On the other hand, when a slug is fed quinine, followed by apple, followed by banana, it comes to avoid the apple but not the banana.
In order to mimic this type of learning, Hopfield had to incorporate a time sequence into the circuitry. So perhaps our sense of chronology is rooted in the nerves themselves, he suggests.
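One toy way to see why a time sequence matters — this is an illustrative model invented for this article, with made-up constants, not Hopfield's actual circuitry — is to let an aversive signal leave a trace that decays over the sequence, with each stimulus learning only from aversions that were established before the current trial:

```python
DECAY = 0.5        # an aversive "trace" halves with each step in the sequence
THRESHOLD = 0.5    # aversion at or above this level means the food is avoided

def run_trial(aversion, sequence):
    """sequence: list of sets of stimuli presented together, in time order.
    Stimuli learn only from aversions that existed before this trial,
    so aversion cannot chain forward within a single presentation."""
    old = dict(aversion)
    for t, stimuli in enumerate(sequence):
        for s in stimuli:
            for t2, earlier in enumerate(sequence[:t + 1]):
                for src in earlier:
                    if src != s:
                        gain = old.get(src, 0.0) * DECAY ** (t - t2)
                        aversion[s] = max(aversion.get(s, 0.0), gain)
    return aversion

def avoids(aversion, stimulus):
    return aversion.get(stimulus, 0.0) >= THRESHOLD

# Quinine in an apple, then apple with banana: aversion transfers to banana
a = {"quinine": 1.0}
run_trial(a, [{"quinine", "apple"}])
run_trial(a, [{"apple", "banana"}])
print(avoids(a, "apple"), avoids(a, "banana"))    # True True

# Quinine, then apple, then banana in one sequence: the banana is spared
b = {"quinine": 1.0}
run_trial(b, [{"quinine"}, {"apple"}, {"banana"}])
print(avoids(b, "apple"), avoids(b, "banana"))    # True False
```

In the sequential case the banana escapes because, at the moment it appears, the apple has not yet become aversive and the quinine's trace has decayed below threshold — the kind of time-asymmetric bookkeeping that Hopfield's circuitry had to incorporate.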
The scientist foresees some useful devices coming from his work. One application, already mentioned, is spacecraft computers. Another possibility is specialized memory for digital computers that could be used for purposes such as speech recognition.
Before applications of this sort become possible, however, components that allow such circuits to be programmed must be developed. Right now, memories are hard-wired into the circuit and cannot be easily changed. But engineers at JPL are trying to find materials with the right characteristics for programmable associative memory circuits.