One of the hardest things to do in the field of artificial intelligence is to interpret what exactly a neural network is doing and why it's doing it. Artificial neural networks (ANNs) are deterministic in the sense that, given adequate information about the network's state, it is always possible to predict exactly what it will do. However, due to their sheer complexity, it is almost always difficult to work backwards from an end state to discover how the various parts of the network contributed to that state. By "difficult" I mean theoretically possible but practically infeasible: the math says it can be done, and we know how to do it, but the calculations are so onerous that it basically can't be done. (This is a very simplified explanation.)

For this reason, it is very difficult to determine where in a given ANN a certain piece of knowledge is stored. We can feed in inputs and get out outputs that indicate the ANN knows something, and we can probe for real-time data, but it's hard to determine which neuron (or connection between neurons) or set of neurons (or set of connections) stores a given piece of knowledge just by looking at the structure. Sometimes, in simple networks, individual neurons can be isolated and analyzed to demonstrate their function, but most of the time a given piece of knowledge is distributed across many neurons, all of which contribute towards generating the appropriate output. One advantage of distributed storage is that if a single neuron breaks, the output should (in theory) only degrade slightly.
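That graceful-degradation property is easy to see in a toy network. Here's a minimal sketch (my own example, not from any study): a wide random layer where "breaking" a single hidden neuron barely moves the output, because no single unit carries much of the result.

```python
import numpy as np

# Toy illustration of distributed storage: in a wide layer, each
# hidden unit contributes only a tiny fraction of the final output,
# so zeroing one unit ("breaking" a neuron) changes the output little.
rng = np.random.default_rng(0)

n_in, n_hidden = 10, 1000
W1 = rng.normal(scale=1.0 / np.sqrt(n_in), size=(n_hidden, n_in))
w2 = rng.normal(scale=1.0 / np.sqrt(n_hidden), size=n_hidden)

def forward(x, dead_unit=None):
    """Run the toy network, optionally disabling one hidden neuron."""
    h = np.tanh(W1 @ x)
    if dead_unit is not None:
        h[dead_unit] = 0.0  # simulate a single broken neuron
    return float(w2 @ h)

x = rng.normal(size=n_in)
healthy = forward(x)
broken = forward(x, dead_unit=42)  # arbitrary unit to disable

# The two outputs differ, but only slightly: the "knowledge" is
# spread across all 1000 hidden units, not stored in any one of them.
print(healthy, broken, abs(healthy - broken))
```

The same logic run against a network with a single hidden unit would show the opposite: kill that unit and the output collapses entirely.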

Human brains are far more complex than artificial neural networks, but they appear to be similar in many respects. Recent research from UCLA and Caltech indicates that knowledge may not be as widely distributed across neurons as was previously thought. The earlier speculation was that no single neuron was responsible for any specific piece of knowledge, and that everything we consider a "memory" is distributed across billions of neurons. Now, however, there is some evidence that, even if no specific piece of knowledge lives only on a single neuron, some neurons are tied to a specific piece of knowledge.

In the current issue of the journal Nature, a research team led by neuroscientists at UCLA and Caltech has rather haphazardly located a neuron that "looks for all the world like a 'Jennifer Aniston' cell," writes Charles Connor of Johns Hopkins University. Connor was not involved in the study.

The cell in question was found in the brain of one subject as part of an epilepsy study. When the person was shown 87 images of various celebrities, well-known buildings, animals and objects, the neuron fired only for seven separate snapshots of the Friends actress.

It may be that many other neurons that weren't probed also fired for Jennifer Aniston. It may also be that the researchers simply couldn't find another thought that would trigger that neuron, but that such a thought does actually exist. Either way, these results are somewhat surprising.

However, no one is claiming that there is only one cell in the brain for Jennifer Aniston, the Eiffel Tower, and your grandmother.

"One straightforward objection to this idea is that we don't have enough neurons in the brain to represent each object in the world," said Connor.

Well, the adult human brain has around 100 billion neurons, which would seem to be more than enough to devote one to each person you know or know of, plus more than enough for every other proper noun conceivable. Further, there are far more connections than there are neurons, and connections certainly play a role in thought and memory, perhaps even more of a role than the neurons themselves. (In ANNs, the connections between neurons are where most of the work is generally done.)
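The claim that connections vastly outnumber neurons is easy to verify for ANNs with a back-of-envelope count. Here's a quick sketch using hypothetical layer sizes of my own choosing (a classic fully-connected image classifier shape, not anything from the article):

```python
# Back-of-envelope count: in a fully-connected ANN, every neuron in
# one layer connects to every neuron in the next, so the weighted
# connections (where the "work" lives) vastly outnumber the neurons.
layers = [784, 512, 512, 10]  # hypothetical layer sizes

neurons = sum(layers)
# Each adjacent pair of layers contributes (size_a * size_b) weights.
connections = sum(a * b for a, b in zip(layers, layers[1:]))

print(neurons)      # 1818 neurons
print(connections)  # 668672 weighted connections -- ~370x the neurons
```

The same imbalance holds in the brain, only more dramatically: each biological neuron is estimated to form thousands of synapses, not hundreds.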

"Sparseness has its advantages, especially for memory, because compact coding maximizes total storage capacity," Connor said.

In practice, though, the method of knowledge distribution (sparse or dense) probably has little effect on how much capacity is required to store a given piece of knowledge. There may be some overhead associated with distributing knowledge across neurons and weights, but given the highly parallel nature of the human brain, it's not likely to be much.
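To make the sparse-versus-dense terms concrete, here is a toy counting sketch (my own simplification, with hypothetical numbers): with a population of binary neurons, a maximally sparse code dedicates one active neuron per item, while a fully dense code lets any pattern of activity represent an item.

```python
# Toy model of coding schemes over N binary neurons:
# - maximally sparse ("one-hot"): exactly one neuron fires per item,
#   so N neurons can distinguish only N items
# - fully dense: any subset of neurons may fire, so N neurons can in
#   principle distinguish 2**N patterns
N = 20  # hypothetical population size

sparse_capacity = N       # one dedicated neuron per item
dense_capacity = 2 ** N   # every activity pattern is a distinct code

print(sparse_capacity)  # 20
print(dense_capacity)   # 1048576
```

Real neural codes presumably sit somewhere between these extremes, and this counting says nothing about robustness, readout cost, or how much machinery each scheme needs per item, which is where the real trade-offs lie.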


