
Hidden Computational Power Found in the Arms of Neurons


neuroscience
By Jordana Cepelewicz

January 2020


The dendritic arms of some human neurons can perform logic operations that once seemed to require whole neural networks.


Thin dendrites resembling a plant’s roots radiate in all directions from the cell body of this cortical neuron. Individual dendrites may process the signals they receive from adjacent neurons before passing them along as inputs to the cell’s overall response.

Imre Vida (NeuroCure Cluster, Charité – Universitätsmedizin Berlin)


The information-processing capabilities of the brain are often reported to reside in the trillions of connections that wire its neurons together. But over the past few decades, mounting research has quietly shifted some of the attention to individual neurons, which seem to shoulder much more computational responsibility than once seemed imaginable.

The latest in a long line of evidence comes from scientists’ discovery of a new type of electrical signal in the upper layers of the human cortex. Laboratory and modeling studies have already shown that tiny compartments in the dendritic arms of cortical neurons can each perform complicated operations in mathematical logic. But now it seems that individual dendritic compartments can also perform a particular computation – “exclusive OR” – that mathematical theorists had previously categorized as unsolvable by single-neuron systems.

“I believe that we’re just scratching the surface of what these neurons are really doing,” said Albert Gidon, a postdoctoral fellow at Humboldt University of Berlin and the first author of the paper that presented these findings in Science earlier this month.

The discovery marks a growing need for studies of the nervous system to consider the implications of individual neurons as extensive information processors. “Brains may be far more complicated than we think,” said Konrad Kording, a computational neuroscientist at the University of Pennsylvania, who did not participate in the recent work. It may also prompt some computer scientists to reappraise strategies for artificial neural networks, which have traditionally been built based on a view of neurons as simple, unintelligent switches.

The Limitations of Dumb Neurons

In the 1940s and ’50s, a picture began to dominate neuroscience: that of the “dumb” neuron, a simple integrator, a point in a network that merely summed up its inputs. Branched extensions of the cell, called dendrites, would receive thousands of signals from neighboring neurons – some excitatory, some inhibitory. In the body of the neuron, all those signals would be weighted and tallied, and if the total exceeded some threshold, the neuron fired a series of electrical pulses (action potentials) that directed the stimulation of adjacent neurons. At around the same time, researchers realized that a single neuron could also function as a logic gate, akin to those in digital circuits (although it still isn’t clear how much the brain really computes this way when processing information). A neuron was effectively an AND gate, for instance, if it fired only after receiving some sufficient number of inputs.
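To make that point-neuron picture concrete, here is a minimal sketch (not drawn from the paper) of a weighted-sum-and-threshold unit acting as an AND gate; the weights and threshold are illustrative assumptions:

```python
# A minimal sketch of the classic "point neuron": sum the weighted inputs
# and fire if the total crosses a threshold. The weights and threshold
# below are illustrative, not taken from the paper.

def point_neuron(inputs, weights, threshold):
    """Return 1 (fire) if the weighted sum of inputs meets the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# With equal weights and a threshold requiring both inputs, the unit is an AND gate.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, point_neuron([a, b], weights=[1.0, 1.0], threshold=2.0))
# Fires (prints 1) only for the (1, 1) case.
```

Lowering the threshold to 1.0 would turn the same unit into an OR gate, which is why this simple model maps so naturally onto digital-circuit metaphors.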

Networks of neurons could therefore theoretically perform any computation. Still, this model of the neuron was limited. Not only were its guiding computational metaphors simplistic, but for decades, scientists lacked the experimental tools to record from the various components of a single nerve cell. “That’s essentially the neuron being collapsed into a point in space,” said Bartlett Mel, a computational neuroscientist at the University of Southern California. “It didn’t have any internal articulation of activity.” The model ignored the fact that the thousands of inputs flowing into a given neuron landed in different locations along its various dendrites. It ignored the idea (eventually confirmed) that individual dendrites might function differently from one another. And it ignored the possibility that computations might be performed by other internal structures.

But that started to change in the 1980s. Modeling work by the neuroscientist Christof Koch and others, later supported by benchtop experiments, showed that single neurons did not express a single or uniform voltage signal. Instead, voltage signals decreased as they moved along the dendrites into the body of the neuron, and often contributed nothing to the cell’s ultimate output. This compartmentalization of signals meant that separate dendrites could be processing information independently of one another. “This was at odds with the point-neuron hypothesis, in which a neuron simply added everything up regardless of location,” Mel said.

That prompted Koch and other neuroscientists, including Gordon Shepherd at the Yale School of Medicine, to model how the structure of dendrites could in principle allow neurons to act not as simple logic gates, but as complex, multi-unit processing systems. They simulated how dendritic trees could host numerous logic operations, through a series of complex hypothetical mechanisms.

Later, Mel and several colleagues looked more closely at how the cell might be managing multiple inputs within its individual dendrites. What they found surprised them: The dendrites generated local spikes, had their own nonlinear input-output curves and had their own activation thresholds, distinct from those of the neuron as a whole. The dendrites themselves could act as AND gates, or as a host of other computing devices.

Mel, along with his former graduate student Yiota Poirazi (now a computational neuroscientist at the Institute of Molecular Biology and Biotechnology in Greece), realized that this meant that they could conceive of a single neuron as a two-layer network. The dendrites would serve as nonlinear computing subunits, collecting inputs and spitting out intermediate outputs. Those signals would then get combined in the cell body, which would determine how the neuron as a whole would respond.
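A hedged sketch of that two-layer picture might look like the following; the sigmoid nonlinearity, weights, and biases are chosen purely for illustration and are not fitted to any real neuron:

```python
import math

# A sketch of the two-layer view of a neuron: each dendritic subunit
# (layer 1) applies its own nonlinearity to its local inputs, and the
# soma (layer 2) sums the subunit outputs and applies a firing threshold.
# All parameter values here are illustrative assumptions.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dendritic_subunit(local_inputs, weights, bias):
    """Layer 1: one dendrite's nonlinear summary of its own inputs."""
    return sigmoid(sum(w * x for w, x in zip(weights, local_inputs)) + bias)

def two_layer_neuron(groups, subunit_params, soma_weights, soma_threshold):
    """Layer 2: the soma combines subunit outputs and fires past a threshold."""
    subunit_outputs = [
        dendritic_subunit(inputs, w, b)
        for inputs, (w, b) in zip(groups, subunit_params)
    ]
    total = sum(w * s for w, s in zip(soma_weights, subunit_outputs))
    return 1 if total >= soma_threshold else 0

# Two dendrites, each seeing two synapses:
groups = [[1, 0], [1, 1]]
params = [([4.0, 4.0], -2.0), ([4.0, 4.0], -6.0)]
print(two_layer_neuron(groups, params, soma_weights=[1.0, 1.0], soma_threshold=1.0))
```

The key structural point is that the inputs are grouped by dendrite before any summation happens, which is exactly what the collapsed point model throws away.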

Whether the activity at the dendritic level actually influenced the neuron’s firing and the activity of other neurons was still unclear. But regardless, that local processing might prepare or condition the system to respond differently to future inputs or help wire it in new ways, according to Shepherd.

Whatever the case, “the trend then was, ‘OK, be careful, the neuron might be more powerful than you thought,’” Mel said.

Shepherd agreed. “Much of the power of the processing that takes place in the cortex is actually subthreshold,” he said. “A single-neuron system can be more than just one integrative system. It can be two layers, or even more.” In theory, almost any imaginable computation might be performed by one neuron with enough dendrites, each capable of performing its own nonlinear operation.

In the recent Science paper, the researchers took this idea one step further: They suggested that a single dendritic compartment might be able to perform these complex computations all on its own.

Unexpected Spikes and Old Obstacles

Matthew Larkum, a neuroscientist at Humboldt, and his team started looking at dendrites with a different question in mind. Because dendritic activity had been studied primarily in rodents, the researchers wanted to investigate how electrical signaling might be different in human neurons, which have much longer dendrites. They obtained slices of brain tissue from layers 2 and 3 of the human cortex, which contain particularly large neurons with many dendrites. When they stimulated those dendrites with an electrical current, they noticed something strange.

They saw unexpected, repeated spiking – and those spikes seemed completely unlike other known kinds of neural signaling. They were particularly rapid and brief, like action potentials, and arose from fluxes of calcium ions. This was noteworthy because conventional action potentials are usually caused by sodium and potassium ions. And while calcium-induced signaling had been previously observed in rodent dendrites, those spikes tended to last much longer.

Stranger still, feeding more electrical stimulation into the dendrites lowered the intensity of the neuron’s firing instead of increasing it. “Suddenly, we stimulate more and we get less,” Gidon said. “That caught our eye.”

To figure out what the new kind of spiking might be doing, the scientists teamed up with Poirazi and a researcher in her lab in Greece, Athanasia Papoutsi; together they created a model to reflect the neurons’ behavior.

The model found that the dendrite spiked in response to two separate inputs – but failed to do so when those inputs were combined. This was equivalent to a nonlinear computation known as exclusive OR (or XOR), which yields a binary output of 1 if one (but only one) of the inputs is 1.
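One way to picture this behavior is an activation that responds only within a window of total input, so that either input alone triggers a spike but both together overshoot it. The sketch below uses illustrative window bounds; it is not the authors’ fitted model:

```python
# A hedged sketch of how the graded dendritic spike described above could
# implement XOR in a single compartment: the response occurs for an
# intermediate level of drive and vanishes when both inputs arrive at once.
# The window bounds are illustrative assumptions, not measured values.

def dendritic_xor(a, b, low=0.5, high=1.5):
    """Spike (1) only when the summed drive falls inside the window."""
    drive = a + b  # combined synaptic drive from the two inputs
    return 1 if low <= drive <= high else 0

for a in (0, 1):
    for b in (0, 1):
        print(a, b, dendritic_xor(a, b))
# Output matches XOR: 0 0 -> 0, 0 1 -> 1, 1 0 -> 1, 1 1 -> 0.
```

The crucial ingredient is that the response is non-monotonic in the total input, which is exactly what the "stimulate more, get less" observation suggests.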

This finding immediately struck a chord with the computer science community. XOR functions were for many years deemed impossible in single neurons: In their book Perceptrons, the computer scientists Marvin Minsky and Seymour Papert offered a proof that single-layer artificial networks could not perform XOR. That conclusion was so devastating that many computer scientists blamed it for the doldrums that neural network research fell into until the 1980s.
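The root of the difficulty is that XOR’s outputs are not linearly separable: no single line through the input plane puts (0,1) and (1,0) on one side and (0,0) and (1,1) on the other. A quick brute-force scan (an illustration, not Minsky and Papert’s actual proof) confirms that no weighted-sum-and-threshold unit on a coarse parameter grid reproduces the XOR truth table:

```python
import itertools

# Brute-force check: search a grid of weights and thresholds for a single
# threshold unit that computes XOR. The grid is an illustrative choice;
# the underlying impossibility holds for all real-valued parameters.

def unit(a, b, w1, w2, t):
    return 1 if w1 * a + w2 * b >= t else 0

xor_table = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
grid = [x / 2 for x in range(-8, 9)]  # weights/thresholds in [-4, 4]

solutions = [
    (w1, w2, t)
    for w1, w2, t in itertools.product(grid, repeat=3)
    if all(unit(a, b, w1, w2, t) == out for (a, b), out in xor_table.items())
]
print(solutions)  # prints [] -- no single-layer unit on this grid computes XOR
```

Intuitively: firing on (0,1) and (1,0) but not (0,0) forces both weights to meet the (positive) threshold individually, so their sum must exceed it too, and the unit wrongly fires on (1,1).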

Neural network researchers did eventually find ways of dodging the obstacle that Minsky and Papert identified, and neuroscientists found examples of those solutions in nature. For example, Poirazi already knew XOR was possible in a single neuron: Just two dendrites together could achieve it. But in these new experiments, she and her colleagues were offering a plausible biophysical mechanism to facilitate it – in a single dendrite.

“For me, it’s another degree of flexibility that the system has,” Poirazi said. “It just shows you that this system has many different ways of computing.” Still, she points out that if a single neuron could already solve this kind of problem, “why would the system go to all the trouble to come up with more complicated units inside the neuron?”

Processors Within Processors

Certainly not all neurons are like that. According to Gidon, there are plenty of smaller, point-like neurons in other parts of the brain. Presumably, then, this neural complexity exists for a reason. So why do single compartments within a neuron need the capacity to do what the entire neuron, or a small network of neurons, can do just fine? The obvious possibility is that a neuron behaving like a multilayered network has much more processing power and can therefore learn or store more. “Maybe you have a deep network within a single neuron,” Poirazi said. “And that’s much more powerful in terms of learning difficult problems, in terms of cognition.”

Perhaps, Kording added, “a single neuron may be able to compute truly complex functions. For example, it might, by itself, be able to recognize an object.” Having such powerful individual neurons, according to Poirazi, might also help the brain conserve energy.

Larkum’s group plans to search for similar signals in the dendrites of rodents and other animals, to determine whether this computational ability is unique to humans. They also want to move beyond the scope of their model to associate the neural activity they observed with actual behavior. Meanwhile, Poirazi now hopes to compare the computations in these dendrites to what happens in a network of neurons, to suss out any advantages the former might have. This will include testing for other types of logic operations and exploring how those operations might contribute to learning or memory. “Until we map this out, we can’t really tell how powerful this discovery is,” Poirazi said.

Though there’s still much work to be done, the researchers believe these findings mark a need to rethink how they model the brain and its broader functions. Focusing on the connectivity of different neurons and brain regions won’t be enough.

The new results also seem poised to influence questions in the machine learning and artificial intelligence fields. Artificial neural networks rely on the point model, treating neurons as nodes that tally inputs and pass the sum through an activation function. “Very few people have taken seriously the notion that a single neuron could be a complex computational device,” said Gary Marcus, a cognitive scientist at New York University and an outspoken skeptic of some claims made for deep learning.

Though the Science paper is but one finding in an extensive history of work that demonstrates this idea, he added, computer scientists might be more responsive to it because it frames the issue in terms of the XOR problem that dogged neural network research for so long. “It’s saying, we really need to think about this,” Marcus said. “The whole game – to come up with how you get smart cognition out of dumb neurons – might be wrong.”

“This is a super clean demonstration of that,” he added. “It’s going to speak above the noise.”
