For years, the brain was thought of as a biological computer that processes information through traditional circuitry, with data passing directly from cell to cell. While this model is still accurate, a new study led by Salk Professor Thomas Albright and staff scientist Sergei Gepshtein shows that there is also a second, very different way for the brain to analyze information: through the interactions of waves of neural activity. The findings, published in Science Advances on April 22, 2022, help researchers better understand how the brain processes information.
“We now have a new understanding of how the computing machinery of the brain works,” says Albright, Conrad T. Prebys Chair in Vision Research and director of the Salk Vision Center Laboratory. “The model helps explain how the underlying state of the brain can change, affecting people’s attention, concentration, or ability to process information.”
Researchers have long known that waves of electrical activity exist in the brain, both during sleep and wakefulness. But the underlying theories about how the brain processes information — particularly sensory information, like the sight of a light or the sound of a bell — have revolved around information being sensed by specialized brain cells and then transferred from one neuron to another, like a relay.
This traditional model of the brain, however, could not explain how a single sensory cell can react so differently to the same thing under different conditions. A cell, for example, may activate in response to a rapid flash of light when an animal is particularly alert, but will remain inactive in response to the same light if the animal’s attention is focused on something else.
Gepshtein compares the new understanding to wave-particle duality in physics and chemistry — the idea that light and matter have properties of both particles and waves. In some situations, light behaves as if it were a particle (also called a photon). In other situations, it behaves as if it were a wave. A particle is confined to a specific location, whereas a wave is spread over many locations. Both views of light are necessary to explain its complex behavior.
“The traditional view of brain function describes brain activity as an interaction of neurons. Since each neuron is confined to a specific location, this view is akin to describing light as a particle,” says Gepshtein, director of the Collaboratory for Adaptive Sensory Technologies at Salk. “We found that in some situations, brain activity is best described as an interaction of waves, which is similar to describing light as a wave. Both views are needed to understand the brain.”
Some properties of sensory cells observed in the past were not easy to explain given the “particle” approach to the brain. In the new study, the team observed the activity of 139 neurons in an animal model to better understand how cells coordinate their response to visual information. Together with physicist Sergey Savel’ev from Loughborough University, they created a mathematical framework to interpret neuron activity and predict new phenomena.
The best way to explain the behavior of neurons, they found, was the interaction of microscopic waves of activity rather than the interaction of individual neurons. Rather than a flash of light activating only specialized sensory cells, the researchers showed how it creates distributed patterns: waves of activity across many neighboring cells, with alternating peaks and troughs of activation, like ocean waves.
When these waves are generated simultaneously in different places in the brain, they inevitably collide with each other. If two peaks of activity meet, they generate even higher activity, while if a trough of low activity meets a peak, it can cancel it out. This process is called wave interference.
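The arithmetic of wave interference described above can be sketched numerically. The snippet below is purely illustrative — the amplitudes, wavelengths, and phases are invented for demonstration and are not taken from the study — but it shows how two sinusoidal activity waves reinforce each other when their peaks align and cancel when a peak meets a trough:

```python
import math

def wave(amplitude, wavelength, phase, x):
    """Value of a sinusoidal activity wave at position x."""
    return amplitude * math.sin(2 * math.pi * x / wavelength + phase)

# Two identical waves whose peaks align at x = 1.0.
in_phase = wave(1.0, 4.0, 0.0, 1.0) + wave(1.0, 4.0, 0.0, 1.0)
# Constructive interference: the combined activity is twice either wave alone.

# Shift the second wave by half a wavelength (a phase offset of pi),
# so its trough lands where the first wave peaks.
out_of_phase = wave(1.0, 4.0, 0.0, 1.0) + wave(1.0, 4.0, math.pi, 1.0)
# Destructive interference: the two waves cancel, leaving near-zero activity.
```

Here `in_phase` comes out to 2.0 (a peak meeting a peak), while `out_of_phase` is essentially zero (a peak meeting a trough) — the same bookkeeping the brain's colliding activity waves would follow.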
“When you’re out in the world, there are many, many inputs and so all these different waves are generated,” says Albright. “The brain’s net response to the world around you has to do with how all of these waves interact.”
To test their mathematical model of how neural waves occur in the brain, the team designed an accompanying visual experiment. Two participants were asked to detect a thin, faint line (the “probe”) located on a screen and flanked by other light patterns. How well participants performed this task depended on where the probe was located: the ability to detect it was elevated at some locations and diminished at others, forming a spatial wave predicted by the model.
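A toy version of this spatial pattern can be sketched by summing two overlapping neural waves and reading off the combined amplitude at each probe location. All numbers here are hypothetical — the study's actual stimuli and model parameters are not reproduced — but the sketch shows how detectability rises where the waves reinforce and falls where they cancel:

```python
import math

def neural_wave(x, wavelength, phase):
    """Hypothetical sinusoidal wave of activity at screen position x."""
    return math.sin(2 * math.pi * x / wavelength + phase)

# Combined activity of two overlapping waves, sampled across probe locations.
positions = [0.25 * i for i in range(25)]  # screen positions 0.0 .. 6.0
combined = {x: neural_wave(x, 3.0, 0.0) + neural_wave(x, 4.0, 1.0)
            for x in positions}

# Detectability traces a spatial wave across the screen rather than a
# uniform level: highest where the waves reinforce, lowest where they cancel.
easiest = max(combined, key=lambda x: combined[x])
hardest = min(combined, key=lambda x: combined[x])
```

Sweeping a probe across `positions` in such a model would yield alternating bands of good and poor detection — the kind of spatial wave the experiment observed.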
“Your ability to see this probe at each location will depend on how the neural waves overlap at that location,” says Gepshtein, who is also a fellow at the Salk Center for the Neurobiology of Vision. “And we have now proposed how the brain handles this.”
Discovering how neural waves interact goes far beyond explaining this visual effect. The researchers hypothesize that the same types of waves are generated — and interact with each other — in every part of the cerebral cortex, not just the part responsible for analyzing visual information. This means that waves generated by the brain itself, prompted by subtle environmental cues or internal moods, can modify the waves generated by sensory inputs.
This may explain how the brain’s response to something can change from day to day, researchers say.
Other co-authors of the paper include Ambarish Pawar from Salk and Sunwoo Kwon from the University of California, Berkeley.
The work was supported in part by the Salk Institute’s Sloan-Swartz Center for Theoretical Neurobiology, the Kavli Institute for Brain and Mind, the Conrad T. Prebys Foundation, the National Institutes of Health (R01-EY018613, R01-EY029117), and the Engineering and Physical Sciences Research Council (EP/S032843/1).