Nerve Cells Key to Making Sense of All of Our Senses
The human brain is bombarded with a cacophony of information from the eyes, ears, nose, mouth and skin. Now a team of scientists at the University of Rochester, Washington University in St. Louis, and Baylor College of Medicine has unraveled how the brain manages to process those complex, rapidly changing, and often conflicting sensory signals to make sense of our world.
The answer lies in a relatively simple computation performed by single nerve cells, an operation that can be described mathematically as a straightforward weighted average. The key is that the neurons have to apply the correct weights to each sensory cue, and the authors reveal how this is done.
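The weighted average the authors describe is the standard inverse-variance rule from cue-combination theory: each cue's weight is its reliability (the inverse of its noise variance) divided by the total reliability. The sketch below illustrates that rule with made-up heading estimates and noise levels; the numbers are purely illustrative, not values from the study.

```python
# Hypothetical heading estimates (degrees) and noise levels for two cues.
visual_estimate, visual_sigma = 12.0, 2.0        # visual cue: precise here
vestibular_estimate, vestibular_sigma = 6.0, 4.0  # inner-ear cue: noisier

# A cue's reliability is the inverse of its variance.
r_vis = 1.0 / visual_sigma ** 2
r_vest = 1.0 / vestibular_sigma ** 2

# Each weight is that cue's reliability normalized by the total reliability.
w_vis = r_vis / (r_vis + r_vest)
w_vest = r_vest / (r_vis + r_vest)

# The combined heading estimate is the reliability-weighted average.
combined = w_vis * visual_estimate + w_vest * vestibular_estimate
print(w_vis, w_vest, combined)  # the more reliable visual cue dominates
```

With these illustrative numbers the visual cue gets weight 0.8 and the combined estimate lands closer to the visual heading, which is exactly the behavior the single-neuron computation is said to implement.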
The study, to be published online Nov. 20 in Nature Neuroscience, represents the first direct evidence of how the brain combines multiple sources of sensory information to form as accurate a perception as possible of its environment.
The discovery may eventually lead to new therapies for people with Alzheimer's disease and other disorders that impair a person's sense of self-motion, says study coauthor Greg DeAngelis, professor and chair of brain and cognitive sciences at the University of Rochester.
This deeper understanding of how brain circuits combine different sensory cues could also help scientists and engineers to design more sophisticated artificial nervous systems such as those used in robots, he adds.
The brain is constantly confronted with changing and conflicting sensory input, says DeAngelis. For example, during IMAX theater footage of an aircraft rolling into a turn "you may find yourself grabbing the seat," he says. The large visual input makes you feel like you are moving, but the balance cues conveyed by sensors in your inner ear indicate that your body is in fact safely glued to the theater seat. So how does your brain decide how to interpret these conflicting inputs?
The study shows that the brain does not have to first "decide" which sensory cue is more reliable. "Indeed, this is what's exciting about what we have shown," says DeAngelis.
The study demonstrates that the low-level computations performed by single neurons, when repeated by millions of neurons performing similar operations, account for the brain's complex ability to judge which sensory signals to weight more heavily.
"Thus, the brain essentially can break down a seemingly high-level behavioral task into a set of much simpler operations performed simultaneously by many neurons," explains DeAngelis.
The study confirms and extends a computational theory developed earlier by brain and cognitive scientist Alexandre Pouget at the University of Rochester and the University of Geneva, Switzerland and a coauthor on the paper.
The theory predicted that neurons fire in a manner consistent with a weighted accumulation rule, a prediction largely borne out by the neural data. Surprisingly, however, the measured weights deviated slightly from the theoretical predictions.
That difference could explain why behavior varies slightly from subject to subject.
"Being able to predict these small discrepancies establishes an exciting connection between computations performed at the level of single neurons and detailed aspects of behavior," says DeAngelis.
To gather data, the researchers designed a virtual-reality system that presented subjects with two directional cues: (1) a visual pattern of moving dots on a computer screen simulating forward travel, and (2) physical movement of the subject produced by a motion platform.
Researchers varied the amount of randomness in the motion of the dots to change how reliable the visual cues were relative to the motion of the platform. At the end of each trial, subjects indicated which direction they were heading, to the right or to the left.
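Under the inverse-variance model described above (an assumption here; the paper's actual analysis is more detailed), adding randomness to the dot motion increases the visual cue's noise and so shifts weight toward the vestibular cue. A minimal sketch, with illustrative noise values:

```python
def visual_weight(visual_sigma, vestibular_sigma):
    """Weight given to the visual cue under inverse-variance weighting."""
    r_vis = 1.0 / visual_sigma ** 2
    r_vest = 1.0 / vestibular_sigma ** 2
    return r_vis / (r_vis + r_vest)

vestibular_sigma = 3.0  # fixed platform-motion noise (illustrative)
# More randomness in the dots -> larger visual sigma -> smaller visual weight.
for visual_sigma in (1.0, 3.0, 9.0):
    w = visual_weight(visual_sigma, vestibular_sigma)
    print(f"visual sigma = {visual_sigma}: visual weight = {w:.2f}")
```

As the visual noise grows from low to high, the visual weight falls from 0.90 through 0.50 to 0.10, mirroring how subjects should lean on the platform motion when the dots become unreliable.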
The experiments were conducted at Washington University, and the team included Christopher Fetsch, now a post-doctoral fellow at the University of Washington, and Dora Angelaki, now chair of the Department of Neuroscience at Baylor College of Medicine. The research was supported by funding from the National Institutes of Health, the National Science Foundation, the Multidisciplinary University Research Initiative, and the James S. McDonnell Foundation.
The University of Rochester (http://www.rochester.edu) is one of the nation's leading private universities; its College, School of Arts and Sciences, and Hajim School of Engineering and Applied Sciences are complemented by its Eastman School of Music, Simon School of Business, Warner School of Education, Laboratory for Laser Energetics, School of Medicine and Dentistry, School of Nursing, Eastman Institute for Oral Health, and the Memorial Art Gallery.
Original article: http://www.eurekalert.org/pub_releases/2011-11/uor-nck111811.php