



 
Developmental biology - Brain function

Making sense of sight and sound

Georgetown University finds brain makes sense of two stimuli in the same way...


Although sight is a very different sense from sound, Georgetown University Medical Center neuroscientists find that the human brain learns to make sense of both kinds of stimuli in a similar way.

Researchers found the brain uses a two-step process: neurons in one area of the brain receive sight and sound stimuli, and neurons in another area then categorize them, ascribing each its own meaning. Similarly, when a child learns a new word, it first has to learn the new sound and then, in a second step, learn that different versions of the same word (accents, pronunciations, etc.) spoken by different individuals all mean the same thing and need to be categorized as the same.
"A computational advantage of this scheme is that it allows the brain to easily build on previous content to learn novel information."

Maximilian Riesenhuber PhD, the study's senior investigator and a professor in the Department of Neuroscience at Georgetown University School of Medicine. Study co-authors include first author Xiong Jiang PhD, graduate student Mark A. Chevillet, and Josef P. Rauschecker PhD, all Georgetown neuroscientists.
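The two-step scheme can be pictured in code. The following Python example is purely illustrative and is not the authors' model: a first stage learns feature-selective channels from the stimuli, analogous to sharpened tuning in auditory cortex, while a second stage learns a category readout on top of those features, analogous to category selectivity in prefrontal cortex. All data and dimensions are made up.

```python
# Illustrative two-stage sketch (assumption, not the authors' model).
import numpy as np

rng = np.random.default_rng(0)

# Toy stimuli: 200 samples of 50-dimensional "acoustic" feature vectors,
# drawn around two hypothetical prototypes.
proto_a, proto_b = rng.normal(size=50), rng.normal(size=50)
X = np.vstack([proto_a + 0.5 * rng.normal(size=(100, 50)),
               proto_b + 0.5 * rng.normal(size=(100, 50))])
y = np.array([0] * 100 + [1] * 100)

# Stage 1: unsupervised feature sharpening -- here simply a projection onto
# the top principal directions of the stimulus set.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
features = Xc @ Vt[:5].T          # five sharpened feature channels

# Stage 2: supervised category readout (least-squares linear classifier).
W, *_ = np.linalg.lstsq(features, np.where(y == 0, -1.0, 1.0), rcond=None)
pred = (features @ W > 0).astype(int)
print("training accuracy:", (pred == y).mean())
```

The separation mirrors the computational advantage described in the quote above: once the first stage exists, new categories can be learned by retraining only the second-stage readout.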

Their study, published in Neuron, is the first to provide strong evidence that learning along visual and auditory paths follows similar principles. "We have long tried to make sense of senses, studying how the brain represents our multisensory world," says Riesenhuber.
In 2007, the same investigators were the first to describe the two-step model in human learning of visual categories, and the new study now shows that the brain appears to use the same kind of learning mechanisms across sensory modalities.

Josef P. Rauschecker PhD, Department of Neuroscience, Georgetown University Medical Center, Washington, DC, and Institute for Advanced Study, Technische Universität München, Germany, one of the co-authors, believes their findings could help scientists devise new approaches to restoring sensory deficits. "Knowing how senses learn the world may help us devise workarounds in our very plastic brains," he says. "If a person can't process one sensory modality, say vision due to blindness, there could be substitution devices that allow visual input to be transformed into sound. So one disabled sense would be processed by another sensory brain center."

The 16 participants in this study were trained to categorize monkey communication calls - sounds that mean something to monkeys, but are alien in meaning to humans. The investigators divided the sounds into two categories labeled with nonsense names, based on two prototype call types: so-called "coos" and "harmonic arches." Using an auditory morphing system, the investigators were able to create thousands of monkey call combinations from the prototypes, including some very similar calls that required the participants to make fine distinctions between them. Learning to correctly categorize the novel sounds took about six hours.
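The morphing procedure can be pictured with a minimal sketch. The study used a dedicated auditory morphing system; the Python snippet below only approximates the idea with linear interpolation between two synthetic stand-in waveforms, so the signals, sample rate, and weights are assumptions rather than the actual stimuli.

```python
# Hedged illustration of prototype "morphing" (not the study's morphing system).
import numpy as np

sr = 16000                                    # sample rate (assumed)
t = np.linspace(0, 0.5, int(sr * 0.5), endpoint=False)
coo = np.sin(2 * np.pi * 400 * t)             # stand-in for a "coo" prototype
arch = np.sin(2 * np.pi * 400 * (1 + t) * t)  # stand-in for a "harmonic arch"

def morph(a, b, w):
    """Blend two prototypes; w=0 gives pure a, w=1 gives pure b."""
    return (1 - w) * a + w * b

# A continuum of morphs, including near-boundary stimuli (w around 0.5)
# that force the fine distinctions described above.
continuum = [morph(coo, arch, w) for w in np.linspace(0, 1, 11)]
print(len(continuum), "morph steps of", continuum[0].shape[0], "samples each")
```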

Before and after training, fMRI data obtained from the volunteers were used to investigate changes in neuronal tuning induced by categorization training. Two advanced fMRI techniques, functional magnetic resonance imaging rapid adaptation (fMRI-RA) and multi-voxel pattern analysis (MVPA), were used along with conventional fMRI and functional connectivity analyses. In this way, researchers could distinguish two sets of changes: sharpened representation of the monkey calls in the left auditory cortex, and tuning changes that lead to category selectivity for the different calls in the lateral prefrontal cortex.
"In our study, we used four different techniques, in particular fMRI-RA and MVPA, to independently and synergistically provide converging results. This allowed us to obtain strong results even from a small sample."

Xiong Jiang PhD, Department of Neuroscience, Georgetown University Medical Center, Washington, DC, USA.
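As a rough illustration of the MVPA idea only (an assumption, not the authors' actual analysis pipeline), a classifier can be trained to decode stimulus category from multi-voxel activity patterns; higher cross-validated decoding accuracy after training would indicate that a region's voxel patterns carry more category information. The data below are simulated.

```python
# MVPA-style decoding sketch on simulated voxel patterns (illustrative only).
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_voxels = 120, 80
labels = np.repeat([0, 1], n_trials // 2)     # two stimulus categories

def simulate_patterns(signal_strength):
    """Voxel patterns = category signal (in a subset of voxels) + noise."""
    signal = np.zeros(n_voxels)
    signal[:10] = signal_strength             # 10 category-informative voxels
    return rng.normal(size=(n_trials, n_voxels)) + np.outer(labels, signal)

pre = simulate_patterns(0.1)    # weak category signal before training (assumed)
post = simulate_patterns(0.6)   # stronger signal after training (assumed)

for name, X in [("pre-training", pre), ("post-training", post)]:
    acc = cross_val_score(LinearSVC(), X, labels, cv=5).mean()
    print(f"{name} decoding accuracy: {acc:.2f}")
```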

Processing sound requires acoustic discrimination and tuning changes at the level of the auditory cortex, a process the researchers say is the same in human and animal communication systems. Using monkey calls instead of human speech forced participants to categorize the sounds purely on the basis of acoustics rather than meaning.

"At an evolutionary level, humans and animals need to understand who is friend and who is foe, and sight and sound are integral to these judgments," Riesenhuber points out.

Highlights
• Human subjects learned to categorize morphed monkey calls
• Training sharpened neural selectivity to auditory features in left auditory cortex
• Training induced auditory category selectivity in lateral prefrontal cortex
• This indicates similar principles of learning in the visual and auditory domains

Summary
Grouping auditory stimuli into common categories is essential for a variety of auditory tasks, including speech recognition. We trained human participants to categorize auditory stimuli from a large novel set of morphed monkey vocalizations. Using fMRI-rapid adaptation (fMRI-RA) and multi-voxel pattern analysis (MVPA) techniques, we gained evidence that categorization training results in two distinct sets of changes: sharpened tuning to monkey call features (without explicit category representation) in left auditory cortex and category selectivity for different types of calls in lateral prefrontal cortex. In addition, the sharpness of neural selectivity in left auditory cortex, as estimated with both fMRI-RA and MVPA, predicted the steepness of the categorical boundary, whereas categorical judgment correlated with release from adaptation in the left inferior frontal gyrus. These results support the theory that auditory category learning follows a two-stage model analogous to the visual domain, suggesting general principles of perceptual category learning in the human brain.
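The "steepness of the categorical boundary" in the summary can be quantified by fitting a logistic psychometric function to categorization judgments along the morph continuum; the slope of the fitted curve measures how sharply responses switch from one category to the other. The Python sketch below uses made-up response proportions purely for illustration.

```python
# Psychometric-curve fit sketch (illustrative; response data are invented).
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    """Probability of choosing category B at morph level x."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

morph_levels = np.linspace(0, 1, 11)
# Hypothetical proportion of "category B" responses at each morph level.
p_b = np.array([0.02, 0.03, 0.05, 0.10, 0.30, 0.55,
                0.75, 0.90, 0.95, 0.97, 0.99])

(x0, k), _ = curve_fit(logistic, morph_levels, p_b, p0=[0.5, 10.0])
print(f"category boundary at morph level {x0:.2f}, steepness {k:.1f}")
```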

Authors: Xiong Jiang, Mark A. Chevillet, Josef P. Rauschecker, Maximilian Riesenhuber


The work was supported by a grant from the National Science Foundation (BCS 0749986).

Authors report having no personal financial interests related to the study.

About Georgetown University Medical Center
Georgetown University Medical Center (GUMC) is an internationally recognized academic medical center with a three-part mission of research, teaching and patient care (through MedStar Health). GUMC's mission is carried out with a strong emphasis on public service and a dedication to the Catholic, Jesuit principle of cura personalis -- or "care of the whole person." The Medical Center includes the School of Medicine and the School of Nursing & Health Studies, both nationally ranked; Georgetown Lombardi Comprehensive Cancer Center, designated as a comprehensive cancer center by the National Cancer Institute; and the Biomedical Graduate Research Organization, which accounts for the majority of externally funded research at GUMC including a Clinical and Translational Science Award from the National Institutes of Health. Connect with GUMC on Facebook (Facebook.com/GUMCUpdate), Twitter (@gumedcenter) and Instagram (@gumedcenter).



Apr 27, 2018




Sixteen participants in this study were trained to categorize monkey communication calls - sounds that mean something to monkeys, but are alien in meaning to humans. Using two other techniques in addition to fMRI-RA and MVPA, the study was able to obtain strong results even from its small sample.
Image credit: Public domain composite.

