Understanding How the Auditory Cortex of the Brain Processes Complex Sounds
Prof. Jonathan Z. Simon
A significant challenge in auditory neuroscience is to understand how speech and other natural sounds are analyzed and encoded in the auditory cortex of the human brain. A major finding is that perception and speech processing are crucially affected by temporal modulations in the acoustic signal. However, identifying the physiological mechanisms that underlie the processing of these perceptually relevant temporal modulations presents a considerable technical challenge.
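To make "temporal modulations" concrete, here is a minimal sketch of how such slow modulations are commonly quantified: take the Hilbert envelope of a waveform and low-pass it below about 20 Hz. The signal and parameter values below are synthetic stand-ins chosen for illustration, not data or code from this research.

```python
# Illustrative only: extract the slow (< 20 Hz) temporal envelope of a sound,
# the kind of modulation referred to above. The "speech-like" test signal is
# synthetic; a real analysis would start from a recorded waveform.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

fs = 16000                          # sampling rate (Hz)
t = np.arange(0, 2.0, 1.0 / fs)     # 2 seconds of signal

# Synthetic stand-in for speech: a 500 Hz carrier whose amplitude is
# modulated at 4 Hz, a rate typical of syllables.
carrier = np.sin(2 * np.pi * 500 * t)
syllable_rate = 4.0                 # Hz, within the < 20 Hz range of interest
signal = (1 + 0.8 * np.sin(2 * np.pi * syllable_rate * t)) * carrier

# The magnitude of the analytic (Hilbert-transformed) signal is the envelope.
envelope = np.abs(hilbert(signal))

# Low-pass at 20 Hz to keep only the slow, perceptually relevant modulations.
b, a = butter(4, 20.0 / (fs / 2), btype="low")
slow_envelope = filtfilt(b, a, envelope)

# The spectrum of the envelope peaks near the 4 Hz modulation rate.
spectrum = np.abs(np.fft.rfft(slow_envelope - slow_envelope.mean()))
freqs = np.fft.rfftfreq(len(slow_envelope), 1.0 / fs)
print(f"dominant modulation rate: {freqs[spectrum.argmax()]:.1f} Hz")
```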
Prof. Jonathan Simon's broad research goal is to understand how the auditory cortex processes complex sounds such as speech and other natural sounds (often in a noisy environment). Specifically, Dr. Simon seeks to understand how acoustic modulations, the building blocks of speech and other natural sounds, are encoded in the auditory cortex.
Because of this focus on speech and higher-order processing, his research primarily uses human rather than animal subjects. To record and analyze real-time neural processing in humans, Dr. Simon uses magnetoencephalography (MEG), a non-invasive technique that records high-speed neural signals from the entire brain. MEG measures the tiny magnetic fields generated by neurons in the brain when they are active (and therefore carrying electrical current). MEG has a temporal resolution of milliseconds and a spatial resolution of millimeters, and it is especially well suited for measuring neural activity in the auditory cortex.
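As an illustration of what an MEG analysis of auditory responses can look like in practice, the sketch below uses the open-source MNE-Python package and its bundled sample dataset; this toolchain and dataset are assumptions made for demonstration, not the lab's own recordings or analysis code. It epochs raw MEG data around auditory stimulus onsets and averages the trials into an evoked response.

```python
# Minimal sketch of an MEG auditory evoked-response analysis using the
# open-source MNE-Python package and its bundled sample dataset.
# Illustrative only; not the Simon lab's own analysis code.
import mne
from mne.datasets import sample

# Download (if needed) and load the sample MEG recording.
data_path = sample.data_path()
raw_fname = data_path / "MEG" / "sample" / "sample_audvis_raw.fif"
raw = mne.io.read_raw_fif(raw_fname, preload=True)

# Find stimulus triggers; in this dataset, event IDs 1 and 2 mark the
# auditory tones presented to the left and right ear.
events = mne.find_events(raw, stim_channel="STI 014")
event_id = {"auditory/left": 1, "auditory/right": 2}

# Epoch the MEG channels from 100 ms before to 400 ms after each tone,
# baseline-correcting on the pre-stimulus interval.
epochs = mne.Epochs(raw, events, event_id, tmin=-0.1, tmax=0.4,
                    baseline=(None, 0), picks="meg")

# Averaging across trials yields the auditory evoked field, whose main
# peak (roughly 100 ms after onset) originates in auditory cortex.
evoked = epochs.average()
evoked.plot()
```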
Dr. Simon's research topics span these general areas:
Human auditory responses to speech: Using independent component analysis (a blind source separation technique for identifying underlying sources when only their mixtures can be measured), the primary neural generators of responses to speech can be isolated and localized. But how are the response patterns determined by the speech stimuli? (A toy source-separation sketch follows this list.)
Human auditory responses to speech-like modulations: How does human auditory cortex respond, millisecond by millisecond, to modulated sounds at the rates and bandwidths prevalent in speech (< 20 Hz and > 1 octave)? How does it respond to sounds co-modulated in both amplitude and frequency (AM/FM), as is common in speech and other natural sounds? (A stimulus-synthesis sketch also follows the list.)
Auditory scene analysis: How does the brain determine that a complex set of acoustical features is actually due to a single auditory source? Why and how are identical sounds encoded differently depending on whether the sound is perceived as a foreground vs. background sound?
Spatial hearing: How does the brain decide what the spatial location of a sound source is? Assuming we have an internal map of the outside world, how do we assign a location to a sound based on its spectral and temporal properties? More fundamentally, how is that internal map computed in the first place?
Connections between human data and animal data: What are the properties of the neural networks within auditory cortex that allow the large scale processing of sounds? What are the single neuron and network correlates of measurements made in humans, and vice versa?
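The independent component analysis mentioned under "Human auditory responses to speech" can be illustrated with a toy example. The sketch below substitutes scikit-learn's FastICA and simulated two-channel mixtures for real MEG sensor data; it shows only the basic idea of recovering source waveforms when just their mixtures are measured.

```python
# Toy illustration of independent component analysis (ICA): recover
# unknown source signals from measured mixtures. Simulated data stands in
# for real MEG sensor recordings; this is not the lab's actual pipeline.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)

# Two hidden "sources": a slow sinusoid and a faster square wave.
s1 = np.sin(2 * np.pi * 5 * t)            # 5 Hz component
s2 = np.sign(np.sin(2 * np.pi * 13 * t))  # 13 Hz square wave
sources = np.c_[s1, s2]

# Each "sensor" records a mixture of the sources (as MEG sensors record
# summed fields from many neural generators), plus a little noise.
mixing = np.array([[1.0, 0.5],
                   [0.4, 1.2]])
observed = sources @ mixing.T + 0.05 * rng.standard_normal((len(t), 2))

# FastICA estimates an unmixing that makes the outputs statistically
# independent, recovering the sources up to scale and permutation.
ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(observed)

# Correlate recovered components with the true sources to check separation.
for i in range(2):
    corr = max(abs(np.corrcoef(recovered[:, i], sources[:, j])[0, 1])
               for j in range(2))
    print(f"component {i}: best |correlation| with a true source = {corr:.2f}")
```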
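For the "speech-like modulations" questions above, the following sketch synthesizes a tone co-modulated in amplitude and frequency at a slow, speech-like rate. The carrier frequency, modulation rate, and modulation depths are arbitrary illustrative choices, not the stimuli of any particular experiment.

```python
# Synthesize a stimulus co-modulated in amplitude and frequency (AM/FM)
# at a slow, speech-like rate (< 20 Hz). Parameter values are illustrative.
import numpy as np

fs = 44100                      # audio sampling rate (Hz)
t = np.arange(0, 1.0, 1.0 / fs)

f_carrier = 1000.0              # carrier frequency (Hz)
f_mod = 8.0                     # shared AM/FM modulation rate (Hz)
am_depth = 0.7                  # amplitude-modulation depth (0..1)
fm_depth = 200.0                # peak frequency deviation (Hz)

# Frequency modulation: integrate the instantaneous frequency to get phase.
inst_freq = f_carrier + fm_depth * np.sin(2 * np.pi * f_mod * t)
phase = 2 * np.pi * np.cumsum(inst_freq) / fs

# Amplitude modulation at the same rate, so AM and FM are co-modulated.
envelope = 1.0 + am_depth * np.sin(2 * np.pi * f_mod * t)

stimulus = envelope * np.sin(phase)
stimulus /= np.abs(stimulus).max()   # normalize to avoid clipping
```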
For this research, Dr. Simon was awarded a five-year grant from the National Institutes of Health (NIH) titled "The Neural Basis of Perceptually-Relevant Auditory Modulations in Humans." The grant is worth approximately $1.2 million.
More information about this research can be found at: http://www.isr.umd.edu/Labs/CSSL/simonlab/