A.V. Williams 2209
Jonathan Z. Simon is an Associate Professor at the University of Maryland, College Park, jointly in the Department of Electrical and Computer Engineering and the Department of Biology. He is a member of the Program in Neuroscience and Cognitive Science (NACS) and the Program in Bioengineering, and an affiliate member of the Institute for Systems Research. His expertise is in applied and theoretical neuroscience. He earned his doctorate in physics from the University of California, Santa Barbara, and did postdoctoral research in theoretical general relativity (University of Wisconsin-Milwaukee and University of Maryland-College Park) before embracing the field of neuroscience.
My broad research goal is to understand how the auditory cortex processes complex sounds such as speech and other natural sounds. Because of my focus on speech and higher-order processing, my research uses human rather than animal subjects. To non-invasively record and analyze real-time neural processing in humans, I use magnetoencephalography (MEG), because of its high temporal resolution (milliseconds) and reasonable spatial resolution (millimeters).
Human auditory responses to speech: Using independent component analysis, the primary neural generators of responses to speech can be isolated and localized, something not previously achieved in humans. But how are the response patterns determined by the speech stimuli?
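The core idea behind this use of independent component analysis, separating linearly mixed sensor recordings back into statistically independent sources, can be illustrated in miniature. The sketch below is not the actual MEG pipeline; it uses two synthetic non-Gaussian "sources", a made-up 2x2 mixing matrix, and a brute-force rotation search (real analyses use FastICA or similar on many sensor channels):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
t = np.linspace(0, 1, n)

# Two independent, non-Gaussian sources standing in for neural generators
s1 = np.sign(np.sin(2 * np.pi * 7 * t))   # square wave (sub-Gaussian)
s2 = rng.laplace(size=n)                  # heavy-tailed noise (super-Gaussian)
S = np.vstack([s1, s2])

# Linear mixing, analogous to MEG sensors summing the underlying generators
A = np.array([[1.0, 0.6], [0.4, 1.0]])    # hypothetical mixing matrix
X = A @ S

# Whiten the "sensor" data: zero mean, identity covariance
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Z = np.diag(d ** -0.5) @ E.T @ X

def rotation(a):
    return np.array([[np.cos(a), -np.sin(a)],
                     [np.sin(a),  np.cos(a)]])

def excess_kurtosis(y):
    return np.mean(y ** 4) / np.mean(y ** 2) ** 2 - 3.0

# After whitening, ICA reduces to finding the rotation that maximizes
# non-Gaussianity of the outputs; in 2-D a brute-force search suffices
angles = np.linspace(0.0, np.pi / 2, 500)
best = max(angles, key=lambda a: sum(abs(excess_kurtosis(y))
                                     for y in rotation(a) @ Z))
Y = rotation(best) @ Z  # recovered sources (up to order, sign, and scale)
```

Each row of `Y` should correlate strongly with one of the original sources, which is the sense in which ICA "isolates" generators it was never told about.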
Human auditory responses to speechlike modulations: What are the temporal response properties of human auditory cortex to sounds modulated at rates and bandwidths prevalent in speech (< 20 Hz and > 1 octave)? What are the temporal response properties of human auditory cortex to sounds co-modulated in both amplitude and frequency (AM/FM), as is common in speech and natural sounds?
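A stimulus of the kind described, a carrier whose amplitude and frequency are co-modulated at a slow, speechlike rate, is straightforward to synthesize. The parameter values below (carrier, modulation rate, depths) are illustrative choices, not taken from the experiments:

```python
import numpy as np

fs = 16000                  # sample rate (Hz)
t = np.arange(fs) / fs      # 1 s of samples
fc = 1000.0                 # carrier frequency (Hz)
fm = 4.0                    # modulation rate (Hz), within the < 20 Hz speech range

# Amplitude envelope: slow modulation, like the syllable rhythm of speech
env = 1.0 + 0.8 * np.sin(2 * np.pi * fm * t)

# Frequency modulation at the same slow rate, so amplitude and frequency
# are co-modulated (coherent AM/FM); instantaneous frequency is
# fc + fdev * cos(2*pi*fm*t), hence the sin term in the phase integral
fdev = 200.0                # peak frequency deviation (Hz)
phase = 2 * np.pi * fc * t + (fdev / fm) * np.sin(2 * np.pi * fm * t)
x = env * np.sin(phase)     # the AM/FM co-modulated stimulus
```

Setting `fdev = 0` yields a pure-AM control, and flattening `env` yields a pure-FM control, which is how co-modulation effects can be isolated.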
Connection between Human data and animal data: What are the properties of the neural networks within auditory cortex that allow the large scale processing of sounds? What are the single neuron and network correlates of measurements made in humans, and vice versa?
Auditory scene analysis: Why and how are identical sounds encoded differently depending on whether the sound is perceived as a foreground or a background sound?
Binaural processing in humans: How are sounds that are intrinsically binaural processed differently than those that are essentially monaural, even when the sounds are perceptually similar? What is the neural correlate of the detection of a binaural auditory object, and how is it different from the neural correlate of the disappearance of a binaural auditory object?
Spotlight on Research: Understanding How the Auditory Cortex of the Brain Processes Complex Sounds