Ph.D. Dissertation Defense: Majid Mirbagheri

Monday, June 16, 2014
4:00 p.m.
2224 AVW
Maria Hoo
301 405 3681
mch@umd.edu

ANNOUNCEMENT: Ph.D. Dissertation Defense
 
Name: Majid Mirbagheri
 
Committee:
Professor Shihab Shamma, Chair
Professor Carol Espy-Wilson
Professor Timothy Horiuchi
Professor Mounya Elhilali
Professor Ramani Duraiswami, Dean's Representative
 
Date/Time: Monday, June 16, 2014 at 4:00 p.m.
 
Location: 2224 AVW
 
Title: Speech Enhancement Inspired by Auditory System
Abstract
 
Enhancing the quality of speech in noisy environments has been an active area of research, owing to the abundance of applications that deal with the human voice and whose performance depends on that quality. While the original approaches in the field mostly addressed this problem in a purely statistical framework, in which the goal was to estimate speech from its sum with other, independent processes (noise), during the last decade the attention of the scientific community has turned to the functionality of the human auditory system. Considerable effort has been devoted to bridging the gap between the performance of speech processing algorithms and that of the average human listener by borrowing models proposed for sound processing in the auditory system.
 
In this thesis, we will introduce algorithms for speech enhancement inspired by two of these models, i.e., the cortical representation of sounds and the hypothesized role of temporal coherence in the auditory scene analysis problem. After an introduction to the auditory system and to the general speech enhancement framework, we will first show how traditional speech enhancement techniques, such as Wiener filtering, can benefit at the feature extraction level from the discriminatory capabilities of the spectrotemporal representation of sounds in the cortex, i.e., the cortical model.
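For illustration only, the following is a minimal sketch (in Python with NumPy) of a Wiener-type gain applied to a time-frequency power representation. The noise estimate taken from leading noise-only frames and all variable names are assumptions for the sketch; the cortical (spectrotemporal) feature stage of the thesis is not modeled here.

import numpy as np

def wiener_gain(noisy_power, noise_psd, floor=0.05):
    """Classic Wiener gain on a time-frequency power representation.

    noisy_power : (freq, time) power spectrogram of the noisy signal
    noise_psd   : (freq,) estimated noise power spectral density
    floor       : minimum gain, to limit musical-noise artifacts
    """
    # A-priori SNR estimated by simple power-spectral subtraction.
    snr = np.maximum(noisy_power - noise_psd[:, None], 0.0) / (noise_psd[:, None] + 1e-12)
    gain = snr / (1.0 + snr)            # Wiener gain = SNR / (1 + SNR)
    return np.maximum(gain, floor)

# Toy usage: pretend the first 10 frames are noise-only and estimate the PSD from them.
rng = np.random.default_rng(0)
noisy_power = rng.random((257, 100))    # stand-in for |STFT|^2 of noisy speech
noise_psd = noisy_power[:, :10].mean(axis=1)
enhanced_power = wiener_gain(noisy_power, noise_psd) * noisy_power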
 
We will next focus on feature processing, as opposed to the feature extraction stage, in speech enhancement systems by taking advantage of models hypothesized for human attention in sound segregation. We demonstrate a mask-based enhancement method in which the temporal coherence of features is used as a criterion to infer information about their sources and, more specifically, to form the masks needed to suppress the noise.
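As a rough illustration of coherence-driven masking (not the attention model of the thesis), the sketch below correlates each feature channel's envelope over time with a hypothetical reference envelope assumed to track the target source, and keeps only the channels that co-vary with it.

import numpy as np

def coherence_mask(envelopes, reference, threshold=0.5):
    """Binary mask from temporal coherence with a reference envelope.

    envelopes : (channels, time) feature envelopes of the mixture
    reference : (time,) envelope assumed to track the target source
    threshold : correlation above which a channel is kept
    """
    ref = (reference - reference.mean()) / (reference.std() + 1e-12)
    env = envelopes - envelopes.mean(axis=1, keepdims=True)
    env = env / (env.std(axis=1, keepdims=True) + 1e-12)
    # Per-channel Pearson correlation with the reference over time.
    corr = (env * ref[None, :]).mean(axis=1)
    return (corr > threshold).astype(float)[:, None]  # broadcastable over time

# Toy usage: channels 0-19 follow the reference, the rest are independent noise.
rng = np.random.default_rng(1)
reference = rng.random(200)
coherent = reference[None, :] + 0.1 * rng.standard_normal((20, 200))
incoherent = rng.random((20, 200))
envelopes = np.vstack([coherent, incoherent])
mask = coherence_mask(envelopes, reference)   # keeps the coherent channels
masked = mask * envelopes                     # suppresses the incoherent ones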
 
Lastly, we explore how the above two ideas, for feature extraction and feature manipulation, can be merged into a single framework for separating noise from speech. We do this through regularized non-negative matrix factorization (NMF), which optimizes the feature extraction while simultaneously accounting for temporal dynamics.
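The sketch below illustrates regularized NMF in its simplest form: Euclidean multiplicative updates with an L1 (sparsity) penalty on the activations and a fixed, pre-trained speech dictionary. The specific regularizers and temporal-dynamics terms of the thesis are not reproduced, and all names and settings are illustrative.

import numpy as np

def regularized_nmf(V, W_speech, n_noise=8, sparsity=0.1, n_iter=200, seed=0):
    """Separate a magnitude spectrogram V using a fixed speech dictionary.

    V        : (freq, time) nonnegative magnitude spectrogram
    W_speech : (freq, k_s) pre-trained speech basis (kept fixed)
    n_noise  : number of noise basis vectors learned from the mixture
    sparsity : L1 penalty on the activations (encourages few active atoms)
    """
    rng = np.random.default_rng(seed)
    F, T = V.shape
    W_noise = rng.random((F, n_noise)) + 1e-3
    W = np.hstack([W_speech, W_noise])
    H = rng.random((W.shape[1], T)) + 1e-3
    eps = 1e-12
    for _ in range(n_iter):
        # Multiplicative update for Euclidean NMF with an L1 term on H.
        H *= (W.T @ V) / (W.T @ (W @ H) + sparsity + eps)
        WH = W @ H
        # Update only the noise part of the dictionary; speech bases stay fixed.
        W_noise *= (V @ H[-n_noise:].T) / (WH @ H[-n_noise:].T + eps)
        W = np.hstack([W_speech, W_noise])
    # Wiener-like reconstruction of the speech component from the factorization.
    speech_hat = (W_speech @ H[:-n_noise]) / (W @ H + eps) * V
    return speech_hat, W_noise, H

# Toy usage with random data and a random stand-in "speech" dictionary.
rng = np.random.default_rng(2)
V = rng.random((257, 120))
W_speech = rng.random((257, 16))
speech_hat, W_noise, H = regularized_nmf(V, W_speech)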
 
 
 
