Event Information

Ph.D. Dissertation Defense: Majid Mirbagheri
Thursday, June 26, 2014
3:00 p.m.
2224 AVW
For More Information:
Maria Hoo
301 405 3681
mch@umd.edu

ANNOUNCEMENT: Ph.D. Dissertation Defense
 
Name: Majid Mirbagheri
 
Committee:
Professor Shihab Shamma, Chair
Professor Carol Espy-Wilson
Professor Timothy Horiuchi
Professor Mounya Elhilali
Professor Yiannis Aloimonos, Dean's Representative
 
Date/Time: Thursday, June 26, 2014 at 3 pm
 
Location: 2224 AVW
 
Title: Single-Microphone Speech Enhancement Inspired by Auditory System
 
Abstract
 
Enhancing the quality of speech in noisy environments has been an active area of research, owing to the abundance of applications that deal with the human voice and whose performance depends on that quality. Early approaches addressed the problem in a purely statistical framework, estimating speech from its sum with other independent processes (noise). Over the last decade, however, the attention of the scientific community has turned to the functionality of the human auditory system, and much effort has gone into bridging the gap between the performance of speech processing algorithms and that of the average human listener by borrowing models proposed for sound processing in the auditory system.
 
In this thesis, we introduce algorithms for speech enhancement inspired by two of these models: the cortical representation of sounds and the hypothesized role of temporal coherence in auditory scene analysis. After an introduction to the auditory system and the general speech enhancement framework, we first show how traditional speech enhancement techniques such as Wiener filtering can benefit, at the feature extraction level, from the discriminatory capabilities of the spectro-temporal representation of sounds in the cortex, i.e., the cortical model.
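As a rough illustration of the Wiener-filtering step mentioned above, here is a minimal sketch of the classical Wiener gain applied per frequency bin. The function name and the use of an instantaneous a posteriori SNR are simplifications for illustration, not the thesis's cortical-feature formulation:

```python
import numpy as np

def wiener_gain(noisy_power, noise_power, floor=1e-3):
    # Classical Wiener gain SNR/(1+SNR), with the prior SNR replaced
    # by a (floored) instantaneous a posteriori SNR estimate.
    snr = np.maximum(noisy_power / np.maximum(noise_power, 1e-12) - 1.0, 0.0)
    return np.maximum(snr / (1.0 + snr), floor)

# Toy spectral frame: one speech-dominated bin, two noise-dominated bins.
noisy_power = np.array([4.0, 1.0, 0.25])
noise_power = np.array([1.0, 1.0, 1.0])
g = wiener_gain(noisy_power, noise_power)  # high gain only where SNR is high
```

In a full system this gain would be computed on frames of a time-frequency decomposition and applied before resynthesis; the cortical model enters by replacing the raw spectral bins with richer spectro-temporal features.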
 
We next focus on the feature processing stage, as opposed to feature extraction, by taking advantage of models hypothesized for the role of human attention in sound segregation. We demonstrate a mask-based enhancement method in which the temporal coherence of features serves as a criterion to infer information about their sources and, more specifically, to form the masks needed to suppress the noise.
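The temporal-coherence idea can be sketched as follows: feature channels whose envelopes co-vary with a reference stream are assumed to belong to the same source and are kept by the mask. This toy version (the names and threshold are illustrative, not from the thesis) uses plain correlation over a whole signal rather than the windowed, running coherence a real system would use:

```python
import numpy as np

def coherence_mask(features, ref, threshold=0.5):
    # Keep channels whose temporal envelope correlates with the
    # reference stream; everything else is treated as noise.
    mask = np.zeros(features.shape[0])
    for i, ch in enumerate(features):
        c = np.corrcoef(ch, ref)[0, 1]
        mask[i] = max(c, 0.0)
    return (mask >= threshold).astype(float)

rng = np.random.default_rng(0)
t = np.arange(200)
target = np.sin(2 * np.pi * t / 50)                  # reference envelope
coherent = target + 0.1 * rng.standard_normal(200)   # tracks the target
incoherent = rng.standard_normal(200)                # unrelated channel
m = coherence_mask(np.stack([coherent, incoherent]), target)
```

The coherent channel survives the mask while the unrelated one is zeroed out; suppressing the masked channels before resynthesis is what removes the noise.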
 
Lastly, we explore how the two blocks for feature extraction and manipulation can be merged into one in a manner consistent with our knowledge of the auditory system. We do this through regularized non-negative matrix factorization (NMF), optimizing the feature extraction while simultaneously accounting for temporal dynamics to separate noise from speech.
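A bare-bones version of the NMF building block might look like the following. This is plain multiplicative-update NMF with a Euclidean cost; the regularization and temporal-dynamics terms the thesis describes are omitted, so treat it only as a sketch of the factorization core:

```python
import numpy as np

def nmf(V, rank, n_iter=200, eps=1e-9, seed=0):
    # Plain multiplicative-update NMF (Euclidean cost): V ~ W @ H with
    # W, H >= 0. No regularization or temporal-dynamics penalty here.
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

rng = np.random.default_rng(1)
V = rng.random((20, 2)) @ rng.random((2, 30))  # exactly rank-2, non-negative
W, H = nmf(V, rank=2)
```

In a speech/noise setting, some columns of W would model speech bases and others noise bases, and the speech estimate would be reconstructed from the speech part of the factorization alone.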
 
 

This Event is For: Graduate • Faculty
