M.S. (Master of Science)
Department of Electrical Engineering
The purpose of this thesis is to implement and improve an existing speaker recognition system. The proposed system cooperates with automatic speech recognition to automatically generate a transcript for a recorded dialog file. Today's speech recognition technology is widely used in many application areas; however, speaker recognition technology needs improvement before it can serve this purpose in noisy environments. When processing a recording of a multi-person conversation, manual speaker identification is often required. The proposed technique can automatically determine the identity of each speaker, saving manpower, and it has potential applications in many other fields.
This thesis uses active noise cancellation and a Wiener filter to enhance the speech signal. Several feature extraction methods, such as the short-term autocorrelation function, mel-frequency cepstral coefficients (MFCC), and linear prediction coefficients (LPC), are used to extract patterns from the enhanced signal. These features are then fed to three classifiers: a Gaussian mixture speaker classifier, an artificial neural network (ANN), and an innovative nearest sliding neighborhood algorithm.
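To make the first of these features concrete, here is a minimal NumPy sketch of the short-term autocorrelation function applied to pitch estimation on a single voiced frame. The function names, frame length, and pitch-range parameters are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def short_term_autocorr(frame, max_lag):
    """Normalized short-term autocorrelation r[k] = sum_n x[n] x[n+k], scaled by r[0]."""
    r = np.array([np.dot(frame[:len(frame) - k], frame[k:])
                  for k in range(max_lag + 1)])
    return r / r[0]

def estimate_pitch(frame, fs, fmin=80.0, fmax=400.0):
    """Pick the lag with the strongest autocorrelation peak inside the pitch range."""
    lo, hi = int(fs / fmax), int(fs / fmin)   # candidate lags in samples
    r = short_term_autocorr(frame, hi)
    best_lag = lo + int(np.argmax(r[lo:hi + 1]))
    return fs / best_lag

fs = 8000
t = np.arange(400) / fs                       # one 50 ms analysis frame
frame = np.sin(2 * np.pi * 200 * t)           # toy "voiced" frame, f0 = 200 Hz
pitch = estimate_pitch(frame, fs)             # should recover about 200 Hz
```

A periodic (voiced) frame produces a strong autocorrelation peak at the lag equal to its pitch period, which is what makes this feature useful for speaker-dependent pitch statistics.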
We use real data collected from human subjects to test the performance of the proposed system. The Wiener filter and iterative adaptive noise cancellation successfully reduce the ambient noise by 21 dB. The experimental results show that the nearest sliding neighborhood algorithm achieves the highest classification rates: 93.04% for gender recognition and 76.32% for speaker recognition.
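The noise reduction step above can be sketched with a simple frame-by-frame spectral Wiener filter. This is a generic textbook formulation, not the thesis's exact pipeline: the frame size, gain rule, and the idea of estimating the noise spectrum from a noise-only stretch are all assumptions for illustration.

```python
import numpy as np

def wiener_denoise(noisy, noise_psd, frame=256):
    """Per-frame spectral Wiener gain: G(f) = max(1 - N(f)/|X(f)|^2, 0)."""
    out = np.zeros_like(noisy)
    for start in range(0, len(noisy) - frame + 1, frame):
        X = np.fft.rfft(noisy[start:start + frame])
        power = np.abs(X) ** 2
        gain = np.maximum(1.0 - noise_psd / np.maximum(power, 1e-12), 0.0)
        out[start:start + frame] = np.fft.irfft(gain * X, n=frame)
    return out

rng = np.random.default_rng(0)
fs = 8000
t = np.arange(fs) / fs                          # 1 s of audio
clean = np.sin(2 * np.pi * 440 * t)             # toy "speech" tone
noisy = clean + 0.5 * rng.standard_normal(fs)   # additive white noise

# Estimate the noise power spectrum from a noise-only stretch
# (in practice this would come from silent segments of the recording).
noise_only = 0.5 * rng.standard_normal(fs)
frames = noise_only[: (fs // 256) * 256].reshape(-1, 256)
noise_psd = np.mean(np.abs(np.fft.rfft(frames, axis=1)) ** 2, axis=0)

denoised = wiener_denoise(noisy, noise_psd)
mse_before = np.mean((noisy - clean) ** 2)
mse_after = np.mean((denoised - clean) ** 2)    # error drops after filtering
```

The gain suppresses frequency bins where the estimated noise power dominates and passes bins where the signal dominates, which is the core idea behind the speech enhancement front end described above.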
Wang, Yunlong, "Implementation and Improvement of Common Text-independent Speaker Identification" (2020). Graduate Research Theses & Dissertations. 7764.
Northern Illinois University
Rights Statement
NIU theses are protected by copyright. They may be viewed from Huskie Commons for any purpose, but reproduction or distribution in any format is prohibited without the written permission of the authors.