ISBN-13: 9781461463597 / English / Paperback / 2013 / 118 pages
In this brief, the authors discuss recently explored spectral features (sub-segmental and pitch synchronous) and prosodic features (global and local features at the word and syllable levels in different parts of the utterance) for discerning emotions in a robust manner. The authors also examine the complementary evidence obtained from excitation source, vocal tract system, and prosodic features for enhancing emotion recognition performance. Features based on speaking rate characteristics are explored with the help of multi-stage and hybrid models to further improve emotion recognition performance. The proposed spectral and prosodic features are evaluated on a real-life emotional speech corpus.
1 Introduction
  1.1 Introduction
  1.2 Emotion from psychological perspective
  1.3 Emotion from speech signal perspective
    1.3.1 Speech production mechanism
    1.3.2 Source features
    1.3.3 System features
    1.3.4 Prosodic features
  1.4 Emotional speech databases
  1.5 Applications of speech emotion recognition
  1.6 Issues in speech emotion recognition
  1.7 Objectives and scope of the work
  1.8 Main highlights of research investigations
  1.9 Brief overview of contributions in this book
    1.9.1 Emotion recognition using spectral features extracted from sub-syllabic regions and pitch synchronous analysis
    1.9.2 Emotion recognition using global and local prosodic features extracted from words and syllables
    1.9.3 Emotion recognition using combination of features
    1.9.4 Emotion recognition on real life emotional speech database
  1.10 Organization of the book
2 Robust Emotion Recognition using Pitch Synchronous and Sub-syllabic Spectral Features
  2.1 Introduction
  2.2 Emotional speech corpora
    2.2.1 Indian Institute of Technology Kharagpur - Simulated Emotional Speech Corpus: IITKGP-SESC
    2.2.2 Berlin Emotional Speech Database: Emo-DB
  2.3 Feature extraction
    2.3.1 Linear prediction cepstral coefficients (LPCCs)
    2.3.2 Mel frequency cepstral coefficients (MFCCs)
    2.3.3 Formant features
    2.3.4 Extraction of sub-syllabic spectral features
    2.3.5 Pitch synchronous analysis
  2.4 Classifiers
    2.4.1 Gaussian mixture models (GMM)
    2.4.2 Auto-associative neural networks
  2.5 Results and discussion
  2.6 Summary
3 Robust Emotion Recognition using Word and Syllable Level Prosodic Features
  3.1 Introduction
  3.2 Prosodic features: Importance in emotion recognition
  3.3 Motivation
  3.4 Extraction of global and local prosodic features
    3.4.1 Sentence level features
    3.4.2 Word and syllable level features
  3.5 Results and discussion
    3.5.1 Emotion recognition systems using sentence level prosodic features
    3.5.2 Emotion recognition systems using word level prosodic features
    3.5.3 Emotion recognition systems using syllable level prosodic features
  3.6 Summary
4 Robust Emotion Recognition using Combination of Excitation Source, Spectral and Prosodic Features
  4.1 Introduction
  4.2 Feature combination: A study
  4.3 Emotion recognition using combination of excitation source and vocal tract system features
  4.4 Emotion recognition using combination of vocal tract system and prosodic features
  4.5 Emotion recognition using combination of excitation source and prosodic features
  4.6 Emotion recognition using combination of excitation source, system and prosodic features
  4.7 Summary
5 Robust Emotion Recognition using Speaking Rate Features
  5.1 Introduction
  5.2 Motivation
  5.3 Two stage emotion recognition system
  5.4 Gross level emotion recognition
  5.5 Finer level emotion recognition
  5.6 Summary
6 Emotion Recognition on Real Life Emotions
  6.1 Introduction
  6.2 Real life emotion speech corpus
  6.3 Recognition performance on real life emotions
  6.4 Summary
7 Summary and Conclusions
  7.1 Summary of the present work
  7.2 Contributions of the present work
  7.3 Conclusions from the present work
  7.4 Scope for future work
A MFCC Features
B Gaussian Mixture Model (GMM)
  B.1 Training the GMMs
    B.1.1 Expectation Maximization (EM) Algorithm
    B.1.2 Maximum a posteriori (MAP) Adaptation
  B.2 Testing
References
K. Sreenivasa Rao is at the Indian Institute of Technology Kharagpur, India.
Shashidhar G. Koolagudi is at Graphic Era University, Dehradun, India.