Utterance Clustering Using Stereo Audio Channels
Open Access
- 25 September 2021
- Research article (journal)
- Published by Hindawi Limited in Computational Intelligence and Neuroscience
- Vol. 2021, 1-8
- https://doi.org/10.1155/2021/6151651
Abstract
Utterance clustering is an actively researched topic in audio signal processing and machine learning. This study aims to improve utterance clustering performance by processing multichannel (stereo) audio signals. Processed audio signals were generated by combining the left- and right-channel signals in several ways and then extracting embedded features (d-vectors) from the combined signals. The study applied a Gaussian mixture model (GMM) for supervised utterance clustering: in the training phase, a parameter-sharing GMM was fit for each speaker, and in the testing phase, the speaker whose model yielded the maximum likelihood was selected as the detected speaker. Experiments with real audio recordings of multiperson discussion sessions showed that the proposed method using multichannel audio signals achieved significantly better performance than a conventional method using mono audio signals under more complex conditions.
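The pipeline described in the abstract (combine stereo channels, extract per-utterance embeddings, fit one GMM per speaker, then pick the maximum-likelihood speaker at test time) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the random "d-vectors" stand in for embeddings from a speaker-embedding network, the simple channel average is only one of the combination strategies the paper mentions, and all function names and parameters here are hypothetical.

```python
# Hypothetical sketch of per-speaker GMM scoring over d-vector embeddings.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

def combine_stereo(left, right):
    # One simple way to merge channels (the paper explores several).
    return 0.5 * (left + right)

# Stand-in d-vectors: in practice these come from a speaker-embedding
# network applied to the combined stereo signal.
train_dvectors = {
    "A": rng.normal(0.0, 1.0, size=(50, 8)),
    "B": rng.normal(3.0, 1.0, size=(50, 8)),
}

# Training phase: fit one GMM per speaker on that speaker's d-vectors.
models = {
    spk: GaussianMixture(n_components=2, covariance_type="diag",
                         random_state=0).fit(X)
    for spk, X in train_dvectors.items()
}

def detect_speaker(d_vector):
    # Testing phase: choose the speaker whose GMM assigns the
    # highest log-likelihood to the test d-vector.
    scores = {spk: m.score(d_vector.reshape(1, -1))
              for spk, m in models.items()}
    return max(scores, key=scores.get)

print(detect_speaker(np.full(8, 3.0)))  # should select "B"
```

The paper's "parameter-sharing" aspect (tying some GMM parameters across speakers) is not shown here; each speaker gets an independent model for simplicity.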
Funding Information
- Army Research Institute for the Behavioral and Social Sciences (W911NF-17-1-0221)