Phase-sensitive and recognition-boosted speech separation using deep recurrent neural networks
- 1 April 2015
- conference paper
- Published by Institute of Electrical and Electronics Engineers (IEEE)
Abstract
Separation of speech embedded in non-stationary interference is a challenging problem that has recently seen dramatic improvements using deep network-based methods. Previous work has shown that estimating a masking function to be applied to the noisy spectrum is a viable approach that can be improved by using a signal-approximation based objective function. Better modeling of dynamics through deep recurrent networks has also been shown to improve performance. Here we pursue both of these directions. We develop a phase-sensitive objective function based on the signal-to-noise ratio (SNR) of the reconstructed signal, and show that in experiments it yields uniformly better results in terms of signal-to-distortion ratio (SDR). We also investigate improvements to the modeling of dynamics, using bidirectional recurrent networks, as well as by incorporating speech recognition outputs in the form of alignment vectors concatenated with the spectral input features. Both methods yield further improvements, pointing to tighter integration of recognition with separation as a promising future direction.
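The phase-sensitive objective mentioned in the abstract can be illustrated with a minimal NumPy sketch. The idea is to apply the estimated mask to the noisy magnitude spectrum and compare against the clean magnitude scaled by the cosine of the phase difference between clean and noisy signals, so that phase mismatch reduces the achievable target. The function name and exact formulation below are an illustrative assumption, not code from the paper:

```python
import numpy as np

def phase_sensitive_loss(mask, noisy_stft, clean_stft):
    """Sketch of a phase-sensitive spectrum-approximation loss.

    mask:       real-valued mask estimate, same shape as the STFTs
    noisy_stft: complex STFT of the noisy mixture
    clean_stft: complex STFT of the clean target speech
    """
    # Phase difference between clean target and noisy mixture.
    theta = np.angle(clean_stft) - np.angle(noisy_stft)
    # Phase-sensitive target: clean magnitude shrunk by cos(theta).
    target = np.abs(clean_stft) * np.cos(theta)
    # Masked noisy magnitude is the estimate.
    estimate = mask * np.abs(noisy_stft)
    return np.mean((estimate - target) ** 2)
```

Under this formulation, the loss is minimized by a mask that projects the clean signal onto the noisy signal's phase, rather than by simply matching magnitudes.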