
Encoding navigable speech sources: a psychoacoustic-based analysis-by-synthesis approach

Journal Article



Abstract


  • This paper presents a psychoacoustic-based analysis-by-synthesis approach for compressing navigable speech sources. The approach targets multi-party teleconferencing applications, where selective reproduction of individual speech sources is desired. By exploiting the sparsity of speech in the perceptual time-frequency domain, multiple speech signals are encoded into one mono mixture signal, which can be further compressed using a standard speech codec. Side information indicating the active speech source at each time-frequency instant enables flexible decoding and reproduction. Objective results highlight the importance of considering perception when exploiting the sparse nature of speech in the time-frequency domain. Results show that this sparsity, as measured by the preserved energy level of perceptually important time-frequency components extracted from mixtures of speech signals, is similar in both anechoic and reverberant environments. The proposed approach is applied to a series of simulated and real reverberant speech recordings, where the resulting speech mixtures are compressed using a standard speech codec operating at 32 kbps. The perceptual quality, as judged by both objective and subjective evaluations, exceeds that of a simple sparsity approach that does not consider perception, as well as that of an approach encoding each source separately. The perceptual quality of individual speech sources is maintained, and subjective tests confirm that the approach also preserves the perceptual quality of the spatialized speech scene. © 2012 IEEE.
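The encoding idea in the abstract can be sketched in a few lines: transform each source to the time-frequency domain, keep at each bin only the coefficient of the dominant source, and record which source was active as side information; decoding a source then applies the corresponding binary mask to the mixture. This is a minimal sketch, not the authors' implementation: the function names (`encode_mixture`, `decode_source`) are hypothetical, a plain STFT stands in for the perceptual time-frequency analysis, and simple energy dominance stands in for the paper's psychoacoustic importance criterion.

```python
# Minimal sketch of sparsity-based mono mixing with per-bin side information.
# NOT the paper's method: energy dominance replaces the psychoacoustic
# criterion, and a plain STFT replaces the perceptual T-F transform.
import numpy as np
from scipy.signal import stft, istft

def encode_mixture(sources, fs=16000, nperseg=512):
    """sources: array of shape (n_sources, n_samples) -> (mixture, side_info)."""
    specs = np.stack([stft(s, fs=fs, nperseg=nperseg)[2] for s in sources])
    # Side information: index of the most energetic source at each T-F bin.
    side_info = np.argmax(np.abs(specs), axis=0)
    # Mono mixture keeps only the dominant source's coefficient per bin.
    mixture_spec = np.take_along_axis(specs, side_info[None], axis=0)[0]
    _, mixture = istft(mixture_spec, fs=fs, nperseg=nperseg)
    return mixture, side_info

def decode_source(mixture, side_info, which, fs=16000, nperseg=512):
    """Approximate one source by masking the mixture with its T-F bins."""
    _, _, spec = stft(mixture, fs=fs, nperseg=nperseg)
    mask = (side_info == which).astype(float)
    # Shapes can differ by a frame after the STFT/ISTFT round trip; trim.
    f = min(spec.shape[0], mask.shape[0])
    t = min(spec.shape[1], mask.shape[1])
    _, est = istft(spec[:f, :t] * mask[:f, :t], fs=fs, nperseg=nperseg)
    return est
```

In the paper the mixture is then passed through a standard speech codec (32 kbps in the evaluations) and the side information is transmitted alongside it, so a receiver can reproduce any individual source or the full spatialized scene.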

Publication Date


  • 2013

Citation


  • X. Zheng, C. Ritz & J. Xi, "Encoding navigable speech sources: a psychoacoustic-based analysis-by-synthesis approach," IEEE Transactions on Audio, Speech and Language Processing, vol. 21, (1) pp. 29-38, 2013.

Scopus EID


  • 2-s2.0-84867976224

RO Full-text URL


  • http://ro.uow.edu.au/cgi/viewcontent.cgi?article=2204&context=eispapers&unstamped=1

RO Metadata URL


  • http://ro.uow.edu.au/eispapers/1195

Number Of Pages


  • 9

Start Page


  • 29

End Page


  • 38

Volume


  • 21

Issue


  • 1
