voc2vec: A Foundation Model for Non-Verbal Vocalization

Koudounas A.; La Quatra M.; Siniscalchi S. M.; Baralis E.
2025-01-01

Abstract

Speech foundation models have demonstrated exceptional capabilities in speech-related tasks. Nevertheless, these models often struggle with non-verbal audio data, such as vocalizations and baby cries, which are critical for various real-world applications. Audio foundation models handle non-speech data well, but they also fail to capture the nuanced features of non-verbal human sounds. In this work, we aim to overcome the above shortcoming and propose a novel foundation model, termed voc2vec, specifically designed for non-verbal human data and pre-trained exclusively on open-source non-verbal audio datasets. We employ a collection of 10 datasets covering around 125 hours of non-verbal audio. Experimental results prove that voc2vec is effective in non-verbal vocalization classification, and it outperforms conventional speech and audio foundation models. Moreover, voc2vec consistently outperforms strong baselines, namely OpenSmile and emotion2vec, on six different benchmark datasets. To the best of the authors' knowledge, voc2vec is the first universal representation model for vocalization tasks.
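To make the intended use of a vocalization representation model concrete, the following is a minimal, hypothetical sketch of linear-probe classification on top of a self-supervised speech encoder such as voc2vec. The checkpoint identifier, file paths, and label set are placeholders and not details confirmed by the paper; the sketch assumes a wav2vec 2.0-compatible checkpoint loadable through Hugging Face transformers.

```python
# Minimal sketch: linear-probe vocalization classification on top of a
# self-supervised speech encoder. The model ID and file paths below are
# PLACEHOLDERS; the actual voc2vec checkpoint name and data layout may differ.
import torch
import torchaudio
from transformers import AutoFeatureExtractor, AutoModel
from sklearn.linear_model import LogisticRegression

MODEL_ID = "voc2vec-base"  # hypothetical identifier, not taken from the paper

extractor = AutoFeatureExtractor.from_pretrained(MODEL_ID)
encoder = AutoModel.from_pretrained(MODEL_ID).eval()

def embed(wav_path: str) -> torch.Tensor:
    """Return an utterance-level embedding by mean-pooling frame features."""
    waveform, sr = torchaudio.load(wav_path)
    waveform = torchaudio.functional.resample(waveform, sr, 16_000).mean(dim=0)
    inputs = extractor(waveform.numpy(), sampling_rate=16_000, return_tensors="pt")
    with torch.no_grad():
        frames = encoder(**inputs).last_hidden_state  # (1, T, D)
    return frames.mean(dim=1).squeeze(0)              # (D,)

# Placeholder data: (audio file, label) pairs, e.g. "laugh", "cry", "cough".
train = [("laugh_01.wav", "laugh"), ("cry_01.wav", "cry")]
test = [("cough_01.wav", "cough")]

X_train = torch.stack([embed(path) for path, _ in train]).numpy()
y_train = [label for _, label in train]

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
X_test = torch.stack([embed(path) for path, _ in test]).numpy()
print(clf.predict(X_test))
```

The linear probe is only one evaluation protocol; the paper's benchmark comparisons against OpenSmile and emotion2vec may use different downstream setups.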
2025
ISBN: 9798350368741
Koudounas, A., La Quatra, M., Siniscalchi, S.M., Baralis, E. (2025). voc2vec: A Foundation Model for Non-Verbal Vocalization. In ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings (pp. 1-5). Institute of Electrical and Electronics Engineers Inc. [10.1109/ICASSP49660.2025.10890672].
Files in this item:
File: voc2vec_A_Foundation_Model_for_Non-Verbal_Vocalization.pdf
Type: Published version (Versione Editoriale)
Format: Adobe PDF
Size: 306.39 kB
Access: restricted to archive administrators; a copy may be requested

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.
Use this identifier to cite or link to this item: https://hdl.handle.net/10447/694130
Citations: Scopus 4