
Yen H., Ku P.-J., Siniscalchi S.M., Lee C.-H. (2024). Language-Universal Speech Attributes Modeling for Zero-Shot Multilingual Spoken Keyword Recognition. In INTERSPEECH 2024 (pp. 342-346). International Speech Communication Association [10.21437/Interspeech.2024-1342].

Language-Universal Speech Attributes Modeling for Zero-Shot Multilingual Spoken Keyword Recognition

Siniscalchi S. M. (Methodology)
2024-09-01

Abstract

We propose a novel language-universal approach to end-to-end automatic spoken keyword recognition (SKR) leveraging (i) a self-supervised pre-trained model, and (ii) a set of universal speech attributes (manner and place of articulation). Specifically, Wav2Vec2.0 is used to generate robust speech representations, followed by a linear output layer to produce attribute sequences. A non-trainable pronunciation model then maps sequences of attributes into spoken keywords in a multilingual setting. Experiments on the Multilingual Spoken Words Corpus show performance comparable to character- and phoneme-based SKR in seen languages. The inclusion of domain adversarial training (DAT) improves the proposed framework, outperforming both character- and phoneme-based SKR approaches with 13.73% and 17.22% relative word error rate (WER) reduction in seen languages, and achieving 32.14% and 19.92% WER reduction for unseen languages in zero-shot settings.
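The distinctive component described above is the non-trainable pronunciation model that maps predicted attribute sequences to keywords. A minimal sketch of that idea is shown below; the attribute labels and lexicon entries are illustrative placeholders, not taken from the paper, and keyword decoding here uses a simple edit-distance match against a fixed lexicon.

```python
# Toy sketch of a non-trainable pronunciation model: map a predicted
# manner-of-articulation attribute sequence to the closest keyword in a
# fixed lexicon. Attribute labels and lexicon entries are hypothetical.

def edit_distance(a, b):
    # Standard Levenshtein distance over attribute tokens.
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (x != y))
    return dp[-1]

# Hypothetical lexicon: keyword -> attribute sequence (not trainable).
LEXICON = {
    "yes":  ["approximant", "vowel", "fricative"],
    "no":   ["nasal", "vowel"],
    "stop": ["fricative", "stop", "vowel", "stop"],
}

def decode_keyword(attr_seq):
    # Pick the lexicon entry whose attribute sequence is closest to the
    # attribute sequence predicted by the acoustic model.
    return min(LEXICON, key=lambda w: edit_distance(attr_seq, LEXICON[w]))

print(decode_keyword(["nasal", "vowel"]))                      # -> no
print(decode_keyword(["fricative", "stop", "vowel", "stop"]))  # -> stop
```

Because the mapping is a fixed lexicon rather than learned weights, adding an unseen-language keyword only requires writing down its attribute sequence, which is what enables the zero-shot setting described in the abstract.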
Disciplinary sector IINF-05/A - Information Processing Systems
Files in this record:
yen24_interspeech.pdf (Editorial Version, Adobe PDF, 276.15 kB) - restricted access; a copy may be requested from the archive managers.
Description: The full text of the article is available at the following link: https://www.isca-archive.org/interspeech_2024/yen24_interspeech.pdf

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10447/670044
Citations
  • Scopus: 0