Ho C.-W., Yang C.-H.H., Siniscalchi S.M. (2023). Differentially Private Adapters for Parameter Efficient Acoustic Modeling. In Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH 2023 (pp. 839-843). International Speech Communication Association [10.21437/Interspeech.2023-551].

Differentially Private Adapters for Parameter Efficient Acoustic Modeling

Siniscalchi S. M.
Last author
Writing – Original Draft Preparation
2023-01-01

Abstract

In this work, we devise a parameter-efficient solution to bring differential privacy (DP) guarantees into the adaptation of a cross-lingual speech classifier. We investigate a new adaptation framework for DP-preserving speech modeling that keeps the pre-trained model frozen, avoiding full model fine-tuning. First, we introduce a noisy teacher-student ensemble into a conventional adaptation scheme leveraging a frozen pre-trained acoustic model, attaining superior performance to DP-based stochastic gradient descent (DPSGD). Next, we insert residual adapters (RAs) between layers of the frozen pre-trained acoustic model. The RAs reduce training cost and time significantly with a negligible performance drop. Evaluated on the open-access Multilingual Spoken Words (MLSW) dataset, our solution reduces the number of trainable parameters by 97.5% using RAs, with only a 4% performance drop with respect to fine-tuning the cross-lingual speech classifier, while preserving DP guarantees.
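
For intuition, a noisy teacher-student aggregation step can be sketched as follows. This is a minimal PATE-style illustration under assumed details (Laplace noisy-max over teacher votes; the function name and noise scale are illustrative), not the paper's implementation:

    import numpy as np

    def noisy_teacher_label(teacher_logits, noise_scale=1.0, rng=None):
        """Aggregate per-teacher predictions into one noisy (DP) label.

        teacher_logits: (num_teachers, num_classes) scores (hypothetical
        shapes). Laplace noise on the vote histogram yields a DP guarantee
        whose strength depends on noise_scale and the number of queries.
        """
        rng = np.random.default_rng() if rng is None else rng
        num_classes = teacher_logits.shape[1]
        # Each teacher votes for its argmax class.
        votes = np.bincount(teacher_logits.argmax(axis=1), minlength=num_classes)
        # Noisy-max: pick the class with the largest noisy vote count.
        noisy_votes = votes + rng.laplace(scale=noise_scale, size=num_classes)
        return int(noisy_votes.argmax())

    # Example: 10 teachers, 5 keyword classes.
    label = noisy_teacher_label(np.random.randn(10, 5), noise_scale=2.0)

Each such query consumes privacy budget, so the number of labels the student requests bounds the overall privacy cost.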
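Likewise, a minimal PyTorch sketch of a bottleneck residual adapter, assuming the common down-/up-projection design with a skip connection; module names and dimensions are illustrative assumptions rather than the paper's configuration:

    import torch
    import torch.nn as nn

    class ResidualAdapter(nn.Module):
        """Bottleneck adapter: down-project, nonlinearity, up-project,
        with a skip connection onto the frozen layer's output."""
        def __init__(self, dim, bottleneck=32):
            super().__init__()
            self.down = nn.Linear(dim, bottleneck)
            self.act = nn.ReLU()
            self.up = nn.Linear(bottleneck, dim)

        def forward(self, x):
            return x + self.up(self.act(self.down(x)))

    # Freeze a backbone layer and train only the adapter.
    backbone = nn.TransformerEncoderLayer(d_model=256, nhead=4, batch_first=True)
    for p in backbone.parameters():
        p.requires_grad = False
    adapter = ResidualAdapter(dim=256)
    out = adapter(backbone(torch.randn(8, 100, 256)))  # (batch, frames, dim)

Because only the adapter's two small linear layers are trained, the trainable-parameter count remains a small fraction of the frozen backbone's, which is the source of the reported 97.5% reduction.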
Files in this item:

File: ho23_interspeech.pdf
Access: Archive administrators only (copy available on request)
Description: The full text of the article is available at the following link: https://www.isca-archive.org/interspeech_2023/ho23_interspeech.html
Type: Publisher's version
Size: 422.04 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10447/637524
Citations
  • PMC: ND
  • Scopus: 1
  • Web of Science: 0