Decentralizing Feature Extraction with Quantum Convolutional Neural Network for Automatic Speech Recognition

Author: Siniscalchi, Sabato Marco
Contribution: Writing – Original Draft Preparation
Date: 2021-01-01

Abstract

We propose a novel decentralized feature extraction approach in federated learning to address privacy-preservation issues in speech recognition. It is built upon a quantum convolutional neural network (QCNN) composed of a quantum circuit encoder for feature extraction and a recurrent neural network (RNN) based end-to-end acoustic model (AM). To enhance model parameter protection in a decentralized architecture, the input speech is first up-streamed to a quantum computing server, where its Mel-spectrogram is extracted and the corresponding convolutional features are encoded by a quantum circuit algorithm with random parameters. The encoded features are then down-streamed to the local RNN model for the final recognition. The proposed decentralized framework takes advantage of the quantum learning process to secure models and to avoid privacy leakage attacks. Tested on the Google Speech Commands Dataset, the proposed QCNN encoder attains a competitive accuracy of 95.12% in a decentralized model, outperforming previous architectures that use centralized RNN models with convolutional features. We conduct an in-depth study of different quantum circuit encoder architectures to provide insights into designing QCNN-based feature extractors. Neural saliency analyses demonstrate a high correlation among the proposed QCNN features, class activation maps, and the input Mel-spectrogram. We provide an implementation(1) for future studies.
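The server-side encoder described in the abstract follows the quanvolution idea: a fixed, randomly parameterized quantum circuit is applied patch-wise over the Mel-spectrogram, and only the resulting feature channels are sent to the client. The following is a minimal sketch of that step, assuming PennyLane with its default simulator; the 2x2 patch size, 4 qubits, single random layer, and the names circuit and quanv are illustrative assumptions, not the authors' exact configuration.

    import numpy as np
    import pennylane as qml

    n_qubits = 4  # one qubit per value in a 2x2 spectrogram patch
    dev = qml.device("default.qubit", wires=n_qubits)

    # Fixed random parameters: drawn once on the server and never trained,
    # so the client never sees (or needs) the encoder's internals.
    rng = np.random.default_rng(seed=0)
    rand_params = rng.uniform(0, 2 * np.pi, size=(1, n_qubits))

    @qml.qnode(dev)
    def circuit(patch):
        # Angle-encode the four (normalized) Mel values on four qubits.
        for j in range(n_qubits):
            qml.RY(np.pi * patch[j], wires=j)
        # Scramble them with one random entangling layer.
        qml.RandomLayers(rand_params, wires=list(range(n_qubits)))
        # One expectation value per qubit -> four output feature channels.
        return [qml.expval(qml.PauliZ(j)) for j in range(n_qubits)]

    def quanv(mel):
        """Apply the random circuit over non-overlapping 2x2 patches."""
        h, w = mel.shape
        out = np.zeros((h // 2, w // 2, n_qubits))
        for i in range(0, h - 1, 2):
            for k in range(0, w - 1, 2):
                patch = [mel[i, k], mel[i, k + 1],
                         mel[i + 1, k], mel[i + 1, k + 1]]
                out[i // 2, k // 2] = circuit(patch)
        return out  # encoded features, down-streamed to the local RNN AM

Under these assumptions, only the encoded feature maps ever leave the server; the local RNN acoustic model consumes them as it would any convolutional front-end.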
Year: 2021
ISBN: 978-1-7281-7605-5
Citation: Yang, C.H., Qi, J., Chen, S.Y., Chen, P., Siniscalchi, S.M., Ma, X., et al. (2021). Decentralizing Feature Extraction with Quantum Convolutional Neural Network for Automatic Speech Recognition. In ICASSP (pp. 6523-6527). IEEE. doi: 10.1109/ICASSP39728.2021.9413453.
Files in this item:

File: yang2021.pdf (restricted to repository managers)
Type: Publisher's version
Size: 2.12 MB
Format: Adobe PDF

File: 2010.13309v2.pdf (open access)
Type: Preprint
Size: 614.75 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/10447/636633
Citations
  • PMC: n/a
  • Scopus: 108
  • Web of Science (ISI): 63