Compressed multimodal hierarchical extreme learning machine for speech enhancement

Hussain T.; Tsao Y.; Wang H.-M.; Wang J.-C.; Siniscalchi S. M.; Liao W.-H.
2019-01-01

Abstract

Recently, model compression, which aims to facilitate the use of deep models in real-world applications, has attracted considerable attention. Several model compression techniques have been proposed to reduce computational costs without significantly degrading the achievable performance. In this paper, we propose a multimodal framework for speech enhancement (SE) by utilizing a hierarchical extreme learning machine (HELM) to enhance the performance of conventional HELM-based SE frameworks that consider audio information only. Furthermore, we investigate the performance of the HELM-based multimodal SE framework trained with binary weights and quantized input data to reduce the computational requirements. The experimental results show that the proposed multimodal SE framework outperforms the conventional HELM-based SE framework in terms of three standard objective evaluation metrics. The results also show that the performance of the proposed multimodal SE framework degrades only slightly when the model is compressed through weight binarization and input data quantization.
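
To make the techniques named in the abstract concrete, the Python sketch below (not the authors' implementation; all function names, shapes, and parameter values are illustrative assumptions) shows a single extreme learning machine regression layer with binarized random hidden weights and 8-bit quantized inputs. A HELM stacks several such ELM-based layers hierarchically; only the flavour of the binarization and quantization steps is shown here.

# Minimal sketch, assuming NumPy: one ELM regression layer with binary {-1, +1}
# hidden weights and 8-bit quantized input features.
import numpy as np

def quantize_uint8(x, lo=None, hi=None):
    """Uniformly quantize inputs to 256 levels, then map back to floats."""
    lo = x.min() if lo is None else lo
    hi = x.max() if hi is None else hi
    q = np.round((x - lo) / (hi - lo + 1e-12) * 255.0)
    return q / 255.0 * (hi - lo) + lo

def elm_fit(X, Y, n_hidden=1024, reg=1e-3, binarize=True, seed=0):
    """Fit one ELM layer: random (optionally binarized) input weights,
    closed-form ridge-regularized output weights."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    if binarize:
        W = np.sign(W)                       # binary {-1, +1} hidden weights
    H = 1.0 / (1.0 + np.exp(-(X @ W)))       # sigmoid hidden activations
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ Y)
    return W, beta

def elm_predict(X, W, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W)))
    return H @ beta

# Toy usage: map quantized noisy feature frames to clean targets
# (random data and frame dimensions are placeholders).
X_noisy = quantize_uint8(np.random.randn(2000, 257))
Y_clean = np.random.randn(2000, 257)
W, beta = elm_fit(X_noisy, Y_clean)
Y_hat = elm_predict(X_noisy, W, beta)
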
2019
Academic discipline: ING-INF/05 - Information Processing Systems
ISBN: 978-1-7281-3248-8
Hussain T., Tsao Y., Wang H.-M., Wang J.-C., Siniscalchi S.M., Liao W.-H. (2019). Compressed multimodal hierarchical extreme learning machine for speech enhancement. In 2019 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA ASC 2019 (pp. 678-683). Institute of Electrical and Electronics Engineers Inc. [10.1109/APSIPAASC47483.2019.9023012].

Use this identifier to cite or link to this document: https://hdl.handle.net/10447/641415