A Study on Joint Modeling and Data Augmentation of Multi-Modalities for Audio-Visual Scene Classification

Siniscalchi S. M.
Member of the Collaboration Group
2022-01-01

Abstract

In this paper, we propose two techniques, namely joint modeling and data augmentation, to improve system performance for audio-visual scene classification (AVSC). We employ pre-trained networks trained only on image data sets to extract video embeddings, whereas the audio embedding models are trained from scratch. We explore different neural network architectures for joint modeling to effectively combine the video and audio modalities. Moreover, data augmentation strategies are investigated to increase the audio-visual training set size. For the video modality, the effectiveness of several operations in RandAugment is verified. An audio-video joint mixup scheme is proposed to further improve AVSC performance. Evaluated on the development set of TAU Urban Audio Visual Scenes 2021, our final system achieves the best accuracy, 94.2%, among all single AVSC systems submitted to DCASE 2021 Task 1b.
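The record itself does not detail the proposed audio-video joint mixup scheme. A minimal sketch of the general idea, assuming a standard mixup with a single Beta-distributed coefficient shared across both modalities so that the mixed audio and video stay aligned with the mixed label (function and parameter names are illustrative, not taken from the paper):

```python
import numpy as np

def joint_mixup(audio_a, video_a, label_a,
                audio_b, video_b, label_b,
                alpha=0.2):
    """Mix two audio-visual samples with one shared coefficient.

    Assumption: the same lambda is applied to the audio features,
    the video features, and the (one-hot) labels, keeping the
    modalities consistent for the mixed example.
    """
    lam = np.random.beta(alpha, alpha)
    audio = lam * audio_a + (1.0 - lam) * audio_b
    video = lam * video_a + (1.0 - lam) * video_b
    label = lam * label_a + (1.0 - lam) * label_b
    return audio, video, label
```

Sharing one coefficient, rather than drawing one per modality, is the natural choice here: it guarantees the audio and video streams of a mixed example still describe the same "virtual" scene.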
2022
979-8-3503-9796-3
Files in this item:
File: A_Study_on_Joint_Modeling_and_Data_Augmentation_of_Multi-Modalities_for_Audio-Visual_Scene_Classification.pdf
Description: main document
Type: Published version
Size: 726.92 kB
Format: Adobe PDF
Access: archive administrators only (copy available on request)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10447/649273
Citations
  • PMC: n/a
  • Scopus: 1
  • Web of Science: 1