Siragusa I., Pirrone R. (2023). CHILab at HODI: A minimalist approach. In M. Lai, S. Menini, M. Polignano, V. Russo, R. Sprugnoli, G. Venturi (Eds.), CEUR Workshop Proceedings. CEUR-WS.
CHILab at HODI: A minimalist approach
Siragusa I.; Pirrone R.
2023-09-02
Abstract
This technical report illustrates the system developed by the CHILab team for the HODI competition at EVALITA 2023. The key idea of the method we proposed for HODI Subtask A - Homotransphobia detection was to develop different systems arranged as suitable combinations of a Pre-Trained Language Model (PTLM) for embedding extraction, a neural architecture for further elaboration of the embeddings, and a classifier. In particular, dense layers, LSTMs, BiLSTMs, and Transformers were used as neural architectures. The best performing system among those investigated in this report combines embeddings extracted via AlBERTo with a Transformer, reaching a macro-F1 score of 0.753.
File: paper27.pdf (open access)
Description: Main article
Type: Publisher's version
Size: 402.56 kB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.