Automatic Image Annotation Using Random Projection in a Conceptual Space Induced from Data

La Cascia M.; Vassallo G.; Gallo L.; Pilato G.; Vella F.
2018-01-01

Abstract

The main drawback of a detailed representation of visual content, whatever its origin, is that significant features are very high dimensional. To keep the problem tractable while preserving the semantic content, a dimensionality reduction of the data is needed. We propose Random Projection techniques to reduce the dimensionality. Even though this technique is sub-optimal with respect to Singular Value Decomposition, its much lower computational cost makes it more suitable for this problem, in particular when computational resources are limited, as in mobile terminals. In this paper we present the use of a 'conceptual' space, automatically induced from data, to perform automatic image annotation. Images are represented by visual features based on color and texture, arranged as histograms of visual terms and bigrams to partially preserve spatial information [1]. Using a set of annotated images as training data, the matrix of visual features is built and dimensionality reduction is performed using the Random Projection algorithm. A new, unannotated image is then projected into the dimensionally reduced space and the labels of the closest training images are assigned to it. Experiments on a large real-world collection of images showed that the approach, despite its low computational cost, is very effective.
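The pipeline described above (build a matrix of visual-term histograms, reduce it with Random Projection, then label a new image with the annotations of its nearest training images) can be sketched as follows. All sizes, label names, and the neighbor count here are illustrative assumptions, not the paper's actual settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 500 training images with 10,000-dimensional
# visual-term histograms, reduced to k = 64 dimensions.
n_train, d, k = 500, 10_000, 64

X_train = rng.random((n_train, d))                    # rows: histograms of visual terms
labels = [f"label_{i % 5}" for i in range(n_train)]   # toy annotations, one per image

# Random Projection: a dense Gaussian matrix scaled by 1/sqrt(k)
# approximately preserves pairwise distances (Johnson-Lindenstrauss lemma).
R = rng.normal(0.0, 1.0 / np.sqrt(k), size=(d, k))
X_proj = X_train @ R                                  # training set in the reduced space

def annotate(query, n_neighbors=3):
    """Project an unannotated image and return the labels of its nearest training images."""
    q = query @ R                                     # same projection as the training data
    dists = np.linalg.norm(X_proj - q, axis=1)        # Euclidean distances in reduced space
    nearest = np.argsort(dists)[:n_neighbors]
    return [labels[i] for i in nearest]

new_image = rng.random(d)                             # an unannotated image's histogram
predicted = annotate(new_image)
```

Unlike SVD, the projection matrix here is data-independent, so no decomposition of the training matrix is required; this is the source of the low computational cost noted in the abstract.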
ISBN: 978-1-5386-9385-8
http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=8693683
La Cascia M., Vassallo G., Gallo L., Pilato G., Vella F. (2018). Automatic Image Annotation Using Random Projection in a Conceptual Space Induced from Data. In Proceedings - 14th International Conference on Signal Image Technology and Internet Based Systems, SITIS 2018 (pp. 464-471). Institute of Electrical and Electronics Engineers Inc. [10.1109/SITIS.2018.00077].
Files in this item:
File: 2018 MIRA.pdf
Description: Main article
Type: Publisher's version
Size: 509.28 kB
Format: Adobe PDF
Access: Repository managers only (copy available on request)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10447/389736
Citations
  • PMC: n/a
  • Scopus: 0
  • Web of Science: 0