
Mazzola, G., Lo Presti, L., La Cascia, M. (2025). How Well Do Simple Features Detect Fake Faces? A Comparison with Deep Learning. In DFF '25: Proceedings of the 1st Deepfake Forensics Workshop: Detection, Attribution, Recognition, and Adversarial Challenges in the Era of AI-Generated Media (pp. 37-44). New York, NY: Association for Computing Machinery [10.1145/3746265.3759674].

How Well Do Simple Features Detect Fake Faces? A Comparison with Deep Learning

Giuseppe Mazzola; Liliana Lo Presti; Marco La Cascia
2025-10-27

Abstract

The recent proliferation of highly realistic synthetic facial images, enabled by generative adversarial networks (GANs) and diffusion-based models, presents a growing challenge to digital media authenticity and visual forensics. This study investigates the problem of distinguishing real from AI-generated faces through a comparative evaluation of both traditional machine learning techniques and modern deep learning architectures. A custom dataset was constructed, combining real facial images from the FFHQ dataset with synthetic counterparts generated via the “This Person Does Not Exist” platform. We extracted local binary pattern (LBP) descriptors and trained multiple classifiers (including Random Forest, K-Nearest Neighbors, Support Vector Machine, Logistic Regression, and Naive Bayes), providing a robust baseline for interpretable face authenticity detection. In parallel, we evaluated convolutional neural networks and EfficientNet models, both with and without fine-tuning. Our results demonstrate that lightweight LBP-based approaches can yield competitive accuracy, offering interpretable and efficient alternatives to deep models in constrained settings. We further discuss the implications of these findings in light of current advances in generative models, and propose directions for enhancing detection robustness across unseen generative methods.
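The LBP-based baseline described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' exact configuration: the neighborhood parameters (P=8, R=1), the uniform-pattern histogram features, and the Random Forest settings are assumptions, and random grayscale patches stand in for real/synthetic face crops.

```python
# Hedged sketch of an LBP-histogram + classifier pipeline.
# Assumptions: uniform LBP with 8 neighbors at radius 1; random
# 64x64 patches as stand-ins for real (label 0) / fake (label 1) faces.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import RandomForestClassifier

P, R = 8, 1  # hypothetical neighborhood settings


def lbp_histogram(gray_image):
    """Normalized histogram of uniform LBP codes (P + 2 = 10 bins)."""
    codes = local_binary_pattern(gray_image, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=np.arange(P + 3), density=True)
    return hist


rng = np.random.default_rng(0)
# Stand-in data: 40 grayscale patches with random binary labels.
patches = (rng.random((40, 64, 64)) * 255).astype(np.uint8)
X = np.array([lbp_histogram(p) for p in patches])
y = rng.integers(0, 2, size=40)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(X.shape)  # one 10-bin LBP histogram per image
```

With `method="uniform"`, skimage collapses all non-uniform patterns into a single code, so each image is summarized by just ten numbers; this compactness is what makes the approach cheap and interpretable compared to deep features.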
ISBN: 979-8-4007-2047-5
Files in this record:
3746265.3759674.pdf — Published version (open access), Adobe PDF, 2.24 MB. This work is licensed under a Creative Commons Attribution 4.0 International License.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10447/693763