La Fata, C.M., Barone, G., Cammarata, M. (2026). Improving Construction Site Safety with Large Language Models: A Performance Analysis. INFORMATION, 17(2) [10.3390/info17020210].
Improving Construction Site Safety with Large Language Models: A Performance Analysis
La Fata, Concetta Manuela
2026-02-17
Abstract
Hazard recognition on construction sites is crucial for ensuring worker safety. Traditional methods widely rely on expert assessments, on-site inspections, and checklists, which can be time-consuming and susceptible to human error. The integration of multimodal Large Language Models (LLMs), such as GPT-based systems, offers a promising opportunity to overcome these limitations. Therefore, this study evaluates the effectiveness of GPT-4o in recognizing workplace hazards from image inputs, with a specific focus on construction sites. The results indicate that the model can serve as a valuable decision-support tool for safety professionals by providing scalable and real-time insights. However, the study also highlights key limitations, including the model’s reliance on general visual features rather than domain-specific safety knowledge, and the continued need for human supervision. Additionally, ethical concerns, including bias in AI-generated hazard assessments, data privacy, and the risk of over-reliance on AI, must be carefully managed to ensure these tools contribute responsibly and effectively to proactive risk management strategies.

| File | Description | Size | Format |
|---|---|---|---|
| information-17-00210-v2.pdf | Open access; publisher's version. This is an open access article under the terms of the Creative Commons Attribution License. | 1.93 MB | Adobe PDF |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.


