
Explainability and self-disclosure for robot ethical introspection

Valeria Seidita; Antonio Chella
2024-01-01

Abstract

Human-robot and human-AI interaction systems require a high degree of autonomy, proactivity, and adaptivity. The decisions that intelligent systems must make are highly dependent on the application context, and trust is an essential element in task assignment. Explainability and ethical introspection capabilities are important in building trust and understanding in artificial processes. In this paper, we present our ongoing work aimed at equipping robots with ethical introspection capabilities when interacting with humans by designing and implementing explainability and self-disclosure capabilities. Using a computational model of ethical introspection that incorporates theories from psychology, ethics, and AI, we built robots that examine and reflect on their actions in order to evaluate and validate them. We use the Belief-Desire-Intention (BDI) agent paradigm and related programming languages, along with the speech act mechanism, to improve and extend the robot's ethical values so as to better guide its decision-making process and the impact it has on humans.
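The abstract describes a BDI agent that reflects on its candidate actions against its ethical values and discloses the outcome via speech-act-like utterances. The following minimal Python sketch illustrates that idea only; all class, method, and value names are hypothetical illustrations, not the paper's implementation (which uses BDI agent programming languages rather than Python).

```python
class EthicalBDIAgent:
    """Minimal sketch of a BDI-style agent with an ethical introspection
    step before acting. Names and structure are illustrative assumptions."""

    def __init__(self, ethical_values):
        self.beliefs = set()                  # facts the agent holds true
        self.desires = []                     # candidate goals
        self.ethical_values = ethical_values  # value name -> predicate over an action

    def introspect(self, action):
        """Check a candidate action against the agent's ethical values.

        Returns (approved, explanation): the explanation plays the role of
        self-disclosure, making the agent's reasoning visible to the human."""
        violated = [name for name, holds in self.ethical_values.items()
                    if not holds(action)]
        if violated:
            return False, f"I will not '{action}': it conflicts with {', '.join(violated)}."
        return True, f"I chose '{action}': it is consistent with my values."

    def act(self, action):
        # Ethical introspection precedes execution; the explanation is
        # uttered as a speech-act-like disclosure either way.
        approved, explanation = self.introspect(action)
        print(explanation)
        return approved


# Usage: a single illustrative value ("non-harm") filtering two actions.
agent = EthicalBDIAgent({"non-harm": lambda a: "harm" not in a})
agent.act("fetch medicine")   # approved, explanation disclosed
agent.act("harm bystander")   # rejected, refusal disclosed with the violated value
```

In an actual BDI language such as AgentSpeak/Jason, the introspection step would sit in the plan-selection stage rather than in an `act` wrapper; the sketch only conveys the filter-then-disclose pattern.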
Academic discipline: ING-INF/05 - Information Processing Systems
Valeria Seidita, Antonio Chella (2024). Explainability and self-disclosure for robot ethical introspection. In N. Brandizzi, C. Centeio Jorge, R. Cipollone, F. Frattolillo, L. Iocchi, A. Ulfert-Blank (Eds.), Proceedings of the 2nd International Workshop on Multidisciplinary Perspectives on Human-AI Team Trust. CEUR-WS.
Files for this record:
paper6.pdf - open access - Type: Publisher's version - 1.14 MB, Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10447/623694