
Vinanzi, S., Patacchiola, M., Chella, A., Cangelosi, A. (2019). Would a robot trust you? Developmental robotics model of trust and theory of mind. PHILOSOPHICAL TRANSACTIONS - ROYAL SOCIETY. BIOLOGICAL SCIENCES, 374(1771) [10.1098/rstb.2018.0032].

Would a robot trust you? Developmental robotics model of trust and theory of mind

Chella, Antonio;
2019-04-29

Abstract

Trust is a critical issue in human-robot interactions: as robotic systems gain complexity, it becomes crucial for them to be able to blend into our society by maximizing their acceptability and reliability. Various studies have examined how trust is attributed by people to robots, but fewer have investigated the opposite scenario, where a robot is the trustor and a human is the trustee. The ability of an agent to evaluate the trustworthiness of its sources of information is particularly useful in joint task situations where people and robots must collaborate to reach shared goals. We propose an artificial cognitive architecture based on the developmental robotics paradigm that can estimate the trustworthiness of its human interactors for the purpose of decision making. This is accomplished using Theory of Mind (ToM), the psychological ability to assign to others beliefs and intentions that can differ from one's own. Our work is focused on a humanoid robot cognitive architecture that integrates a probabilistic ToM and trust model supported by an episodic memory system. We tested our architecture on an established developmental psychological experiment, achieving the same results obtained by children, thus demonstrating a new method to enhance the quality of human and robot collaborations. This article is part of the theme issue 'From social brains to social robots: applying neurocognitive insights to human-robot interaction'.
Field ING-INF/05 - Information Processing Systems
Files in this record:
rstb.2018.0032.pdf — open access — Type: Publisher's version — 1.35 MB, Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10447/346829
Citations
  • PMC: 9
  • Scopus: 61
  • Web of Science: 43