In the field of human-robot interaction, the robot is no longer considered a tool but a partner that supports the work of humans. Environments featuring interaction and collaboration between humans and robots present a number of challenges involving robot learning and interactive capabilities. In order to operate in these environments, the robot must not only be able to act, but also to interact and, above all, to "understand". This thesis proposes a unified probabilistic framework that allows a robot to develop basic cognitive skills essential for collaboration. To this aim we embrace the idea of motor simulation, well established in cognitive science and neuroscience, in which the robot reenacts in simulation the same internal models it uses to physically perform actions. This view offers the possibility of unifying apparently distinct cognitive phenomena such as learning, interaction, understanding and dialogue, to name a few. The ideas presented here are corroborated by experimental results obtained both in simulation and on a humanoid robotic platform.

The first contribution in this direction is a robust Bayesian method to estimate (i.e. learn) the parameters of internal models by observing other skilled actors performing goal-directed actions. In addition to deriving a theoretically sound solution to the learning problem, our approach establishes theoretical links between Bayesian inference and gradient-based optimization methods. Using expectation propagation (EP), a related algorithm is derived for the scenario with multiple internal models. Once learned, internal models are reused in simulation to "understand" actions performed by other actors, a necessary precondition for successful interaction.
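To give a concrete flavor of the Bayesian parameter learning sketched above, the following is a minimal, hypothetical illustration (not the thesis's actual algorithm): a conjugate Gaussian update of a single scalar internal-model parameter from noisy observations of a skilled actor's demonstrations. The posterior-mean recursion can be read as a step along the gradient of the log-likelihood, hinting at the link between Bayesian inference and gradient-based optimization mentioned in the text. The parameter names and noise levels are illustrative assumptions.

```python
import random

def bayes_update(mu, var, y, noise_var):
    """One conjugate Gaussian update of the belief N(mu, var) over a
    scalar internal-model parameter, given an observation y corrupted
    by Gaussian noise of known variance noise_var."""
    k = var / (var + noise_var)   # Kalman-style gain
    mu_new = mu + k * (y - mu)    # resembles a gradient step on the log-likelihood
    var_new = (1.0 - k) * var     # uncertainty shrinks with each demonstration
    return mu_new, var_new

random.seed(0)
true_theta = 2.0                  # hypothetical "true" parameter of the actor's model
mu, var = 0.0, 10.0               # broad prior belief
for _ in range(200):
    y = true_theta + random.gauss(0.0, 0.5)   # one noisy observed demonstration
    mu, var = bayes_update(mu, var, y, noise_var=0.25)

print(round(mu, 2), round(var, 4))
```

After 200 observed demonstrations the posterior mean concentrates near the true parameter while the posterior variance collapses, which is the sense in which observation alone suffices to "learn" the model.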
We propose that action understanding can be cast as approximate Bayesian inference, in which the covert activity of internal models produces hypotheses that are tested in parallel through a sequential Monte Carlo approach. Approximate Bayesian inference is thus offered as a plausible mechanistic implementation of the idea of motor simulation, making it feasible in real time and with limited resources. Finally, we investigate how the robot can learn a grounded language model in order to bootstrap itself into communication. Features extracted from the learned internal models, as well as descriptors of various perceptual categories, are fed into a novel multi-instance semi-supervised learning algorithm that performs semantic clustering and associates words, whether nouns or verbs, with their grounded meanings.
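The hypothesis-testing view of action understanding can be sketched as follows. This is a toy version under stated assumptions, not the thesis's implementation: a small bank of competing internal (forward) models is run covertly on an observed trajectory, and each hypothesis is weighted by how well its predictions explain the observations; resampling and richer state dynamics, which a full sequential Monte Carlo scheme would include, are omitted. The forward-model form (`gain * x`) and noise levels are illustrative assumptions.

```python
import math
import random

def simulate(model_gain, x):
    """Hypothetical internal forward model: predict the next state."""
    return model_gain * x

def model_posterior(observations, model_gains, noise_sd=0.1):
    """Weight each competing internal model by the likelihood of its
    covert predictions against the observed trajectory (toy parallel
    hypothesis testing, without resampling)."""
    log_w = [0.0] * len(model_gains)
    x = observations[0]
    for y in observations[1:]:
        for i, g in enumerate(model_gains):
            pred = simulate(g, x)
            log_w[i] += -0.5 * ((y - pred) / noise_sd) ** 2
        x = y
    m = max(log_w)                          # normalize in log space for stability
    w = [math.exp(l - m) for l in log_w]
    s = sum(w)
    return [wi / s for wi in w]

random.seed(1)
true_gain = 0.8
traj = [1.0]
for _ in range(30):                         # observed action of another actor
    traj.append(true_gain * traj[-1] + random.gauss(0.0, 0.05))

posterior = model_posterior(traj, [0.5, 0.8, 1.1])
print([round(p, 3) for p in posterior])
```

The model whose covert simulation best matches the observed action dominates the posterior; in this sense, "understanding" the action amounts to selecting the internal model that would have generated it.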
|Title:||BAYESIAN APPROACHES TO HUMAN-ROBOT INTERACTION: FROM LANGUAGE GROUNDING TO ACTION LEARNING AND UNDERSTANDING|
|Scientific disciplinary sector:||ING-INF/05 - Sistemi Di Elaborazione Delle Informazioni (Information Processing Systems)|
|Publication date:||20-Apr-2012|
|Citation:||(2012). BAYESIAN APPROACHES TO HUMAN-ROBOT INTERACTION: FROM LANGUAGE GROUNDING TO ACTION LEARNING AND UNDERSTANDING. (Doctoral thesis, , 2012).|
|Type:||Doctoral thesis|
|Appears in collections:||07 - Doctoral theses pre 2013|