Talks and Poster Presentations (without Proceedings-Entry):
"From mere reliance to the human condition - Philosophical perspectives on trust and trustworthiness in human-robot interactions";
Talk: Human Agent Interaction (HAI) 2018;
For a successful and robust diffusion of complex robotic systems within society, it is important to
understand that trust is essential to long-term acceptance, collaboration, and engagement. Studies on
trust within robotics have mainly been motivated by the literature on trust in automation and human-computer interaction (HCI), which operates with a conceptualization of trust as mere reliance.
However, with the introduction of autonomous and agent-like robotic systems in the roles of
teammates, assistants, and companions (rather than tools), questions of trust and trustworthiness
between humans and robots have become more complex. Therefore, in social robotics and human-robot interaction (HRI), attempts have been made to apply the concept of interpersonal trust to
robotics to better analyze and evaluate how humans perceive trust and find robots trustworthy.
Especially in the discussion about the advantages and disadvantages of using a high degree of
anthropomorphism, the perceived trustworthiness of agent-like robots and their human-like
appearance or behavior have been explored extensively. In this paper I will discuss the relationship
between robot anthropomorphism and the experience of trust and perceived trustworthiness in
interactions between humans and robots by drawing on the philosophical method of conceptual
analysis. By examining the scope of anthropomorphism, I will, through concrete examples, argue
that it is necessary to broaden the concept of anthropomorphism to include not only appearance and
behavior but also aspects of personality, framing, discourse, and social roles.
Furthermore, I will argue that it is possible to empirically explore trust in a way that also addresses
the challenges of transparency when incorporating these non-material aspects into the concept of
anthropomorphism. Unlike previous studies on trust and transparency, which focus on
increasing knowledge acquisition, I will argue that combining trust and transparency
through a broadened concept of anthropomorphism will allow for a non-rationalist approach to trust
in robotics. Given the particular challenges posed by agent-like robots intended for everyday
human life, I will argue that trust is better conceptualized in terms of the ambiguity of the human
condition than solely in relation to situations involving
risk and safety. Acknowledging that trust can sometimes be experienced more as a feeling or a leap
of faith, because people are not always in control of when and how they engage in trust-relations
with agent-like robotic systems, therefore provides an alternative theoretical framework to current
discussions of trust in robotics. By exploring the triangle of trust, anthropomorphism, and
transparency, I hope to show that to provide novel perspectives on trust and trustworthiness,
we must be more sensitive to human experience and social subtleties. This is
important not only in the development and design of agent-like robots but also in the way we design
empirical studies on interactions between humans and robots.
Keywords: Trust, Human-Robot Interaction
Created from the Publication Database of the Vienna University of Technology.