Talks and Poster Presentations (without Proceedings-Entry):

I. Schwaninger:
"On the Interplay of Psychological Safety and Trust in Long Term Human-Robot Collaboration";
Talk: Human Agent Interaction (HAI) 2018, Southampton, UK; 2018-12-15 - 2018-12-18.



English abstract:
This abstract explores what the concept of psychological safety can contribute towards a framework for designing and evaluating trust in human-robot collaboration. Interpersonal trust and psychological safety reflect attitudes, values, cognitions and emotions held by group members about a group. They are products of both interactive team processes and team member experiences developing over time (Mayfield et al., 2016). Edmondson refers to team psychological safety as "a shared belief held by members of a team that the team is safe for interpersonal risk taking" (Edmondson, 1999: 350). Therein, trust is defined as the "expectation that others' future actions will be favorable to one's interests, such that one is willing to be vulnerable to those actions" (Mayer et al., 1995; Robinson, 1996).

Relating this to human-robot interaction, both trust and psychological safety can be framed as a product of interactive processes and experience, with the type of vulnerability depending on the specific context. Strohkorb and Scassellati (2017), for example, study how a robot's behavior could facilitate psychological safety in mixed human-robot teams to improve group functioning and overall performance. Drawing from research on vulnerable disclosure, illness support groups, and improvisational theater, they propose that robots can support team psychological safety by showing vulnerabilities, expressing emotions, supporting team members and celebrating failure (Strohkorb and Scassellati, 2017). Other studies have shown that reciprocal disclosure between two parties can engender trust, and that a robot exposing itself as more vulnerable increases trust in short-term human-robot interactions (Martelaro et al., 2016).

However, when considering trust over the long term, psychological safety may have limits as a frame for trust. Given that a robot may behave faultily, trust further requires that the robot's conceptual model, which shapes the user's expectations, be close to the agent's actual capabilities. Expectations are grounded in the communication of the system's intentions and in its feedback, and they are essential not only for establishing trust in systems, but also for maintaining it over the long term (Norman, 2004: 142).

Designing and measuring trust requires an understanding of trust as a product of interactions and experiences that evolve and potentially change over time. Questions that remain open for discussion are the following: In what way is psychological safety a useful concept for designing and measuring trust in robots? How important are the conceptual model and expectations of a robot compared to aspects like personal disclosure? Further, how relevant are the aspect of time and dynamic changes in cooperative scenarios? By discussing these questions, I aim to contribute to an advancement in the framing and evaluation of trust and the concepts related to it, especially in human-robot collaboration.

References

Edmondson, Amy (1999). Psychological Safety and Learning Behavior in Work Teams. Administrative Science Quarterly, 44(2), 350-383.

Martelaro, Nikolas et al. (2016). Tell Me More: Designing HRI to Encourage More Trust, Disclosure, and Companionship. In The Eleventh ACM/IEEE International Conference on Human-Robot Interaction (HRI '16). IEEE Press, Piscataway, NJ, USA, 181-188.

Mayfield, Cliff et al. (2016). Psychological Collectivism and Team Effectiveness: Moderating Effects of Trust and Psychological Safety. Journal of Organizational Culture, Communications and Conflict, 20(1), 78-94.

Norman, Donald A. (2004). Emotional Design: Why We Love (or Hate) Everyday Things. New York: Basic Books.

Robinson, Sandra L. (1996). Trust and Breach of the Psychological Contract. Administrative Science Quarterly, 41, 574-599.

Strohkorb, Sarah and Scassellati, Brian (2017). Cultivating Psychological Safety in Human-Robot Teams with Social Robots. In Proceedings of the 2017 Workshop on Robots in Groups and Teams at the 20th ACM Conference on Computer-Supported Cooperative Work and Social Computing. Portland, OR, USA, February 26.

Keywords:
Trust, Human-Robot Collaboration, Psychological Safety

Created from the Publication Database of the Vienna University of Technology.