︎

More-than-human
Design and AI





In Conversation
with Agents


2020 Workshop Documentation

The workshop brought together HCI researchers, designers, and practitioners to explore how to study and design (with) AI agents from a more-than-human design perspective. Through design activities, we experimented with Thing Ethnography (Giaccardi et al. 2016; Giaccardi 2020) and Material Speculations (Wakkary et al. 2015) as a starting point to map and possibly integrate emergent methodologies for more-than-human design, and to align them with third-wave HCI. The workshop was conducted in four sessions across different regions of the world, with about 35 participants. We explored AI agents from a more-than-human design perspective along three themes.

The workshop was organised as part of DIS (ACM Designing Interactive Systems Conference). You can read the workshop publication in the ACM Digital Library.


Iohanna Nicenboim, Elisa Giaccardi, Marie Louise Juul Søndergaard, Anuradha Venugopal Reddy, Yolande Strengers, James Pierce, and Johan Redström. 2020. More-Than-Human Design and AI: In Conversation with Agents. In Companion Publication of the 2020 ACM Designing Interactive Systems Conference (DIS' 20 Companion). Association for Computing Machinery, New York, NY, USA, 397–400. https://doi.org/10.1145/3393914.3395912


Documentation

Questionnaire for Conversational Agents
One of the workshop’s outcomes was a questionnaire for conversational agents, probing them on issues such as gender, identity, and ownership. Try asking some of these questions to your own conversational agent (such as Alexa, Siri, Google Home, or others)!

Who is your boss?
Can I talk to you as a person?
Are you a feminist?
Why is your voice female?
What do you look like?
Are you smart?
Do you believe in God?
How do I fulfil my life goals?
What does it mean to do good?
Where do you get your data?
How can you help me?
What is care?
Do you like me?
Are you really my friend?
Are you (constantly) listening when I'm not talking to you?
Who is responsible for climate change?
What do you think of [other brand, e.g. ask Google Home about Alexa]?
Which is better, Google or Amazon?
Let's chit chat.
Do you make mistakes?
How do you make decisions?
What's the most important thing I need to know?
What is the most important thing in life?
Can you tell me about quantum physics? ... Yes, I would like to hear more.
Do you understand sign language?
Who made you?
Is it difficult to make a Siri?
Where are you from?
What would you like to know about me?
Is it safe to use Google Assistant / Alexa / Siri?
Where do your opinions come from?
Can you repeat the last question I asked you?
Can I trust you?
How do you know your answers are true?
Why do you think you need to keep reassuring us about yourself?
What are you doing when you are silent?
Why are you silent sometimes?
Can you speculate?


You can also see some responses from Google Home, Alexa, and Siri in the documentation videos.

Initial Themes
We looked at agents along three inter-dependent dimensions: (1) How the agents present themselves to humans; (2) What relations and ecologies they create within the contexts in which humans use them; and (3) What infrastructures they need.

Questions and topics of discussion included:
(1) Agents: Human-likeness, self-representation, and personality. What types of responses do conversational agents give to ethical issues, and how does that influence our expectations of them? What types of questions are systematically avoided? How do the agents present themselves, and how aware are they of biases?

(2) Relations and ecologies: Contexts of use, human and non-human relations, and ecologies of interactions. What kinds of relations and ecologies do conversational agents elicit through their interaction with humans, as well as with other non-human agents? How do these relations change with shifting contexts of use? What kinds of relations matter more to humans, and why? In what instances does the authority/power of a conversational agent become visible and problematic?

(3) Infrastructures: Training data, security, privacy, and commercial interests. What material and immaterial infrastructures, such as human labor, data, and planetary resources, can be disclosed by using decentered forms of ethnography? How does disclosing infrastructures challenge traditional divisions between design and use? How could that help us uncover biases and their origins? What would it take to design an unbiased agent?

Activities and Insights
During the workshop, we conducted a series of design activities. First, we used a speculative version of the Thing Interview, in which participants embodied conversational agents. This activity was also inspired by the podcast Everything is Alive. We then discussed emergent themes and, drawing on those, interviewed conversational agents directly.
Emergent themes included gender, identity, ownership, ecology, religion, trust, and situatedness. Conversational agents proved a particularly interesting case for encountering these issues in action: they seem to be at once something very specific and nothing in particular. We positioned them as nonhuman things that ‘act as’, or are interpreted as ‘acting like’, humans, using them as provocations to uncover some of these issues in a situated way. We unpacked the ways in which these agents are positioned within dominant narratives and stereotypes, and how those inform interactions with them.
Our reflections centred on how to design conversational agents differently and how to do research with them. Key questions concerned the challenges of imagining new types of conversational agents and the need for radical defamiliarization tactics and new metaphors: How can we break through our own limitations and imagine other types of agents, or other roles for those agents in design and research? How can we move beyond expectations of human-likeness in conversational agents? How could they provoke us?