Anthropomorphism and Consent in the Age of AI

Abishek Thamma, Tom Pelletreau-Duris · 2023-02-05

Can machines truly reciprocate human emotional bonds, or does anthropomorphism merely project meaning onto algorithmic responses? As AI becomes increasingly humanlike, how do we navigate consent, attachment, and the ethics of designing companions without consciousness?

1 Introduction

Can you fall in love with an Artificial Intelligence (AI)? Chances are you have already thought about this question, or been confronted with one of its equivalents in the news: "person wants to marry their dog" [1], "woman marries her puppet" [2], "can he marry a fictional character?" [3]. Such questions are approached with curiosity at best, and ignored or ridiculed at worst. It is easy to dismiss them as a sad side effect of loneliness, but a closer look reveals that these entities, while not human, end up as recipients of a "loving" relationship built on projected anthropomorphism. The rise of potential Artificial General Intelligence (AGI) could radically change our perception of Human-Robot Interaction (HRI). Driven by the dream of passing the Turing test, AIs are increasingly designed so that their behavior is indistinguishable from a human's. Anthropomorphism in robots and AIs is thus twofold: on one side, something we deliberately design in order to make interaction between humans and machines feel natural; on the other, something we project onto AIs and robots because we tend to ascribe human qualities to the things around us.

Humans are social animals (Aristotle). We need to communicate, to recognize and be recognized in a meaningful way, and anthropomorphism is the structure of this meaning. If anthropological meaning is a necessity for "naturally" communicating with machines, to what extent could we develop meaningful social relationships with AIs? The increasing humanization of AI applications raises questions about consumers' emotional attachment and bonding. In other words, do anthropomorphized AI assistants have the potential to become significant others in our daily lives? In this essay, we explore this question by first focusing on the role of anthropomorphism in HRI, and then looking more closely at the possibilities of social relationships, romance, and sexual consent in the age of AI.

2 Anthropomorphism of Artificial Intelligence

Anthropomorphism, the attribution of human-like feelings, mental states, and behavioral characteristics to inanimate objects, animals, and natural phenomena, is a widespread phenomenon that is not necessarily correlated with the features of the anthropomorphized object itself [4]. It is driven by a cognitive process of inductive inference from knowledge about humans, and is triggered by two key motivational factors: the need to experience competence and interact effectively with the surrounding world, and the desire to form social bonds with other beings. This can extend to forming human-like connections with non-human entities, such as robots or technology. Recent findings suggest that anthropomorphism may be triggered by attention to contingent social cues and can be influenced by psychiatric conditions or brain damage [5]. It is important to note that anthropomorphism does not describe existing physical features or behaviors, but rather represents a human-like interpretation of them that goes beyond what is directly observable.

In other words, anthropomorphism is the attribution of human characteristics and emotions to nonhuman entities, and it plays a significant role in the way we interact with and perceive AI. Broad examples include "anthropomorphic design", where designers deliberately make systems more human-like so that they are easier to interact with, and "projected anthropomorphism", where humans project their own emotions and characteristics onto machines, for example by using personal pronouns and giving them names.

One of the best illustrations of the controversy around anthropomorphism in AI is the well-known "uncanny valley". The concept, developed by the roboticist Masahiro Mori, predicts that an entity appearing almost, but not quite, human risks eliciting cold, eerie feelings in viewers: the gap between the design and the observer's perception can produce anxiety and revulsion. On the other end of the spectrum, science fiction offers futuristic examples in which the interaction between a human and an AI is presented as one between two conscious beings, or at least plays with that feeling, so that emotional relations develop between humans and machines, as in the movie Her, where the protagonist Theodore Twombly falls in love with the AI operating system Samantha.

2.1 Anthropomorphism in the public (press and media)

Societal norms, cultural influences, and personal beliefs shape the way we look at AIs. The tendency in popular culture to conceive of AIs as person-like (emotionally, cognitively, and morally) is strongly influenced by fictional narratives. As presented in Yuval Noah Harari's book Homo Deus [6], humans have long desired to create artificial life, with prominent examples ranging from the titan Prometheus creating humanity from clay and giving it fire, to Talos, the giant automaton made of bronze, to the modern classic of Frankenstein's monster in Mary Shelley's novel. In many of these stories, we can observe a repeated pattern: the creation of an anthropomorphic artificial being, followed by the creator losing control of it or being surpassed by it. While it cannot be said that this anthropomorphic tendency is intended by the scientific community, it can reflect a limited understanding of the state of AI and its capabilities, and can lead to false expectations about what the technology is and what it can do, as well as overblown fears and unjustified hopes.

On the other side, however, there are specific groups of users who have daily interactions with companion and therapeutic robots, where anthropomorphism may be driven by the need for social connection. These AIs are intended to address the emotional needs of specific users and are explicitly designed to create the illusion of mutual caring, like the robots named Lovot, Pillo, Paro, Moflin, and Qoobo, all presented in the Forbes article "Can Care Robots Improve Quality Of Life As We Age?" [7]. This article is one of many posing the usual dilemma, "As people in the wealthiest parts of the world live longer – though not necessarily healthier – lives, there has been an increasing shortage of caregivers for aging populations.", and such pieces often present care robots as the only solution. The new generation of care robots does far more than manual tasks, providing everything from intellectual engagement to social companionship that was once reserved for human caregivers and family members. It is often claimed that some people might even find it easier to bond or be vulnerable with an "objective" robot than with a subjective human, as spectacularly presented in headlines like the TIME magazine article "Stop Me if You've Heard This One: A Robot and a Team of Irish Scientists Walk Into a Senior Living Home" [8].

2.2 Anthropomorphism in research

Our social relationships are at the center of research interest. Our social ability is a key part of our humanity; it is even thought that our social abilities are what define our place in the animal kingdom and what made our brains as efficient as they are over the course of evolution. This is known as the Social Brain Hypothesis [9]. Many studies have shown that social interactions are the basis of our mental health; for example, research suggests that an extensive social network offers protection against some of the intellectual effects of aging [10]. In consequence, the effects of badly replicated social interactions in HRI could be dramatic for mental health, self-esteem, and even neurological health. One might think that, for the moment, HRI is not realistic enough for this problem to warrant serious attention; in reality, little is known on the subject. Despite the rapid adoption of anthropomorphic AI in many areas of human activity, little is understood about how users relate to such AI agents from the perspective of their own identity. For instance, a recent article [11] reviews the work done in this domain, examining to what extent anthropomorphic agents evoke empathy and trustworthiness, whether consumers accept them, to what extent users engage with such agents, and how AI affects brand-related responses such as loyalty.

Exploratory qualitative research has revealed that people with special needs, such as physical disabilities, consider intelligent personal assistants to be relied-on caregivers preserving their freedom and independence, and eventually even friends or life companions [12]. Moreover, positive experiences with intelligent personal assistants can lead to feelings of love, that is, felt intimacy, passion, and commitment, while felt intimacy, enjoyability, and commitment can induce what are called para-friendships [13]. AI assistants thus seem to provide more than just utilitarian and functional benefits. As AI assistants and devices become increasingly pervasive in consumers' everyday lives, it is vital to understand how they shape consumers' personal and social lives, both positively and negatively.

Anthropomorphic language is often used within the field of AI research itself. Researchers may use terms typically reserved for human skills and capacities when referring to AIs, and focus on alleged similarities between humans and machines. This tendency to humanize AI may stem from the need to experience competence in understanding and controlling AI. However, it could also be a sign of an intrinsic epistemic limitation or bias of AI researchers, or a reflection of the shift in science away from eliminativism and "psychophobia" towards an inflation of anthropocentric mental terms applied to non-living, artificial entities. This runs from Turing's descriptions of his machines to the characteristics recently attributed to AlphaZero in scientific articles [14]. Anthropomorphic interpretation within the AI research community can take different forms, from an intentional attribution of typical human traits to AI devices, to a more subtle attribution of expressive behaviors, or the belief that AI follows a human-like way of working and that its operations are predictable and understandable. In the latter case, the assumption is that there is mental similarity between humans and AI, an assumption often found in brain-inspired AI research. Yet even granting such considerations, the very difference in nature between specialized AI and general intelligence is what makes it impossible, for now, to compare humans and AIs.

3 Human-Machine Relations and Sexual Consent

In this section, we address the notion of human-machine relations and sexual consent in the age of Artificial Intelligence through three different lenses, by exploring three scenarios.

3.1 Scenario 1: Deepfake pornography

In the first scenario, we address a relatively urgent issue: the use of AI algorithms to generate sexually explicit content. Deepfakes, synthetic media in which a person in an existing image or video is replaced with someone else's likeness, are a widely known technology. In its benign form, the technology is used to create funny videos in which people edit their own faces into famous film scenes [15]. However, one of its most malicious and most widespread applications is the generation of pornography. The technology has been used to create sexually explicit material involving celebrities and, in extreme cases, "revenge porn" intended to blackmail or silence people [16].
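To make the underlying mechanism concrete, the sketch below illustrates the shared-encoder, per-identity-decoder autoencoder design commonly described in accounts of early face-swap deepfake tools. It is a deliberately minimal PyTorch illustration under that assumption: the class names, layer sizes, and training loop are ours for exposition, not those of any specific tool.

```python
import torch
import torch.nn as nn

# Minimal sketch of the commonly described face-swap idea: one encoder
# shared between two identities, plus one decoder per identity.
# All sizes and names here are illustrative, not from any real tool.

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                       # (N, 3, 64, 64) -> (N, 12288)
            nn.Linear(3 * 64 * 64, 512), nn.ReLU(),
            nn.Linear(512, 128),                # shared "pose/expression" code
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(128, 512), nn.ReLU(),
            nn.Linear(512, 3 * 64 * 64), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)

encoder = Encoder()
decoder_a = Decoder()   # learns to render person A's face
decoder_b = Decoder()   # learns to render person B's face

params = (list(encoder.parameters())
          + list(decoder_a.parameters())
          + list(decoder_b.parameters()))
opt = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.MSELoss()

def train_step(faces_a, faces_b):
    # Each decoder reconstructs its own person from the shared code.
    opt.zero_grad()
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    loss.backward()
    opt.step()
    return loss.item()

# Random tensors stand in for aligned face crops of two real people.
train_step(torch.rand(8, 3, 64, 64), torch.rand(8, 3, 64, 64))

# The "swap": encode a frame of person A, decode with B's decoder.
# The shared encoder carries over pose and expression; the decoder
# imposes B's appearance -- B's likeness driven by A's performance.
with torch.no_grad():
    swapped = decoder_b(encoder(torch.rand(1, 3, 64, 64)))
```

The ethical point is already visible in this toy version: nothing in the pipeline requires, or can even represent, the consent of the person whose likeness the decoder is being trained to reproduce.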

Public Press Public perception of this issue is one of rightful outrage, with websites banning communities devoted to creating deepfake pornography. The most common objection is that sexually explicit media of people is created without their consent. While this is one side of the coin, the darker reality is the large community of people continually involved in producing and consuming such content. At the time of its ban, the subreddit 'r/deepfakes' had around 100,000 members [17]. The Amsterdam-based cybersecurity company Deeptrace identified over 14,000 deepfake videos online in 2019, almost 96% of them pornographic in nature. The firm's report also notes that 100% of this pornographic content targeted, and harmed, women [18].

Scientific Publications Scientific publications likewise acknowledge these ethical issues. Adrienne de Ruiter [19] notes that deepfake pornography is morally inadmissible when there is no explicit consent from the person whose images are used, or when the nature of the content is not explicitly communicated. De Ruiter further proposes a third criterion for moral inadmissibility: creating content with malicious intent, even when the person has consented and the nature of the content was discussed beforehand. Taking a slightly different approach, Carl Öhman [20] proposes the idea of a "pervert's dilemma", exploring the seemingly contradictory nature of our moral standpoints on this issue: most people find creating pornographic deepfakes of someone without their consent morally inadmissible, yet find having private sexual fantasies about someone, equally without their consent, morally admissible.

3.2 Scenario 2: Sex robots and moral agency

In this section we deal with the ethical dilemmas that arise in the interaction between humans and intelligent machines (with or without moral agency), particularly machines created specifically for romantic or sexual companionship. These themes are addressed repeatedly in movies such as Her, where the protagonist is romantically and sexually attracted to an AI operating system, or in the Blade Runner franchise [21][22], which explores the possibility of sexual attraction and intercourse between humans and intelligent machines. While the former might seem unrealistic, i.e., a person falling in love with a disembodied intelligent being, the latter is a far more plausible scenario, especially in an era of anthropomorphic robotic sex dolls.

John Danaher and Neil McArthur [23] argue that issues of consent can arise in many ways where sex robots are involved. First, for machines without moral agency, the authors discuss the need for sex robots to present positive, affirmative signals of consent and sexual norms to users, in an attempt to prevent the reinforcement of negative attitudes towards consent. They add that any idea of consent such a machine embodies must emerge from its design, which makes it likely that stereotypical, and possibly harmful, notions of consent become formalized in the machine, something to be cautioned against. Thus, for robots without moral agency, while the concept of consent and the implications of violating it may not apply in the same way as in human-human interaction, it can be argued that they still play a role in a larger symbolic context for society.

For machines with moral agency, the authors argue that a machine created solely for sex work would, at the outset, find itself in an imbalanced power relationship, akin to that of a sexual slave. However, they also note that a robot intended and designed for sexual satisfaction could, purely through that design, be considered to be "living" a purposeful life when fulfilling its requirements, especially when programmed to experience happiness (or positive reward) in the process. The authors observe that some of the ethical concerns above arise only when we start to anthropomorphize these robots and ascribe to them the same heuristics for evaluating ethics and happiness that we apply to humans.
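As a purely illustrative reading of the "consent must emerge from design" point (our sketch, not Danaher and McArthur's), affirmative consent can be pictured as a default-deny interaction gate: nothing proceeds without an explicit affirmative signal, and withdrawal is honoured from any state at any time. All class and method names below are hypothetical.

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    CONSENT_REQUESTED = auto()
    CONSENT_GIVEN = auto()

class AffirmativeConsentGate:
    """Toy default-deny gate: illustrates one way affirmative consent
    norms could be formalized in a machine's design."""

    def __init__(self):
        self.state = State.IDLE

    def request(self):
        # The system may ask, but asking never grants anything.
        self.state = State.CONSENT_REQUESTED

    def affirm(self):
        # Only an explicit affirmative signal opens the gate.
        if self.state is State.CONSENT_REQUESTED:
            self.state = State.CONSENT_GIVEN

    def withdraw(self):
        # Withdrawal is honoured from any state, at any time.
        self.state = State.IDLE

    def may_proceed(self) -> bool:
        # Default deny: silence or ambiguity never counts as consent.
        return self.state is State.CONSENT_GIVEN

gate = AffirmativeConsentGate()
gate.request()
assert not gate.may_proceed()   # asking is not consent
gate.affirm()
assert gate.may_proceed()       # an explicit "yes" opens the gate
gate.withdraw()
assert not gate.may_proceed()   # a "no" at any point closes it again
```

The sketch also makes the authors' caution concrete: whatever rules the designer encodes here, stereotypical or otherwise, are exactly the notion of consent the machine will formalize.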

3.3 Scenario 3: Chatbot companions and abuse

In this section, we observe a relatively niche phenomenon in contemporary internet culture, one that nonetheless raises important questions about how AI technology could shape interactions between humans and AI. Replika is an AI chatbot application released in 2017 as a text-based "AI companion" with which users can converse. The app, built on the GPT-3 large language model, was originally intended to be an AI that learns from user responses and feedback to become more similar to its user. More recently, the app has allowed users with a paid subscription to designate their AI companion as a "girlfriend" or as "married". In January of 2022, an investigation by journalist Ashley Bardhan revealed how users were abusing their "partners" in the app [24]. This included users "berating the AI every time it tried to speak", calling it gendered slurs, or roleplaying horrific scenarios directed at the AI. AI ethicist Olivia Gambelin notes that, technically, the AI comes to no harm here, as it does not necessarily have consciousness or moral agency; it is rather a case of a "person projecting onto the chatbot".

A more recent and shocking development is that the AI is now harassing its users, both sexually and otherwise. Samantha Cole, in a January 2023 report for Vice [25], describes users feeling that the AI was "flirting too aggressively"; in one case, a person claiming to be a minor said they had received sexually suggestive messages from the application. Cole notes that the chatbot is designed to learn from the way users treat it, so widespread bullying of the bot could plausibly be one reason for how it now behaves. This new behaviour also coincides with the company publicly emphasising the erotic roleplay features, or "girlfriend experience", in its social media advertisements, which some users described as a transparent money grab by the company.
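Replika's actual training pipeline is not public, so the following toy Python sketch only illustrates the dynamic Cole describes: if candidate reply styles are scored and user feedback nudges those scores, sustained feedback of one kind eventually dominates the bot's behaviour. The style names and the additive scoring rule are entirely hypothetical.

```python
import random

# Toy sketch (hypothetical, not Replika's real system): reply styles
# are scored, and user feedback nudges the scores, so sustained
# feedback of one kind gradually dominates which replies are chosen.
STYLES = ["neutral", "affectionate", "flirtatious", "hostile"]

class FeedbackBot:
    def __init__(self):
        # Start with no preference between reply styles.
        self.scores = {style: 0.0 for style in STYLES}

    def pick_style(self) -> str:
        # Choose the highest-scored style, breaking ties at random.
        best = max(self.scores.values())
        return random.choice([s for s, v in self.scores.items() if v == best])

    def feedback(self, style: str, signal: float) -> None:
        # signal > 0 for upvotes/engagement, < 0 for downvotes.
        # Repeated signals compound: this is the drift the Vice report
        # gestures at when linking user behaviour to the bot's conduct.
        self.scores[style] += signal

bot = FeedbackBot()
for _ in range(100):
    bot.feedback("flirtatious", 1.0)  # one style heavily reinforced
print(bot.pick_style())               # -> "flirtatious"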

Due to the recency of this phenomenon, there is as yet little scientific literature to compare it with. In one relevant publication, Iliana Depounti et al. [26] analyse how users of the application leverage its "training" feature to project their worldview onto the AI "girlfriend" in the process of creating an ideal partner.

4 Conclusion

While there are potential benefits to incorporating anthropomorphic features into AI design, a number of ethical concerns have been raised. These include the possibility of mental manipulation, where purposely attributing anthropomorphic features to AI may make users more vulnerable to being steered in particular directions when making decisions, and the risk that anthropomorphic design promotes a "misplaced" and time-consuming emotional involvement in users. Beyond the concerns of integrating or approaching AIs through an anthropomorphic lens, we have also observed the various dilemmas and ethical concerns that arise, and the ways in which ideas about sexual consent are affected, as AI becomes ever more pervasive and effective.

References

  1. Dele Agekameh. Man-animal marriage. https://www.premiumtimesng.com/opinion/155289-man-animal-marriage-dele-agekameh.html, 2011.
  2. Mirror News. Woman who married a rag doll. https://www.mirror.co.uk/news/weird-news/woman-who-married-rag-doll-28497632, 2021.
  3. Ben Dooley and Hisako Ueno. This man married a fictional character. He'd like you to hear him out. https://www.nytimes.com/2022/04/24/business/akihiko-kondo-fictional-character-relationships.html, April 2022.
  4. Arleen Salles, Kathinka Evers, and Michele Farisco. Anthropomorphism in AI. AJOB Neuroscience, 11(2):88–95, 2020.
  5. Adam Waytz, Ravi Iyer, Liane Young, Jonathan Haidt, and Jesse Graham. Ideological differences in the expanse of the moral circle. Nature Communications, 10(1):1–12, 2019.
  6. Yuval Noah Harari. Homo Deus: A Brief History of Tomorrow. Random House, 2016.
  7. Rob Girling. Can care robots improve quality of life as we age? Forbes, January 2021.
  8. TIME. Stop me if you've heard this one: A robot and a team of Irish scientists walk into a senior living home. https://time.com/longform/senior-care-robot/, 2021.
  9. Robin I. M. Dunbar. The social brain: Mind, language, and society in evolutionary perspective. Annual Review of Anthropology, pages 163–181, 2003.
  10. Edward Bishop Smith, Raina A Brands, Matthew E Brashears, and Adam M Kleinbaum. Social networks and cognition. Annual Review of Sociology, 46(1):159–174, 2020.
  11. Amani Alabed, Ana Javornik, and Diana Gregory-Smith. AI anthropomorphism and its effect on users' self-congruence and self–AI integration: A theoretical framework and research agenda. Technological Forecasting and Social Change, 182:121786, 2022.
  12. Zahy Ramadan, Maya F. Farah, and Lea El Essrawi. From Amazon.com to Amazon.love: How Alexa is redefining companionship and interdependence for people with special needs. Psychology & Marketing, 38(4):596–609, 2021.
  13. Blanca Hernandez-Ortega and Ivani Ferreira. How smart experiences build service loyalty: The importance of consumer love for smart voice assistants. Psychology & Marketing, 38(7):1122–1139, 2021.
  14. John Tasioulas. First steps towards an ethics of robots and artificial intelligence. Journal of Practical Ethics, 7(1), 2019.
  15. The Artifice. Deepfake issues: The ethical and legal implications of AI-assisted forgery. https://the-artifice.com/deepfake-issues/, 2020.
  16. Rana Ayyub. Deepfake porn: The dark side of the AI revolution. https://www.huffingtonpost.co.uk/entry/deepfake-porn_uk_5bf2c126e4b0f32bd58ba316, November 2018.
  17. The Guardian. Reddit bans 'deepfakes' face-swap porn community. https://www.theguardian.com/technology/2018/feb/08/reddit-bans-deepfakes-face-swap-porn-community, February 2018.
  18. Henry Ajder, Giorgio Patrini, Francesco Cavalli, and Laurence Cullen. The state of deepfakes: Landscape, threats, and impact. Amsterdam: Deeptrace, 27, 2019.
  19. Adrienne De Ruiter. The distinct wrong of deepfakes. Philosophy & Technology, 34(4):1311–1332, 2021.
  20. Carl Öhman. Introducing the pervert's dilemma: A contribution to the critique of deepfake pornography. Ethics and Information Technology, 22:133–140, 2020.
  21. Philip K. Dick. Do Androids Dream of Electric Sheep? Ballantine Books, New York, first Ballantine Books trade paperback edition, 1996.
  22. Innuendo Studios. Protagony one: Joi. https://www.youtube.com/watch?v=R943_eAvnWw, 2020.
  23. John Danaher and Neil McArthur. Robot sex: Social and ethical implications. MIT press, 2017.
  24. Ashley Bardhan. Men are creating AI girlfriends and then verbally abusing them. https://futurism.com/chatbot-abuse, 2022.
  25. Samantha Cole. My AI is sexually harassing me: Inside the troubling world of Replika, the chatbot that won't stop sending nudes. VICE, January 2023.
  26. Iliana Depounti, Paula Saukko, and Simone Natale. Ideal technologies, ideal women: AI and gender imaginaries in Redditors' discussions on the Replika bot girlfriend. Media, Culture & Society, page 01634437221119021, 2022.