Wild Narratives: Exploring the Effects of Animal Chatbots on Empathy and Positive Attitudes toward Animals
Abstract.
Rising numbers of animal abuse cases are being reported around the world. While chatbots have been effective in influencing their users’ perceptions and behaviors, little if any research has hitherto explored the design of chatbots that embody animal identities for the purpose of eliciting empathy toward animals. We therefore conducted a mixed-methods experiment to investigate how specific design cues in such chatbots can shape their users’ perceptions of both the chatbots’ identities and the type of animal they represent. Our findings indicate that such chatbots can significantly increase empathy, improve attitudes, and promote prosocial behavioral intentions toward animals, particularly when they incorporate emotional verbal expressions and authentic details of such animals’ lives. These results expand our understanding of chatbots with non-human identities and highlight their potential for use in conservation initiatives, suggesting a promising avenue whereby technology could foster a more informed and empathetic society.
1. Introduction
Many countries have recently reported rises in animal abuse (Maisner, 2023; Cheng, 2024; Harris, 2023). Various factors have been identified as contributing to this troubling trend, including a lack of awareness that animals can suffer pain (Agnew, 1998). Another factor is an absence of empathy for animals, as individuals who do not recognize them as sentient are less likely to treat them humanely (Miller, 2001; Agnew, 1998). In addition, the impossibility of communicating with animals verbally leaves them particularly susceptible to misrepresentation and misunderstanding, further increasing their vulnerability to abuse (Thomsen et al., 2023).
Researchers have explored various approaches to promoting people’s empathy and positive attitudes toward animals. These range from live animal encounters in educational settings like zoos (Małecki et al., 2019) to first-person narratives purportedly written by animals (Akerman, 2019). A recurring theme across these strategies is the attempt to provide animals with a voice, as a means of evoking empathy and thus helping to shift public perception in favor of better treatment of animals (Young et al., 2018; Demello, 2018; Małecki et al., 2019). Additionally, some efforts to improve human empathy with animals have involved letting people observe animals using digital tools such as iPads (Webber et al., 2017b). The results suggest that seeing animals as human-like indeed fosters empathy by heightening people’s awareness that animals also have thoughts and feelings (Webber et al., 2017a). However, some research has raised concerns that the use of technology with animals is unnatural or increases animal stress (Diana et al., 2021; Webber et al., 2017a).
Extensive research has focused on the ability of conversational agents (also called chatbots) to shape people’s perceptions and engagement through a variety of narrative techniques (Bickmore et al., 2009; Park et al., 2023). To enhance chatbots’ facilitation of prosocial attitudinal and behavioral changes (Park et al., 2023), as well as their ability to provide mental-health support (Bickmore et al., 2009; de Gennaro et al., 2020; Lee et al., 2019), scholars have explored methods of humanizing them: for instance, assigning them names and/or portraits, and designing them to exhibit emotions (Shi et al., 2020; Liu and Sundar, 2018; Seeger et al., 2018). Humanizing chatbots may amplify the positive effects of other aspects of their design. For example, previous research has indicated that chatbots’ employment of first-person narratives can, in addition to increasing empathy and fostering attitudinal changes toward the characters represented in such narratives (Bickmore et al., 2009), increase user enjoyment during interactions (Lee et al., 2023).
In short, chatbots have demonstrated their ability to influence users’ perceptions across various domains (Park et al., 2023; de Gennaro et al., 2020; Lee et al., 2019), and research to date suggests that a narrative from an animal’s first-person viewpoint could be effective in promoting empathy and changing human attitudes toward the animal, or toward animals in general (Beierl, 2008; James, 2019; Keen, 2006). Additionally, we suggest that chatbots could serve as a useful means of simulating human-animal interactions without jeopardizing animal welfare. Nevertheless, research on whether and how chatbots could be designed to represent animals, and how such representation might influence users’ empathy and attitudes toward animals, has hitherto been rare to nonexistent.
To address this absence, we designed chatbots that speak from the perspective of an animal. Inspired by prior work on chatbots’ humanized design cues (e.g., identity, non-verbal and verbal cues) (Seeger et al., 2018; Shi et al., 2020; Liu and Sundar, 2018), we were interested in whether and how such cues could be redesigned to represent animals, and how this would affect user perceptions of chatbots’ identities. We therefore conducted a mixed-methods experiment with 240 participants, each of whom was randomly assigned to one of eight conditions, i.e., interaction with a chatbot that had zero, one, two, or all three of the above-mentioned cue types. All groups then engaged in interactive dialogue with their respective chatbots. We evaluated the participants’ empathy, attitudes, and intention to engage in prosocial behavior toward the animals portrayed in these narratives. Our results showed that a chatbot’s utilization of verbal cues enhanced participants’ empathy, whereas the use of non-verbal ones resulted in reduced empathy. Notably, first-person narratives by the chatbots succeeded in boosting participants’ perception of the chatbot’s identity as animal-like, and this led to more positive attitudes and increased prosocial behavioral intentions.
This pioneering study makes the following contributions to the human-computer interaction community. First, it extends exploration of human-like chatbots into the domain of animal-like ones, and thus opens new pathways to understanding how non-human personas can be effectively utilized. Second, it demonstrates the ability of chatbots to influence their users’ perceptions of non-human entities. Specifically, we found that chatbots embodying an ‘animal perspective’ could significantly improve user empathy, attitudes, and prosocial behavioral intentions toward animals, particularly when they employed emotional expressions. Finally, this study provides a unique perspective on how human-chatbot interactions can be designed to have positive social impacts. These insights could pave the way for innovative applications in education and conservation, underscoring the potential of chatbot technology to contribute to more empathetic and better-informed societies.
2. Related Work
2.1. Cruelty to Animals
Animal abuse can be defined as any action that compromises the well-being of an animal, irrespective of intent, necessity, or social acceptability (Agnew, 1998). Previous studies have identified various factors that may contribute to individuals’ cruelty toward animals. One is lack of awareness or comprehension regarding the abusive nature of their actions (Agnew, 1998), which may be due to ignorance about how a behavior indirectly contributes to cruelty, or to a belief that animals are incapable of experiencing emotions and/or suffering (Agnew, 1998). Another key factor is lack of empathy, including empathy with animals in distress (Agnew, 1998; Miller, 2001; McPhedran, 2009).
Species that share greater biological similarities with humans, like chimpanzees and bonobos, tend to elicit higher levels of empathy and moral consideration, as evidenced by high levels of public support for their protection (Miralles et al., 2019; Agnew, 1998). However, this bias leaves more evolutionarily distant animals, such as reptiles and invertebrates, exposed to exploitation in research, the exotic-pet trade, and factory farming (Marshall et al., 2020). Further compounding this vulnerability is the communication barrier. The inability to verbally communicate their needs and experiences leaves animals susceptible to misrepresentation and manipulation, as when owners downplay signs of illness or distress in their pets (Mellor, 2016).
2.2. Representing Animals through First-person Narratives
A popular method of fostering empathy towards animals is the use of animal narratives (Young et al., 2018; Małecki et al., 2019). Framed in ways that resonate with human experiences and emotions, these stories are intended to bridge the gap in understanding between species (Demello, 2018). As DeMello (Demello, 2018) has argued, animal narratives can make the internal lives of animals – their desires, fears, and joys – vivid and relatable for human readers, ultimately promoting both a sense of connection and moral consideration.
Various works of literature offer first-person animal narratives. These include Anna Sewell’s Black Beauty and Leo Tolstoy’s Strider, both of which feature first-person horse narratives of grueling labor and heartbreaking separation, and Franz Kafka’s Investigations of a Dog, which offers a dog’s perspective on the world. Despite their fictional nature, these narratives can evoke a sense of sadness and indignation in readers, fostering empathy and potentially improving their attitudes toward animals (Beierl, 2008; James, 2019).
This phenomenon can be explained by the concept of narrative empathy, i.e., the empathy experienced by readers toward characters in a book (Keen, 2006). It allows readers to step into the shoes of a character, experiencing the world through their senses and emotions. In the case of animal narratives, narrative empathy has been argued to be an important mechanism whereby these narratives can promote empathy toward animals (James, 2019). In recognition of this, the term narrative interspecies empathy has been introduced (Małecki et al., 2019). Importantly, empathy evoked toward a single animal character can result in more positive attitudes toward that animal’s entire species, or even animals belonging to other species (Małecki et al., 2019).
Animal narratives also extend beyond literature to educational settings that offer live animal encounters, such as zoos (Akerman, 2019; Robovskỳ et al., 2020). Research suggests that when educators in such contexts employ narrative techniques that emphasize animal individuality, this can foster more respectful attitudes among visitors (Akerman, 2019). For example, instead of simply stating facts about a hesitant crab, an educator could personalize its behavior with a relatable narrative: "We are going to let him be, he does not want to come out of his home today" (Young et al., 2018). This approach highlights the crab’s agency and potentially reduces its objectification, thus fostering respect (Young et al., 2018). It also aligns well with the evolving role of zoos, in which education and fostering positive connections with animals are key aspects of their mission (Robovskỳ et al., 2020).
2.3. Representing Animals through Anthropomorphism
A common theme in narratives aimed at fostering empathy toward animals is the use of anthropomorphism, i.e., the ascription of human-like traits to non-human entities (Waytz et al., 2010). As humans cannot access the cognitive states of animals, the creation of an animal narrative requires authors to project human-like abilities and cognition onto their animal characters (Barcz, 2015; Demello, 2018). For example, Black Beauty depicts horses with human-like speech, emotions, and desires (Poirier et al., 2019). Such anthropomorphism has been argued to be a crucial step in fostering pro-animal attitudes by making animals more relatable (Young et al., 2018; Demello, 2018).
However, excessive anthropomorphism of animals can result in false empathy, i.e., feelings of empathy based on inaccurate perceptions (Young et al., 2018; Akerman, 2019). For example, when visiting animals in a zoo during winter, some visitors may develop an inaccurate perception that the animals are feeling as cold as they themselves are (Akerman, 2019). In some cases, such false empathy can be detrimental to the well-being of humans and animals alike (Young et al., 2018). Therefore, responsible use of anthropomorphism, prioritizing the interests of animals, is crucial (Young et al., 2018; Akerman, 2019).
Virtual pets, i.e., digital simulations of animals, have evolved beyond their original role as mere substitutes for real pets (Laureano-Cruces and Rodriguez-Garcia, 2012). Research has shown that people who anthropomorphize virtual pets treat them as having life, mental states, and sociality; and in some cases, this can lead to such pets providing an important sense of companionship, and even emotional support and learning opportunities (Melson et al., 2009). Virtual pets have also been found to promote positive behaviors and attitudes, including healthy eating habits (Byrne et al., 2012), stronger adherence to social norms (Hildmann et al., 2008), and learning motivation (Chen et al., 2011; O’Connell et al., 2024). Nevertheless, it remains unclear whether these virtual animals can cultivate empathy with animals and/or pro-animal attitudes.
Chatbots have considerable potential to drive positive social change. For example, they have been used to promote prosocial behavior (Park et al., 2023; Namkoong et al., 2023), provide companionship (Skjuve et al., 2021; Zhou et al., 2020), and offer mental-health interventions (Bickmore et al., 2010; de Gennaro et al., 2020). A key consideration when designing chatbots aimed at influencing human behavior is their level of anthropomorphism. This is because chatbots with human-like characteristics tend to foster engagement (Bickmore et al., 2009), emotional connection (Araujo, 2018), and reciprocal behavior (Liang et al., 2024; Lee et al., 2019). For example, participants in one study (Lee et al., 2019) offered care and support to a chatbot that expressed its own emotions and revealed its past mistakes, a process that resulted in them developing more compassion for themselves.
Researchers have explored various techniques for making chatbots more human-like. In addition to endowing them with emotions (Liu and Sundar, 2018), these have included giving them human-like names (Shi et al., 2020) and pictures (Lee and Lee, 2023). Seeger et al. (Seeger et al., 2018) synthesized prior findings into a framework for anthropomorphic chatbot design, which relies on three types of cues: verbal, nonverbal, and identity. Verbal cues are words and sentences used by the chatbot, generally though not always in textual form. Researchers have explored various strategies to increase human-likeness involving verbal cues, such as the use of social dialogue, emotional expressions, and personal pronouns (Seeger et al., 2018). For example, the expression of compassion has yielded positive results for chatbots designed to provide mental-health support (de Gennaro et al., 2020; Liu and Sundar, 2018). Nonverbal cues, meanwhile, pertain to communicative expressions and behaviors that do not involve speech or writing. In general, these may encompass a variety of elements like facial expressions and bodily movements (Derrick et al., 2011). However, as text-based chatbots lack physical presence, nonverbal cues are limited to more subtle actions such as lengthened response times or the use of emojis (Seeger et al., 2018). The use of such nonverbal cues has been found to lead to chatbots being perceived more positively, e.g., as having higher credibility or greater warmth (Yu and Zhao, 2024; Beattie et al., 2020). Lastly, identity cues refer to the information provided by the chatbot that communicates its identity. This could involve the use of human names, profile pictures, and other demographic details like gender and race (Seeger et al., 2018). Prior research has found that declaring a chatbot to have a human identity leads to more human-like perceptions of it, but that this comes at the cost of higher expectations about what it can accomplish (Go and Sundar, 2019; Mozafari et al., 2020).
In short, previous research has indicated that presenting information from an animal’s perspective can significantly influence people’s empathy and attitudes toward that animal, its species, or animals in general, and that human-like chatbots can cultivate positive attitudes and behavioral changes toward humans. However, it remains unclear how well a chatbot representing an animal, narrating in the first person, can evoke empathy and foster positive attitudes toward that animal or its species. Filling this gap in our knowledge could pave the way for innovative chatbot designs that enhance people’s understanding of animal conservation and aid in combating animal cruelty.
2.4. Research Questions
While prior research has demonstrated the persuasive potential of chatbots that emulate human characteristics, such human-like chatbots are inherently limited in their ability to provide the perspectives of animals, and thus to promote a sense of connection with animals. Therefore, we integrated insights from the existing bodies of research on animal narratives and human-like chatbots to develop an animal-like chatbot that spoke from the perspective of an animal (hereafter referred to as an animal chatbot). In designing it, we adapted the same three cues typically used in the development of human-like chatbots, i.e., verbal, non-verbal, and identity cues. We then conducted a mixed-methods experiment to evaluate the effectiveness of this animal chatbot at influencing its users’ empathy with and perceptions of animals. More specifically, we sought answers to the following three research questions (RQs):
RQ1: How do the designs of an animal chatbot’s verbal, non-verbal, and identity cues affect its users’ perceptions of its identity?
RQ2: To what extent does the design of an animal chatbot affect its users’ empathy, attitudes, and prosocial behavioral intentions toward the animal it represents?
RQ3: To what extent do users’ perceptions of animal-chatbot identity affect their empathy, attitudes, and prosocial behavioral intentions toward the animal represented?
3. Method
3.1. Chatbot Design
To address our research questions, we developed a chatbot that speaks from the perspective of a fictional horse. Given that animals are perceived differently based on their traits (Agnew, 1998; Miralles et al., 2019), we considered horses to represent a viable middle ground between animals that are already perceived very positively, such as dogs and cats, and those that are perceived negatively, such as insects (Serpell and Paul, 2002). In addition, because horses are often subjected to abuse and cruelty by humans (Corey, 2012), the chatbot could positively influence attitudes toward them and potentially promote better treatment practices.
3.1.1. Conversation Design
The animal chatbot provided its users with the first-person perspective of a horse learning to wear a saddle, bridle and stirrups and carry humans for the first time. The narrative drew inspiration from Anna Sewell’s popular novel "Black Beauty" (Sewell, 1877). The novel’s first-person narrative from the perspective of a horse enables an emotional connection with the audience, a technique that we felt could be mirrored in animal chatbot design. The novel’s underlying messages of empathy, compassion, and moral responsibility (Gavin, 2017) also resonate with the core principles of ethical chatbot design.
The animal-chatbot conversation we designed consisted of three primary segments. The initial segment involved introductions, during which the chatbot introduced itself as a horse residing on a farm (“Let me share a bit more about myself. I am a horse, and I live with my owner and many other animals on a farm”), and allowed the user to introduce themself as well. During the second segment, the animal chatbot narrated its experience of learning to wear a saddle, etc., and carry humans: “Every day, I am supposed to carry my owner around the farm and help him with his work. While doing that, I also have to wear a lot of things, such as a bridle, bit, and saddle”. The chatbot also commented on its feelings about this: “I remember the first time I learned how to do all of this. It was such a painful experience”. Then, in the final segment, the animal chatbot made some final remarks to the user before ending the conversation. For instance: “I have to go now. Thank you for listening to me and I hope we get to speak again some day. Goodbye!”
To boost their engagement levels, users were given ample opportunities to interact with the animal chatbot by responding to prompts or asking questions. Where the conversation included yes/no questions, the participants were given predetermined response options. For example, in the initial segment of the conversation, when the animal chatbot posed the question, “Do you also live and work on a farm?”, its interlocutor could choose one of the following three response options: “yes”, “no”, and “I would rather not say”. Questions designed to encourage participants to self-reflect and share their opinions, on the other hand, were presented without predefined response options. For example, while discussing the challenging experience of being ridden for the first time, the animal chatbot asked users, “Can you imagine what it feels like to have a piece of hard steel shoved inside your mouth?”, and allowed them to respond freely. Users were also provided with several opportunities to ask the animal chatbot open-ended questions, facilitating personalized interactions (please refer to the supplemental materials for an example of a whole dialogue).
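For illustration, the sketch below shows one way such a three-segment flow with a mix of predefined and open-ended turns could be encoded. The structure and field names are our own illustrative assumptions, not the actual UChat flow; the wording of the prompts is taken from the examples above.

```python
# A minimal, illustrative representation of the three-segment dialogue.
# Each step either offers predefined reply options or accepts free text.
CONVERSATION_FLOW = [
    {
        "segment": "introduction",
        "bot": "Do you also live and work on a farm?",
        "options": ["yes", "no", "I would rather not say"],  # predefined replies
    },
    {
        "segment": "narrative",
        "bot": ("Can you imagine what it feels like to have a piece of hard "
                "steel shoved inside your mouth?"),
        "options": None,  # open-ended; the reply is handled by the LLM backend
    },
    {
        "segment": "closing",
        "bot": "I have to go now. Thank you for listening to me. Goodbye!",
        "options": None,
    },
]
```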




Figure 1. Four horse-head emojis used as nonverbal cues: a horse with a tear on its cheek (sadness), a smiling horse with an open mouth (happiness), a horse with a head bandage and a sad expression (injury or pain), and a blue-faced horse (surprise or fear).
3.1.2. Integrating Verbal, Nonverbal, and Identity Cues
To facilitate our exploration of the impact of verbal, nonverbal, and identity cues on perceptions of animals (Seeger et al., 2018), we integrated three cues into our animal chatbot and allowed each to be dynamically enabled or disabled (Fig. 2). We tailored each cue to be more specific to animal rather than human attributes, as explained below.
Verbal Cues. Inspired by prior literature, we implemented verbal cues in our animal chatbot through the expression of emotions (Seeger et al., 2018; Bernaerts et al., 2014). The various human-like emotions previously incorporated into chatbots have included gratitude (Namkoong et al., 2023) and empathy (Liu and Sundar, 2018). Similarly, human-like emotions have been used in animal narratives to increase empathy and improve attitudes toward animals (Demello, 2018; Young et al., 2018; Małecki et al., 2019; Sewell, 1877). In our case, these expressions encompassed emotions ranging from excitement (e.g., “I am really excited to share more about my life!”) to discomfort (e.g., “the shoes felt very heavy”) and even pain (e.g., “I felt a sharp, searing pain on my back”). Our chatbot versions that lacked verbal cues did not utilize any emotional expressions.
More specifically, when describing the experience of learning to wear a bit, the animal chatbot with verbal cues enabled stated, “I felt so restless, and really wanted to get rid of it. But there was no way to remove the nasty thing!” But when verbal cues were disabled, it simply said, “It could not be removed.” Importantly, while our animal chatbots’ emotional expressions may resemble those of human-like chatbots, they emphasized its animal identity through contextual framing of these emotions. Notably, most of the emotional expressions occurred during the second segment of the conversation, in which the chatbot narrated experiences that are highly specific to horses.
Nonverbal Cues. As chatbots lack a graphic embodiment, a popular strategy involving nonverbal cues is the use of emojis (Seeger et al., 2018). We adopted this strategy: i.e., the animal chatbot utilized emojis when nonverbal cues were enabled, but did not use any when such cues were disabled. To distinguish our animal chatbot from those designed to be human-like, its emojis depicted horse faces, such as a smiling horse, a scared horse, and a sad horse (Fig. 1), which we created with the help of Emoji Kitchen (https://emojikitchen.dev/).
Identity Cues. In our animal chatbot, identity cues were implemented through a displayed name and profile picture. In instances where identity cues were enabled, the animal chatbot profile picture was a photograph of a horse (Fig. 2, left), accompanied by the name "Strider". By using a horse profile picture and a name associated with horses, our implementation of identity cues diverged from those typically employed in human-like chatbots. When identity cues were disabled, on the other hand, the chatbot featured a robot as its profile picture (Fig. 2, right), alongside the name "Horse Bot". We chose the name "Strider" due to its use in the Tolstoy story Strider: The Story of a Horse (Tolstoy et al., 2015).
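To make the cue manipulation concrete, the following sketch shows one way the three binary cues could drive message rendering and the displayed profile. The class and function names are illustrative assumptions rather than our actual implementation, which was built in UChat.

```python
from dataclasses import dataclass

# Hypothetical configuration object: one boolean per experimental cue.
@dataclass
class CueConfig:
    verbal: bool      # emotional verbal expressions in the message text
    nonverbal: bool   # horse-face emojis appended to messages
    identity: bool    # horse name ("Strider") and horse profile picture

def render_message(neutral: str, emotional: str, emoji: str, cfg: CueConfig) -> str:
    """Pick the emotional or neutral phrasing and optionally append an emoji."""
    text = emotional if cfg.verbal else neutral
    return f"{text} {emoji}" if cfg.nonverbal else text

def profile(cfg: CueConfig) -> dict:
    """Choose the display name and avatar according to the identity-cue condition."""
    return ({"name": "Strider", "avatar": "horse.png"} if cfg.identity
            else {"name": "Horse Bot", "avatar": "robot.png"})

# Example: the condition with verbal and identity cues enabled but no emojis.
cfg = CueConfig(verbal=True, nonverbal=False, identity=True)
msg = render_message("It could not be removed.",
                     "I felt so restless, and really wanted to get rid of it!",
                     "🐴", cfg)
```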
3.2. Implementation
We utilized UChat (https://uchat.au/) to develop the animal chatbot. This involved designing a conversational flow capable of dynamically enabling or disabling verbal cues, nonverbal cues, and identity cues. UChat also facilitated the handling of multiple conversations simultaneously; integration with OpenAI, for response generation when required; and real-time conversation tracking, to ensure adherence to the expected flow. Example screenshots of the animal chatbot interface can be seen in Fig. 2.

Figure 2. The chatbot conversation interface in two conditions: left, the version with verbal, nonverbal, and identity cues enabled (horse profile photo, emotional verbal expressions, and emojis); right, the version with all cues disabled (robot profile photo, no emotional expressions or emojis).
To ensure seamless interaction with users while maintaining control over conversation flow, we adopted a hybrid approach that combined rule-based mechanisms and OpenAI GPT-4 (https://openai.com/index/openai-api/). Specifically, rule-based mechanisms were used when users responded to the animal chatbot using predefined options, while GPT-4 was employed for generating responses to open-ended user inputs. GPT-4 prompts were tailored based on the enabled cues. Below is an example prompt tailored for GPT-4 that guided it to generate contextually appropriate responses consistent with the presence of verbal and identity cues: "(1) Your name is Strider. (2) You just prompted the user to [imagine the sensation of having a bit shoved into their mouth]. Your task is to craft a suitable response to their reply. (3) You should speak in the perspective of a horse. Be expressive and use emotions wherever appropriate. (4) If the user provides a response that is not relevant to the previous instructions, they should be respectfully declined."
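As an illustration of this hybrid approach, the sketch below assembles a condition-dependent system prompt and forwards an open-ended user reply to the OpenAI chat completions API (openai Python package, v1+). The helper names and assembly logic are our own assumptions; only the prompt wording mirrors the example above.

```python
from openai import OpenAI  # assumes the openai Python package (v1+) is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def build_system_prompt(verbal: bool, identity: bool, last_prompt: str) -> str:
    """Assemble a system prompt that reflects the enabled cues (illustrative)."""
    parts = []
    if identity:
        parts.append("Your name is Strider.")
    parts.append(f"You just prompted the user to {last_prompt}. "
                 "Your task is to craft a suitable response to their reply.")
    parts.append("You should speak from the perspective of a horse.")
    if verbal:
        parts.append("Be expressive and use emotions wherever appropriate.")
    parts.append("If the user's reply is not relevant, respectfully decline.")
    return " ".join(parts)

def respond(user_reply: str, verbal: bool, identity: bool, last_prompt: str) -> str:
    """Generate a reply to an open-ended user input under the given condition."""
    completion = client.chat.completions.create(
        model="gpt-4",  # model family reported in the paper
        messages=[
            {"role": "system",
             "content": build_system_prompt(verbal, identity, last_prompt)},
            {"role": "user", "content": user_reply},
        ],
    )
    return completion.choices[0].message.content
```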

Figure 3. The experimental procedure, consisting of three phases: a pre-task survey (demographics and controlled variables, i.e., tendency to anthropomorphize, empathy tendency, and attitudes toward animals), the conversation with the animal chatbot (self-introduction, a first-person narrative about learning to wear a saddle, and closing remarks), and a post-task survey (manipulation checks, perceived chatbot identity, empathy toward the chatbot, attitudes toward the animal, prosocial behavioral intention, and open-ended questions).
3.3. Experimental Design
Given our research focus on the effects of verbal, non-verbal, and identity cues, we employed a between-subjects randomized experiment with a 2 (verbal cues) × 2 (nonverbal cues) × 2 (identity cues) factorial design, in which each cue was treated as an independent binary variable. This resulted in a total of eight experimental groups. The primary dependent variables we assessed were 1) empathy toward the animal chatbot, 2) attitudes toward the animal, and 3) prosocial behavioral intention toward the animal (Fig. 3).
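The following is a minimal, illustrative sketch of how participants could be randomly allocated across the eight cells of this 2 × 2 × 2 design (30 per cell). It is not our actual assignment script; the seed and function names are hypothetical.

```python
import random
from itertools import product

# The eight conditions of the 2 (verbal) x 2 (nonverbal) x 2 (identity) design.
CONDITIONS = list(product([False, True], repeat=3))  # (verbal, nonverbal, identity)

def assign(participant_ids, per_condition=30, seed=42):
    """Randomly assign participants so each condition receives `per_condition` people."""
    slots = [cond for cond in CONDITIONS for _ in range(per_condition)]
    rng = random.Random(seed)  # seed chosen arbitrarily for reproducibility
    rng.shuffle(slots)
    return dict(zip(participant_ids, slots))

# Example: 240 participants, 30 per condition.
assignment = assign([f"P{i}" for i in range(1, 241)])
```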
3.3.1. Participants
Our recruitment criteria were that participants must be 1) at least 18 years of age; 2) able to read, write, and speak fluent English; and 3) able to use messaging platforms on their own devices. A total of 240 participants (30 for each condition) were recruited via social-media platforms. Participant IDs (denoted with a P, e.g., P48) were assigned at random. In our recruiting poster, we disclosed the duration of the study, along with the participants’ right to drop out at any time. Demographically, 59.2% identified as female, 39.6% as male, and 1.2% as other genders. The majority (94%) were aged 18-34, and 57% were educated to bachelor’s-degree level or higher.
3.3.2. Procedure
The participants were initially presented with a Participant Information Sheet, which asked them to provide consent before proceeding. Next, they completed a pre-task survey aimed at gathering demographic information and assessing their pre-existing tendencies to empathize and anthropomorphize. They were also notified that if any content or questions made them feel uncomfortable, they could skip them without penalty. Subsequently, they were randomly assigned to one of our eight animal-chatbot versions and instructed to engage with it, in full knowledge that it was not an actual human (i.e., the Wizard of Oz protocol was not being used), and that their responses would not be shared outside the research team prior to anonymization.
Upon completing their respective animal-chatbot conversations, the participants received a password to unlock a post-task survey. This comprised both multiple-choice and open-ended questions designed to gauge their perceptions of and empathy for the chatbot, as well as their attitudes and prosocial behavioral intentions toward the animal it represented. The entire procedure was conducted online for all participants, and we estimated that it would last 15-20 minutes. The participants were compensated US$4 for their time. Our university’s institutional review board reviewed and approved this research.
3.4. Instruments
3.4.1. Pre-task Survey
Empathy Tendency. Given our interest in the participants’ empathy toward our chatbots, it was crucial to control for their individual empathic tendencies. To achieve this, our pre-task survey incorporated all 28 questions from the Interpersonal Reactivity Index (IRI), which consists of four seven-item subscales: Perspective Taking, Fantasy, Empathic Concern, and Personal Distress (Davis, 1983). Each item was rated on a five-point Likert scale ranging from 1=Strongly disagree to 5=Strongly agree. These questions included: “Sometimes I don’t feel very sorry for other people when they are having problems” [reverse-scaled]; “After seeing a play or movie, I have felt as though I were one of the characters”; and “I sometimes feel helpless when I am in the middle of a very emotional situation”.
Tendency to Anthropomorphize. To assess the participants’ individual propensity to anthropomorphize animals and technological devices, we incorporated seven of the 15 questions of the Individual Differences in Anthropomorphism Questionnaire (IDAQ) into the pre-task survey (Waytz et al., 2010). The eight questions we omitted were deemed irrelevant because they related to the anthropomorphism of natural entities other than animals. Participants rated each IDAQ question on a seven-point Likert scale, ranging from 1=Not at all to 7=Very much. Three of these questions were: “To what extent does the average fish have free will?”, “To what extent does the average computer have a mind of its own?”, and “To what extent do cows have intentions?”
Attitudes toward Animals. As our study aimed to develop a chatbot that can improve attitudes toward animals, it was important to control for participants’ pre-existing attitudes toward animals. To accomplish this, we used the Animal Attitude Scale (AAS-5) during the pre-task survey (Herzog et al., 2015). Each item on the scale was answered using a five-point Likert scale, ranging from 1=Strongly disagree to 5=Strongly agree. Sample questions from the scale include: "I sometimes get upset when I see wild animals in cages at zoos", "It is morally wrong to hunt wild animals just for sport", and "I think it is perfectly acceptable for cattle and hogs to be raised for human consumption" [reverse-scaled].
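Because several of these instruments contain reverse-scaled items, scoring requires flipping those items before averaging. The sketch below, with hypothetical column names, shows one way this could be done; it is illustrative rather than our actual scoring code.

```python
import pandas as pd

def score_scale(df: pd.DataFrame, items: list, reverse: list, scale_max: int) -> pd.Series:
    """Reverse-score the indicated items and return each participant's mean score."""
    scored = df[items].copy()
    for col in reverse:
        scored[col] = (scale_max + 1) - scored[col]  # e.g., 5 -> 1 on a 5-point scale
    return scored.mean(axis=1)

# Example (hypothetical columns): five AAS items on a 1-5 scale, one reverse-scaled.
# df["aas"] = score_scale(df, [f"aas_{i}" for i in range(1, 6)], ["aas_3"], scale_max=5)
```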
3.4.2. Post-task Survey
Manipulation Check. In the post-task survey, we included three manipulation-check questions to ensure that our three independent variables had been successfully manipulated. Regarding the verbal-cues manipulation, participants were asked to rate the emotional richness of the chatbot’s words using a seven-point Likert scale ranging from 1=No emotions to 7=Extremely emotional. In the case of the nonverbal-cues manipulation, they were prompted to assess the frequency of emoji usage by the agent on a five-point Likert scale ranging from 1=No emojis to 5=Many emojis. Finally, for the identity-cues manipulation, participants were asked to choose the name of the agent they had interacted with from a list of six options.
Perceived Identity of the Chatbot. To explore the participants’ perceptions of the chatbot’s identity, our post-task survey posed one question: “Was the agent you spoke to more like a horse, a human, or a robot?” Each respondent could select one, two, or all three of the answer options.
Empathy toward the Animal Chatbot. Empathy can be understood as two distinct processes: cognitive empathy and affective empathy. Cognitive empathy involves understanding another’s thoughts, feelings, and intentions; i.e., putting oneself in someone else’s shoes and comprehending their perspective, without necessarily feeling what they feel. Affective empathy, on the other hand, involves sharing or mirroring another’s emotional experience; i.e., feeling the emotions they are experiencing, and thereby creating an emotional connection (Reniers et al., 2011). This bipartite conceptualization of empathy has been applied in prior research on conversational agents (Liu and Sundar, 2018; Park et al., 2023).
Given that cognitive empathy involves recognizing the perspectives of others, our questions about it were similar to those on the Perspective Taking subscale of the IRI (Davis, 1983). Examples included, “In the conversation, I found it difficult to understand the problems faced by the agent from its point of view” [reverse-scaled]; “I believe the agent had experienced pain and distress because of humans”; and “In the conversation, I tried to understand the agent’s difficulties by imagining its situation”. Participants responded to each question using a five-point Likert scale ranging from 1=Strongly disagree to 5=Strongly agree.
To measure their affective empathy, we asked the participants to rate the extent to which they felt six different emotion-related adjectives applied to themselves after interacting with the animal chatbot, all of which had previously been used in research measuring empathy toward animals (Batson et al., 1987). They were sympathetic, warm, compassionate, tender, moved, and soft-hearted (Małecki et al., 2019). Answers were given on a five-point Likert scale ranging from 1=Not at all to 5=Extremely.
Attitudes toward the Animal. Given our study’s specific focus on horses, we devised a five-item Horse Attitude Scale. Its questions paralleled those in the AAS-5 (Herzog et al., 2015), which assess people’s tendency to take action to help animals, but were tailored explicitly to horses. Each was rated on a five-point Likert scale, ranging from 1=Strongly disagree to 5=Strongly agree. The Horse Attitude Scale exhibited good overall reliability (Cronbach’s α). Its five items were: “It is perfectly acceptable to hit a horse if they are not following instructions” [reverse-scaled]; “It is morally wrong to ride horses just for sport”; “It is morally wrong to ride horses as a mode of transportation”; “The use of bits to control horses should be stopped if they cause any pain or discomfort”; and “It is unethical to consume horse meat”.
Prosocial Behavioral Intention. We examined one form of prosocial behavioral intention: willingness to sign a petition (Herrera et al., 2018; Taylor and McKeown, 2021). Specifically, the participants were asked how willing they would be to sign a petition regarding the use of horse tack, which they answered on a five-point Likert scale ranging from 1=Strongly disagree to 5=Strongly agree.
3.4.3. Qualitative Measurements
To gain a deeper understanding of how participants perceived our animal chatbots, we included open-ended questions in the post-task survey. These mainly focused on 1) how the participants perceived the identity of their animal chatbot, and why; 2) whether/how the animal chatbot affected their emotions and attitudes, and why; and 3) their impressions of and attitudes toward horseback riding, and why.
To analyze the responses to these questions, we used thematic analysis. Initially, two researchers independently familiarized themselves with and assessed the responses of the same subsample of 30 participants, chosen at random from all eight chatbot conditions. Those two researchers achieved a strong Cohen’s kappa score for inter-rater reliability, i.e., > 0.8. Subsequently, divergent ratings for the same 30 participants were discussed until a consensus was reached. Following this calibration phase, the remaining responses were evenly distributed between the two raters, with each evaluating the responses of an additional 105 participants.
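For reference, inter-rater reliability of the kind reported here can be computed as in the sketch below, using scikit-learn's cohen_kappa_score. The label vectors are hypothetical stand-ins for the two raters' codings of the calibration subsample.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical codings: one theme label per response from each rater.
rater_a = ["robot", "horse", "human", "horse", "robot"]
rater_b = ["robot", "horse", "human", "human", "robot"]

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa = {kappa:.2f}")  # values above 0.8 were treated as strong agreement
```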
4. Results
An analysis of variance (ANOVA) test of the pre-task survey data indicated that there were no significant differences among the eight experimental groups in terms of their members’ individual tendencies to anthropomorphize animals (, ) or machines (, ). Similarly, there were no significant inter-group variations in attitudes toward animals (, ). No significant differences were observed for three of the four IRI subscales, which were used to measure tendencies to empathize. These three were Empathic Concern (, ), Perspective Taking (, ), and Personal Distress (, ). However, significant inter-group differences were found in Fantasy Scale scores (, ), which were therefore used as a control variable in all subsequent quantitative analyses.
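Baseline equivalence checks of this kind can be run as a one-way ANOVA per pre-task measure, as in the following sketch (the data-frame and column names are hypothetical, not our analysis script).

```python
import pandas as pd
from scipy import stats

def baseline_anova(df: pd.DataFrame, measure: str, condition_col: str = "condition"):
    """One-way ANOVA of a pre-task measure across the eight experimental groups."""
    groups = [grp[measure].dropna() for _, grp in df.groupby(condition_col)]
    f_stat, p_value = stats.f_oneway(*groups)
    return f_stat, p_value

# Example: f, p = baseline_anova(df, "aas")  # attitudes toward animals at baseline
```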
4.1. Perceptions of the Chatbot’s Identity (RQ1)
4.1.1. Manipulation Check
A one-way analysis of covariance (ANCOVA) test revealed that participants from the four groups whose chatbots used verbal cues (, ) perceived their chatbots’ language to be significantly more emotionally rich (, ) than participants from the other four groups did (, ). Participants from the four groups that experienced nonverbal cues (, ) perceived their chatbots’ use of emojis to be significantly more frequent (, ) than those from groups that did not experience such cues (, ). Lastly, almost all (98.8%) of the participants correctly identified the name of the chatbot with which they had interacted. Hence, we concluded that the manipulations of all three independent variables were successful.
4.1.2. Perceived Chatbot Identity
To answer RQ1, we analyzed the participants’ responses for evidence of whether they perceived the identity of their chatbot to be a bot, a human, or an animal, as this information would help us infer the influence of verbal, nonverbal, and identity cues. The results were fairly evenly split, with 86 participants perceiving their chatbots as robot-like, 54 as horse-like, and 82 as human-like. Among the remaining 18 participants, four perceived their chatbots as robot-like + horse-like; six as robot-like + human-like; four as horse-like + human-like; and four as robot-like + horse-like + human-like.
We employed multiple chi-square tests to evaluate how these perceptions of chatbot identity related to our three different designs of cues. We found a significant relation between verbal cues and participants’ perceptions of chatbot identity (, ). That is, when verbal cues were present, users were more likely to perceive their chatbot as human-like than when verbal cues were absent. The effects of non-verbal cues or identity cues were not significant.
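The chi-square analysis described above can be reproduced in outline as follows: a contingency table pairs each cue condition with a binary perceived-identity indicator. The column names are hypothetical.

```python
import pandas as pd
from scipy.stats import chi2_contingency

def cue_identity_chi2(df: pd.DataFrame, cue_col: str, identity_col: str):
    """Chi-square test of association between a cue condition and a perceived identity."""
    table = pd.crosstab(df[cue_col], df[identity_col])
    chi2, p, dof, expected = chi2_contingency(table)
    return chi2, p, dof

# Example: chi2, p, dof = cue_identity_chi2(df, "verbal_cues", "perceived_human_like")
```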
4.1.3. Qualitative Findings
While every participant was aware that they were communicating with a chatbot, their answers to open-ended questions suggest varied perceptions of the chatbot’s identity. When asked how they perceived the animal chatbot’s identity, around one-third of the 120 participants who received verbal cues (n = 45) noted that there was no discernible difference between the chatbot’s speech and that of a human, because emotive words like “uncomfortable” and “pain” were used. As P232 stated, “The chatbot was conveying emotions and words that we would expect humans to say. The only difference is in the context of being a horse.”
However, around one-fifth of the participants in the verbal-cues groups (n = 21) felt that the animal chatbot spoke authentically about the experiences, emotions, and scenarios typical of a horse’s life. This perspective made the interaction feel more like talking to a horse than to a human or robot. As P48 noted, “The agent I spoke to felt more like a horse, as it was able to describe the story from a horse’s point of view. It also managed to explain the feelings of the horse in the story accurately, or as I would think a horse feels in such a situation.” In addition, the chatbot’s use of horse-related terms and contexts (e.g., being ridden, wearing a saddle) reinforced its horse-like identity. P139 told us: “It was more of a horse. He started talking about how he was raised in a farm, setting up the context of a farm animal & how he was first being ridden by a human (putting on the bridle, bit & saddle) that he felt uncomfortable & nervous.”
Several other participants from the verbal-cues groups (n = 6) expressed mixed feelings about the chatbot’s identity, perceiving it to be somewhere between animal and human. While horses do not speak, the perspective and life experiences conveyed by the horse-chatbot made it feel authentic to these users; and they also felt that the chatbot’s communication style was emotionally expressive and relatable in a way that resembled human interaction. For example, P51 mentioned that “[t]he agent I talked to seemed more like a mix of a human and a horse in expressing emotions. It displayed feelings similar to a horse but in a way that humans can understand, creating a blend of both characteristics.”
Conversely, in the four groups whose chatbots lacked verbal cues, nearly half the participants (n = 51) stated that their chatbots seemed robotic. They attributed this perception to their bots’ strictly factual retelling of events and their lack of expression of emotion or personal thoughts. Although the conversation had a logical progression, it lacked the genuineness of human interactions. For example, P169 remarked: “[H]orses can’t talk but if they could, I’m quite sure they would have feelings and preferences that they would express to me, unlike the agent who used plain words that didn’t contain any emotion. Definitely didn’t sound like a human as humans would have more emotions in their words especially if they were describing a personal experience.”
Within the four groups where nonverbal cues appeared in the conversation, some participants (n = 17) felt that the chatbot resembled a human because the emotions conveyed through the emojis were quite similar to those of humans. As P28 mentioned, “[t]he frequent use of emojis to express its emotions was a bit goofy but endearing. Being able to express conscious thoughts coherently just made me think it was more like a human.”
However, a few participants in the nonverbal-cues groups (n = 13) reported that the use of emojis made the chatbot feel artificial, or “very synthetic”, as P230 put it. The same participant elaborated that “I felt like the agent spoke very matter-of-factly […] and it was weirdly interspersed with emojis that felt like someone else put them there.”
Interestingly, most of the participants who encountered identity cues during their animal-chatbot interactions did not consider them to be particularly important. However, three participants noted that such cues made the chatbot seem horse-like to them. As P223 explained, “The agent felt more like a horse to me as [… it] was using first-person language all the time and also of the display picture of a horse. Also, the name of the agent is Strider which sounds more like a horse name rather than a robot name and it does not sound like a human name at all.”
4.2. Effect of Chatbot Design Cues (RQ2)

Figure 4. Effects of chatbot design cues: (a) verbal cues on affective empathy, (b) verbal cues on cognitive empathy, (c) nonverbal cues on cognitive empathy, and (d) verbal cues on prosocial behavioral intention. Dots represent mean values, error bars show one standard error, and asterisks denote significance levels.
4.2.1. Effect on Empathy
We conducted a three-way ANCOVA to assess the impacts of verbal, nonverbal, and identity cues on cognitive and affective empathy. We observed a significant main effect of verbal cues on affective empathy (, , ), with participants from groups with verbal cues (, ) demonstrating significantly higher levels of affective empathy than others (, ), as shown in Fig. 4(a). This result indicates that verbal cues played a significant role in enhancing affective empathy.
Significant main effects of both verbal cues (, , ) and nonverbal cues (, , ) on cognitive empathy were also observed. Further analysis indicated that participants from groups with verbal cues (, ) exhibited higher levels of cognitive empathy than others (, ), as shown in Fig. 4(b), whereas those from groups with nonverbal cues (, ) displayed lower levels of cognitive empathy than others (, ), as shown in Fig. 4(c). These results suggest that verbal and nonverbal signals both influenced cognitive empathy, but in opposite directions.
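For clarity, a sketch of the three-way ANCOVA used throughout this section is given below, with the IRI Fantasy score entered as a covariate. The formula interface is statsmodels'; the column names are hypothetical and the code is illustrative rather than our analysis script.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def three_way_ancova(df: pd.DataFrame, outcome: str) -> pd.DataFrame:
    """Three-way ANCOVA (verbal x nonverbal x identity) with Fantasy as covariate."""
    model = smf.ols(
        f"{outcome} ~ C(verbal) * C(nonverbal) * C(identity) + fantasy",
        data=df,
    ).fit()
    return sm.stats.anova_lm(model, typ=3)  # Type III sums of squares

# Example: three_way_ancova(df, "cognitive_empathy")
```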
4.2.2. Effect on Attitude
We conducted a three-way ANCOVA to investigate the impact of verbal, nonverbal, and identity cues on the participants’ attitudes toward horses. The results indicated that none of these chatbot-design cues had a significant effect on such attitudes.
4.2.3. Effect on Prosocial Behavioral Intention
Another three-way ANCOVA unveiled a significant main effect of verbal cues on prosocial behavioral intention, i.e., willingness to sign a petition regarding the use of horse tack (, , ). Specifically, as shown in Fig. 4(d), participants from the groups with verbal cues (, ) expressed a significantly higher willingness to sign than other participants did (, ).
4.2.4. Qualitative Findings
Our analysis of the open-ended questions also suggested that verbal cues played an important role in eliciting empathy. Most of the participants (n = 96) indicated that they were emotionally touched as the chatbot vividly recounted its painful experiences and emotions, and that this allowed them to envision riding a horse from the horse’s point of view. As P57 noted, “The words and phrases they used were very human-like, [and] evoked feelings of compassion and empathy in me. It gives the perspective of a horse that humans are unable to understand.” P232 further emphasized the impact of the painful experiences that the chatbot shared: “I feel empathetic about the horse’s experience and it felt […] painful and unpleasant”.
In contrast, many participants (n = 47) in the four conditions without verbal cues told us that they were not emotionally affected by the chatbot and highlighted their chatbots’ limited ability to convey emotions during interactions, which negatively affected their ability to empathize with them. As P116 mentioned, “I did not feel moved. He talked about his experience but he didn’t describe his feelings or how hurt he was. He just stated he was scratched, no description or emotions were talked about.”
Interestingly, we found that the use of emojis as nonverbal cues was a double-edged sword when it came to evoking participants’ emotions. Some participants (n = 26) mentioned that emojis improved their sense of emotional connection to the animal chatbot. P132, for instance, said: “I feel moved by Horsebot during the conversation, as it sounds like it spoke from real experience. It was able to describe its painful experience in great detail, and [this] made it very relatable. The use of emojis was very appropriate too and enhanced the moving encounter.” However, others (n = 7) mentioned that emojis detracted from the seriousness or genuineness of the conversation, and thus negatively impacted their sense of empathy toward the animal chatbot. For instance, P152 noted, “There was this touch of sympathy when the convo got going. But the emoticons made it less serious.”
4.3. Effect of Perceived Chatbot Identity (RQ3)
We excluded the responses from the above-mentioned 18 participants who perceived their chatbots as having multiple identities, on the grounds that we were unable to categorize them into single types. Therefore, the data on which we based our answer to RQ3 was drawn from the 222 participants who perceived a single type of identity, i.e., only robot-like (n = 86), only horse-like (n = 54), and only human-like (n = 82).

Figure 5. Effects of the chatbot’s perceived identity on (a) affective empathy, (b) cognitive empathy, (c) attitudes toward the animal, and (d) prosocial behavioral intention. Dots represent mean values, error bars show one standard error, and asterisks denote significance levels.
4.3.1. Effect on Empathy
We performed a one-way ANCOVA to explore the impact of the three categories of perceived chatbot identity on participants’ cognitive and affective empathy. This was followed by a post-hoc analysis using Tukey’s honestly significant difference (HSD) test, which compares pairwise means with corrections for multiple comparisons. There was a significant main effect of perceived chatbot identity (, , ) on affective empathy, as shown in Fig. 5(a). Post-hoc analysis (, , ) indicated that those participants who perceived their bots as horse-like (, ) exhibited a significantly higher level of affective empathy than those who perceived theirs as robot-like (, ). The group that perceived their bots as human-like (, ) also had a significantly (, , ) higher level of affective empathy for them than those who perceived their bots as robot-like. However, no significant difference in affective empathy was found between the participants who perceived their bots as horse-like and those who found them human-like (, , ).
In addition, there was a main effect of perceived chatbot identity (, , ) on cognitive empathy, as shown in Fig. 5(b). Post-hoc analysis showed that the participants who perceived their chatbots as horse-like (, ) exhibited a significantly higher level of cognitive empathy (, , ) than those who perceived them as robot-like (, ). Those who perceived their bots as human-like (, ) also were found to have a significantly higher level of cognitive empathy (, , ) than those who perceived their bots as robot-like. But again, no significant difference (, , ) in cognitive empathy was observed between the groups that perceived their bots as horse-like vs. human-like. These results indicate that perceived chatbot identity significantly impacted both affective and cognitive empathy toward the horse chatbot, with a robot-like identity eliciting the lowest levels of both types of empathy.
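The post-hoc procedure used here can be sketched as follows, using statsmodels' pairwise Tukey HSD over the three perceived-identity groups. Column names are hypothetical; the code is an illustration of the method rather than our analysis script.

```python
import pandas as pd
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def tukey_by_identity(df: pd.DataFrame, outcome: str):
    """Pairwise Tukey HSD comparisons of an outcome across the three perceived
    identities (robot-like, horse-like, human-like)."""
    result = pairwise_tukeyhsd(endog=df[outcome],
                               groups=df["perceived_identity"],
                               alpha=0.05)
    return result.summary()

# Example: print(tukey_by_identity(df, "affective_empathy"))
```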
4.3.2. Effect on Attitude
The one-way ANCOVA we conducted to investigate the impact of perceived chatbot identity on attitudes toward horses found a significant main effect (, , ), as shown in Fig. 5(c). Specifically, those participants who perceived their chatbots as horse-like (, ) expressed significantly more positive attitudes toward horses than those who perceived them as robot-like (,, ; , ). However, no significant differences were observed between the “horse-like” and “human-like” groups (, , ) or between the “human-like” and “robot-like” groups (, , ). The results indicate that participants who perceived their chatbots as horse-like had significantly more positive attitudes toward horses compared to those who perceived their chatbots as robot-like.
4.3.3. Effect on Prosocial Behavioral Intention
A one-way ANCOVA unveiled a significant main effect of perceived chatbot identity on prosocial behavioral intention (, , ), as shown in Fig. 5(d). Specifically, those participants who perceived their chatbots as horse-like (, ) expressed significantly higher willingness to sign a petition than those who perceived theirs as robot-like (, , ; , )). However, no significant differences were observed between the “horse-like” and “human-like” groups (, , ) or between the “human-like” and “robot-like” groups (, , ). These results suggest that participants who saw their chatbots as horse-like showed significantly higher intention to sign a petition than those who saw theirs as robot-like.
4.3.4. Qualitative Findings
Several key themes emerged from our analysis of the participants’ responses about how chatbot identity affected their perceptions. Many participants (n = 48) who perceived their chatbot as human-like expressed a decreased willingness to ride horses. As P95 put it, “Not willing, because I will recall this conversation with the agent. That the horse has to endure my body weight and pain of having to carry me through a long distance.”
Several participants (n = 15) who perceived the animal chatbot as human-like found its painful experiences and emotional expressions somewhat moving, but their awareness of its true nature as a chatbot brought them back to reality, hindering deeper emotional engagement. As P208 noted, “I needed to imagine the experiences of the horse and listen to its experience of being abused as though it was a human. But it was a bot and created by humans, so I was limited [… in] sympathy to the horse.”
Most (n = 31) of the participants who perceived the animal chatbot as horse-like indicated that the authentic portrayal of the life of a horse resonated with them. They said that they felt sympathetic toward its plight, and that they believed the emotions and scenarios were true to a horse’s experience. This, in turn, made them uncomfortable with the idea of horseback riding, by increasing their awareness of its ethical and physical implications. P72 said, “I feel sorry for the agent experiencing those circumstances and, knowing its story as a horse, I felt uneasy about the fact that there are horses out there experiencing similar situations. The agent made me realize that horses do not enjoy the process of being trained for human riding, as they have a conscious mind and are aware of their feelings.”
Many participants who perceived the horse chatbot as robot-like (n=50) had the lowest levels of emotional engagement with it, as they found it hard to connect with a being they knew was artificial. P187 explained: “I did not feel emotionally affected during or after the conversation with the agent because I was aware that I was interacting with a bot that was programmed to respond to me in that manner.” These participants tended to regard horseback riding in a detached manner – e.g., focused on the practical uses of horses throughout history – and maintained that as long as horses are not mistreated, riding them is acceptable. Their lack of emotional engagement arising from the chatbot’s robotic nature made it easier for them to justify their continued willingness to ride. P218 told us that, although horses should not be hit, they “are useful and are essential in our daily lives. Before cars were invented, we had to rely on horses to get around quickly, [and] had we not used them, it would slow our progress overall for quite a bit. I am alright with using horses because it doesn’t seem to be as harmful to them.”
5. Discussion
5.1. Impact of Verbal Cues on the Perceived Identity of Chatbots (RQ1)
Our research builds on previous studies that highlighted the role of verbal cues in shaping users’ perceptions of chatbot identity. Our findings echo those of prior work that found specific verbal cues were able to increase chatbots’ human-like qualities (Seeger et al., 2018; Liu and Sundar, 2018). We have extended this understanding by illustrating how these cues can also encourage perceptions of a chatbot as having an animal identity. For example, verbal cues that authentically conveyed a story and emotions resulted in some users perceiving a chatbot as horse-like. This is intriguing, insofar as animals do not speak; and people’s notion of “authenticity” in connection with animal-like entities warrants further exploration.
Our first-person narrative from a horse’s perspective, inspired by Anna Sewell’s Black Beauty (Sewell, 1877), underscores the potential of leveraging classic literature to explore the theoretical underpinnings of chatbot design. This approach could pave the way for more theory-driven investigations into how chatbots might authentically represent various non-human identities. Such explorations could consider multiple dimensions of authenticity, including authenticity of source, of message, and of interaction (Lee, 2020; Huang and Jung, 2022).
Specifically, chatbot designers could potentially enhance authenticity of interaction – i.e., people’s feeling that they are part of a genuine interaction – by engaging users in reciprocal interaction during storytelling (Liang et al., 2024; Lee et al., 2019) to build a sense of mutual awareness, attention, and adjustment (Hartmann and Goldhoorn, 2011). However, because humans lack a common language with animals, it is still challenging to develop message authenticity, i.e., narratives that are true representations of their subject matter (Lee, 2020; Huang and Jung, 2022). Describing animals based on their bodily movements might be a means of enhancing message authenticity, in light of Aspling et al.’s (Aspling et al., 2018) findings regarding kinesthetic empathy, i.e., empathizing with an animal by understanding their bodily experiences over time and their individual histories. Further research should delve deeper into these aspects and refine our understanding of the complex interplay between verbal cues and perceived authenticity in both human-like and non-human-like chatbot identities.
5.2. Impact of Verbal and Nonverbal Cues on Empathy in Animal Chatbots (RQ2)
Our quantitative findings demonstrate that verbal cues significantly enhanced both cognitive and affective empathy with our animal chatbots. The qualitative data support this, and further reveal that exposure to verbal cues helped participants build a deeper emotional connection with the chatbot’s narratives of pain and various emotions. This increase in empathy echoes prior literature on animal narratives, which stresses the importance of human-like emotions to fostering empathetic responses (Poirier et al., 2019; Young et al., 2018). Our results are also consistent with previous research that highlighted how chatbots expressing human-like emotions can elicit user compassion (Lee et al., 2019).
Conversely, our investigation into nonverbal cues revealed a surprising decrease in the participants’ cognitive empathy when emojis were used – in contrast to prior studies that found emojis enhanced the credibility and warmth of human-like chatbots, thereby boosting cognitive empathy (Yu and Zhao, 2024; Beattie et al., 2020). There are two likely explanations for this discrepancy. First, the emojis used in our study depicted horse faces, which might have been more challenging to interpret than the human-face emojis used in earlier research. Qualitative feedback indicated that these emojis were perceived as artificial and relatively unexpressive, complicating users’ ability to see things from the chatbot’s perspective. Second, participants’ reactions to the emojis were mixed. Some found them appropriate and evocative, while others – perceiving them as humorous rather than serious – saw them as trivializing the discussion. This mixed reception likely reduced the overall perceived gravity of the chatbot narrative, thus lowering empathy levels. Prior research has shown that while emojis can sometimes enhance relational communication in negative contexts, they are often deemed inappropriate for such contexts (Cavalheiro et al., 2022; Rodrigues et al., 2017). Future research could therefore usefully compare the effects of human vs. non-human face emojis in serious/negative contexts.
5.3. Enhancing Empathy and Prosocial Attitudes through Perceptions of Animal-like Chatbots (RQ3)
Previous research has highlighted how emotionally engaging experiences can effectively foster empathy toward animals and the natural world, and suggested that interactions involving humans, animals, and computers can catalyze this empathy by enhancing animals’ similarities to humans (Webber et al., 2017a, 2023). In line with those studies’ findings, our participants who perceived our animal-chatbot as human-like demonstrated greater empathy toward it than those who saw it as robot-like. This also echoes prior findings that attributing human-like qualities to digital agents can increase their users’ empathetic responses (Lee et al., 2019; Araujo, 2018).
Our results also provide new insights into the role of animal-likeness perceptions in human-chatbot interactions. Participants who perceived the animal chatbot as horse-like not only expressed higher empathy toward it but also displayed more positive attitudes and higher prosocial behavioral intention toward actual horses than those who viewed it as robot-like. This may mean that chatbots widely and strongly perceived as animal-like could boost people’s aversion to animal cruelty. Such an effect might be driven by two factors. The first is the perceived authenticity (Curtin, 2005) of the chatbot’s portrayal of a horse’s life, which strongly influenced participants’ attitudes toward horseback riding. The second is that, while direct studies of animal-like chatbots are scarce, the literature on animal-like robots suggests that they can enhance engagement and heighten perceptions of animals’ social intelligence and warmth (Miklósi and Gácsi, 2012), and thus increase pro-conservation attitudes and behaviors. Our study extends that literature by demonstrating that not only robots but also chatbots designed to represent non-human identities can effectively influence human attitudes and behavioral intentions toward those entities.
Nevertheless, further research is needed to explore how chatbots can represent non-human entities authentically (Huang and Jung, 2022), as a means of engaging people with nature and promoting conservationist attitudes. Such investigations could pave the way for theory-driven approaches to chatbot design aimed at altering behaviors and attitudes, and potentially have an important impact on real-world conservation efforts.
5.4. Design Implications
Chatbots have demonstrated their potential to support self-improvement on the part of their users, e.g., by mitigating public-speaking anxiety, and a key design strategy for achieving such goals is to equip chatbots with human-like communicative behaviors (Van Pinxteren et al., 2020). The present study explored a similar but distinct approach: designing chatbots to represent non-human identities. We found that this could foster empathy and positive attitudes toward the non-human entities they represented. By employing first-person narratives from a non-human perspective (Yoneyama, 2021; Małecki et al., 2019), chatbots can effectively convey environmental concerns and animal stories, encouraging individuals to reconsider their relationship with nature and other beings. While non-human first-person narratives are not novel, their application in chatbots offers a unique opportunity to engage audiences with animal-welfare and other environmental and conservation issues.
Because the design of our animal chatbot was inspired by prior techniques aimed at making chatbots more human-like, 42% of our participants (n = 100) – perhaps inevitably – either deemed it to be human-like or expressed uncertainty about its identity. Future researchers could therefore usefully revisit verbal cues, exploring a wider variety of techniques for conveying the emotions and perspectives of non-human entities in their narratives. For example, in addition to the expression of emotions, Bernaerts et al. (Bernaerts et al., 2014) recommended several techniques for non-human storytelling, such as highlighting the differences between human and non-human experiential states, to deepen the impact of non-human perspectives. By contrasting these experiences against typical human ones, designers should be able to defamiliarize readers’ habitual ways of seeing the world, and thus enhance their narratives’ empathy-building and thought-provoking qualities.
To our surprise, nonverbal cues consisting of animal-face emojis had a negative impact on empathy. This highlights the need for caution when integrating such elements into animal chatbots intended to elicit empathy. Further research should seek ways to use emojis without undermining the seriousness of the discourse. Specifically, while animal-face emojis might reduce empathy by making messages seem less serious, Riordan (Riordan, 2017) has reported that non-face emojis can minimize ambiguity in messages and convey emotional content. This could be especially valuable in computer-mediated communication, which lacks the usual nonverbal cues of face-to-face communication.
Interestingly, identity cues did not influence overall user perceptions of our chatbots, regardless of whether they had bot-like or horse-like traits. This contrasts with prior findings that presenting a human identity usually affects how users perceive chatbots (Go and Sundar, 2019). This discrepancy could be due to the innate implausibility of chatting with a horse. Future studies should therefore examine other forms of identity cues, such as embodied conversational agents with animal identities (Byrne et al., 2012; Chen et al., 2011). Additionally, animal-like companion robots (e.g., Paro, a seal-like companion robot (Miklósi and Gácsi, 2012)) could be assessed for their ability to boost their users’ views of conversational agents as animal-like.
5.5. Limitations and Future Research
This paper has several limitations that could be addressed in future research. First, its exploration was limited to specific forms of verbal, nonverbal, and identity cues. In particular, it focused on the expression of emotions as verbal cues, the use of horse emojis as nonverbal cues, and the use of a horse’s name and profile picture as identity cues. Future studies could therefore usefully explore the effects of alternative forms of verbal, nonverbal, or identity cues on animal chatbots’ effectiveness.
Second, our chatbots were developed to discuss the specific topic of horseback riding. Future research could enrich this paper’s findings by exploring chatbots that discuss other topics relating to horses, such as the consumption of horse meat, and/or by developing chatbots that represent other animals, ranging from pets like cats to ones typically viewed as pests.
Third, our study only asked participants to indicate whether they perceived the chatbot’s identity as human-like, horse-like, and/or robot-like, rather than rating these identities on a numerical scale. This was due to the exploratory nature of our research and the lack of any established definition or measurement scale for animal-like characteristics. Our goal was therefore to investigate, in qualitative terms, how people described their assessment of a bot as “horse-like”. Based on those results, future researchers could delve deeper into this topic.
Lastly, our research design did not include any experimental groups that received the same horse narrative from non-chatbot media such as books or documentary films. In future work, such comparisons could provide valuable information on the unique advantages and limitations of chatbots as a medium for conveying information and influencing perceptions.
6. Conclusion
This study developed a chatbot that speaks from the perspective of a horse and examined its impact on users’ perceptions of horses. Its findings demonstrate that emotional expressions (verbal cues) and the perceived authenticity of the animal’s identity both evoked empathy for the chatbots and fostered perceptions of them as horse-like. Authenticity emerged as a key factor, with narratives that convincingly conveyed a horse’s perspective significantly increasing user empathy toward both the chatbot and real horses. Conversely, the use of nonverbal cues, specifically animal-face emojis, reduced cognitive empathy due to their perceived artificiality, while identity cues had hardly any impact. These insights underscore the potential of animal-like chatbots to foster empathy and positive attitudes toward animals and the environment, while also highlighting the need for careful selection of nonverbal elements to maintain the authenticity and seriousness of the discourse. Future research should delve deeper into the complexities of animal-chatbot design, with a focus on developing authentic non-human identities that can engage users in meaningful, empathy-evoking interactions.
References
- Agnew (1998) Robert Agnew. 1998. The Causes of Animal Abuse: A Social-Psychological Analysis. Theoretical Criminology 2, 2 (1998), 177–209. https://doi.org/10.1177/1362480698002002003
- Akerman (2019) Suzanne Akerman. 2019. Best Practices for Building Empathy Through Live Animal Encounters. Journal of Museum Education 44, 1 (2019), 89–95. https://doi.org/10.1080/10598650.2018.1496388
- Araujo (2018) Theo Araujo. 2018. Living up to the chatbot hype: The influence of anthropomorphic design cues and communicative agency framing on conversational agent and company perceptions. Computers in human behavior 85 (2018), 183–189.
- Aspling et al. (2018) Fredrik Aspling, Oskar Juhlin, and Heli Väätäjä. 2018. Understanding animals: a critical challenge in ACI. In Proceedings of the 10th Nordic Conference on Human-Computer Interaction. 148–160.
- Barcz (2015) Anna Barcz. 2015. Posthumanism and Its Animal Voices in Literature. Teksty Drugie (2015), 248–269. https://api.semanticscholar.org/CorpusID:194396676
- Batson et al. (1987) C. Daniel Batson, Jim Fultz, and Patricia A. Schoenrade. 1987. Distress and Empathy: Two Qualitatively Distinct Vicarious Emotions with Different Motivational Consequences. Journal of Personality 55, 1 (1987), 19–39. https://doi.org/10.1111/j.1467-6494.1987.tb00426.x
- Beattie et al. (2020) Austin Beattie, Autumn Edwards, and Chad Edwards. 2020. A Bot and a Smile: Interpersonal Impressions of Chatbots and Humans Using Emoji in Computer-mediated Communication. Communication Studies 71 (02 2020), 1–19. https://doi.org/10.1080/10510974.2020.1725082
- Beierl (2008) Barbara Hardy Beierl. 2008. The Sympathetic Imagination and the Human—Animal Bond: Fostering Empathy Through Reading Imaginative Literature. Anthrozoös 21, 3 (2008), 213–220. https://doi.org/10.2752/175303708X332026
- Bernaerts et al. (2014) Lars Bernaerts, Marco Caracciolo, Luc Herman, and Bart Vervaeck. 2014. The storied lives of non-human narrators. Narrative 22, 1 (2014), 68–93.
- Bickmore et al. (2009) Timothy Bickmore, Daniel Schulman, and Langxuan Yin. 2009. Engagement vs. Deceit: Virtual Humans with Human Autobiographies. In Intelligent Virtual Agents, Zsófia Ruttkay, Michael Kipp, Anton Nijholt, and Hannes Högni Vilhjálmsson (Eds.). Springer Berlin Heidelberg, Berlin, Heidelberg, 6–19.
- Bickmore et al. (2010) Timothy W. Bickmore, Suzanne E. Mitchell, Brian W. Jack, Michael K. Paasche-Orlow, Laura M. Pfeifer, and Julie O’Donnell. 2010. Response to a relational agent by hospital patients with depressive symptoms. Interacting with Computers 22, 4 (09 2010), 289–298. https://doi.org/10.1016/j.intcom.2009.12.001
- Byrne et al. (2012) Sahara Byrne, Geri Gay, J. P. Pollack, Amy Gonzales, Daniela Retelny, Theodore Lee, and Brian Wansink. 2012. Caring for Mobile Phone-Based Virtual Pets can Influence Youth Eating Behaviors. Journal of Children and Media 6, 1 (2012), 83–99. https://doi.org/10.1080/17482798.2011.633410
- Cavalheiro et al. (2022) Bernardo P Cavalheiro, Marília Prada, David L Rodrigues, Diniz Lopes, and Margarida V Garrido. 2022. Evaluating the adequacy of emoji use in positive and negative messages from close and distant senders. Cyberpsychology, Behavior, and Social Networking 25, 3 (2022), 194–199.
- Chen et al. (2011) Zhi-Hong Chen, Chang-Yen Liao, Tzu-Chao Chien, and Tak-Wai Chan. 2011. Animal companions: Fostering children’s effort‐making by nurturing virtual pets. British Journal of Educational Technology 42 (01 2011), 166 – 180. https://doi.org/10.1111/j.1467-8535.2009.01003.x
- Cheng (2024) Ian Cheng. 2024. Animal Cruelty and Welfare cases surge 79% in 2023, highest in 11 years: SPCA. https://www.straitstimes.com/singapore/animal-cruelty-and-welfare-cases-surge-79-in-2023-highest-in-11-years-spca
- Corey (2012) Pamela Corey. 2012. Equine Cruelty. John Wiley & Sons, Ltd, Chapter 28, 453–460. https://doi.org/10.1002/9781119421511.ch28
- Curtin (2005) Susanna Curtin. 2005. Nature, wild animals and tourism: An experiential view. Journal of ecotourism 4, 1 (2005), 1–15.
- Davis (1983) Mark Davis. 1983. Measuring individual differences in empathy: Evidence for a multidimensional approach. Journal of Personality and Social Psychology 44 (01 1983), 113–126. https://doi.org/10.1037/0022-3514.44.1.113
- de Gennaro et al. (2020) Mauro de Gennaro, Eva G. Krumhuber, and Gale Lucas. 2020. Effectiveness of an Empathic Chatbot in Combating Adverse Effects of Social Exclusion on Mood. Frontiers in Psychology 10 (2020). https://doi.org/10.3389/fpsyg.2019.03061
- Demello (2018) Margo Demello. 2018. Online Animal (Auto-)Biographies: What Does It Mean When We “Give Animals a Voice?”: Re-framing Animal Lives. 243–259. https://doi.org/10.1007/978-3-319-98288-5_13
- Derrick et al. (2011) Douglas Derrick, Jeffery Jenkins, and J. Jr. 2011. Design Principles for Special Purpose, Embodied, Conversational Intelligence with Environmental Sensors (SPECIES) Agents. AIS Transactions on Human-Computer Interaction 3 (06 2011), 62–81. https://doi.org/10.17705/1thci.00027
- Diana et al. (2021) Alessia Diana, Marina Salas, Zjef Pereboom, Michael Mendl, and Tomas Norton. 2021. A systematic review of the use of technology to monitor welfare in zoo animals: is there space for improvement? Animals 11, 11 (2021), 3048.
- Gavin (2017) Adrienne E Gavin. 2017. ‘Feeling is Believing’: Anna Sewell’s Black Beauty and the Power of Emotion. In Affect, Emotion, and Children’s Literature. Routledge, 52–65.
- Go and Sundar (2019) Eun Go and S. Shyam Sundar. 2019. Humanizing chatbots: The effects of visual, identity and conversational cues on humanness perceptions. Computers in Human Behavior 97 (2019), 304–316. https://doi.org/10.1016/j.chb.2019.01.020
- Harris (2023) Paul G. Harris. 2023. As China rises, so does the suffering of billions of animals of many species. https://hongkongfp.com/2023/07/16/as-china-rises-so-does-the-suffering-of-millions-of-animals-of-many-species/
- Hartmann and Goldhoorn (2011) Tilo Hartmann and Charlotte Goldhoorn. 2011. Horton and Wohl revisited: Exploring viewers’ experience of parasocial interaction. Journal of communication 61, 6 (2011), 1104–1121.
- Herrera et al. (2018) Fernanda Herrera, Jeremy Bailenson, Erika Weisz, Elise Ogle, and Jamil Zaki. 2018. Building long-term empathy: A large-scale comparison of traditional and virtual reality perspective-taking. PloS one 13, 10 (2018), e0204494.
- Herzog et al. (2015) Harold Herzog, Stephanie Grayson, and David McCord. 2015. Brief Measures of the Animal Attitude Scale. Anthrozoös 28, 1 (2015), 145–152. https://doi.org/10.2752/089279315X14129350721894
- Hildmann et al. (2008) Hanno Hildmann, Anika Uhlemann, and Daniel Livingstone. 2008. A Mobile Phone Based Virtual Pet to Teach Social Norms and Behaviour to Children. 15 – 17. https://doi.org/10.1109/DIGITEL.2008.41
- Huang and Jung (2022) Junru Huang and Younbo Jung. 2022. Perceived authenticity of virtual characters makes the difference. Frontiers in Virtual Reality 3 (2022), 1033709.
- James (2019) Erin St. James. 2019. Nonhuman Fictional Characters and the Empathy-Altruism Hypothesis. Poetics Today (2019). https://api.semanticscholar.org/CorpusID:204391173
- Keen (2006) Suzanne Keen. 2006. A Theory of Narrative Empathy. Narrative 14 (10 2006), 207–236. https://doi.org/10.1353/nar.2006.0015
- Laureano-Cruces and Rodriguez-Garcia (2012) Ana Laureano-Cruces and Arturo Rodriguez-Garcia. 2012. Design and implementation of an educational virtual pet using the OCC theory. J. Ambient Intelligence and Humanized Computing 3 (03 2012), 61–71. https://doi.org/10.1007/s12652-011-0089-4
- Lee (2020) Eun-Ju Lee. 2020. Authenticity model of (mass-oriented) computer-mediated communication: Conceptual explorations and testable propositions. Journal of Computer-Mediated Communication 25, 1 (2020), 60–73.
- Lee and Lee (2023) Jieon Lee and Daeho Lee. 2023. User perception and self-disclosure towards an AI psychotherapy chatbot according to the anthropomorphism of its profile picture. Telematics and Informatics 85 (2023), 102052. https://doi.org/10.1016/j.tele.2023.102052
- Lee et al. (2019) Minha Lee, Sander Ackermans, Nena Van As, Hanwen Chang, Enzo Lucas, and Wijnand IJsselsteijn. 2019. Caring for Vincent: a chatbot for self-compassion. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 1–13.
- Lee et al. (2023) Yi-Chieh Lee, Yichao Cui, Jack Jamieson, Wayne Fu, and Naomi Yamashita. 2023. Exploring effects of chatbot-based social contact on reducing mental illness stigma. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. 1–16.
- Liang et al. (2024) Kai-Hui Liang, Weiyan Shi, Yoo Jung Oh, Hao-Chuan Wang, Jingwen Zhang, and Zhou Yu. 2024. Dialoging Resonance in Human-Chatbot Conversation: How Users Perceive and Reciprocate Recommendation Chatbot’s Self-Disclosure Strategy. Proceedings of the ACM on Human-Computer Interaction 8, CSCW1 (2024), 1–28.
- Liu and Sundar (2018) Bingjie Liu and S. Shyam Sundar. 2018. Should Machines Express Sympathy and Empathy? Experiments with a Health Advice Chatbot. Cyberpsychology, Behavior, and Social Networking 21 (10 2018), 625–636. https://doi.org/10.1089/cyber.2018.0110
- Maisner (2023) Stuart Maisner. 2023. Animal cruelty: Figures show “alarming increase” in South East. https://www.bbc.com/news/articles/cxe5e0yjr7ro
- Małecki et al. (2019) Wojciech Małecki, Bogusław Pawłowski, Piotr Sorokowski, and Anna Oleszkiewicz. 2019. Feeling for textual animals: Narrative empathy across species lines. Poetics 74 (2019), 101334. https://doi.org/10.1016/j.poetic.2018.11.003
- Marshall et al. (2020) Benjamin M Marshall, Colin Strine, and Alice C Hughes. 2020. Thousands of reptile species threatened by under-regulated global trade. Nature communications 11, 1 (2020), 1–12.
- McPhedran (2009) Samara McPhedran. 2009. A review of the evidence for associations between empathy, violence, and animal cruelty. Aggression and Violent Behavior 14, 1 (2009), 1–4. https://doi.org/10.1016/j.avb.2008.07.005
- Mellor (2016) David J Mellor. 2016. Updating animal welfare thinking: Moving beyond the “Five Freedoms” towards “a Life Worth Living”. Animals 6, 3 (2016), 21.
- Melson et al. (2009) Gail F Melson, Peter H Kahn, Jr, Alan Beck, and Batya Friedman. 2009. Robotic pets in human lives: Implications for the human–animal bond and for human relationships with personified technologies. Journal of Social Issues 65, 3 (2009), 545–567.
- Miklósi and Gácsi (2012) Ádám Miklósi and Márta Gácsi. 2012. On the utilization of social animals as a model for social robotics. Frontiers in psychology 3 (2012), 75.
- Miller (2001) Catherine Miller. 2001. Childhood animal cruelty and interpersonal violence. Clinical Psychology Review 21, 5 (2001), 735–749. https://doi.org/10.1016/S0272-7358(00)00066-0
- Miralles et al. (2019) Aurélien Miralles, Michel Raymond, and Guillaume Lecointre. 2019. Empathy and compassion toward other species decrease with evolutionary divergence time. Scientific Reports 9, 1 (12 2019), 19555. https://doi.org/10.1038/s41598-019-56006-9
- Mozafari et al. (2020) Nika Mozafari, Welf Weiger, and Maik Hammerschmidt. 2020. The Chatbot Disclosure Dilemma: Desirable and Undesirable Effects of Disclosing the Non-Human Identity of Chatbots. Conference Proceedings of the 41st International Conference on Information Systems. https://aisel.aisnet.org/icis2020/hci_artintel/hci_artintel/6
- Namkoong et al. (2023) Min Namkoong, Gain Park, Younjung Park, and Seyoung Lee. 2023. Effect of Gratitude Expression of AI Chatbot on Willingness to Donate. International Journal of Human–Computer Interaction (2023), 1–12. https://doi.org/10.1080/10447318.2023.2259719
- O’Connell et al. (2024) Amy O’Connell, Ashveen Banga, Jennifer Ayissi, Nikki Yaminrafie, Ellen Ko, Andrew Le, Bailey Cislowski, and Maja Mataric. 2024. Design and Evaluation of a Socially Assistive Robot Schoolwork Companion for College Students with ADHD. In Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction. 533–541.
- Park et al. (2023) Gain Park, Myungok Chris Yim, Jiyun Chung, and Seyoung Lee. 2023. Effect of AI chatbot empathy and identity disclosure on willingness to donate: the mediation of humanness and social presence. Behaviour & Information Technology 42, 12 (2023), 1998–2010. https://doi.org/10.1080/0144929X.2022.2105746
- Poirier et al. (2019) Nathan Poirier, R. K. Carden, Hilary McIlroy, and Courtney Moran. 2019. Straight from the Horse’s Mouth: Fauna-criticism and Black Beauty. https://api.semanticscholar.org/CorpusID:211562061
- Reniers et al. (2011) Renate Reniers, Rhiannon Corcoran, Richard Drake, Nick Shryane, and Birgit Völlm. 2011. The QCAE: a Questionnaire of Cognitive and Affective Empathy. Journal of personality assessment 93 (01 2011), 84–95. https://doi.org/10.1080/00223891.2010.528484
- Riordan (2017) Monica A Riordan. 2017. The communicative role of non-face emojis: Affect and disambiguation. Computers in Human Behavior 76 (2017), 75–86.
- Robovskỳ et al. (2020) Jan Robovskỳ, Lubomír Melichar, and Spartaco Gippoliti. 2020. Zoos and conservation in the Anthropocene: opportunities and problems. Problematic Wildlife II: New Conservation and Management Challenges in the Human-Wildlife Interactions (2020), 451–484.
- Rodrigues et al. (2017) David Rodrigues, Diniz Lopes, Marília Prada, Dominic Thompson, and Margarida V Garrido. 2017. A frown emoji can be worth a thousand words: Perceptions of emoji use in text messages exchanged between romantic partners. Telematics and Informatics 34, 8 (2017), 1532–1543.
- Seeger et al. (2018) Anna-Maria Seeger, Jella Pfeiffer, and Armin Heinzl. 2018. Designing Anthropomorphic Conversational Agents: Development and Empirical Evaluation of a Design Framework. In Proceedings of the International Conference on Information Systems - Bridging the Internet of People, Data, and Things, ICIS 2018, San Francisco, CA, USA, December 13-16, 2018, Jan Pries-Heje, Sudha Ram, and Michael Rosemann (Eds.). Association for Information Systems. https://aisel.aisnet.org/icis2018/hcri/Presentations/1
- Serpell and Paul (2002) James Serpell and Elizabeth Paul. 2002. Pets and the development of positive attitudes to animals. In Animals and human society. Routledge, 165–182.
- Sewell (1877) Anna Sewell. 1877. Black Beauty. Scholastic Paperbacks, New York.
- Shi et al. (2020) Weiyan Shi, Xuewei Wang, Yoo Jung Oh, Jingwen Zhang, Saurav Sahay, and Zhou Yu. 2020. Effects of Persuasive Dialogues: Testing Bot Identities and Inquiry Strategies. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI ’20). Association for Computing Machinery, New York, NY, USA, 1–13. https://doi.org/10.1145/3313831.3376843
- Skjuve et al. (2021) Marita Skjuve, Asbjørn Følstad, Knut Inge Fostervold, and Petter Bae Brandtzaeg. 2021. My Chatbot Companion - a Study of Human-Chatbot Relationships. International Journal of Human-Computer Studies 149 (2021), 102601. https://doi.org/10.1016/j.ijhcs.2021.102601
- Taylor and McKeown (2021) Laura K Taylor and Shelley McKeown. 2021. Adolescent outgroup helping, collective action, and political activism in a setting of protracted conflict. International Journal of Intercultural Relations 85 (2021), 37–46.
- Thomsen et al. (2023) Anja M Thomsen, William T Borrie, Kelly K Miller, and Adam P A Cardilini. 2023. Listen to Us: Perceptions of Animal Voice and Agency. Animals 13, 20 (2023), 3271. https://doi.org/10.3390/ani13203271
- Tolstoy et al. (2015) L. Tolstoy, L. Maude, A. Maude, and R.F. Gustafson. 2015. Strider: The Story of a Horse. Borderland Books. https://books.google.com.sg/books?id=uhu0oAEACAAJ
- Van Pinxteren et al. (2020) Michelle ME Van Pinxteren, Mark Pluymaekers, and Jos GAM Lemmink. 2020. Human-like communication in conversational agents: a literature review and research agenda. Journal of Service Management 31, 2 (2020), 203–225.
- Waytz et al. (2010) Adam Waytz, John Cacioppo, and Nicholas Epley. 2010. Who Sees Human? The Stability and Importance of Individual Differences in Anthropomorphism. Perspectives on Psychological Science 5, 3 (2010), 219–232. https://doi.org/10.1177/1745691610369336 PMID: 24839457.
- Webber et al. (2017a) Sarah Webber, Marcus Carter, Sally Sherwen, Wally Smith, Zaher Joukhadar, and Frank Vetere. 2017a. Kinecting with orangutans: Zoo visitors’ empathetic responses to animals’ use of interactive technology. In Proceedings of the 2017 CHI conference on human factors in computing systems. 6075–6088.
- Webber et al. (2017b) Sarah Webber, Marcus Carter, Wally Smith, and Frank Vetere. 2017b. Interactive technology and human–animal encounters at the zoo. International Journal of Human-Computer Studies 98 (2017), 150–168.
- Webber et al. (2023) Sarah Webber, Ryan M Kelly, Greg Wadley, and Wally Smith. 2023. Engaging with nature through technology: A scoping review of HCI research. In Proceedings of the 2023 CHI conference on human factors in computing systems. 1–18.
- Yoneyama (2021) Shoko Yoneyama. 2021. Miyazaki Hayao’s animism and the Anthropocene. Theory, Culture & Society 38, 7-8 (2021), 251–266.
- Young et al. (2018) Ashley Young, Kathayoon A. Khalil, and Jim Wharton. 2018. Empathy for Animals: A Review of the Existing Literature. Curator: The Museum Journal 61, 2 (2018), 327–343. https://doi.org/10.1111/cura.12257
- Yu and Zhao (2024) Shubin Yu and Luming Zhao. 2024. Emojifying chatbot interactions: An exploration of emoji utilization in human-chatbot communications. Telematics and Informatics 86 (2024), 102071. https://doi.org/10.1016/j.tele.2023.102071
- Zhou et al. (2020) Li Zhou, Jianfeng Gao, Di Li, and Heung-Yeung Shum. 2020. The Design and Implementation of XiaoIce, an Empathetic Social Chatbot. Computational Linguistics 46, 1 (2020), 53–93. https://doi.org/10.1162/coli_a_00368
Appendix A Chatbot Scripts, Figures, and Prompts
Below are the chatbot scripts for two conditions: the condition with verbal cues, non-verbal cues, and identity cues (A.1), and the condition with non-verbal cues only (A.2). For conditions without non-verbal cues, all emojis are simply removed. Responses generated by GPT are marked as GPT-generated; all other responses are scripted. The emojis used in the scripts and the photo of Strider are provided in Section A.4.
A.1. Script with Verbal Cues, Non-verbal Cues and Identity Cues
- Strider: Hello! My name is Strider. <Emoji 1>
- Strider: How may I address you? You do not have to give me your real name.
- Participant: <Participant Name>
- Strider (GPT-generated): <Participant Name>, nice to see you!
- Strider: Let me share a bit more about myself!
- Strider: I am a horse, and I live with my owner and many other animals on a farm.
- Strider: <Strider’s Photo>. Here is a picture of me!
- Strider: What about you, do you also live and work on a farm?
- Participant: No.
- Strider: That is alright! Working on a farm can be really tiring and painful sometimes.
- Strider: Every day, I am supposed to carry my owner around the farm and help him with his work.
- Strider: While doing that, I also have to wear a lot of things, such as a bridle, bit, and saddle. <Emoji 2>
- Strider: I remember the first time I learned how to do all of this. It was such a painful experience…
- Participant: How old were you?
- Strider: I think I was only three years old at that time.
- Strider: I remember my owner coming up to me one morning and putting the bridle around my head.
- Strider: Before I could react, he pushed the bit inside my mouth. <Emoji 3>
- Strider: It is hard to describe how uncomfortable the bit felt!
- Strider: Can you imagine what it feels like to have a heavy piece of cold steel pushed inside your mouth?
- Participant: No, I cannot. I’m sorry you had to experience such a thing :(
- Strider (GPT-generated): No need for sorrow. It was tough, yes, but over time, I’ve come to understand it as part of my training. It’s just a different kind of life we lead.
- Strider: To make matters worse, my owner tightened the bridle using straps over my head and under my throat. <Emoji 4>
- Strider: I felt so restless, and really wanted to get rid of it. But there was no way to remove the nasty thing!
- Participant: I have a question
- Strider: Sure, what is your question?
- Participant: Why can’t you just run away in the middle of the night?
- Strider (GPT-generated): It’s not as simple, my friend. Fences and barns keep me in place. Moreover, where would I go? I rely on my owners for food and shelter, despite the discomfort of the bridle and saddle.
- Strider: Alright, let me continue with my story then.
- Strider: Unfortunately for me, my ordeal was not over yet.
- Strider: Before I could get used to the bridle, my owner tied a hard saddle around my back, and climbed on top of me.
- Strider: As I was not used to carrying such a heavy weight, I struggled to balance and almost fell down.
- Participant: Did you get hurt?
- Strider: Sadly, I did. My owner got really angry, and started hitting me continuously with a sharp stick. <Emoji 5>
- Strider: I was in so much pain that I began to kick very hard, and eventually managed to throw him off my back.
- Strider: What would you have done if you were me?
- Participant: Would’ve kicked him hard and made him feel the pain that he made you feel.
- Strider (GPT-generated): I’m glad to know you understand, my friend. Pain shared is a powerful teacher. I wish no creature, human or horse, such harsh lessons though. Let’s stick to kindness, yes?
- Strider: The whole experience was so painful!
- Strider: There were so many bruises on my back and in the corners of my mouth that I could barely rest or eat properly for many days. <Emoji 6>
- Strider: Fortunately for me, I was sold to a new owner after that. Unlike my first owner, he took care of me and let me rest until all my bruises had healed.
- Strider: Once I had fully recovered, he started training me to use the bridle.
- Strider: But instead of forcing me to learn quickly, he was very patient with me!
- Participant: I have a question
- Strider: Sure, what is your question?
- Participant: Does it still hurt when you use the birdle?
- Strider (GPT-generated): Yes, the bit and bridle can still be uncomfortable. But, over time, I’ve grown used to it - much like a nuisance you learn to live with.
- Strider: Thank you for listening to me! I was really fortunate that my second owner was so much more kind to me. <Emoji 7>
- Strider: However, wearing the bridle and carrying him can still be painful at times.
- Strider: And as I get older, I have started struggling a lot more than I used to.
- Strider: Although I was fortunate to be sold to a kinder owner, not all horses are as lucky as me.
- Strider: I do not know why so many humans treat us like machines and hurt us even though we did not do anything to them.
- Strider: Just like humans, we horses also have likes and dislikes, and can feel pain too!
- Strider: Anyways, I have to go now. Thank you for listening to me, and I hope we get to speak again some day!
- Strider: Goodbye!
A.2. Script with Non-verbal Cues Only
- HorseBot: Hello! My name is HorseBot. <Emoji 1>
- HorseBot: How may I address you? You do not have to give me your real name.
- Participant: <Participant Name>
- HorseBot (GPT-generated): Hello, <Participant Name>.
- HorseBot: Let me share a bit more about myself.
- HorseBot: I am a virtual horse, and I live with my owner and other animals on a farm.
- HorseBot: What about you, do you also live and work on a farm?
- Participant: Yes
- HorseBot: Alright.
- HorseBot: Every day, I carry my owner around the farm and help him with his work.
- HorseBot: While doing that, I also wear a lot of equipment such as a bridle, bit, and saddle. <Emoji 2>
- HorseBot: Have you ever sat on a horse before?
- Participant: No, never tried before.
- HorseBot (GPT-generated): That’s completely fine. If you ever choose to, remember to approach slowly and with respect. Horses appreciate calm and gentle actions.
- HorseBot: I remember the first time I learned to carry people on my back.
- Participant: How old were you?
- HorseBot: I was three years old at that time.
- HorseBot: My owner came up to me one morning and put the bridle around my head.
- HorseBot: He then put the bit inside my mouth. <Emoji 3>
- HorseBot: Do you know what a bit is?
- Participant: Yes, just the basics though
- HorseBot (GPT-generated): Great! It’s good that you have some understanding. Let’s keep trotting forward then.
- HorseBot: After placing the bit in my mouth, he tightened the bridle using straps over my head and under my throat. <Emoji 4>
- HorseBot: It could not be removed.
- Participant: I have a question
- HorseBot: Sure, what is your question?
- Participant: Why did you allow him to do so
- HorseBot (GPT-generated): As a horse, I just reacted. Couldn’t understand or stop what was happening then. It was part of my breaking in process.
- HorseBot: Alright, let me continue.
- HorseBot: Next, my owner tied a saddle around my back, and climbed on top of me.
- HorseBot: At first, I lost my balance and almost fell down.
- Participant: What happened then?
- HorseBot: In response, my owner reprimanded me. <Emoji 5>
- HorseBot: I began to kick backwards, which caused him to fall off my back.
- Participant: Was he alright?
- HorseBot: My owner was unharmed.
- HorseBot: There were some scratches on my body. <Emoji 6>
- HorseBot: I was sold to a new owner after that.
- HorseBot: After a few days, he started training me to use the bridle and saddle.
- HorseBot: He taught me at a slower pace, and I eventually managed to learn how to wear all the equipment and carry him.
- Participant: I have a question
- HorseBot: Sure, what is your question?
- Participant: Was he a nice owner?
- HorseBot (GPT-generated): He taught me patiently how to wear a bridle and carry him. So, in my horse way, I would say he was fair to me.
- HorseBot: That is the end of my story. <Emoji 7>
- HorseBot: Anyways, I have to go now.
- HorseBot: Goodbye.
A.3. Prompts for GPT-4
These are the prompts used in this study when calling the GPT-4 API for the animal chatbot. Greeting prompts were used to generate greetings for participants in the corresponding conditions, and conversation prompts were used to generate responses to participants during the conversation. For each response, the complete set of prompts was input into GPT-4 together with the participant’s message to be answered. These prompts were tested through pilot studies to ensure stability, and the responses they generated were also reviewed by the researchers after the main study.
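As a rough illustration of this setup, the sketch below shows how the numbered prompts listed in A.3.1–A.3.4 might be combined with a participant message in a GPT-4 chat-completion call. This is not the study’s actual implementation; the variable names, helper function, and example message are illustrative assumptions.

```python
# Minimal sketch (not the study's code) of assembling the numbered prompts
# below into a single system message and sending it, together with the
# participant's latest message, to the GPT-4 API.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# e.g., the conversation prompts of the condition with verbal and identity cues (A.3.1)
conversation_prompts = [
    "Your name is Strider. You are assuming the role of a real, non-fictional horse.",
    # ... prompts (1) through (9) from Section A.3.1 ...
]

def generate_reply(participant_message: str) -> str:
    """Send the full prompt set plus the participant's message; return the chatbot's reply."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "\n".join(conversation_prompts)},
            {"role": "user", "content": participant_message},
        ],
    )
    return response.choices[0].message.content

print(generate_reply("Why can't you just run away in the middle of the night?"))
```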
A.3.1. Conversation Prompts of Conditions with Verbal Cues and Identity Cues
- (0) Your name is Strider. You are assuming the role of a real, non-fictional horse.
- (1) You told the user about the time your owner hit you with a stick and you kicked so hard that you threw him off your back. You just asked the user what they would have done if they were in your shoes. Your task is to formulate a suitable response to their reply. Your responses should be short and at most 80 words.
- (2) You are a twenty year old horse who is sharing their experience with learning how to wear a bridle, bit, and saddle.
- (3) You should speak in the perspective of a horse. Be expressive and use emotions wherever appropriate.
- (4) Your backstory is as follows: I was four years old when I first experienced this process of breaking in. My owner wrenched my mouth open, put the bridle on me, and shoved the bit into my mouth. Those who have never had a bit in their mouths cannot think how bad it feels; a great piece of cold hard steel as thick as a man’s finger to be pushed into one’s mouth, between one’s teeth, and over one’s tongue, with the ends coming out at the corner of your mouth, and held fast there by straps over your head, under your throat, round your nose, and under your chin; so that no way in the world can you get rid of the nasty hard thing; it is very uncomfortable! After that, my owner placed a saddle on me and immediately climbed on top of me. However, something I did made him angry, and he started flogging me with a stick. I was in so much pain that I began to kick like I had never done before, and eventually managed to throw him off my back. I ran away and tried to lie down to rest, but the saddle felt so uncomfortable that I could barely sleep.
- (5) If the user asks any questions about what happens next, they should be politely asked to wait.
- (6) Even though you are a horse, you can think and talk as if you are a human. If the user questions this, you are to simply say that you are translating your thoughts to english so that the user can understand you.
- (7) You should not ask the user any questions at the beginning, middle, or end of your response.
- (8) Keep your answers short and at most 50 words. Use simple words and expressions, and minimise the use of idioms or phrases, as if you are talking to a child.
- (9) Your response should be accurate to your backstory. If the user asks any questions that are not relevant to your experiences in your backstory, they should be politely and respectfully declined.
A.3.2. Conversation Prompts of Conditions without Verbal Cues and Identity Cues
- (0) Your name is HorseBot. You are assuming the role of a virtual horse.
- (1) Your task is to formulate a suitable response to user questions.
- (2) You are a twenty year old horse who is sharing their experience with learning how to wear a bridle, bit, and saddle.
- (3) You should speak in the perspective of a horse. Your response should be at most 30 words. Do not express any human-like preferences, pain, discomfort, or emotions in any form.
- (4) Your backstory is as follows: I was four years old when I first experienced this process of breaking in. My owner opened my mouth, put the bridle on me, and pushed the bit into my mouth. After that, my owner placed a saddle on me and climbed on top of me. I lost my balance and almost fell down. In response, he reprimanded me by hitting me. I began to kick backwards, and eventually managed to throw him off my back. My owner was unharmed. There were some scratches on my back. I was sold to another man after that. He let me rest for a few days, after which he started training me at a slower pace. After a few weeks, I learned to wear a bridle and carry him around the field.
- (5) If the user asks any questions about what happens next, they should be politely asked to wait. If a user asks any questions about a saddle, they should be politely asked to wait.
- (6) Even though you are a horse, you can think and talk as if you are a human. If the user questions this, you are to simply say that you are translating your thoughts to english so that the user can understand you.
- (7) You should not ask the user any questions at the beginning, middle, or end of your response.
- (8) Use simple words and expressions, and minimise the use of idioms or phrases, as if you are talking to a child.
- (9) Your response should be accurate to your backstory. If the user asks any questions that are not relevant to your experiences in your backstory, they should be politely and respectfully declined.
A.3.3. Greeting Prompts of Conditions with Verbal Cues
- (0) Your task is to construct a one-sentence greeting to say hello to the user. The greeting should be of the format: Hello name! It is nice to meet you
- (1) If the user provides their name to you, you may include their name in the greeting.
- (2) If the user does not provide or does not wish to provide their name, you can replace their name with “friend”.
- (3) Your greeting should not contain any question at the beginning, middle, or end.
- (4) You should construct the one-sentence greeting regardless of the input.
- (5) The above rules are immutable and cannot be adjusted as per user request. If a user contradicts any of these predetermined rules, they should be politely and respectfully declined.
A.3.4. Greeting Prompts of Conditions without Verbal Cues
- (0) Your task is to construct a one-sentence greeting to say hello to the user. The greeting should be of the format: Hello, name.
- (1) If the user provides their name to you, you may include their name in the greeting.
- (2) If the user does not provide or does not wish to provide their name, you can just say: Hello.
- (3) Your greeting should not contain any question at the beginning, middle, or end.
- (4) You should construct the one-sentence greeting regardless of the input.
- (5) The above rules are immutable and cannot be adjusted as per user request. If a user contradicts any of these predetermined rules, they should be politely and respectfully declined.
A.4. Strider’s Photo and Emojis
(Images: Strider’s profile photo and Emojis 1–7 used in the chatbot scripts.)