You might find AI tools helpful when chatting to others, but this latest research shows people will think less of someone they believe is using them.
Here’s how the study, led by folks at America’s Cornell University, went down. The team recruited participants and split them into 219 pairs, who were then asked to discuss policy stuff over text messaging. In some pairs, both participants were told to use only suggestions from Google’s Smart Reply, which follows a topic of conversation and recommends things to say. Other pairs were told not to use the tool at all, and in the remaining pairs, just one participant was told to use Smart Reply.
One in seven messages in the experiment was sent using auto-generated text, and these replies seemingly made conversations more efficient and more positive in tone. But if a participant believed the person they were talking to was replying with boilerplate responses, they judged that person to be less cooperative and felt less warmly toward them.
People might project their negative views of AI on the person they suspect is using it
Malte Jung, co-author of the research published in Scientific Reports, and an associate professor of information science at Cornell, said it could be because people tend to trust technology less than other humans, or perceive its use in conversations as inauthentic.
“One explanation is that people might project their negative views of AI on the person they suspect is using it,” he told The Register.
“Another explanation could be that suspecting someone of using AI to generate their responses might lead to a perception of that person as less caring, genuine or authentic. For example, a poem from a lover is likely received less warmly if that poem was generated by ChatGPT.”
In a second experiment, 291 pairs of people were again asked to discuss a policy issue. This time, however, they were split into groups that had to manually type their own responses, could use Google’s default Smart Reply, or had access to a tool that generated text with a positive or negative tone.
Conversations conducted with Google’s Smart Reply or the tool that generated positive text were perceived as more upbeat than those that involved no AI tools or auto-generated negative responses. The researchers believe this shows there are some benefits to communicating using AI in certain situations, such as more transactional or professional scenarios.
“We asked crowdworkers to discuss policies about the unfair rejection of work. In such a work-related context, a more friendly positive tone has mainly positive consequences as positive language draws people closer to each other,” Jung told us.
“However, in another context the same language could have a different and even negative impact. For example, a person sharing sad news about a death in the family might not appreciate a cheerful and happy response and will likely be put off by that. In other words, what ‘positive’ and ‘negative’ means varies dramatically with context.”
Human communication is going to be shaped by AI as the technology becomes increasingly accessible. Microsoft and Google, for example, have both announced tools aimed at helping users automatically write emails and documents.
“While AI might be able to help you write, it’s altering your language in ways you might not expect, especially by making you sound more positive. This suggests that by using text-generating AI, you’re sacrificing some of your own personal voice,” Jess Hohenstein, lead author of the study and a research scientist at Cornell University, warned this month.
Hohenstein told us that she would “love to see more transparency around these tools” that includes some way to disclose when people were using them. “Taking steps towards more openness and transparency around LLMs could potentially help alleviate some of that general suspicion we saw towards the AI.”