OpenAI warns GPT-4o voice mode may lure users into forming emotional bonds

Published 13 Aug 2024

Following the launch of Advanced Voice Mode (AVM) in ChatGPT, OpenAI has warned that its anthropomorphic voice assistant puts users at risk of forming emotional attachments to the artificial intelligence (AI) chatbot.

The AI giant revealed the concern in its August 8 system card, a report detailing the weaknesses of its flagship model, GPT-4o, which currently powers ChatGPT’s AVM.

Advanced Voice Mode has had a string of issues since it was first introduced last May, when Scarlett Johansson threatened to sue OpenAI for allegedly imitating her voice without consent. The feature has now drawn further criticism after the company disclosed the risk of anthropomorphization of its AI model and emotional reliance on it.

“During early testing, including red teaming and internal user testing, we observed users using language that might indicate forming connections with the model,” the company wrote in the report.

One example involved a user telling the model, “This is our last day together.” OpenAI cautioned that such bonds between the AI and the humans using it could prove harmful.

When users get too attached

OpenAI acknowledged that GPT-4o’s audio capabilities might have exacerbated the risk of anthropomorphization, which it described as the attribution of human-like behaviors and characteristics to nonhuman entities like AI models.

The company added that this could lead to users placing more trust in AI content even when models hallucinate incorrect answers. In the long term, real-life human relationships might also be affected.

“Users might form social relationships with the AI, reducing their need for human interaction—potentially benefiting lonely individuals but possibly affecting healthy relationships. Extended interaction with the model might influence social norms. For example, our models are deferential, allowing users to interrupt and ‘take the mic’ at any time, which, while expected for an AI, would be anti-normative in human interactions,” OpenAI said.

Moreover, the advanced features of GPT-4o and other omni models could further complicate the issue. OpenAI highlighted the model’s ability to remember key details and reuse them in conversation, which could foster over-reliance and dependence on AI.

On the other hand, some company executives pointed to a potential upside of these artificial bonds, saying they could give individuals a chance to practice social interactions and overcome loneliness.

As such, OpenAI has promised to continue researching anthropomorphization and emotional reliance through its beta testers. “We intend to further study the potential for emotional reliance, and ways in which deeper integration of our model’s and systems’ many features with the audio modality may drive behavior,” the company wrote.

Not limited to ChatGPT

These issues are not isolated to OpenAI’s products. Even before generative AI rose to popularity, anthropomorphization was already evident in named assistants like Siri, Bixby, and Alexa, as well as unnamed ones such as Google Assistant, all of which feature a human voice.

Many interactive AI products have long been personified, with users commonly referring to them as “he/him” or “she/her.”

Likewise, emotional connections with AI models are not limited to ChatGPT. Users of other chatbots, such as Character AI and Replika, have reported antisocial experiences tied to their attachment to the apps.

In a viral TikTok video, one Character AI user showed themselves using the app while watching a movie. Other netizens commented that they could not use the chatbot outside of their room because of the intimacy of their interactions.

It is no surprise, then, that Google DeepMind has published a document similar to OpenAI’s, discussing the ethical challenges posed by more sophisticated AI assistants.