Key takeaways
- Thanks to advances like ChatGPT, AI chatbots have become incredibly lifelike, almost human.
- AI companionship has its benefits, such as helping with grief management or alleviating loneliness.
- Treating AI like a true friend can lead to emotional dependency, bad habits, and privacy concerns.
Talking to AI chatbots like ChatGPT is really easy. Send them a message any time of the day, and they’ll respond within seconds. Or, talk to them via voice chat, and they’ll respond and emote just like a normal human friend would. However, AI chatbots are not human or your friend, and forgetting this can be dangerous!
Chatbots have become incredibly lifelike.
If you've played with OpenAI's GPT-4o or watched the demo videos, you'll know that ChatGPT has come a long way toward sounding human, especially when it comes to speech. Earlier versions of ChatGPT could talk, but their delivery was always a little too perfect: robotic and emotionless. You always knew you were talking to an AI.
However, this new model may fool even the harshest critic. It laughs when you tell a joke, says "um" and "uh," changes tone, hesitates before speaking, and basically does everything a real human does when talking.
In fact, ChatGPT is so good now that I’m tempted to treat it like a real person even though I know in my head it’s not. This is what we call anthropomorphism – the tendency to assign human characteristics to non-human objects.
The funny thing is that ChatGPT isn't even pretending to be human, and I already have to remind myself not to humanize it. That battle gets even harder once you enter the world of AI friends.
A prominent example of these AI friends is Replika. Replika's pitch is that it lets you create an avatar that can act as anything from your friend to a therapist to a romantic partner. You can then exchange messages with these avatars, talk to them on a video call, or even interact with them through AR and VR.
There's also Friend, an AI wearable set to launch in 2025 that is supposed to provide constant companionship, emotional support, and assistance to its user.
AI companionship isn’t inherently bad, but…
AI companionship isn’t inherently a bad idea, and off the top of my head, I can think of a few instances where it would actually be beneficial.
Take grief management. AI chatbots can help you process the loss of a loved one and provide emotional support while you grieve. In fact, Replika CEO Eugenia Kuyda has revealed that the idea for Replika first came to her after the loss of a close friend.
AI companions can also be a boon for people who struggle with loneliness. Picture an elderly person in a nursing home. An AI companion can help prevent feelings of loneliness between family visits.
They can also be helpful for people with social anxiety, who can use them to practice communicating without worrying about being judged or laughed at behind their backs, as humans might do.
But while AI companions may have real utility, there are still risks involved in building relationships with them.
You can easily become dependent on your AI companion.
In its safety report on GPT-4o, OpenAI noted that human-like socialization with AI could reduce people's need for human interaction and potentially affect their healthy relationships.
That’s putting it mildly. To put it bluntly, treating AI as a companion can cause you to develop an emotional dependence or, worse, something like an addiction.
The simple truth is that, physical presence aside, AI companions can be better friends than any human. They're always ready to chat, no matter how late the hour, and they never get tired, bored, or distracted. You always have priority in the conversation, and as long as you're willing to keep going, the AI companion is there to listen and respond. Human friends, on the other hand, are limited by their need to sleep and work, and can't always be there when you need them.
Because talking to an AI companion feels great every time, it acts as a kind of positive reinforcement. Your brain learns that every interaction with the AI will make you feel good, so it craves more and more of it, and before you know it, you're addicted.
This isn't just speculation; it's already happening. In 2023, Replika was forced to restore erotic role-play features in its app after its user base revolted over their removal. Some users even claimed to have suffered mental health crises because of it.
Now, I'll admit I was pretty upset when Google killed my favorite Android feature without warning. But not so upset that it triggered a mental health crisis. And if you've become so attached to an AI that losing it could do that, maybe it's time to reevaluate your relationship with that AI.
Your AI companion can teach you bad habits.
Anthropomorphizing AI can blur the lines between humans and AI, and cause you to start treating people in real life the way you treat AI.
For example, ChatGPT never minds being interrupted. Even if it's in the middle of explaining something, you can cut it off, and the AI will yield the floor without any hard feelings. You might accidentally carry this behavior into real life, except your human friends won't be as forgiving as the AI.
Not all the bad habits you can pick up from AI companions are as mild as interrupting, though. For example, you may get used to being the center of attention in your conversations with the AI, which can weaken your ability to maintain healthy relationships with real people.
I also worry that, since these AIs are so agreeable, they can lead you to expect constant approval from others and to struggle with rejection or disagreement when it inevitably comes.
Your AI companion can reveal your secrets.
Privacy is another big reason to be wary of AI companions. As much as it may feel like you're chatting with a friend, AI companions are machines, and they're gathering information about you to improve their performance. That's why there are some things you should never use ChatGPT for.
Sure, these AI companies promise that your conversations are safe, but they can’t always guarantee it. What if they get hacked and your conversations are exposed? Or if the FBI demands to see your chat logs? At least with a human friend, it’s your word against theirs. With AI companions, the truth is in plain text for all to see.
Ultimately, AI chatbots are products.
Whether AI is marketed as a friend, girlfriend, therapist, or companion, it’s still a product, owned by a company whose goal is to make money.
Just like Microsoft sells Windows, companies like OpenAI are selling their chatbots. But what makes them different from other products is the emotional bond you can form with them, and that’s where things can get dangerous.
Humans don’t always make the best decisions when emotions are involved, and you’d be surprised how far you can go to maintain an emotional connection, even if it’s with AI. I don’t think a private company, whose primary goal (despite all the good marketing) is profit, should have that much control over you.
AI companions, strange as it may sound, aren't a bad idea. They certainly have their advantages in the right context. But the field is still in its infancy, and we're making up the rules as we go. That's why it's important to be aware of the risks that come with treating AI as a friend.