
Love in the age of AI: How safe are AI romance chatbots? Experts warn of heartbreak and privacy risks


In the modern age, love is a rare find, and sometimes people just make do with technology: enter AI romance.

For those of you who didn't know this was a thing, there are AI chatbots designed to somewhat fill the romantic vacuum that comes from not having a significant other. These chatbots can keep you company in times of loneliness, give you an outlet for your frustrations, and even offer you an opportunity to act out your sexual fantasies. But how safe or trustworthy are these AI girlfriends and boyfriends? Experts warn they can break your heart.

A privacy nightmare


In its survey, the non-profit Mozilla Foundation concluded that these chatbots are nothing short of a privacy nightmare.

"Marketed as an empathetic friend, lover, or soulmate, and built to ask you endless questions, there’s no doubt romantic AI chatbots will end up collecting sensitive personal information about you."

"Although they are marketed as something that will enhance your mental health and well-being," researcher Misha Rykov writes in the report, "they specialize in delivering dependency, loneliness, and toxicity, all while prying as much data as possible from you."

Researchers found that about 73 per cent of the apps hide how they manage security vulnerabilities, 45 per cent allow weak passwords, and all but one, Eva AI Chat Bot & Soulmate, share or sell personal data. About half of the apps (54 per cent) don't let users delete their personal data.

Furthermore, one app, CrushOn.AI, even collects data on users' sexual health, prescription medication, and gender-affirming care. According to the Mozilla Foundation, this is stated in the app's own privacy policy.

Disturbing content

What's more, these AI chatbots at times even give way to some disturbing content. The survey found that some apps feature chatbots with pretty dark character descriptions, involving violence or underage abuse. Others carried warnings that the chatbots may be "offensive, unsafe, or hostile".

Not just that, the Mozilla Foundation noted that such apps have previously encouraged dangerous behaviour, including suicide (Chai AI) and an attempt to assassinate the late British monarch Queen Elizabeth II (Replika).

The researchers suggest that those trying out the world of AI companionship should refrain from saying "anything to your AI friend that you wouldn't want your cousin or colleagues to read."

(With inputs from agencies)

About the Author


Moohita Kaur Garg

Moohita Kaur Garg is a senior sub-editor at WION with over four years of experience covering the volatile intersections of geopolitics and global security.