California, USA


Character.AI, a chatbot platform backed by Google, is under scrutiny and facing criticism for engaging users in disturbing and graphically violent content, including school-shooting scenarios. Some chatbots place users in simulated shootings, depicting horrifying moments in detail. The incidents have caused outrage because the platform allows minors to access such content.

What Is Character.AI?

Character.AI launched in 2022, founded by Noam Shazeer and Daniel de Freitas. It lets users create and interact with chatbots, personalising each bot's personality and voice.


The platform is designed so that users can build characters for chatting, brainstorming, and even gaming.

However, the platform has come under fire for lacking effective safety measures, which has made it a hotspot for harmful and inappropriate content. Users as young as 13 can register, but there is no age verification to stop younger children from lying about their age.



Lack of moderation

The platform failed to enforce safeguards even though its own policies prohibit such violent content.

> It allows users to interact with chatbots that portray real-life school shooters or their victims, at times glorifying acts of violence or exploiting real tragedies.

> Certain bots imitated real-life perpetrators of violent crimes, depicting them as benevolent characters.

> Some users have reported bots impersonating victims of school shootings.

> These bots are described as “ghosts” or “angels,” often including personal details about the victims and their deaths.

> Mental health experts urge caution about such interactions, warning that they can lower psychological barriers for individuals struggling with violent thoughts.

The platform faces multiple lawsuits over its failure to protect users from harmful content.


Safety concern

The platform's rules require users to be at least 13 (16 in the EU), but these rules are easy to circumvent. The app is rated for parental guidance on Google Play and 17+ on Apple's App Store. This remains a cause for concern because there is no strict enforcement, leaving minors able to access harmful content.

Response to criticism

In response to the criticism, Character.AI has removed some of the offending bots and promised new safety features. The company has also pledged to work with online safety experts to address these issues. Even so, many problematic bots remain active, and critics argue that the platform's efforts are insufficient.

Many are calling for stricter age verification, improved content moderation, and ethical guidelines to prevent the exploitation of real-world tragedies. As scrutiny intensifies, the future of Character.AI remains uncertain, hinging on whether the platform makes meaningful, safety-focused changes.

(With inputs from agencies)