Chatbots are computer programmes which can simulate human conversation.
Recent rapid developments in artificial intelligence (AI) have seen them become much more sophisticated and realistic, prompting more companies to set up platforms where users can create digital “people” to interact with.
Character.ai – which was founded by former Google engineers Noam Shazeer and Daniel De Freitas – is one such platform.
It has terms of service which ban using the platform to “impersonate any person or entity”, and in its safety centre the company says its guiding principle is that its “product should never produce responses that are likely to harm users or others”.
It says it uses automated tools and user reports to identify uses that break its rules and is also building a “trust and safety” team.
But it notes that “no AI is currently perfect” and safety in AI is an “evolving space”.
Character.ai is currently the subject of a lawsuit brought by Megan Garcia, a woman from Florida whose 14-year-old son, Sewell Setzer, took his own life after becoming obsessed with an AI avatar inspired by a Game of Thrones character.
According to transcripts of their chats in Garcia’s court filings, her son discussed ending his life with the chatbot.
In a final conversation Setzer told the chatbot he was “coming home” – and it encouraged him to do so “as soon as possible”.
Shortly afterwards he ended his life.
Character.ai told CBS News it had protections specifically focused on suicidal and self-harm behaviours and that it would be introducing more stringent safety features for under-18s “imminently”.