The new features include a notification shown to users after they have been talking to a chatbot for an hour, as well as new disclaimers.
Users will now be shown further warnings that they are talking to a chatbot rather than a real person, and that they should treat what it says as fiction.
And it is adding disclaimers to chatbots that purport to be psychologists or therapists, telling users not to rely on them for professional advice.
Social media expert Matt Navarra said he believed the move to introduce new safety features “reflects a growing recognition of the challenges posed by the rapid integration of AI into our daily lives”.
“These systems aren’t just delivering content, they’re simulating interactions and relationships which can create unique risks, particularly around trust and misinformation,” he said.
“I think Character.ai is tackling an important vulnerability, the potential for misuse or for young users to encounter inappropriate content.
“It’s a smart move, and one that acknowledges the evolving expectations around responsible AI development.”
But he said that while the changes were encouraging, he was interested in seeing how the safeguards would hold up as Character.ai continued to grow.