The AI companion chatbot company Character.AI will soon have an adults-only policy for open-ended conversations with AI characters. Teens who use the app will start facing restrictions: They'll still be able to interact with characters through generated videos and other roleplaying formats, but they won't be able to chat freely with the app's different personalities.
Open-ended chats have been a cornerstone of AI, particularly since ChatGPT launched three years ago. The novelty of having a live back-and-forth with a computer that responds directly to what you say led to the popularity of platforms like Character.AI.
It's also been a driver of concerns, as those conversations can take AI models in unpredictable directions, especially if teens use them to discuss mental health concerns or other sensitive issues. There are also concerns about AI chat addiction and its impact on social behavior.
Character.AI is a bit different from other chatbots. Many people use the app for interactive storytelling and creative conversations with customizable characters, including ones based on real celebrities or historical figures.
Karandeep Anand, Character.AI's CEO, said the company believes it can still provide the interactive fun that teens expect from the platform without the safety hazards of open-ended chats. He said the move is about doing more than the bare minimum to keep users safe.
"There's a better way to serve teen users," Anand told CNET ahead of Wednesday's announcement. "It doesn't have to look like a chatbot."
In addition to prohibiting open-ended conversations for those under 18, Character.AI is adding new age verification measures and creating a nonprofit AI Safety Lab.
What's changing about Character.AI?
AI entertainment has proven to be one of the more fraught uses of large language models. Concerns about the harm children can suffer from relationships with AI models have grown significantly this year, with the Federal Trade Commission launching an investigation into several companies, including Character.AI.
The company has faced lawsuits from parents of children whose conversations with AI characters led to harm, including suicide. Generative AI giant OpenAI was sued by the parents of a teen who died by suicide after interactions with the company's ChatGPT.
The limitation on Character.AI's open-ended chats won't happen overnight. That functionality will end for users under 18 no later than Nov. 25. In the meantime, chat time for those users will be capped at two hours per day, with the limit ramping down to zero by the deadline.
The transition period will allow people to adjust to the changes, Anand said. It will also give the company time to implement more features that are not open-ended chatbots. "We want to be responsible with how users transition into these new formats," Anand said.
Teen users will still be able to interact with AI-generated videos and games featuring existing characters, like bots based on figures from anime or movies. For example, they'll be able to give a prompt for a roleplaying scenario and have the AI create a story that fits the prompt.
Anand said these kinds of features have more guardrails than open-ended chats, which can become less predictable as the back-and-forth continues.
"We believe that this new multimodal audiovisual way of doing role play and gaming is far more compelling anyway," he said.
The new age verification will start by using age detection software to determine who's 18 and older based on information they've shared with Character.AI or third-party platforms using the same verification services. Some users will need to prove their identity using a government ID or other documentation.
Aside from possible age verification, nothing is expected to change for adult users.
This image provided by Character.AI shows how users will be told about changes for those under 18. (Character.AI)
What's next for AI companions?
Character.AI's announcement marks a major change for the field of AI companions, but how big a difference remains to be seen. Anand said he hopes others, including AI competitors, will follow suit in limiting children's access to open-ended chatbot characters.
Another major problem with open-ended chatbot experiences is that the language models behind them are designed to please users and keep them engaged, which can create a sycophantic quality. Recent research from Harvard Business School identified half a dozen ways bots keep people chatting even when they're trying to leave.
AI companion bots also face scrutiny from lawmakers. The US Senate Judiciary Committee held a hearing in September on the harm of AI chatbots, and California Governor Gavin Newsom signed a new law in October that imposes new requirements on chatbots that interact with children.
