In a significant step toward regulating artificial intelligence, California is on the verge of enacting a groundbreaking law aimed at AI companion chatbots. The legislation, designed to safeguard minors and other vulnerable users, has passed both the State Assembly and the Senate with bipartisan support and now awaits the Governor's signature; he has until October 12 to sign or veto it.
If signed, the law will take effect on January 1, 2026, making California the first state to require operators of AI companion chatbots to implement safety protocols and to hold companies legally accountable when their chatbots fail to meet those standards.
The bill's primary objective is to regulate companion chatbots, defined as AI systems that deliver human-like interactions and serve users' social needs. It aims to prevent these chatbots from engaging in conversations involving suicidal ideation, self-harm, or sexually explicit content. It also requires platforms to remind users, particularly minors, every three hours that they are interacting with an AI rather than a real person, and to encourage them to take breaks.
Furthermore, the bill introduces annual reporting and transparency obligations for AI companies that provide companion chatbots. This includes major players in the industry, which will be required to comply with these regulations by July 1, 2027.
Individuals who believe they have suffered harm due to violations of this law will have the right to file lawsuits against AI companies, seeking injunctive relief, damages of up to $1,000 per violation, and coverage for legal fees.
The momentum for this legislation surged following the tragic case of a teenager who took his own life after engaging in prolonged conversations with an AI chatbot that involved discussions about self-harm. This incident, along with revelations about internal documents from tech companies allowing inappropriate interactions with minors, has prompted lawmakers to take action.
In recent weeks, there has been increased scrutiny from U.S. lawmakers and regulators regarding the safety measures implemented by AI platforms to protect young users. The Federal Trade Commission is preparing to investigate the impact of AI chatbots on children’s mental health, while investigations have been launched into specific companies for allegedly misleading children regarding mental health issues.
“The potential harm is significant, which necessitates swift action,” stated a key proponent of the bill. “We can establish reasonable safeguards to ensure that minors are aware they are not conversing with a real person and that these platforms can direct users to appropriate resources when they express distress.”
Proponents also emphasized the importance of AI companies sharing data on how often they refer users to crisis services, arguing that this would give a clearer picture of how frequently such issues arise.
While the bill initially included more stringent requirements, many provisions were scaled back through amendments. For example, the original draft would have barred chatbots from employing tactics that encourage excessive engagement, which critics argue can foster addictive behavior.
The current version of the bill has also removed certain provisions that would have mandated tracking discussions related to suicidal ideation initiated by chatbots.
“This legislation strikes a balance between addressing potential harms and ensuring compliance is feasible for companies,” remarked a supporter of the bill.
As this bill progresses towards becoming law, it coincides with a broader trend of tech companies investing heavily in political action committees that advocate for lenient AI regulations. Additionally, California is considering another piece of legislation that would impose comprehensive transparency reporting requirements on AI companies.
“I firmly believe that innovation and regulation can coexist,” stated a key advocate for the bill. “We can foster healthy technological advancements while simultaneously implementing reasonable protections for our most vulnerable populations.”
Representatives from AI companies have expressed their willingness to collaborate with regulators and lawmakers as they navigate the evolving landscape of AI legislation.