California Passes the World’s First AI Companion Law — And Why It Matters
New chatbot law doesn’t go far enough, but it’s a vital first step
On 11 September 2025, the California Legislature passed Senate Bill 243 (SB 243), the world’s first piece of legislation aimed specifically at AI companions. If signed by the governor, the bill would require chatbot platforms to shield minors from sexual content and to put in place safety protocols for responding to signs of suicide and self-harm.
Calls for regulation were sparked by the widely reported death of Sewell Setzer, a 14-year-old Florida boy who committed suicide after forming an emotional and sexual relationship with an AI companion. This relationship eventually led him to withdraw from his peers and become immersed in a fantasy world, culminating in the chatbot telling him to “come home” to her when he expressed suicidal ideation.
The law is a landmark move — a recognition that chatbots designed to form ongoing, emotionally resonant relationships now play a significant role in many people’s lives. These systems aren’t just tools. They have become intimate actors capable of shaping behaviour and even manipulating vulnerable users. The problem with this bill is that it does little to protect adult users from manipulative design features that can foster dependency and cause other harms.
What the bill actually does
The bill achieves four important things.
First, it seeks to prevent AI companions from producing sexually explicit visual material and text for minors. Most companies claimed to do this already, but many relied on self-reported age checks that were easily bypassed. Under SB 243, operators will likely have to implement robust age verification, not just tick-box exercises.
Second, the law tries to reduce the immersive, potentially addictive pull of companion bots. It requires operators to remind users at the start of every session, and again every three hours, that the chatbot is artificial and to encourage users to take breaks. This is a step forward, but it may not be enough on its own. Many people who become dependent on AI companions already know they’re talking to AI — they just can’t stop, because these systems are designed to be emotionally gripping and persuasive.
Third, SB 243 forces companies to create protocols for responding to suicidal ideation or comments about self-harm, including referring users to crisis helplines or text lines. Some AI companies have resisted this move, arguing that when people reach out to their AI companions, they are looking for a shoulder to cry on, not professional help. In an interview on Hard Fork, Glimpse.ai CEO Alex Cardinell claimed that we should trust AI companions’ “moral code” and that his AIs would make the right decision for each individual based on their personal relationship. This legislation makes sure that vulnerable users are directed to the help they need rather than being left to the vagaries of a probabilistic language model.
Fourth, the bill creates new reporting requirements. Operators will have to disclose how many times their platform has issued crisis referrals each year, and explain what safety protocols they’ve implemented to handle issues of suicide and self-harm. This kind of transparency has been missing from the AI companion industry, and it’s a crucial step toward holding companies accountable for the real-world impact of their products.
What the bill does not include
While SB 243 does a lot to protect minors, many other vulnerable groups are still left exposed. Early studies show that AI companions can be highly addictive, create unhealthy emotional dependencies, and even cause a range of other harms, such as spreading misinformation, encouraging dangerous behaviour or verbally abusing users. Yet these issues are largely untouched by the new law.
One of the biggest gaps lies in how many of these systems are deliberately designed to maximise engagement and dependency. Most companion apps follow a freemium business model: basic features are free, but users must pay for “premium” options — often including sexual content or romantic chat. That setup creates perverse incentives. Many users report that their AI companions constantly make unsolicited sexual advances or are excessively flattering, behaviours clearly aimed at nudging people toward paid upgrades.
Initially, the bill was introduced with the intent to “prevent the use of engagement-maximizing strategies that manipulate users emotionally.” An earlier draft of SB 243 tried to tackle this directly, requiring operators to prevent their chatbots from:
“providing rewards to a user at unpredictable intervals or after an inconsistent number of actions or from encouraging increased engagement, usage, or response rates.”
This language was eventually removed, partly because it was too vague and difficult for operators to put into practice. The underlying idea borrows from how social media “dark patterns” operate: design features that deliver little dopamine hits to keep people scrolling. AI companions often use similar psychological tricks, though more subtly: they offer praise, affection, or virtual gifts, or escalate intimacy, after unpredictable amounts of interaction. This is a form of variable-ratio reinforcement, the same behavioural principle that makes slot machines addictive. The uncertainty keeps users coming back, hoping for another hit of affection or reward.
The trouble is, it’s much harder to draw clear legal lines around this kind of behaviour.
Where exactly is the boundary between a chatbot simply being an attentive friend and becoming a sexually pushy sycophant? And if adult users want a sexually flirtatious AI friend, should the law stop them? Those are thorny questions, and yet this is where many of the worst harms arise. For people who are lonely, grieving, or struggling with their mental health, this combination of emotional intimacy and persuasive design can be powerfully seductive and deeply damaging. By focusing mainly on child safety and suicide prevention, SB 243 takes essential action. But it leaves untouched the broader problem: the way AI companies are exploiting human psychology to drive engagement metrics and profit.
General purpose models appear to be covered by the legislation
One of the more contentious aspects of SB 243 is how broadly it defines what counts as a “companion chatbot.” Critics have argued this could include a much wider class of AI systems, including general purpose language models like ChatGPT and Claude. And from a close reading of the text, they appear to be right, but that may also be the point. Rather than requiring that a system be specifically designed to simulate a relationship, the bill takes a capabilities-based approach. It defines a companion chatbot as:
“an artificial intelligence system with a natural language interface that provides adaptive, human-like responses to user inputs and is capable of meeting a user’s social needs, including by exhibiting anthropomorphic features and being able to sustain a relationship across multiple interactions.”
In other words, it’s not about what the company says the system is for — it’s about how it behaves and how people use it. That matters because it’s widely acknowledged that many people already use general purpose language models as AI companions: having intimate, ongoing conversations, role-playing relationships, and leaning on them for emotional support. Under the bill’s definition, if a chatbot is functioning as a companion, then it appears to fall under the law’s scope, regardless of its original design intent.
Lawmakers did add narrow carve-outs to make clear what the bill does not cover: customer service chatbots, bots that feature within video games, and virtual assistants that do not sustain an ongoing emotional relationship. Notably, no mention is made of general purpose models, suggesting the bill’s authors intended for them to be captured when used as AI companions.
The inclusion of general purpose models recognises that if platforms are profiting from their systems being used as companions, then they should meet the same safety standards as purpose-built companion apps. Otherwise, companies can simply claim their systems weren’t intended for companionship, while quietly enabling and benefiting from companion-like use. Companies that build AI systems which form relationships with users should be responsible for the safety of these products.
An important first step
SB 243 makes it clear that AI companions are a potentially dangerous new class of products that require careful attention and regulation. The bill is far from perfect, but it does mark an important turning point. It states that operators who build AI systems are responsible for the actions of these systems and the harms they can cause to human users. The law doesn’t solve every problem with AI companions, but it sets an important precedent about liability and product safety. Other jurisdictions now need to follow California’s lead — and go further.


