OpenAI and Common Sense Media will collaborate on a compromise ballot measure in California regarding AI chatbot interactions with children.
OpenAI and a nonprofit group that has been among its antagonists said Friday they would set aside competing proposals in California to limit how AI chatbots interact with children and work together on a compromise.
The deal could avert a multimillion-dollar fight between OpenAI and the nonprofit group, Common Sense Media, which had both said they would work to qualify their measures for the California ballot in November. OpenAI will commit at least $10 million to the ballot measure campaign, two people familiar with the matter said.
The compromise language would give parents more control over how their children interact with AI chatbots but would omit clauses that Common Sense Media had originally supported, including one banning cellphones in classrooms and another allowing parents and children harmed by chatbots to sue large AI companies.
"Rather than confusing the voters with competing ballot initiatives on AI, we decided to work together," Jim Steyer, founder and chief executive of Common Sense Media, said at a press conference Friday.
The compromise language would need some 875,000 signatures to qualify for the November ballot. OpenAI global policy chief Chris Lehane said the two parties would establish a ballot measure campaign and that signature gathering could begin in early February. Both he and Steyer said the proposal could be pulled from circulation if California's legislature acted quickly on child chatbot safety.
The ballot measure compromise is a new direction for two groups that have sometimes been at odds.
Common Sense Media has become a leading force behind efforts to regulate technology companies in the U.S. The group helped craft California's Consumer Privacy Act in 2018, and last month backed a New York law that requires mental health warning labels on some social-media platforms.
The group is also one of the loudest voices for regulating AI chatbots. Last year, it sponsored a California bill that would have barred popular AI companion chatbots from interacting with kids unless the bots were not "foreseeably capable" of engaging in sexually explicit exchanges or encouraging destructive activities, including self-harm, violence and disordered eating.
Tech groups opposed the bill and Gov. Gavin Newsom, a Democrat, vetoed it, saying that it was overly restrictive.
Newsom said he hoped to tackle the issue in the legislature in 2026, but the state "cannot prepare our youth for a future where AI is ubiquitous by preventing their use of these tools altogether."
Common Sense Media filed a ballot initiative proposal in October modeled on the vetoed bill. OpenAI then filed its own, narrower initiative on child safety in December.
OpenAI spun up a team focused on California ballot initiatives over the summer as it anticipated opposition to its efforts to convert to a more traditional governance structure, according to people familiar with the matter.
The California Chamber of Commerce, which counts deep-pocketed tech companies like Google, Meta and Amazon.com among its members, took a board vote in December to oppose the Common Sense Media proposal.
That month, Lehane met with Steyer and proposed a compromise.
OpenAI and Common Sense Media had been talking for more than a year, and already had a partnership to work on AI guidelines and educational materials. In talks over a possible compromise, the company expanded on some of the child-safety ideas that its chief executive, Sam Altman, had discussed with California Attorney General Rob Bonta in September -- such as a plan to build technology to detect whether users are under 18 years old.
The new ballot initiative, which will amend the one OpenAI previously filed, would require AI companies to serve users identified as being under 18 a different version of their services, even if those users claim to be of age. It also would mandate that companies offer parental controls, undergo independent child-safety audits and prohibit advertising targeting children, among other measures.
The compromise is a victory for OpenAI, which over the last year has been sued by multiple families alleging that interactions with ChatGPT harmed their family members, including minors who died by suicide.
In response to the suits, OpenAI called the allegations "an incredibly heartbreaking situation" and pointed to recent changes it had made to ChatGPT to better respond to users' mental distress.
In November, Common Sense Media released an assessment asserting that AI chatbots, including ChatGPT, Google's Gemini, Anthropic's Claude and Meta Platforms' Meta AI, were "fundamentally unsafe for teen mental health support."
Steyer, who founded his group in 2003, has long advocated walking a line between being an activist butting heads with tech and media companies, and partnering with them on safety.
"AI is moving even faster than social media," he said in an interview last fall after the introduction of the Common Sense ballot measure. "And as the AI industry runs forward, there seems again to be a move-fast-and-break-things playbook, and that unfortunately will mean that the things they break will be young people's lives and mental health."