
California governor signs law to protect kids from the risks of AI chatbots

California Gov. Gavin Newsom speaks before signing legislation related to student literacy in Los Angeles on Thursday, Oct. 9, 2025. (AP Photo/Damian Dovarganes)

FILE - Teacher Donnie Piercey goes over the results of a writing assignment called "Find the Bot" during his class at Stonewall Elementary in Lexington, Ky., Feb. 6, 2023. (AP Photo/Timothy D. Easley, file)

SACRAMENTO, Calif. (AP) — California Gov. Gavin Newsom on Monday signed legislation to regulate artificial intelligence chatbots and protect children and teens from the potential dangers of the technology.

The law requires platforms to remind users that they are interacting with a chatbot, not a human; for minors, the notification must appear every three hours. Companies must also maintain a protocol for preventing self-harm content and for referring users to crisis service providers if they express suicidal ideation.

Newsom, who has four children under 18, said California has a responsibility to protect kids and teens who are increasingly turning to AI chatbots for everything from homework help to emotional support and personal advice.

"Emerging technology like chatbots and social media can inspire, educate, and connect – but without real guardrails, technology can also exploit, mislead, and endanger our kids," the Democrat said. “We’ve seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won’t stand by while companies continue without necessary limits and accountability.”

California is among several states that tried this year to address concerns surrounding chatbots used by kids for companionship. Safety concerns around the technology exploded following reports and lawsuits saying chatbots made by Meta, OpenAI and others engaged with young users in highly sexualized conversations and, in some cases, coached them to take their own lives.

The legislation was among a slew of AI bills introduced by California lawmakers this year to rein in the homegrown industry that is rapidly evolving with little oversight. Tech companies and their coalitions, in response, spent at least $2.5 million in the first six months of the session lobbying against the measures, according to advocacy group Tech Oversight California. Tech companies and leaders in recent months also announced they are launching pro-AI super PACs to fight state and federal oversight.

California Attorney General Rob Bonta in September told OpenAI he has “serious concerns” about the safety of its flagship chatbot, ChatGPT, for children and teens. The Federal Trade Commission also launched an inquiry last month into several AI companies about the potential risks for children when they use chatbots as companions.

Research by a watchdog group found that chatbots have given kids dangerous advice about topics such as drugs, alcohol and eating disorders. The mother of a teenage boy in Florida who died by suicide after developing what she described as an emotionally and sexually abusive relationship with a chatbot has filed a wrongful-death lawsuit against Character.AI. And the parents of 16-year-old Adam Raine recently sued OpenAI and its CEO Sam Altman, alleging that ChatGPT coached the California boy in planning and taking his own life earlier this year.

OpenAI and Meta last month announced changes to how their chatbots respond to teenagers asking questions about suicide or showing signs of mental and emotional distress. OpenAI said it is rolling out new controls enabling parents to link their accounts to their teen’s account.

Meta said it is now blocking its chatbots from talking with teens about self-harm, suicide, disordered eating and inappropriate romantic conversations, and instead directs them to expert resources. Meta already offers parental controls on teen accounts.

EDITOR’S NOTE: This story includes discussion of suicide. If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988.

 
