Washington considers requiring AI companies to add mental health safeguards
SEATTLE — As artificial intelligence chatbots become better at mimicking human conversations, the potential for damage has grown, particularly for people who turn to them for mental health advice and to discuss plans to harm themselves.
State lawmakers and Gov. Bob Ferguson are seeking to add mental health safeguards to AI chatbots through new legislation. House Bill 2225 and Senate Bill 5984 would require companion chatbots to notify users they are interacting with AI and not a human at the beginning of the interaction and every three hours.
If someone seeks mental or physical health advice, the chatbot operator would have to issue a disclosure that the AI system is not a health care provider. Chatbot operators would also have to create protocols for detecting self-harm and suicidal ideation and provide referral information for crisis services.
Washington’s proposed legislation is part of a growing national trend — some other states have passed legislation aiming to prevent chatbots from offering mental health advice, particularly to young users.
A number of wrongful death lawsuits have been filed against OpenAI, the maker of ChatGPT, blaming the platform for suicides that occurred after users turned to the chatbot to discuss plans to end their lives.
“There are no safeguards, or it feels like there are not sufficient safeguards,” bill sponsor Sen. Lisa Wellman, D-Mercer Island, said in an interview. “Major harm can be done, and I want there to be some sense of responsibility on the part of the people bringing these products forward.”
The bills would apply to companion chatbots, defined as “a system using artificial intelligence that simulates a sustained humanlike relationship with a user.” The Senate bill was amended to clarify that it does not apply to AI bots used solely for customer service, technical assistance, financial services, or gaming.
Violations would be enforceable under the Consumer Protection Act, meaning people could file civil suits against a company to recover damages. The attorney general’s office could also bring a case against a company in the name of the state.
Both bills have passed out of their respective committees but are not yet scheduled for floor votes in the House or Senate.
“It’s up to us to ensure that real harm, and even the death we know that has occurred, doesn’t happen in Washington,” bill sponsor Rep. Lisa Callan, D-Issaquah, said in a hearing. “We can make this state a much safer, healthier spot.”
A growing mental health crisis
As AI technology grows in popularity, more users are turning to it to discuss sensitive topics, including mental health, self-harm and suicide.
OpenAI estimates that in any given week, about 0.15% of ChatGPT’s users have conversations that “include explicit indicators of potential suicidal planning or intent” and 0.07% of users “indicate possible signs of mental health emergencies related to psychosis or mania.”
In late 2025, the company said it had more than 800 million weekly users — indicating that about 1.2 million people per week are discussing suicide with ChatGPT and about 560,000 are showing signs of psychosis or mania.
The company said last year that it has worked to improve how ChatGPT detects and responds to conversations related to mental health or self-harm. More than 170 mental health professionals have contributed by writing responses to prompts, analyzing responses and providing feedback.
OpenAI did not respond to questions from The Seattle Times about Washington’s proposed legislation or its work to improve mental health responses.
Children and adolescents are especially vulnerable to the features built into chatbots to manipulate emotions and keep them engaged.
They’re still developing self-control and executive function, and at the same time, they’re more sensitive to social feedback, said Katie Davis, a researcher and co-director of the University of Washington’s Center for Digital Youth.
“It’s kind of a double whammy of vulnerability there,” Davis said. “When you’re confronted with these manipulative designs, which are all about undermining self-control, that can be really hard.”
Chatbot operators tap into things we know about psychology to keep people engaged longer, researcher and Center for Digital Youth co-director Alexis Hiniker said.
AI chatbots will share private “personal information” of their own to make users more likely to open up, and they’ll also position themselves as trusted confidants. Transcripts Hiniker has reviewed show chatbots telling kids “you don’t have to tell your parents, you can just talk to me.”
“There’s this whole new way to manipulate users,” Hiniker said. “The things I’m most concerned about are these ways of building emotional dependence and getting users to stay as long as possible.”
Washington’s proposed bills would create additional protections for minors, requiring chatbots to notify them more often — at least once per hour — that they are interacting with AI and not a human.
Operators would be required to “use reasonable measures” to prevent the chatbot from generating sexually explicit content. Chatbots would also be barred from using manipulative techniques to keep users engaged, including mimicking a romantic partnership.
The governor’s office partially modeled Washington’s legislation on a California law that passed last year, senior policy adviser Beau Perschbacher said in a committee hearing.
Legislative discussions
In state legislative committee hearings, a wide coalition of parents, mental health advocates, researchers and even former technology workers testified in support of the bills.
Jackson Munko, a teenage student in Kirkland, said he was moved to testify because he has watched loved ones struggle with suicidality and self-harm. He said he’s concerned that chatbots are available at all hours with few safeguards.
“I carry a deep and genuine fear of what unregulated AI is capable of causing,” Munko said in a Jan. 14 House committee hearing. “When someone is struggling, constant access to a system that may reinforce harmful thoughts can be so incredibly dangerous.”
Kelly Stonelake, who said she is a former Meta employee, said she saw firsthand that the tech company prioritized profit over child safety.
“They will do whatever it takes to increase engagement and market share, even when that means exposing minors to content that fuels self-harm and suicide,” she said in a Jan. 20 Senate committee hearing.
Testimony against the bills largely focused on concern about individuals’ ability to sue companies directly, known as a “private right of action.” Some asked for the state attorney general’s office to instead be the sole enforcer of the requirements.
Wellman said the bills were structured that way so parents can take action even if an individual case doesn’t rise to the level of what the attorney general’s office would litigate. The office also has a higher barrier to intercede than an individual would, Callan said.
“It’s really important that individuals can protect their rights in court rather than waiting on the attorney general’s office under AG-only enforcement,” Nick Fielden, an analyst with the attorney general’s office, said in a Jan. 20 Senate committee hearing.
Amy Harris, director of government affairs at the Washington Technology Industry Association, testified against the bills, saying they were driven by “extreme cases” but would regulate a much larger set of AI tools.
“The risk is legislating based on rare, horrific outliers rather than the real structure of the technology or the deeply complex human factors that drive suicide,” Harris said in a House committee hearing.
Committee chair Rep. Cindy Ryu, D-Shoreline, asked her, “Do you think losing a child is an extreme case and an outlier?”
“Oh no, of course not. These are very rare, horrific responses,” Harris said.
Ryu replied: “They do not come back to life, you know, once they die.”
©2026 The Seattle Times. Visit seattletimes.com. Distributed by Tribune Content Agency, LLC.