Salem Radio Network News Wednesday, April 29, 2026


Families of Canadian mass shooting victims sue OpenAI, CEO Altman in US court


By Ryan Patrick Jones and Diana Novak Jones

April 29 (Reuters) – Family members of victims of one of Canada’s deadliest mass shootings sued OpenAI and CEO Sam Altman in U.S. court on Wednesday, alleging the company knew eight months before the attack that the shooter was planning it on ChatGPT but did not warn police. 

Seven lawsuits, filed in federal court in San Francisco, accuse OpenAI leaders of not alerting police because it would have exposed the volume of violence-related conversations on ChatGPT and potentially jeopardized the company’s path to a nearly $1 trillion initial public offering.

The February shooting in Tumbler Ridge, British Columbia, killed nine people, many of them children.

An OpenAI spokesperson called the shooting “a tragedy” and said the company has a zero-tolerance policy for using its tools to assist in committing violence.

The spokesperson said the company has strengthened ChatGPT safeguards through better responses to signs of distress, improved connections to mental health support, stronger threat assessment and escalation, and enhanced detection of repeat offenders.

The cases are part of a growing wave of lawsuits accusing artificial intelligence companies of failing to prevent chatbot interactions that plaintiffs say contribute to self-harm, mental illness and violence. They appear to be the first in the U.S. to allege that ChatGPT played a role in facilitating a mass shooting.

Jay Edelson, who is representing the plaintiffs in the U.S., said he plans to file another two dozen lawsuits in the coming weeks against the company on behalf of other people affected by the shooting.

LAWSUITS CLAIM OPENAI SAFETY TEAM OVERRULED

Jesse Van Rootselaar, whose interactions with ChatGPT are at the center of the lawsuits, shot her mother and stepbrother at home before killing an educational assistant and five students aged 12 to 13 at her former school on February 10, according to police. Van Rootselaar, who was 18, then died by suicide.

The plaintiffs include the husband of the educational assistant who was killed, the parents of a slain 13-year-old boy and the family of a 12-year-old girl who survived after being shot three times but remains in intensive care with severe brain injuries.

According to one of the complaints, OpenAI’s automated systems in June 2025 flagged ChatGPT conversations in which the shooter described gun violence scenarios. 

Safety team members recommended contacting the police after concluding she posed a credible and imminent threat of harm, said the lawsuit, which cites a February Wall Street Journal article about the company’s internal discussions.

But Altman and other OpenAI leadership overruled the safety team and police were never called, the lawsuit alleges. The shooter’s account was deactivated, but she was able to get a new account and continue using the platform to plan her attack, the lawsuit claims. 

Following publication of the article, the company said the account was flagged by systems that identify “misuses of our models in furtherance of violent activities,” but the issues did not meet its internal criteria for reporting to law enforcement. 

Last week, a Tumbler Ridge newspaper published an open letter in which Altman said he was “deeply sorry” the account was not flagged to law enforcement.

In a blog post published on Tuesday, OpenAI said it trains its models to refuse requests that could “meaningfully enable violence,” and notifies law enforcement when conversations suggest “an imminent and credible risk of harm to others,” with mental health experts helping assess borderline cases.

The lawsuits seek unspecified damages and a court order requiring OpenAI to overhaul its safety practices, including mandatory law enforcement referral protocols. 

Vancouver-based law firm Rice Parsons Leoni & Elliott, which represents the plaintiffs in Canada, said it chose to pursue the cases in California in part because of limits on damages for pain and suffering in Canada.

OPENAI FACES MULTIPLE SUITS

The lawsuits follow multiple others filed against OpenAI in U.S. state and federal courts in recent months over claims that ChatGPT facilitated harmful behavior, suicide, and, in at least one case, a murder-suicide.

While still in early phases, the lawsuits will force courts to grapple with what role an AI platform can play in promoting violence and whether the company can be held liable for its actions or the actions of its users.

OpenAI has denied the claims in the lawsuits, arguing in the murder-suicide case that the perpetrator had a long history of mental illness.

Florida Attorney General James Uthmeier announced this month a criminal investigation into ChatGPT’s role in a 2025 shooting at Florida State University. 

Evan Solomon, the Canadian minister in charge of AI, said after the lawsuits were filed that he is examining options for regulating AI chatbots and has been working with OpenAI to review its safety protocols.

(Reporting by Ryan Patrick Jones and Diana Novak Jones, Editing by Alexia Garamfalv, Lincoln Feast, Rod Nickel)
