OpenAI apology: Sam Altman expresses remorse to Tumbler Ridge after flagged ChatGPT account was not reported
OpenAI CEO Sam Altman has issued a public apology to residents of Tumbler Ridge, acknowledging the company’s failure to alert law enforcement after it flagged a ChatGPT account linked to an alleged shooter. The apology follows media reports that the account was banned in June 2025 but was not referred to police before a mass shooting that left eight people dead. Altman said the company must do better and pledged cooperation with Canadian authorities as the community continues to grieve.
Altman apologizes to Tumbler Ridge residents
In a letter to the town, Sam Altman said he was "deeply sorry" that OpenAI did not notify police about the account that had been banned months earlier. He said he had spoken with Tumbler Ridge Mayor Darryl Krakowka and British Columbia Premier David Eby and that they agreed a public apology was necessary. Altman emphasized that an apology cannot undo the loss suffered by families and the community.
Flagged ChatGPT account was banned in June 2025
Company records show the ChatGPT account in question was banned in June 2025 after exchanges that described scenarios involving gun violence. The account was associated by police with 18-year-old Jesse Van Rootselaar, later identified as the suspected shooter in the attack that killed eight people. OpenAI has said its moderation systems detected problematic content and removed the account at that time.
Debate inside OpenAI over notifying authorities
Staff within OpenAI reportedly debated whether to alert law enforcement when the account was flagged, but ultimately the company did not make a direct referral before the shooting. After the attack, OpenAI representatives reached out to Canadian authorities and have since described the decision not to notify police as a grave mistake. Officials within the company acknowledged the internal debate underscored unclear thresholds for escalation.
Company pledges to change safety protocols
OpenAI has announced plans to revise its safety protocols, including clearer criteria for when accounts are referred to authorities and direct points of contact with Canadian law enforcement. The company said it will refine its processes to better identify when potential threats meet the threshold for notification. Altman added that OpenAI will work with multiple levels of government to help prevent similar failures in the future.
Local and provincial officials call for further action
Tumbler Ridge leaders and provincial officials responded cautiously to the apology, saying regret alone is insufficient for the scale of the tragedy. Premier David Eby described the apology as necessary but not enough, calling for concrete measures to protect communities and support victims’ families. Local officials and residents continue to press for clarity about the timeline of events and the steps companies must take when content raises safety concerns.
Canadian regulators consider new AI rules
In the wake of the incident and the OpenAI apology, federal and provincial policymakers have signaled they are reviewing potential regulations for artificial intelligence. Government officials have discussed tighter oversight of how tech companies handle threats detected by automated systems and when they must notify law enforcement. Regulators are weighing proposals that could mandate clearer escalation protocols, reporting requirements, and points of contact for cross-border cooperation.
If you are in a crisis or having thoughts of suicide, call or text 988 to reach the 988 Suicide and Crisis Lifeline.