
Flagged but ignored: the Tumbler Ridge case exposes Canada’s AI governance gaps
Eight people were killed in the Tumbler Ridge shooting. Months earlier, OpenAI’s automated review system had flagged the shooter’s ChatGPT account for violent discussions; OpenAI banned the account but did not refer the case to police because it fell below the company’s referral threshold at the time. The incident highlights a broader vacuum in Canadian AI governance: no binding national framework requires that flagged AI interactions be referred to authorities, no independent body exists to triage such reports, and privacy laws are ill-suited to probabilistic threat indicators. With Bill C-27 (the Artificial Intelligence and Data Act) and Bill C-63 (the Online Harms Act) stalled, Canada relies on voluntary codes and faces legal ambiguity about when companies may disclose user data. The piece calls for a binding, multidisciplinary referral framework, an independent digital safety commission, modernized privacy rules, and renewed international AI-regulation efforts to prevent future tragedies.
