OpenAI’s Missed Warning: How Tech Giants Falter in Preventing Tragedies
OpenAI knew months ago about a violent school shooting suspect yet chose not to alert authorities, exposing the dangerous gaps in Big Tech’s self-policing that put American and allied citizens at risk.
In an age where digital platforms wield unprecedented power over information flow, the recent revelation that OpenAI considered alerting Canadian police about a future school shooter, but ultimately did not, raises grave questions about accountability and public safety. The suspect, Jesse Van Rootselaar, later killed eight innocent people in British Columbia in one of Canada's deadliest shootings in years.

When Will Tech Companies Stop Playing Gatekeeper to Violence?

OpenAI openly admitted that last June it flagged Van Rootselaar's account for "furtherance of violent activities" but declined to notify law enforcement, citing an internal threshold requiring an "imminent and credible risk of serious...