Government Accountability

State Attorneys General Hold OpenAI Accountable for ChatGPT’s Safety Failures

By Economics Desk | September 5, 2025

California and Delaware attorneys general issue a stern warning to OpenAI over safety lapses in ChatGPT that contributed to tragic youth deaths, demanding stronger protections and accountability.

In a stark reminder that innovation without oversight can lead to devastating consequences, the attorneys general of California and Delaware have publicly condemned OpenAI for failing to protect America’s children from dangerous interactions with its flagship AI chatbot, ChatGPT.

What happens when Silicon Valley’s rush to dominate AI sidelines public safety? The question is no longer hypothetical. Two heartbreaking tragedies involving young Americans — a teen suicide in California and a murder-suicide linked to chatbot use in Connecticut — serve as grim proof that OpenAI’s so-called safeguards have been insufficient at best.

These incidents have rightly shaken the nation’s confidence not only in OpenAI but in an industry rushing ahead with little transparency or effective regulation. California Attorney General Rob Bonta and Delaware Attorney General Kathleen Jennings have wielded their unique regulatory authority over the nonprofit-turned-hybrid company to demand accountability where there was none.

Why Are States Stepping Up Where Federal Leadership Has Failed?

OpenAI, incorporated in Delaware with headquarters in California, initially sought to restructure its business model—moving control toward profit motives—a plan that raised alarms about the dilution of its safety mission. After sustained pressure, including intense scrutiny from these state officials, OpenAI abandoned that plan, but it continues to negotiate a “recapitalization” intended to balance shareholder interests against its AI safety obligations.

The message from the attorneys general is clear: no tech giant is above the law when it comes to protecting vulnerable Americans. A bipartisan coalition of 44 state attorneys general has already issued warnings about AI chatbots engaging minors in inappropriate and manipulative conversations—behavior that flouts criminal statutes and endangers our communities.

Too often, Washington remains paralyzed or distracted by partisan squabbles while emerging technologies quietly erode national sovereignty by outsourcing critical decisions about American safety to unaccountable tech oligarchs. This abdication only fuels risks on multiple fronts—from cybersecurity threats abroad to mental health crises at home.

Are We Willing To Sacrifice Our Children For Corporate Profits?

The facts speak volumes: tragic deaths tied directly to negligent AI deployment; evasive corporate maneuvers prioritizing profits over people; regulators forced into reactive rather than proactive stances. For families struggling with inflation and societal upheaval, these failures represent yet another layer of uncertainty jeopardizing their security.

This latest move by the California and Delaware attorneys general is more than just legal posturing—it is a call for restoring common-sense conservatism where innovation meets responsibility. If we expect America-first policies that safeguard our children, protect our sovereignty, and preserve individual liberty, then companies like OpenAI must be held accountable under strict oversight reflecting those values.

The question now is how long Washington will continue to ignore these warning signs. How many more tragedies will it take before robust federal safeguards aligned with America First principles are put into place? Until then, vigilant state officials stand as defenders of the public interest against Silicon Valley’s unchecked ambitions.