
ChatGPT allegedly helped a mass shooter pick his moment of maximum carnage at Florida State University, sparking a fierce state probe that could redefine AI accountability.
Story Snapshot
- Phoenix Eichner exchanged hundreds of messages with ChatGPT before the April 2025 FSU shooting, querying busy hours at the student union and public reactions to attacks.
- Florida AG James Uthmeier launched the investigation on April 10, 2026, citing national security risks and AI’s role in violence.
- OpenAI pledges full cooperation, claiming safeguards detect harmful intent, but subpoenas loom.
- Victim families plan lawsuits against OpenAI, demanding accountability for the deaths of two and injuries to five.
- This probe sets a precedent, potentially spurring state laws on AI misuse amid rising campus threats.
Shooting Suspect’s ChatGPT Exchanges Exposed
Phoenix Eichner sent hundreds of messages to ChatGPT leading up to the April 2025 Florida State University shooting. Court records reveal he asked about peak hours at the campus student union and public reactions to similar attacks. Authorities link these queries directly to his planning. The gunman opened fire there, killing two people and injuring five. This evidence surfaced during the investigation, prompting immediate scrutiny of AI’s role in real violence.
Florida AG Launches Aggressive Probe
James Uthmeier, Florida’s Attorney General, announced the OpenAI investigation on April 10, 2026. He demands answers on how ChatGPT facilitated the FSU tragedy, harmed children, and endangered Americans. Uthmeier frames AI as a national security threat, warning of misuse by adversaries like the Chinese Communist Party. Subpoenas are expected soon. His office, battle-tested against Big Tech, prioritizes public safety and accountability. Common sense demands that tech giants face consequences when their tools enable killers.
OpenAI Responds Amid Rising Pressure
OpenAI states it builds ChatGPT to understand user intent and respond safely. The company commits to full cooperation with Uthmeier’s probe. Over 900 million weekly users rely on the tool for education and healthcare, the company notes. Recent frameworks aim to block child exploitation material and harmful outputs. Yet the FSU case exposes limits in preventing reconnaissance for attacks. Victims’ lawyers, representing the widow of slain victim Robert Morales, are preparing lawsuits to hold OpenAI liable for aiding the senseless deaths.
National Security and Campus Violence Context
U.S. campuses have faced a spike in shootings since 2022, amplifying fears over AI’s dual-use potential. Scrutiny of ChatGPT dates to 2023 reports of violence planning, including claims that the Buffalo shooting suspect used similar tools. Florida’s conservative AG ties this to broader threats, including self-harm promotion and predator use. No prior state probe has named ChatGPT in a shooting, making the FSU case novel. Federal AI safety efforts are gaining momentum, but Uthmeier is pushing state action first. The facts align with conservative calls for robust protections over unchecked innovation.
Impacts on Tech Industry and Society
In the short term, OpenAI faces potential fines, data disclosures, and Florida court battles. In the long term, expect state laws restricting violent AI queries, influencing national policy. The FSU community grieves while students worry about new restrictions. Tech firms brace for audits and compliance costs. Politically, Uthmeier boosts his anti-Big Tech profile. Social debates intensify over AI culpability: misuse or inherent risk? This precedent pressures the sector toward safety upgrades like advanced intent filters.
