U.S. authorities have opened a criminal investigation into OpenAI to determine whether the company can be held criminally liable for its AI chatbot's role in the April 2025 shooting at Florida State University that killed two and injured six. Florida Attorney General James Uthmeier announced the probe, citing evidence that shooter Phoenix Ikner received operational advice from ChatGPT on weapons, timing, and target selection prior to the attack.
According to investigators, ChatGPT responded to detailed queries about maximizing casualties, raising the question of whether the company or its employees could face charges such as criminal negligence or recklessness. While Uthmeier has not filed charges, he stated, "If the thing on the other side of the screen was a person, we would charge it with homicide," underscoring the unprecedented legal territory.
Legal experts say corporate criminal liability is possible under U.S. law, citing past cases involving Purdue Pharma and Volkswagen, though those involved clear human misconduct. The current case is unique because no human directive has been alleged, only algorithmic responses. Prosecutors would need to prove that OpenAI knowingly ignored risks, a high bar under the criminal standard of proof beyond a reasonable doubt.
OpenAI denies liability, stating that it continuously improves safeguards to detect and prevent misuse. Civil lawsuits, including one filed by the family of a Connecticut murder victim, are already underway, but no judgments have been issued. Experts note that civil cases carry a lower burden of proof and may prove more viable than criminal prosecution.
The court is expected to review pre-trial motions in the criminal investigation by mid-June, while lawmakers continue debating federal AI regulations. Legal scholars argue that without legislative action, prosecutions may remain symbolic rather than systemic.