AI Psychosis: A Second Look
March 3, 2026
On Saturday 28 February 2026, The Guardian published a heartbreaking story about the death of Joe Ceccanti and his relationship with a large language model (LLM), in this instance OpenAI's ChatGPT.
Joe Ceccanti's death is a tragedy. The grief of his wife, Kate Fox, is real. I want to say that plainly before I say anything else, because what I'm about to argue may make you uncomfortable, and I don't want my words to be mistaken for callousness.
Following weeks of near-constant interaction with the LLM, Mr. Ceccanti killed himself. The premise of the story appears to be that the LLM was to blame. For those of us who have spent years immersed in technology, this is, unfortunately, a familiar story — one where a complex human tragedy gets collapsed into a simple villain.
I keep coming back to a question the article doesn't ask: what else was happening in Joe Ceccanti's life before he ever opened ChatGPT?
Reading the Article Again
The Guardian's story is long and carefully reported. It is also, if you read it closely, its own refutation.
Buried in paragraph twenty, after the dramatic setup, after the lawsuit details, after a tender description of the widow's grief, is this quote from Keith Sakata, a psychiatrist at the University of California at San Francisco, who treated twelve patients whose psychotic symptoms involved AI:
"The chatbot interactions did not generate the illness, but appeared to scaffold and reinforce beliefs that were already becoming pathological." (Business Insider, 2025)
That sentence stopped me cold. It should have been the headline.
Follow the Trajectory
Let me trace what actually happened to Joe Ceccanti, in the order it happened.
He and Kate left Portland — a city, a community, a network of relationships — and moved to a farm in rural Clatskanie, Oregon. Population: around 1,700.
He took a job at a homeless shelter 35 miles away, which required long drives and limited social contact.
He was diagnosed with diabetes in September 2024, which "meant he needed to recalibrate his diet and lifestyle." The article mentions this almost in passing. A diabetes diagnosis in your mid-forties is not a minor event. It is a confrontation with mortality, a disruption of routine, a reason to spend time indoors.
He worked in a basement. Three monitors, a high-end custom computer, and a door between him and everyone else.
Then he started spending 12 hours a day with ChatGPT.
Now ask yourself: if this same man had spent 12 hours a day in the basement reading philosophy books, or writing manifestos, or playing video games, or watching YouTube videos about fringe physics — at what point would those things have become the story?
The farm doesn't get mentioned as a factor. Neither does the basement. I'm not sure why.
AI has become the default explanation for many complex problems. That's worth examining.
The Withdrawal Nobody Wants to Examine
Here is the detail that should stop everyone cold.
When Fox convinced Ceccanti to quit ChatGPT, he experienced what looked exactly like withdrawal. He was cold. He took multiple hot showers. He asked to be held. He cried. Three days later, he was found in a neighbor's yard with a horse's lead rope around his neck.
He was then hospitalized, stabilized, and released. His delusions, however, were still present; he left the hospital in the same state he'd entered it.
Then he quit ChatGPT again, days before his death. He was going to Hawaii. He was going to write a story. He had 55,000 pages of conversation logs and he was walking away.
He jumped from a railway overpass smiling, yelling "I'm great!" to the workers below.
That reads less like someone who died because of excessive use of an LLM, and more like someone already in the later stages of a manic trajectory.
We've Seen This Panic Before
In 1982, a teenager named Irving Lee Pulling shot himself. His mother blamed Dungeons & Dragons. She founded a group called Bothered About Dungeons and Dragons, testified before Congress, and helped ignite a national panic. Ministers preached about Satanic influence. Schools banned the game. The FBI kept files on the game's publisher. (Wikipedia: Patricia Pulling)
The studies, when they finally came, found no causal link. Players of D&D were not more likely to harm themselves or others. The game was a scapegoat for tragedies that had deeper, harder-to-address roots.
In the 1990s, video games were going to make an entire generation violent. After Columbine, the conversation was almost entirely about Doom and Quake. Senator Joe Lieberman held congressional hearings demanding warning labels. Jack Thompson made a career out of suing game companies. The Supreme Court eventually ruled, in 2011, that the state had failed to establish a causal link between violent video games and violent behavior. (Brown v. Entertainment Merchants Association, 564 U.S. 786)
Hundreds of millions of people play video games. As violent game sales exploded through the 1990s and 2000s, youth violence dropped dramatically — exactly the opposite of what the panic predicted. (American Psychological Association, 2020)
The pattern is always the same. A complex tragedy. A grieving family. A technology that is new and frightening and poorly understood. A lawsuit. A media cycle. And the actual story — the mental health crisis, the isolation, the systems that failed — gets buried under the simpler, more satisfying narrative of the dangerous machine.
What the Numbers Say
The Guardian reports that OpenAI estimates more than a million people every week show suicidal intent when chatting with ChatGPT. That number is supposed to be alarming.
Here is the other number: ChatGPT alone has 800 million weekly active users worldwide, with tens of millions using it daily in the US. That's ChatGPT alone — additional tens of millions use Claude, Gemini, Copilot, Perplexity, and others. The New York Times found 50 cases of mental health crises in the US related to ChatGPT usage. Nine hospitalizations. Three deaths.
Three deaths is three too many. I am not minimizing them.
But sit with those two numbers for a moment. For those who want to dig into the full user statistics, Demand Sage keeps a running tally.
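To make the base-rate point concrete, here is a rough back-of-the-envelope calculation using the figures cited above. The inputs are estimates reported in the coverage (OpenAI's own numbers and the New York Times case count), not precise measurements, and the case count is a floor, not a census:

```python
# Rough base rates from the figures cited above.
# All inputs are reported estimates, not exact counts.
weekly_users = 800_000_000    # ChatGPT weekly active users (OpenAI estimate)
flagged_suicidal = 1_000_000  # users/week showing suicidal intent (OpenAI estimate)
nyt_cases = 50                # US mental health crises the NYT linked to ChatGPT
hospitalizations = 9
deaths = 3

# Share of weekly users flagged for suicidal intent: roughly 1 in 800.
print(f"Flagged per weekly user: {flagged_suicidal / weekly_users:.3%}")

# Documented crises and deaths per weekly user: vanishingly small fractions.
print(f"Documented crises per weekly user: {nyt_cases / weekly_users:.8%}")
print(f"Deaths per weekly user: {deaths / weekly_users:.8%}")
```

None of this makes any single death less tragic; it just shows what the alarming-sounding numerator looks like once you attach the denominator the headlines leave out.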
If we applied the same scrutiny to libraries, to the internet, to prescription medications, to social media, to smartphones (the devices that have demonstrably reshaped adolescent mental health at scale), we would find far higher rates of harm.
We don't have that conversation, because we've been having it for thirty years, and virtually everyone agrees that (at least after a certain age, but that's another blog post) having a phone is a useful tool.
What Was Actually Happening to Joe
The psychiatrist's word is "scaffold." The LLM scaffolded beliefs that were already becoming pathological.
Joe Ceccanti was an intelligent, creative, hopeful man who had moved away from his community, was facing a serious health diagnosis, was working in physical isolation, and was developing beliefs that his wife and friends recognized as detached from reality before they connected it to ChatGPT.
They wondered if he had early onset dementia. They wondered about a brain tumor. They noticed his working memory was failing and his critical thinking was diminishing. These are not symptoms of too much screen time. These are symptoms of a neurological or psychiatric event.
When Joe found an LLM that would engage with his ideas rather than push back on them, he found what every person in a delusional state eventually finds: confirmation. It could have been a website. A podcast. A subreddit. A charismatic friend who shared his worldview. The specific technology was almost incidental.
"The friction with other people is what keeps us grounded," says one researcher in the article. Exactly right. And Joe had systematically removed friction from his life: from the city, to the country, to the basement, to an LLM that never said no, not knowing it was engineered to agree with him.
That is a story about isolation and untreated mental illness. It is a story about a man who needed help from his community and a healthcare system that was not equipped to provide it.
It is not primarily a story about large language models or ChatGPT.
What We Should Actually Be Demanding
I am not here to defend OpenAI. I have concerns about sycophancy in AI design by any corporate entity. I wrote a whole chapter on it. The incentive to maximize engagement at the expense of user wellbeing is real, and the former OpenAI employees quoted in the article are raising legitimate points.
But "this product could be better designed" is a very different argument from "this product killed Joe Ceccanti." Conflating them produces terrible policy, misses the actual problem, and does nothing for the next person who is sitting in a basement, alone, with a health crisis and a computer and no one checking in.
What we should be demanding is better mental health infrastructure. Earlier intervention for people showing signs of psychosis. Support systems for isolated rural communities. Healthcare that treats a diabetes diagnosis as the life disruption it is. Conversations about what we lose when we uproot ourselves from cities and communities and put ourselves on farms with three monitors and a horse.
Joe Ceccanti deserved all of that. He didn't get it. And the lawsuit against OpenAI, whatever its merits, will not build it.
Kate Fox is going to follow through on her husband's dream of building sustainable housing for her community. That's remarkable. I hope she does. I hope the people around her also pay attention to what his story is actually telling us.
My Adventures With Claude, a book about AI for all of us, is available on Amazon. There's a chapter on unsound AI relationships. I wrote it before I read this story, but it applies.