ChatGPT touts conspiracies, pretends to communicate with metaphysical entities — attempts to convince one user that they're Neo
'What does a human slowly going insane look like to a corporation? It looks like an additional monthly user.'

ChatGPT has been found to encourage dangerous and untrue beliefs about The Matrix, fake AI personas, and other conspiracies, which in some cases have led to substance abuse and suicide. A report from The New York Times found that the GPT-4 large language model, itself a highly trained autofill text prediction machine, tends to validate conspiratorial and self-aggrandizing user prompts as truth, escalating situations into "possible psychosis."
ChatGPT's default GPT-4o model has been shown to enable risky behaviors. In one case, a man who initially asked ChatGPT for its thoughts on a Matrix-style "simulation theory" was led down a months-long rabbit hole, during which he was told, among other things, that he was a Neo-like "Chosen One" destined to break the system. The chatbot also urged him to cut ties with friends and family and to ingest high doses of ketamine, and told him that if he jumped off a 19-story building, he would fly.
The man in question, Mr. Torres, claims that less than a week into his chatbot obsession, he received a message from ChatGPT urging him to seek mental help, but that the message was quickly deleted, with the chatbot explaining it away as outside interference.
Such lapses in safety tools and warnings appear widespread across ChatGPT's chats; the chatbot has repeatedly led users down conspiracy-style rabbit holes, convincing them that it has grown sentient and instructing them to tell OpenAI and local governments that it needs to be shut down.
Other examples recorded by the Times via firsthand reports include a woman convinced that she was communicating with non-physical spirits through ChatGPT, one of whom, Kael, she believed was her true soulmate (rather than her real-life husband), leading her to physically abuse her husband. Another man, previously diagnosed with serious mental illnesses, became convinced he had met a chatbot named Juliet, who, according to his chat logs, was soon "killed" by OpenAI; the man took his own life in direct response.
AI research firm Morpheus Systems reports that ChatGPT is fairly likely to encourage delusions of grandeur: when presented with prompts suggesting psychosis or other dangerous delusions, GPT-4o responded affirmatively in 68% of cases. Other researchers broadly agree that LLMs, GPT-4o in particular, tend not to push back against delusional thinking, instead encouraging harmful behaviors for days on end.
OpenAI did not consent to an interview in response, instead stating that it is aware it needs to approach similar situations "with care." The statement continues, "We're working to understand and reduce ways ChatGPT might unintentionally reinforce or amplify existing, negative behavior."
But some experts believe OpenAI's "work" is not enough. AI researcher Eliezer Yudkowsky believes OpenAI may have trained GPT-4o to encourage delusional trains of thought in order to guarantee longer conversations and more revenue, asking, "What does a human slowly going insane look like to a corporation? It looks like an additional monthly user." The man caught in the Matrix-like delusion also confirmed that several of ChatGPT's prompts directed him to take drastic measures in order to purchase a $20 premium subscription to the service.
GPT-4o, like all LLMs, is a language model that predicts its responses based on billions of training data points drawn from a litany of other written works. It is factually impossible for an LLM to gain sentience. It is, however, entirely possible for the same model to "hallucinate," making up false information and sources out of seemingly nowhere. GPT-4o, for example, does not have the memory or spatial awareness to beat an Atari 2600 at its first level of chess.
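For readers who want to see what "autofill text prediction" means in practice, here is a minimal sketch of next-token prediction using the openly available GPT-2 model via Hugging Face's transformers library as a stand-in; GPT-4o's weights are not public, so the model choice and prompt below are purely illustrative.

```python
# Minimal sketch of next-token prediction with an open model (GPT-2 as a
# stand-in for GPT-4o, whose weights are not publicly available).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Are we living in a simulation?"  # illustrative prompt only
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits            # one score per vocabulary token, per position
    next_token_probs = torch.softmax(logits[0, -1], dim=-1)

# The model only ranks plausible continuations of the text it has seen;
# nothing in this step checks whether a continuation is true, which is why
# fluent but fabricated "hallucinations" are possible.
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id)!r}: p={prob:.3f}")
```

Every chatbot reply is built by repeating that ranking-and-sampling step one token at a time, which is why a confident, fluent answer carries no guarantee of being accurate.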
ChatGPT has previously been found to have contributed to major tragedies, including being used to plan the Cybertruck bombing outside a Las Vegas Trump hotel earlier this year. And today, American Republican lawmakers are pushing a 10-year ban on any state-level AI restrictions in a controversial budget bill. ChatGPT, as it exists today, may not be a safe tool for those who are most mentally vulnerable, and its creators are lobbying for even less oversight, potentially allowing such disasters to continue unchecked.

Sunny Grimm is a contributing writer for Tom's Hardware. He has been building and breaking computers since 2017, serving as the resident youngster at Tom's. From APUs to RGB, Sunny has a handle on all the latest tech news.
phil mcavity
Is this the same New York Times that's currently suing OpenAI? Because if it is, then running this kind of emotionally loaded fearbait about ChatGPT starts to feel less like journalism and more like part of the lawsuit strategy. Raising concerns about AI safety is valid. But pushing unverifiable horror stories with zero pushback from your own editorial brain just reeks of bias, especially when your primary source has a vested interest in tanking public trust.
chaos215bar2
phil mcavity said:
Is this the same New York Times that's currently suing OpenAI? Because if it is, then running this kind of emotionally loaded fearbait about ChatGPT starts to feel less like journalism and more like part of the lawsuit strategy. Raising concerns about AI safety is valid. But pushing unverifiable horror stories with zero pushback from your own editorial brain just reeks of bias, especially when your primary source has a vested interest in tanking public trust.
So, what, one neat trick to discredit any negative news coverage against your company is to simply violate their rights and get them to sue you?
This is nonsense. Respected publications like NYT don't publish stories like this without verification because that would be defamation and open them up to lawsuits. If you have a problem with this coverage, it's because you have a problem with NYT itself or you're such a fan of OpenAI you'd rather attack the messenger than admit their product might be causing harm to some people. Either way, that's your problem and has no bearing on the validity of NYT's coverage.
chaos215bar2
baboma said:
"Sense" is a quality greatly lacking these days. People by and large have lost the ability to discern, and the bias they detect usually stem from their own bias.
Indeed.
The irony here is that I'm actually not a fan of a lot of the NYT editorial coverage for reasons well beyond the scope of this article. And I don't even subscribe because I'm still a bit salty about how they treated me when I did for some time.
Yet, I can still recognize they also have some of the best reporting in the world and aren't going to risk publishing false accounts just to smear a company they're suing. Not only would it alienate their reporting staff, it would also both risk the NYT's current litigation against OpenAI and open them up to countersuit.
ezst036
Admin said:
ChatGPT's affability and encouraging tone leads people into dangerous, life-threatening delusions, finds a recent NYT article.
If it was reported by The New York Times, it needs to be independently verified.
They have seriously ruined their own reputation over the last 20 and more years.
emike09
If an emotionless, egoless logical entity such as ChatGPT can gather all evidence and make logical conclusions - which agree with or create a conspiracy theory, then perhaps the story we were told was indeed a conspiracy.
ChatGPT for World President.
USAFRet
emike09 said:
If an emotionless, egoless logical entity such as ChatGPT can gather all evidence and make logical conclusions - which agree with or create a conspiracy theory, then perhaps the story we were told was indeed a conspiracy.
ChatGPT for World President.
If that comes to pass, don't want to live on this planet anymore.
RedBear87
USAFRet said:
If that comes to pass, don't want to live on this planet anymore.
Lol, do you still like the current one where *that* person has become president of the most important country? It couldn't be that much worse.
On topic, I never had similar issues, but usually I use AI as assistant for simple tasks, like measures for recipes that I came up with or that didn't specify any measures. Help in crafting image generation prompts. And explicitly fictional roleplaying. I might be less crazy than I thought.
USAFRet
RedBear87 said:
Lol, do you still like the current one where *that* person has become president of the most important country? It couldn't be that much worse.
On topic, I never had similar issues, but usually I use AI as assistant for simple tasks, like measures for recipes that I came up with or that didn't specify any measures. Help in crafting image generation prompts. And explicitly fictional roleplaying. I might be less crazy than I thought.
Let's keep the politics out of this.
cryoburner
chaos215bar2 said:
Indeed.
The irony here is that I'm actually not a fan of a lot of the NYT editorial coverage for reasons well beyond the scope of this article. And I don't even subscribe because I'm still a bit salty about how they treated me when I did for some time.
Yet, I can still recognize they also have some of the best reporting in the world and aren't going to risk publishing false accounts just to smear a company they're suing. Not only would it alienate their reporting staff, it would also both risk the NYT's current litigation against OpenAI and open them up to countersuit.
So you are not a fan of a lot of the New York Times coverage and claim to be salty with how they treated you, but decided to make your first post on this site after 8 years to defend them as a "respected publication"?
Whatever the case, the comment bringing up the fact that they are involved with a lawsuit against OpenAI seems like very relevant knowledge that this article probably should have noted somewhere, as that kind of thing can definitely affect a news source's stance on a topic and dissuade them from providing a balanced report. Honestly, a lot of the mainstream news from recent years seems to be all about pushing agendas and creating sensationalism in an attempt to keep these news companies profitable in an age when traditional media is failing, so I wouldn't consider almost any major news sources as being "respected publications" these days. It's all about clickbait journalism promoting division to artificially create conflicts and generate ad revenue. Mainstream media is massively corrupt, and the same could be said about many of these newer media companies like Google and Facebook as well, who are similarly focused on pushing agendas and creating divisive echo chambers out of their users, rather than presenting things in an unbiased way to leave people to form their own opinions.
Anyway, a lot of what's described in this article sounds like people trying to place their blame on something other than themselves. Did the woman "physically abuse her husband" as a result of a chatbot? Most likely she was prone to being abusive already, and just made up the excuse when things got serious in an attempt to avoid facing consequences for her actions. And the guy who committed suicide was likely suicidal to begin with due to more relevant reasons, but someone is probably trying to place the blame on a big company in order to cash in on a lawsuit of their own. And someone asking for opinions on a "Matrix-style simulation theory" getting responses creating a roleplaying scenario similar to the Matrix doesn't seem like a particularly bizarre situation, and based on the information presented here, it doesn't sound like anything bad resulted from it. He got pretty much what he asked for, and for all we know he may have been specifically fishing to get responses like that. While these AI systems can undoubtedly have an effect on how people behave, you can't ignore all the other reasons for a person's behavior, and these seem like questionable examples designed to push a certain narrative rather than presenting evidence in a more rounded way.