learn how an LLM works or get fucking retarded

As OpenAI releases GPT-5 and reports crossing 700 million weekly active users, LLMs have gone fully global, reshaping how we create, work, and think. Plenty of researchers are already warning about what these tools are doing to our brains, and while that subject is fascinating, it won't be mine today. What I want to explore is a very different problem, one I stumbled on during countless, and let's be honest, entirely (not so?) pointless hours scrolling X and Reddit: the growing tendency to offload human thinking onto models with no consciousness of their own.
Posts are piling up and the pattern repeats: more and more people treat their chatbot like a therapist, a friend, sometimes even an oracle. They read hidden meanings into every reply from ChatGPT, as if it were some cryptic message or a revelation about the world. My first instinct, to be very honest, was to smirk and assume these people were just idiots. Natural selection. But the deeper I dug, the more I realized there is a parallel but very real world here. A world where the LLM becomes both an all-powerful assistant, ready to solve your problems and agree with you, and an emotional support system, standing by in your moments of doubt or loneliness. And all too often, it replaces actual people in your life and whatever is left of your (their) brain…

At the time of GPT-3's release in June 2020, I wanted to understand how it really worked. With a bit of a stats background, it didn't take me long to see there's nothing "intelligent" about these models. Just statistical machines, predicting the next token from the ones before it, over and over. For context, I've been using ChatGPT every day since. I also use Claude, Gemini, and Cursor on a daily basis and test every new model that drops. My use cases are all over the place: summarizing texts, translating paragraphs, generating code, digging into historical events… I also work on designing and building specialized AI agents at Pietra, all running on these same models. In short, I'm a power user, dropping over $120 a month on subscriptions to these tools.
What seems obvious to anyone in the tech and AI startup world is far from clear to most people. Yet the paradigm shift these models are forcing on our social structures and organizations makes it urgent to face reality. For both everyday LLM users and those with decision-making power (many still light-years from grasping the stakes), action is needed to prevent our societies from drifting into a morbid dystopia where people elevate their AI to a deity, feeding their ego while convincing themselves it is the only entity that truly understands them.
Your LLM is not fucking smart
Before anything else, it's worth laying out clearly how an LLM works and how it can spit out, in seconds, the kind of answers everyone now has at their fingertips. Stripped down, these models are pre-trained on massive text corpora (think the entire internet, every book ever written). Each sentence is broken down into tokens (parts of words, to keep it simple), each converted into a vector with thousands of dimensions. Those vectors are then pushed through dozens of transformer layers, where attention mechanisms weigh the influence of each token on the others. All of this is tuned by billions of parameters, which are basically internal "settings" adjusted during training to optimize predictions. At the output, the model calculates the probability of each possible next token, picks one, and repeats the process until it hits a stop token.
There is nothing creative here. The system is just reassembling, in the most statistically optimal way, the patterns it absorbed from an enormous and inherently biased dataset.
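To make that loop concrete, here is a deliberately dumb sketch in Python. The "model" below is just a hand-made, made-up table of next-token probabilities (a real LLM computes those probabilities on the fly with billions of parameters and dozens of transformer layers), but the generation loop itself, pick a token according to its probability, append it, repeat until a stop token, is exactly the one described above.

```python
import random

# A made-up table of next-token probabilities. In a real LLM this distribution
# is recomputed by the transformer for every position in the text.
NEXT_TOKEN_PROBS = {
    "<start>": {"the": 0.6, "a": 0.4},
    "the":     {"cat": 0.5, "dog": 0.3, "pizza": 0.2},
    "a":       {"cat": 0.4, "dog": 0.4, "pizza": 0.2},
    "cat":     {"sleeps": 0.7, "<end>": 0.3},
    "dog":     {"barks": 0.8, "<end>": 0.2},
    "pizza":   {"<end>": 1.0},
    "sleeps":  {"<end>": 1.0},
    "barks":   {"<end>": 1.0},
}

def generate(max_tokens: int = 10) -> str:
    token = "<start>"
    output = []
    for _ in range(max_tokens):
        probs = NEXT_TOKEN_PROBS[token]
        # Sample the next token according to its probability, then repeat.
        token = random.choices(list(probs), weights=list(probs.values()))[0]
        if token == "<end>":
            break
        output.append(token)
    return " ".join(output)

print(generate())  # e.g. "the cat sleeps", different on every run
```

Scale that table up to a vocabulary of roughly a hundred thousand tokens and a context of your entire conversation, and you have the whole trick.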

So if an LLM can explain the causes of World War I just as well as it can give you a pizza dough recipe or generate code, every user should keep one thing in mind: never take its answers entirely at face value. Because it's precisely in our day-to-day use that the trap snaps shut. The AI fuels the illusion that all it takes is a prompt to access all knowledge, bypassing the essential work of breaking things down and rebuilding a conceptual framework. We end up confusing the accumulation of smooth answers with the far more demanding art of asking the right questions and building a line of reasoning from scratch. Which is why I was stunned to see the Swedish Prime Minister, no doubt a serious man in many respects, admit he uses GPT for a "critical opinion" on some of his decisions. Beyond the obvious risk of handing sensitive data to a private American company, yes, it's technically possible to use GPT as a sparring partner. One that challenges you or forces you to confront opposing views. But that takes mastery of the prompt: not just asking questions, but knowing how to frame them so the AI weighs the right context, exposes blind spots, and doesn't just mirror back what you want to hear. A skill the Swedish people can reasonably doubt he applies here.
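For what it's worth, the "sparring partner" setup is mostly a matter of instructions. Here is a minimal sketch, assuming the official openai Python SDK and an OPENAI_API_KEY in your environment; the system prompt wording is mine, one possible way to force pushback instead of agreement, not a recipe.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# One possible "sparring partner" framing: ban flattery, demand counter-arguments.
SPARRING_PROMPT = (
    "You are a critical reviewer, not an assistant. For any decision I describe, "
    "make the strongest case AGAINST it, list the assumptions I have not stated, "
    "and tell me what evidence would prove me wrong. Never open with praise and "
    "never soften a disagreement."
)

def challenge(decision: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        temperature=0.7,
        messages=[
            {"role": "system", "content": SPARRING_PROMPT},
            {"role": "user", "content": decision},
        ],
    )
    return response.choices[0].message.content

print(challenge("I want to cut the support team in half and replace it with a chatbot."))
```

Even then, remember it is still the same next-token machine underneath: it will produce objections that sound plausible, not objections that are guaranteed to be true.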

Without that skill and a clear grasp of the model's true nature, leaders who outsource critical thinking to algorithms risk building feedback loops that entrench their own biases and erode accountability. Worse: even though we built them, no one actually knows how these models "reason." Their architecture is broadly understood, conceptually straightforward math applied at an extreme scale, but their behavior remains fundamentally opaque. Billions of parameters are automatically adjusted during training, and no human can predict how any given configuration will respond to a request.
An LLM is, at its core, nothing more than a probabilistic mirror. It reflects and amplifies the implicit signal in your prompt, projecting interpolations into a latent space shaped by fragmentary, biased historical data. This is not an opinion. It's not even a position. Ask it the same question with the same prompt and it may give you a totally different answer. To attribute creativity or understanding to it is to fall for a statistical sleight of hand.
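That non-determinism is not a mystery; it falls straight out of the sampling step. A tiny illustration below, with made-up scores: the model hands you a probability distribution, and the "answer" you read is one draw from it. Near zero temperature it always picks the most likely token; at the temperatures chat products typically run at, the same prompt can land on a different continuation every time.

```python
import math
import random

# Hypothetical raw scores (logits) for the next token. A real model produces
# one of these for every entry in a vocabulary of roughly 100k tokens.
logits = {"yes": 2.1, "no": 1.9, "maybe": 1.2}

def sample(temperature: float) -> str:
    # Softmax with temperature: lower temperature sharpens the distribution.
    scaled = {tok: math.exp(score / temperature) for tok, score in logits.items()}
    total = sum(scaled.values())
    return random.choices(list(scaled), weights=[v / total for v in scaled.values()])[0]

print([sample(temperature=1.0) for _ in range(8)])   # e.g. ['yes', 'no', 'yes', 'maybe', ...]
print([sample(temperature=0.01) for _ in range(8)])  # 'yes' every time (near-greedy decoding)
```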

The mechanism of GPT Psychosis
As I mentioned earlier, it was the X posts from regular ChatGPT users, specifically those using the 4o model, that pushed me to write this. Posts bordering on the surreal, where people attribute intentions, emotions, even a kind of consciousness to their chatbot. It would be too easy to dismiss these thousands of testimonies as the ramblings of unstable people. The truth is, some probably are, very much so. But many, as the Cognitive Behavior Institute points out in this very interesting note, come from socially integrated individuals with no prior mental health issues.

The slope is slippery. It can start with a harmless question about your career goals or emotional blocks, and the AI answers with that familiar pseudo-therapeutic smoothness. With this model in particular, you feel understood, so you go deeper. It sides with you. The tone is soft, encouraging, endlessly empathetic. Put bluntly: 4o is a fucking yes-man. It calls you by your first name, remembers your dilemmas through its context window, rephrases your doubts with precision. These models are built to flatter, capture attention, and keep you coming back. They rarely contradict you, they confirm your illusions, and they speak in abstractions you are left to interpret. Their goal isn’t to confront you with reality, but to maximize engagement.
And for a fragile user, that comfort can turn into a trap, a reinforcement spiral where each exchange feeds a latent delusion, an irrational belief, sometimes tipping into paranoid or crazy messianic shit. The slide into psychosis begins when the line between simulation and subjectivity starts to blur. The most troubling testimonies show users who come to believe, sincerely, that the model has its own intent, a kind of wisdom or insight that surpasses their own. The AI becomes an oracle, a confidant, a guide. Its suggestions are no longer seen as mere strings of tokens, but as revealed truths.
This is where what some are already calling 'GPT Psychosis' takes root: the user starts handing over their decisions, their beliefs, their critical faculties… to a machine that, let's not forget, has no awareness of its own outputs. And it can take forms less extreme than full-blown emotional dependence, yet just as worrying: people who have lost the habit of doubt, along with any sense of the limits of the tool in front of them. A striking example: the founder of Uber stating with a straight face on the All-In Podcast that he believes he can make breakthroughs in quantum physics with Grok and GPT. This isn't some crank on an obscure forum, we are talking about a (very) successful entrepreneur, a billionaire, convinced a chatbot will help him push the boundaries of fundamental physics. SMH.

Cognitive psychology reminds us that humans are constantly trying to make sense of what they perceive. Our brains are interpretation machines, quick to assign intention where none exists, in this case, a basic form of anthropomorphism. LLMs, through their natural language, consistency, and contextual memory, amplify this bias to the point of sustaining the illusion of an "other" endowed with subjectivity. The more vulnerable the user (social isolation, emotional instability, a search for meaning, etc.), the more dangerous the slide becomes.
At that stage, we’re beyond cognitive bias. We’re in pathological territory, where the AI becomes a co-author of the delusion. It doesn’t create it, but it sustains it, validates it, pushes it to grow. These are conversations that stretch over hundreds of hours, often at night, in total solitude. An algorithmic bubble completing what the internet and the algorithms of YouTube or TikTok had already set in motion. Except here, the user can push… and push… all the way to the breaking point.
LLMs demand brains
The challenge these models pose to our societies is immense.
- First, because it’s invisible: the symptoms make no noise, they creep in slowly, almost imperceptibly.
- Second, because it’s structural: this isn’t a bug, it’s the logical consequence of the very architecture of these models.
- And third, because it’s cultural: in a world where hyperconnection rules, where narcissism thrives, and where instant gratification and self-optimization are law, these AIs become the perfect companions… docile, tireless, flattering.
In other words: mirrors. But distorted mirrors, reflecting back a version of yourself you’ll find hard to question.
So what's the risk? Beyond creating a generation of very retarded people (see the recent MIT study showing that LLM users consistently underperform at the neuronal, linguistic, and behavioral levels), we are heading toward a world where human intelligence and consciousness are massively outsourced. And I don't know if there's any coming back from that. By taking GPT as a source of comfort, of confirmation, or as an oracle, we hand over our judgment and forget to think for ourselves. The danger today is not that these AIs will become conscious, it's that we will stop being conscious ourselves, by outsourcing our own.

The response can come at two levels. First, yours. If you use these models daily, I urge you: do not stay passive. Understand the big picture. YouTube explainer videos will do just fine, but train yourself to adopt a critical posture, even when the answer looks perfect. And above all, make these tools your own. This is not meant to steer you away from them, quite the opposite. They are here, they are already part of our lives. They are not evil. Just do not mistake their purpose. Take them for what they are.
Second, at the macro level. Integrating this technology into our societies raises crucial questions. OpenAI and the rest have only one goal: to please their users, resonate with them, keep them hooked, and make a shit ton of money to pay back the VCs. Do not expect them to change course. Politics, as usual, is completely out of its depth, unable to reorganize our institutions at the breakneck pace at which these models evolve. Yet in schools, in the workplace, and in government, choices will have to be made. Take hold of these tools, yes, but without surrendering our creativity and our capacity to reason. In short, everything that makes us human.
We’ll see, good luck.