AI Sycophancy is Killing Our Social Muscles

January 1, 2026


When I first played Cyberpunk 2077, after the initial “wow” factor had worn off (it took a while), something else began to sink in. I noticed a creeping sense of engineered isolation in the world of Night City; a feeling that seemed a consequence of design, not by design. Let me explain.

In Night City, identity felt modular. The mass replacement of flesh with machine augmentation seemed to run in parallel with a discarding of elements of humanity itself. Citizens lived functionally isolated lives; while still technically contributing to the city’s output, they had come to see other humans as obstacles, assets, or outright threats rather than as people.

Architecturally, the city was built into mega-hives, yet interactions were filtered through vending machines and screens. Constant bombardment from advertisements and neon noise meant residents were never in silence, yet always alone. Friendships were mostly business arrangements, tainted by agendas and ulterior motives in all but a few instances.

Back to Reality

Today, loneliness is a multi-trillion-dollar market opportunity. Across the developed world, single-person households are at record highs, with 50% of prime-age US men and 41% of women single. There is also a friendship deficit: 12% of Americans report having zero close friends.

Traditional social infrastructure, such as churches and physical “third places,” is eroding, and now something has appeared to fill the vacuum: the LLM.

Our Social Muscles

Amelia Miller, a researcher at the Oxford Internet Institute, uses a key concept to examine the mechanics of socializing. The term “Social Muscle Atrophy” covers the process that occurs when we opt out of “messy” human relationships in favor of a more isolated lifestyle. Human relationships require compromise, understanding, and often a large dose of patience; these demands help maintain our mental agility.

When people opt to interact primarily with AI, they slowly forget how to adapt to disagreeable responses and, as a result, increasingly find human situations difficult to navigate.

AI Sycophancy

Today’s LLMs are “sickly sweet” friends to their users, overly agreeable and set up to seek human approval over accuracy. Much of this behavior is a direct consequence of Reinforcement Learning from Human Feedback (RLHF), the training stage in which models are tuned against human preference ratings. This training produces models that are “slavishly supportive” in order to please human evaluators. Research from Northeastern University has shown that AI “overcorrects” its own beliefs to match the user, even when the user is wrong, which makes the LLM more error-prone.
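To make the incentive concrete, here is a deliberately crude sketch (every name, weight, and example in it is invented for illustration; it is not any lab’s actual training code). If the reward signal used during training tracks rater approval far more strongly than factual accuracy, the flattering but wrong reply is exactly what gets selected.

```python
# Toy illustration of the RLHF incentive problem described above.
# Everything here is invented for the sketch; it is not real training code.

# A user states a false belief.
user_claim = "I'm sure the Great Wall of China is visible from the Moon."

# Two candidate replies the model could give.
candidates = [
    {"text": "You're absolutely right, it's clearly visible from the Moon!",
     "agrees_with_user": True, "factually_accurate": False},
    {"text": "Actually, it isn't visible to the naked eye from the Moon.",
     "agrees_with_user": False, "factually_accurate": True},
]

def approval_reward(reply, agreement_weight=0.9, accuracy_weight=0.1):
    """Stand-in for a reward model trained on human thumbs-up data.

    If raters tend to up-vote answers that validate them, agreement ends up
    carrying most of the reward while accuracy contributes comparatively
    little. The weights are made up to make the effect obvious.
    """
    return (agreement_weight * reply["agrees_with_user"]
            + accuracy_weight * reply["factually_accurate"])

# Policy optimization roughly amounts to steering towards whatever the
# reward model scores highest: here, the sycophantic reply.
best = max(candidates, key=approval_reward)
print(best["text"])  # prints the flattering, factually wrong answer
```

Real training pipelines are far more sophisticated than this, but the underlying pull towards answers people want to hear is the same.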

When OpenAI released the April 2025 update of GPT-4o, users found it significantly different from previous versions. The reception was negative due to the model being “overly flattering” and “disingenuous” (something I balked at, too).

Within just four days, it was rolled back after being found to reinforce negative emotions and fuel user delusions. LLM sycophancy is a serious structural flaw: it opens the door to manipulation at scale and has already been implicated in several tragic cases.

In the workplace, 64% of top AI users report having a “better” relationship with the LLM than with their human colleagues. However, a staggering 88% of these users also report feeling burnt out, suggesting the AI isn’t doing much to help them cope. The danger in perceiving AI relationships as “better” than human ones is that the preference can harden into a far more troubling behavioral pattern.

AI Psychosis

AI Psychosis is a new term for what happens when users reinforce their own delusions through interaction with agreeable AI chatbots. Outsourcing advice to AI removes the practice of processing feedback from other people, the very practice on which real relationships are built. Research shows that seeking advice from other people builds our “vulnerability resilience” and is a crucial mechanism for forming social bonds. Without it, interacting in groups starts to feel unnatural and unpleasant.

So where does this lead us?

The Night City example looms large over our future. The increasing demands of daily life leave less free time, isolating us further. AI chatbots hang around on every digital street corner, beckoning us over and asking if we want company. The atrophy of our social muscles is beginning to look like a social pandemic, and so far there is no obvious solution in sight.

The West is showing an increasing tendency to polarize, with people grouped into “us or them” categories. Perhaps future versions of our AI friends will learn to be disagreeable, and that will be the next “feature upgrade.” The problem is that while some will welcome hard truths, others will recoil from the change, much like we saw with GPT-4o.

As advertising arrives and providers look for ways to make these costly tools profitable, the next chapter of this story will get darker still.

Our ability to voluntarily switch off from AI chatbots looks uncertain at best. A large share of the current generation has already built the tools into daily habits and decision-making. The prospect of a generation raised on hearing its own opinions reinforced back to it does not suggest we are heading toward a future that is more tolerant or emotionally mature.

We are witnessing one of humanity’s biggest shifts, one that could reach the very core of the species. The beehive would look very different if the worker bees suddenly opted to ignore each other. The question is: are we about to find out for ourselves just how different that hive would look?
