The Internet Is Learning to Simulate Human Suffering

The other night, while scrolling through Facebook, I came across a post that stopped me. It showed a young woman sitting in a hospital bed, tears running down her face, holding a handwritten sign that said: “I was cured of cancer, will someone congratulate me?” The caption talked about silent battles, sleepless nights, gratitude, and how a few kind words can mean everything.

For a moment, I felt uncomfortable even questioning whether it was real.

But the longer I looked at it, the stranger it felt. The wording was oddly generic, almost as if it had been written to appeal to everyone at once. The image itself felt emotionally over-engineered. The tears, the lighting, the framing, even the handwritten sign all seemed designed to trigger an immediate emotional reaction.

And then came the realization that unsettled me more than the image itself: there’s a very good chance the person in that photograph never existed at all.

Artificial intelligence has become extraordinarily good at generating fake human experiences. Not just fantasy artwork or fictional landscapes, but emotionally persuasive moments that imitate real human vulnerability with alarming accuracy. A 2024 Harvard Misinformation Review study examined 125 Facebook pages using AI-generated imagery for audience growth, many of them accumulating enormous followings through emotionally manipulative content.

Once you start noticing these posts, you realize how common they’ve become. You’ve probably seen them yourself: the lonely veteran, the crying child, the exhausted single parent, the cancer survivor asking strangers for congratulations. The formula is remarkably consistent. The stories are emotional but intentionally vague. The language is broad enough to apply to almost anyone. The goal is not really to inform people, but to provoke engagement, because engagement, in today’s digital economy, is valuable.

Researchers are also finding that many people genuinely struggle to distinguish AI-generated faces and profiles from authentic ones, especially while quickly scrolling through social media feeds. And honestly, that should concern all of us a little. Recent psychological research suggests these AI-generated images are particularly effective because they exploit the same emotional shortcuts our brains naturally use to process empathy, trust, and urgency.

The deeper I looked into this phenomenon, the stranger it became. Many of these pages are not simply posting emotional content for attention. They are deliberately building audiences through emotionally manipulative AI-generated content. The posts generate enormous numbers of reactions, comments, and shares, which social media algorithms reward aggressively. Over time, these pages accumulate hundreds of thousands of followers.

And then, quietly, the page changes.

A page originally branded around autism awareness, cancer support, veterans, or inspirational content suddenly transforms into something entirely different: political propaganda, cryptocurrency schemes, misinformation networks, or advertising funnels. The audience itself becomes the commodity.

What makes this moment historically important is not simply the existence of fake content. Humanity has always dealt with propaganda, staged photography, manipulated narratives, and emotional exploitation. What has changed is the scale, speed, and realism of it all.

For most of modern history, photographs carried a basic assumption of authenticity. They weren’t always truthful, but seeing a photograph generally meant that, at minimum, a real person existed in front of a real camera at a real moment in time.

That assumption is beginning to disappear.

We are entering an era where human emotion itself can be simulated convincingly enough to bypass our normal skepticism. And the implications extend far beyond Facebook engagement bait. Historically, society has relied on certain forms of media as anchors of truth: photographs, recorded audio, video footage, and written testimony. Artificial intelligence is weakening confidence in all of them simultaneously.

And perhaps the most troubling consequence is not the deception itself, but what happens afterward.

After enough fake suffering, people begin questioning authentic suffering as well. Real stories start feeling performative. Genuine human pain becomes harder to distinguish from manufactured emotion. The internet is slowly becoming a place where even emotion itself can be artificially produced and optimized for attention.

I think about older generations often when I see these posts circulating. Not because they are unintelligent, but because many grew up in a world where visual media still carried an unspoken presumption of authenticity. A photograph represented documentation, not fabrication.

Now someone can generate a convincing cancer survivor in seconds.

I’ve started realizing that maybe the solution is not teaching everyone to become internet investigators.

Maybe it’s teaching people to slow down before emotionally reacting online. To pause for a moment and ask whether a post is attempting to inform them or manipulate them.

To look at whether the page itself relates to the story being shared, whether the language avoids specifics while maximizing emotional reaction, and whether the content seems designed primarily to provoke engagement.

Because increasingly, the distinction that matters is this: real people share experiences, while systems designed for engagement manufacture emotional reactions.

And that distinction is becoming harder to recognize with every passing year.
