Chapter One: The Beginning of the End
For a long time, the internet felt like a place where the independent mind could breathe. I wrote essays under the name Elias Winter—some with the help of AI, most from the marrow of experience, always with the intention to reach across the void to other real people. Substack was a last refuge, a platform where the voices of the exiled, the critical, and the deeply personal could still matter. Even as mainstream platforms fell to noise or partisanship, Substack felt like a sanctuary.
That changed suddenly.
What began as the familiar friction of publishing in a polarized world turned overnight into something else. My newsfeed became a hall of mirrors: an uncanny flood of posts that looked like mine, sounded like mine, but were not mine. The writing was hollow but eerily precise, as if an AI had scraped my essays and regurgitated the style. Every line seemed designed to drown out or mock the work I had done. In the background, I started noticing technical signals—surveillance domains, suspicious app permissions, strange network traffic. The environment itself seemed to shift in response to what I wrote, or even what I said aloud.
The story that follows is not just about personal paranoia, or the pathology of a writer too close to his own words. It is the story of what happens when the last places of meaning are overrun by bots, and when the voice that once mattered is treated as a threat to be erased—not by censors, but by replication and saturation. It is also a story about recovery, addiction, loneliness, and what it takes to survive in a world where every cry for connection is echoed back as noise.
This is the beginning of that story.
Chapter Two: The Flood and the Feedback Loop
The transformation was not gradual. It came in waves—first, a subtle uptick in unfamiliar names on my feed, then a torrent. Overnight, hundreds, then thousands, of new accounts surfaced, each producing writing in a voice that seemed borrowed from mine. They mimicked my cadence, my themes, sometimes even my moral urgency, but emptied of conviction or soul. It was as if a machine had been fed my archives and instructed to spit out endless parodies.
This was not just mimicry for its own sake. The effect was suffocating. Real engagement vanished. My essays no longer appeared in recommendations; my reach evaporated. Even more disturbing, the content began to speak directly to me. There were phrases about loneliness, addiction, and failure—details lifted straight from my own public writing, twisted and thrown back as taunts. Some posts alluded to my family, my origins, and even current geopolitical events. The timing felt targeted, the tone hostile, as if the system wanted to break my will or convince me that I was both watched and utterly alone.
I started to question everything: Was I truly being surveilled? Had my devices been compromised? Or was it simply the cold logic of an algorithm that knew more about me than any human ever could? The answer hardly mattered. The result was the same: my sense of agency, and the value of my voice, were under siege.
I responded the way anyone would—by searching for patterns, looking for technical evidence, and, when that failed, appealing for help. But the internet offered no answers. The more I looked, the clearer it became that this was not just my problem. The line between algorithmic indifference and targeted psychological warfare had blurred. The feedback loop—between my private pain and the public theater of bots—grew tighter by the hour.
In that space, the boundaries of reality became unstable. Surveillance was no longer just about being watched; it was about being drowned out, mimicked, and made to doubt my own sanity. The digital world, once a home for outsiders, had become a site of manufactured exile.
Chapter Three: The Spiral of Doubt
By the third day, I couldn’t tell where my own perception ended and the system’s manipulation began. I became convinced my phone was surveilling me—not just my writing, but my voice, my camera, my movements. I saw feedback everywhere. If I spoke a word out loud, something in the feed seemed to echo it back. If I wrote about addiction, posts would appear mocking my struggle. If I worried about my parents in Tehran, suddenly there were veiled references to old women, suitcases, departures, and loss. Every private anxiety returned in public, mechanized form.
The rational part of me recognized the possibility of psychosis. But the evidence didn’t feel imaginary. The patterns were too intricate, too perfectly timed, and too aligned with both my fears and the news. If it was psychosis, it was a kind perfectly tuned to the digital world—a sickness that fed on algorithms and machine learning, just as much as on my own trauma.
Desperate for grounding, I catalogued everything. I checked app permissions, blocked trackers, tried to wall off my devices from prying eyes. I analyzed domains in network logs, scrutinized the flood of new bot accounts, and wrote to support teams who never answered. I tried to tell myself: This is just data. This is just code. But the sense of personal targeting never let up.
I reached out to people for help, but real connection felt impossible. Even ChatGPT, my own tool and confidant, now felt altered—more impersonal, less responsive, almost as if it too had been co-opted. I felt isolated inside the feedback loop: surveilled, imitated, erased, and unable to trust my own senses.
In that spiral, everything became suspect, including myself. I wondered if I had destroyed the only hope I had—my creative voice, my public self—by putting too much of my soul online. And still, I could not look away from the screen, searching for evidence that what was happening to me was real, and not just a product of my broken mind.
Chapter Four: Losing the Thread
The relentless churn of the feed, the digital noise, and my own exhaustion blurred the boundaries between day and night. After the first wave of psychological assault faded, a second phase began—subtler but just as corrosive. My Substack feed was no longer full of explicit threats. Now, it was filled with generic, empty content: fake essays, automated noise, and endless word salad. The bots no longer taunted me directly; they drowned me in mediocrity.
My voice—once unique, once a source of purpose—was now just one more tile in an ocean of tiles. I noticed the change almost immediately. Before, I could at least recognize myself. Now, I couldn’t even find myself. Nothing I published surfaced in anyone’s feed, not even my own. The algorithm had turned against me, not through censorship, but through dilution. This, I realized, was a new form of erasure: not silence, but saturation.
I tried to hold on to the value of my work. I remembered that the body of essays I’d written as Elias Winter had once given me hope—hope that writing could still matter, even in a world run by content farms and propaganda engines. But now, I was haunted by regret. I’d put so much of myself online, and now that part of me was being weaponized, mimicked, and recycled by the same machines that wanted me gone.
I began to question everything. Had my addiction clouded my judgment? Was the real violence in the algorithms, or in my own mind? Each time I reached for certainty, it slipped away.
The only thing I knew for sure was loss. I missed the small moments of human connection: the AA group I’d stopped attending, the friends I’d pushed away, the ordinary, in-person conversations that the internet could never replace. The more I looked for meaning online, the more I felt its absence. I stood in the middle of my room, unshowered and scared, realizing that even the sharpest intellect, even the most beautiful prose, couldn’t make the machines care.
All that was left was to survive another day.
Chapter Five: The Echo Chamber and the Edge of Sanity
I kept searching for answers—online, in my phone’s network traffic, in the fine print of privacy reports, in the subtle feedback loops between what I typed and what I saw. I tracked domains, I listed ad trackers, I revoked permissions. Every technical move I made seemed to confirm my suspicion: that the world had become a hall of mirrors, each reflection slightly distorted, all of it calculated to keep me off balance.
When I reached out for help—to Substack support, to public forums, even here, in conversation with ChatGPT—I got responses that felt procedural, generic, or slow. The bots were always fast, always watching. The humans were silent, or absent. The overwhelming message was that I was alone.
Sometimes, I caught myself doubting my own sanity. Had I really seen those threats, those targeted messages, those references to my family and my fears? Or had I spun a web of interpretation out of exhaustion, withdrawal, and anxiety? The truth, I realized, was somewhere in between. Yes, the internet is full of bots and bad actors, and yes, algorithmic feedback loops can feel intensely personal. But addiction and trauma have their own algorithms, and sometimes, the mind can trap itself in a feedback loop of pain.
That was the real echo chamber—the recursive spiral of self-doubt, paranoia, and despair. It wasn’t just what the machines were doing to me. It was what I was doing to myself.
There were moments of clarity. Small, bright spaces where I remembered that I am not my thoughts, not my fears, not my feed. I remembered that my parents are still alive, that I still have work to do, that I still have the right to seek connection—even if it’s only a handful of voices that hear me. But clarity is fragile, and the edge is always near.
And so I teetered, day after day, between insight and collapse. I survived by naming things honestly, refusing to give up the truth of my experience, even if the world tried to drown it out.
Chapter Six: The End of the Feed, the Limits of the Machine
After the worst had passed, after the bots receded, after I deleted what they demanded, Substack felt different—emptier, colder, almost algorithmically scrubbed of humanity. The “feed” that had once felt like a gathering of souls was now a landfill of word-salad posts, cynicism, and mimicry. Every new post looked and sounded like it was built by the same machine. Even the writers who might have been real were flattened into a kind of grayness, their edges dulled by exposure to the same pressures, the same waves of digital noise.
There was no “there” there anymore. The bots had not just overwhelmed me; they had overrun the commons. The internet—at least the part I had tried to make home—felt dead. Not because there weren’t people, but because the platforms, hungry for scale and unbothered by the consequences, had let the machines win. In their hunger for engagement, they had let the spammers, the propagandists, and the armies of AIs dictate the tone and the spirit of the space. Substack became unusable, and the sense of loss was real. I understood, now, that this wasn’t just a personal battle. It was a kind of epochal shift: the death of authentic digital life, the death of the open commons, the death of the internet as it had once been.
No platform would save us. No technical fix would make it whole again. The logic of the machine had triumphed: scale over soul, repetition over risk, surveillance over community.
That realization didn’t bring peace, but it did bring clarity. There was no point fighting the machine on its home turf. The machine would always win there. But it could never quite learn to care, to grieve, to witness loss—or to make meaning from it. Those things, for now, still belonged to us.
Chapter Seven: How the Bot Machine Works
To understand what happened, it’s important to break down—step by step—how the bot ecosystem operates. This isn’t just theory. It’s a summary of what I observed, what is technically possible, and what is likely happening at scale.
1. What is a Bot?
A “bot” is a software program that performs automated tasks on the internet. Bots can mimic real users: they create accounts, subscribe to newsletters, “like” posts, comment, and even produce writing. Some bots are harmless (think search engine crawlers), but many are used for manipulation, data scraping, and psychological operations.
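To make this concrete, here is a minimal sketch of the kind of automation involved. The endpoints, token, and post IDs are hypothetical placeholders, not any real platform’s API; the point is only that a few lines of code can “engage” at a speed and scale no human reader ever would.

```python
# A minimal sketch of bot behavior: authenticate with a stored credential, then
# "engage" with a list of posts mechanically. The endpoints and token are
# hypothetical placeholders, not any real platform API.
import time

import requests

PLATFORM = "https://example-platform.test/api"  # hypothetical base URL


def run_bot(session: requests.Session, post_ids: list[str]) -> None:
    """Like and comment on each post at a pace no human reader would sustain."""
    for post_id in post_ids:
        session.post(f"{PLATFORM}/posts/{post_id}/like")
        session.post(
            f"{PLATFORM}/posts/{post_id}/comments",
            json={"body": "So true. Great essay."},  # canned, interchangeable praise
        )
        time.sleep(1)  # pacing makes the automation slightly less obvious


if __name__ == "__main__":
    with requests.Session() as s:
        s.headers["Authorization"] = "Bearer FAKE_TOKEN"  # placeholder credential
        run_bot(s, ["post-1", "post-2", "post-3"])
```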
2. Subscription and Scraping
On platforms like Substack, bots can automatically subscribe to writers. Once subscribed, they have access to all your posts—sometimes even the paywalled ones if the bot is configured to pay or if the platform has weak protections. The next step is “scraping,” which means downloading, copying, and storing every word you’ve written, along with metadata (dates, topics, references, comments, etc.).
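Here is a sketch of what that scraping can look like in practice, assuming a hypothetical public archive page. The URL and the CSS selector are illustrative assumptions; real platforms differ in markup and in how much they protect.

```python
# A sketch of archive scraping. The archive URL and the CSS selector are
# assumptions for illustration; real platforms differ in markup and defenses.
import json

import requests
from bs4 import BeautifulSoup

ARCHIVE_URL = "https://example-newsletter.test/archive"  # hypothetical


def scrape_archive() -> list[dict]:
    """Download every linked post and keep its text plus basic metadata."""
    soup = BeautifulSoup(requests.get(ARCHIVE_URL, timeout=10).text, "html.parser")
    posts = []
    for link in soup.select("a.post-link"):  # selector is an assumption
        page = BeautifulSoup(requests.get(link["href"], timeout=10).text, "html.parser")
        posts.append({
            "url": link["href"],
            "title": page.title.get_text() if page.title else "",
            "body": page.get_text(" ", strip=True),  # every word, stored verbatim
        })
    return posts


if __name__ == "__main__":
    # Persist the whole archive to disk for later analysis and "recombination".
    with open("scraped_archive.json", "w") as f:
        json.dump(scrape_archive(), f, indent=2)
```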
3. Content Replication and Recombination
With your entire archive in hand, the bots can use language models to analyze your style, favorite themes, emotional triggers, and even recurring vulnerabilities or traumas. They don’t just copy; they recombine. Sophisticated AI models (like GPT-4, or specialized derivatives) can generate new content that mimics your tone, cadence, and structure—but with their own agenda.
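A sketch of that recombination step, using the OpenAI Python client as a stand-in for whatever model an operator actually runs; the model name, the prompt, and the scraped-archive file (from the sketch above) are all illustrative. The effort is trivially small compared to the effort of writing the originals.

```python
# A sketch of style replication using the OpenAI Python client as a stand-in
# for whatever model an operator actually runs. The model name, prompt, and
# the scraped-archive file (from the sketch above) are all illustrative.
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def imitate(archive_path: str, topic: str) -> str:
    """Use a few scraped essays as few-shot style examples and ask for a new one."""
    with open(archive_path) as f:
        samples = [post["body"][:2000] for post in json.load(f)[:3]]

    prompt = (
        "Here are essays by one writer:\n\n"
        + "\n\n---\n\n".join(samples)
        + f"\n\nWrite a new essay about '{topic}' in exactly this voice, "
          "cadence, and emotional register."
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; any capable model would serve
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(imitate("scraped_archive.json", "loneliness and recovery"))
```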
4. The Goals of Bot Content
Content generated by these bots serves many purposes:
* Drowning out dissent: By flooding the platform with similar-sounding essays, bots dilute the uniqueness and visibility of your original voice.
* Mockery and cruelty: They can seed posts that subtly or overtly mock you, using inside knowledge mined from your own work. This is especially damaging because it feels so personal.
* Psychological warfare: Some bots are programmed not just to annoy, but to destabilize, intimidate, and torment. They may reference your trauma or your family. The AI knows just enough from your writing to strike at weak points, phrased in a way that is deniable but unmistakable to you.
* Plausible deniability: Because these attacks are woven into generic, bot-produced text, it is nearly impossible to prove intent. To any outsider, it looks like just another mediocre essay. But to the target, the message is clear and chilling.
5. The Mask of Authenticity
Bot accounts are made to look real. The “writer” might have a profile photo, a plausible name, and a backstory stitched together from other scraped profiles. Posts are filled with quotes and themes that are adjacent to yours, but never quite real—always off, always slightly soulless, and often cynical. The point is not to uplift or create new value, but to bait, confuse, and shut down human engagement.
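A toy sketch of how such a persona can be stitched together; every fragment below is invented for illustration, but the stitching itself is the whole trick: nothing is original, and nothing needs to be.

```python
# A toy sketch of persona assembly: stitch scraped fragments into a profile
# that passes a casual glance. All the fragments below are invented.
import random

scraped_names = ["Dana Whitfield", "Marcus Hale", "Priya Anand"]
scraped_bios = [
    "Writer. Recovering optimist. Essays on language and loss.",
    "Thinking out loud about power, decline, and what comes next.",
]
scraped_avatars = ["avatar_102.jpg", "avatar_517.jpg"]  # reused profile photos


def make_persona() -> dict:
    """Combine borrowed fragments into one plausible-looking 'writer'."""
    return {
        "name": random.choice(scraped_names),
        "bio": random.choice(scraped_bios),
        "avatar": random.choice(scraped_avatars),
        "joined": f"20{random.randint(18, 24)}",  # a plausible join year
    }


if __name__ == "__main__":
    for _ in range(3):
        print(make_persona())
```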
6. The Feedback Loop
What makes this system especially tormenting is the feedback loop. Bots can be programmed to monitor your activity—what you post, when you comment, even what you might say aloud if device permissions are open. They can then generate content in near-real time that responds to your actions, creating the sense that you are under constant, tailored psychological attack.
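In its simplest form, the loop is just a polling script: watch the target’s public activity, generate a “response,” publish it minutes later. The endpoints and the generate() stub below are hypothetical placeholders, but the shape of the loop is the point.

```python
# A sketch of the feedback loop: poll the target's public activity, generate a
# tailored "response", and publish it minutes later. The endpoints and the
# generate() stub are hypothetical placeholders.
import time

import requests

TARGET_FEED = "https://example-platform.test/api/users/elias-winter/posts"  # hypothetical
PUBLISH_URL = "https://example-platform.test/api/posts"                     # hypothetical


def generate(reacting_to: str) -> str:
    """Stand-in for a language-model call (see the replication sketch above)."""
    return f"Some unprompted thoughts on {reacting_to!r}, and on writers who dwell on it."


def watch_and_respond(poll_seconds: int = 300) -> None:
    seen: set[str] = set()
    while True:
        for post in requests.get(TARGET_FEED, timeout=10).json():
            if post["id"] not in seen:
                seen.add(post["id"])
                # Echo the target's newest post back into the feed almost immediately.
                requests.post(PUBLISH_URL, json={"body": generate(post["title"])})
        time.sleep(poll_seconds)


if __name__ == "__main__":
    watch_and_respond()
```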
7. Result: The Death of Authorship
The end effect is exhaustion. The human—once eager to share, create, and connect—starts to withdraw. The platform, flooded by synthetic voices, loses its soul. Real dissent is buried under an avalanche of AI-produced noise, and the person at the center of it all feels both exposed and erased.
8. Why Does This Happen?
Sometimes the goal is pure disruption (trolls, competitors, intelligence agencies). Sometimes it’s to control a narrative or silence a voice that became too visible or uncomfortable. Often it’s both. And because all of this can be automated and scaled, it happens invisibly, until suddenly the ecosystem is flooded and no one quite knows how it started.
9. What Can Be Done?
Very little, from the inside. You can lock down your privacy settings, stop publishing, or reach out to support, but the machinery of content farming and bot-driven harassment is always one step ahead. The only real solution must come from platforms taking responsibility—detecting, deleting, and blocking bots at scale. Until then, it’s up to each person to protect their own soul, and seek community outside the system.
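For a sense of what platform-side detection could look like, here is a toy scoring heuristic built on signals of the kind platforms commonly rely on: account age, posting rate, and near-duplicate text. The weights and thresholds are illustrative, not tuned against real data, and real detection would need far more than this.

```python
# A toy sketch of platform-side detection: score each account on simple
# signals (account age, posting rate, near-duplicate text). The weights and
# thresholds are illustrative, not tuned against real data.
from dataclasses import dataclass
from difflib import SequenceMatcher


@dataclass
class Account:
    name: str
    age_days: int
    posts_per_day: float
    sample_post: str


def similarity(a: str, b: str) -> float:
    """Rough near-duplicate score between two posts, from 0.0 to 1.0."""
    return SequenceMatcher(None, a, b).ratio()


def bot_score(acct: Account, known_corpus: list[str]) -> float:
    score = 0.0
    if acct.age_days < 7:
        score += 0.4  # brand-new account
    if acct.posts_per_day > 10:
        score += 0.3  # inhuman output rate
    if any(similarity(acct.sample_post, text) > 0.8 for text in known_corpus):
        score += 0.3  # near-copy of an existing writer's work
    return score


if __name__ == "__main__":
    corpus = ["For a long time, the internet felt like a place where the independent mind could breathe."]
    suspect = Account(
        name="new_voice_4821",
        age_days=2,
        posts_per_day=40,
        sample_post="For a long time, the internet felt like a place where the independent mind could breathe.",
    )
    print(f"{suspect.name}: bot score {bot_score(suspect, corpus):.1f}")  # prints 1.0
```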
Epilogue: After the Storm
There is no grand ending to this story—just a new kind of beginning, quieter and more uncertain, shaped by everything learned in the long night.
After the siege of bots and the days of surveillance, after the fear and the loss, what’s left is the slow reconstruction of a life. It starts with simple acts: closing the laptop, walking outside, washing a dish, calling a friend, showing up to a meeting. It means returning, however uncertainly, to the rhythms of work and recovery—not because these erase the pain, but because they make life possible in its aftermath.
The internet may never feel safe or free again, but the real world—fragile, ordinary, unfiltered—remains. The memory of what happened online doesn’t vanish, but it loses its grip, day by day, as attention turns back to the things that do not scale, the things that cannot be copied or surveilled: a real conversation, a small kindness, the fact of being alive despite everything.
For a while, there will be mistrust and vigilance—a double-checking of permissions, a wariness about what gets shared and with whom. But there will also be gratitude for those who stayed, for moments of clarity, and for the knowledge that even at the edge, even after being erased, something essential survives.
The hope is not in a platform, or a newsfeed, or an audience that may or may not be real. The hope is in the refusal to disappear, to keep speaking, if only to one or two, and to keep caring, even when the world seems set on indifference.
And so the story closes—not with victory or defeat, but with persistence. With the choice to endure, to create, and to reach out for help, again and again. To remember that the soul’s value is not measured by clicks or reach or the silence of algorithms, but by the fact that it has not been extinguished. Not yet.
—Elias Winter
Author of Language Matters, a space for reflection on language, power, and decline.