This is how free speech disappears in the twenty-first century
A couple of days ago, Dark Side of the Moon disappeared from Threads by algorithm. One day I was publishing without apparent problem, as I have for twenty years and eleven volumes of essays. The next day I no longer existed.
No post was identified as problematic. No rule was cited. No appeal process was offered. Just a short notice informing me that I had violated ‘community standards.’ Years of non-confrontational, slightly left-of-center writing and modest audience building were erased in seconds.
Not because I threatened anyone.
Not because I spread false information.
Not because I incited violence.
Because my work was political, persistent, and critical of institutional power.
That is all it takes now.
There was no courtroom, no accuser, no evidence presented. Only an algorithm deciding that my voice created too much friction for the system. This is how speech disappears in the twenty-first century — quietly, efficiently, and without accountability.
We like to imagine censorship as something dramatic: books burned, journalists jailed, governments issuing decrees. But modern censorship no longer wears uniforms. It wears logos.
The public square has been privatized.
A handful of corporations now control the infrastructure through which billions of people communicate, debate, organize, and learn. They are not required to operate under constitutional protections or democratic oversight. They do not look kindly on demonstrations, such as those against ICE in Minneapolis. They operate under ‘terms of service’: contracts of permission that can be revoked at any moment.
On these platforms, there are no guaranteed rights. There is no due process. There is no requirement to explain an accusation or provide meaningful appeal.
There is only access, and that access is opaque and conditional.
First comes invisibility. Your posts stop traveling. Engagement collapses. Your audience quietly disappears. Then comes erasure. An account vanishes without warning, reduced to a vague violation notice designed to protect the company rather than inform the user. This is not moderation in any traditional sense. It is algorithmic risk management.
Artificial intelligence systems now judge political critique, satire, journalism, and historical analysis despite being incapable of understanding context or truth. Censorship creeps in on the little cat feet of privatization, with late-night hosts cancelled by ownership instead of ratings.
And now it is reaching your internet access and mine.
What they can measure is friction — controversy, heated language, repeated institutional criticism, and user reports (has someone flagged my content?). Content that challenges power generates friction. And friction is bad for advertisers, regulators, the political elite, and corporate stability.
So it is removed. Writers no longer need to be wrong to be silenced. They only need to be persistent. How frightening is that?
The safest content in today’s digital ecosystem is bland, apolitical, and inoffensive: Grandma and her cats, YouTube ‘shorts’ featuring harmless clickbait. The riskiest is thoughtful criticism of powerful institutions. Over time, this creates a distorted public conversation in which comfort is amplified and thoughtful conversation is filtered out.
Thoughtful conversation doesn’t serve the interests of the billionaire class or the political establishment. We’re told this system exists to keep people safe. But safety has become the public justification for something far broader: narrative control.
The early steps toward fascism always demand narrative control.
That’s not hyperbole, and it’s not a rant.
The billionaires who actually run America (through media ownership, sports, and control of Congress) increasingly pressure internet platforms (those they don’t already control) to police speech by algorithm, allowing censorship without constitutional limits. Corporations comply because it reduces both their legal risks and political scrutiny. The result is a privatized enforcement regime more sweeping than any state censorship office in history — faster, quieter, and almost impossible to contest.
Power no longer needs to ban books. It simply buries voices.
And because this control is exercised through code rather than law, it feels administrative rather than authoritarian. There are no dramatic crackdowns. Just disappearing reach, deleted accounts, and silence.
This is far more effective than overt repression ever was.
A society where dissent technically exists but is practically unseen is not free.
A world where criticism is allowed but prevented from spreading is not an open marketplace of ideas. When visibility itself becomes conditional, freedom becomes a subscription that can be canceled.
We have built the most sophisticated speech-control system civilization has ever known and handed it over to profit-driven corporations whose incentives favor quiet over complexity and compliance over challenge.
The old censors feared public backlash. The new ones operate invisibly.
No knock.
No charge.
No defense.
Just gone.
The danger is not merely that voices are removed. It’s that the process becomes normal. Silence feels like policy. Absence looks like safety.
The old censors burned books. The new ones delete authors.
And most of the world never notices they were there.