🖋 Cory DOCTOROW
Science fiction author, activist and journalist

PLURALISTIC



14.03.2026 at 16:15

Pluralistic: Corrupt anticorruption (14 Mar 2026)

Cory Doctorow

Full text (3,613 words)


Today's links



A Chinese porcelain sculpture depicting a Maoist struggle session; a cadre in uniform and CCCP cap stands by while a Party official forces a man in a dunce cap to his knees. The image has been altered. The cadre now has the 'fat baby' JD Vance head. The Party official has orange skin, Trump hair, and Trump's eyes and mouth. Both figures have US flag lapel pins. Behind them is a soiled US flag.

Corrupt anticorruption (permalink)

An amazing thing happened this week: a whopping bipartisan Senate majority (89:10!) passed Elizabeth Warren's housing bill, which severely limits private equity companies' ability to buy single-family homes to turn into rental properties:

https://prospect.org/2026/03/13/elizabeth-warrens-amazingly-progressive-housing-bill/

It's a big deal. Since the Great Financial Crisis, US home ownership has fallen sharply, while corporate landlordism has skyrocketed. Rents are through the roof, and private equity bosses boast about gouging their tenants, with the CEO of Blackstone's Invitation Homes ordering the lickspittles to "juice this hog" with endless junk fees and calculated negligence:

https://www.aol.com/juice-hog-real-estate-companies-080301813.html

The corporate takeover of the housing market didn't fall out of the sky. It was a policy of the Obama administration, which directed the mass selloff of homes (foreclosed on by bailed-out banks) to corporate buyers:

https://www.thebignewsletter.com/p/boom-senate-votes-to-block-private

Sunsetting the American dream of home-ownership is the final straw. After all, once America killed off labor rights, the only path to wealth accumulation left for working people was assuming crippling debt to buy a house in hopes that its value would go up forever:

https://pluralistic.net/2021/06/06/the-rents-too-damned-high/

The affordability crisis isn't solely a matter of high shelter costs (we see you, grocery greedflation, health care and education!), but housing costs are totally out of control. Mamdani's earth-shaking mayoral campaign put affordability front and center, with housing as its marquee issue:

https://gothamist.com/news/mamdani-wants-to-take-buildings-from-bad-nyc-landlords-this-bill-could-make-it-happen

Trump – whose most important skill is his ability to sense vibe-shifts in his base – noticed, and started to make mouth sounds about tackling the affordability crisis, specifically blaming private equity landlords for high rents:

https://www.whitehouse.gov/fact-sheets/2026/01/fact-sheet-president-donald-j-trump-stops-wall-street-from-competing-with-main-street-homebuyers/

But this isn't just a story about a stopped clock being right every now and again. It's a story about boss-politics anti-corruption, in which anti-corruption is pursued to corrupt ends.

From 2012 to 2015, Xi Jinping inaugurated his rule over China with a mass purge undertaken in the name of anti-corruption. Officials at every level of Chinese politics were fired, and many were imprisoned. The purge allowed Xi to consolidate his control over the CCP, culminating in a rule change that eliminated term limits, paving the way for Xi to rule China for as long as he breathes.

Xi's purge exclusively targeted officials in his rivals' power bases, kneecapping anyone who might have blocked his power grab. But though Xi went after his rivals' princelings and foot soldiers, that doesn't mean he was targeting the innocent. A 2018 paper by an economist (Peter Lorentzen, USF) and a political scientist (Xi Lu, NUS) concluded that Xi's purge really did target corrupt officials:

https://web.archive.org/web/20181222163946/https://peterlorentzen.com/wp-content/uploads/2018/11/Lorentzen-Lu-Crackdown-Nov-2018-Posted-Version.pdf

The authors reached this conclusion by referencing the data published in the resulting corruption trials, which showed that these officials accepted and offered bribes and feathered their allies' nests at public expense.

In other words, Xi didn't cheat by framing innocent officials for crimes they didn't commit. The way Xi cheated was by exclusively targeting his rivals' allies. Lorentzen and Lu's paper makes it clear that Xi could easily have prosecuted many corrupt officials in his own power base, but he left them unmolested.

This is corrupt anti-corruption. In an environment in which everyone in power is crooked, you can exclusively bring legitimate prosecutions, and still be doing corruption. You just need to confine your prosecutions to your political enemies, whether or not they are more guilty than your allies (think here of the GOP dragging the Clintons into Epstein depositions).

14 years later, Xi's anti-corruption purges continue apace, with 100 empty seats at this year's National People's Congress, whose former occupants are freshly imprisoned or awaiting trial:

https://www.bbc.com/news/articles/c78xxyyqwe7o

I don't know the details of all 100 prosecutions, but China absolutely has a corruption problem that goes all the way to the upper echelon of the state. I find it easy to believe that the officials Xi has targeted are guilty – and I also wouldn't be surprised to hear that they are all supporters of Xi's internal rivals for control of the CCP.

As the Epstein files demonstrate, anyone hoping to conduct a purge of America's elites could easily do so without having to frame anyone for crimes they didn't commit (remember, Epstein didn't just commit sex crimes – he was also a flagrant financial criminal and he implicated his network in those crimes).

It's not just Epstein. As America's capital classes indulge their incestuous longings with an endless orgy of mergers, it's corporate Habsburg jaws as far as the eye can see. These mergers are all as illegal as hell, but if you fire a mouthy comedian, you can make serious bank:

https://www.aljazeera.com/economy/2025/7/18/cbs-cancels-colberts-late-show-amid-pending-paramount-skydance-merger

And if you pay the right MAGA chud podcaster a million bucks, he'll grease your $14b merger through the DoJ:

https://pluralistic.net/2026/02/13/khanservatives/#kid-rock-eats-shit

And once these crooks merge to monopoly, they embark on programs of lawlessness that would shame Al Capone, but again, with the right podcaster on your side, you can keep on "robbing them blind, baby!"

https://www.thebignewsletter.com/p/a-wild-day-as-trump-doj-settles-with

The fact that these companies are all guilty is a foundational aspect of Trumpism. Boss-politics antitrust – and anti-corruption – doesn't need to manufacture evidence or pretexts to attack Trump's political rivals:

https://pluralistic.net/2026/02/13/khanservatives/#kid-rock-eats-shit

When everyone is guilty, you have a target-rich environment for extorting bribes:

https://www.nytimes.com/2026/03/13/business/tiktok-investors-set-to-pay-10-billion-fee-to-trump-administration.html

Just because the anti-corruption has legit targets, it doesn't follow that the whole thing isn't corrupt.


Hey look at this (permalink)



A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#20yrsago Full text of Bruce Sterling’s ETECH speech from last week https://web.archive.org/web/20060406025248/http://www.viridiandesign.org/2006/03/viridian-note-00459-emerging.html

#20yrsago HOWTO build a glowing throne out of 4k AOL CDs https://web.archive.org/web/20060408174929/https://stupidco.com/aol_throne_intro.html

#20yrsago How Sweden’s “Pirate Bay” site resists the MPAA https://web.archive.org/web/20060423222220/https://www.wired.com/news/technology/1,70358-0.html

#15yrsago Stephen King sticks up for unions https://www.youtube.com/watch?v=x1vW1zPmnKQ

#15yrsago Largest Wisconsin protests ever: 85,000+ people in Madison’s streets https://web.archive.org/web/20110319152841/http://www.huffingtonpost.com/2011/03/12/wisconsin-protesters-refu_n_834927.html

#15yrsago Sphere of tentacles https://web.archive.org/web/20110315170007/http://www.niradar.com/portfolio.asp?portfolio_id=325&off_set=8&selected_id=58734&pointer=16

#15yrsago Venn diagram illustrates all the different European unions, councils, zones and suchlike https://web.archive.org/web/20110313034335/http://bigthink.com/ideas/31556

#10yrsago Obama: cryptographers who don’t believe in magic ponies are “fetishists,” “absolutists” https://web.archive.org/web/20160312000011/https://theintercept.com/2016/03/11/obama-wants-nonexistent-middle-ground-on-encryption-warns-against-fetishizing-our-phones/

#10yrsago Donald Trump hires plainclothes security to investigate and interdict protesters https://www.politico.com/story/2016/03/donald-trump-rally-protester-crack-down-220407?lo=ap_b1

#1yrago Firing the refs doesn't end the game https://pluralistic.net/2025/03/12/epistemological-void/#do-your-own-research

#1yrago The future of Amazon coders is the present of Amazon warehouse workers https://pluralistic.net/2025/03/13/electronic-whipping/#youre-next


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, pounding the podium.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Staehle covers.

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026
  • "Enshittification: Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), First Second, 2026

  • "The Post-American Internet," a geopolitical sequel of sorts to Enshittification, Farrar, Straus and Giroux, 2027

  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, First Second, 2027

  • "The Memex Method," Farrar, Straus and Giroux, 2027



Colophon (permalink)

Today's top sources:

Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America (1035 words today, 49526 total)

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.
  • "The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.

  • A Little Brother short story about DIY insulin. PLANNING.


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Bluesky (no ads, possible tracking and data-collection):

https://bsky.app/profile/doctorow.pluralistic.net

Medium (no ads, paywalled):

https://doctorow.medium.com/

Twitter (mass-scale, unrestricted, third-party surveillance and advertising):

https://twitter.com/doctorow

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X

PDF

13.03.2026 at 03:17

Pluralistic: Three more AI psychoses (12 Mar 2026)

Cory Doctorow

Full text (7,072 words)


Today's links



A cross-section of a man's head. His brain has been replaced with an intricate mass of wooden gearing, being pumped and cranked by three 16th-century drudges. Behind them is a blown-up view of a microchip. Behind the head is a stylized illustration of grey matter, blown out with lots of saturation and blended in places with tumbled rocks.

Three more AI psychoses (permalink)

"AI psychosis" is one of those terms that is incredibly useful and also almost certainly going to be deprecated in smart circles in short order because it is: a) useful; b) easily colloquialized to describe related phenomena; and c) adjacent to medical issues, and there's a group of people who feel very strongly that any metaphor implicating human health is intrinsically stigmatizing and must be replaced with an awkward, lengthy phrase that no one can remember and only insiders understand.

So while we still can, let us revel in this useful term to talk about some very real pathologies in our world.

Formally, "AI psychosis" describes people who have delusions that are possibly induced, and definitely reinforced and magnified, by a chatbot. AI psychosis is clearly alarming for people whose loved ones fall prey to it, and it has been the subject of much press and popular attention, especially in the extreme cases where it has resulted in injury or death.

It's possible for AI psychosis to be both a new and alarming phenomenon and also to be on a continuum with existing phenomena. Paranoid delusions aren't new, of course. Take "Morgellons Disease," a psychosomatic belief that you have wires growing in your body, which causes sufferers to pick at their skin to the point of creating suppurating wounds. Morgellons emerged in the 2000s, but the name refers to a 17th-century case-report of a patient who suffered from a similar delusion:

https://en.wikipedia.org/wiki/A_Letter_to_a_Friend

Morgellons is both a 400-year-old phenomenon and an internet pathology. How can that be? Because the internet makes it easier for people with sparsely distributed traits to locate one another, which is why the internet era is characterized by the coherence of people with formerly fringe characteristics into organized blocs, for better (gender minorities, #MeToo) and worse (Nazis).

Morgellons is rare, but if you suffer from it, it's easy for you to locate virtually every other person in the world with the same delusion and for all of you to reinforce and egg on your delusional beliefs.

Morgellons isn't the only delusion that the internet reinforces, of course. "Gang stalking delusion" is a belief in a shadowy gang of sadistic tormentors who sneak hidden messages into song lyrics and public signage and innuendo in overheard snatches of other people's conversations. It is an incredibly damaging delusion that ruins people's lives.

Gang stalking delusion isn't new, either – as with Morgellons, there are historical accounts of it going back centuries. But the internet supercharged gang stalking delusion by making it easy for GSD sufferers to find one another and reinforce one another's beliefs, helping each other spin elaborate explanations for why the relatives, therapists, and friends who try to help them are actually in on the conspiracy. The result is that GSD sufferers end up ever more isolated from people who are trying mightily to save them, and more connected to people who drive them to self-harm.

Enter chatbots. Ready access to eager-to-please LLMs at every hour of the day or night means that you don't even have to find a forum full of people with the same delusion as you, nor do you have to wait for a reply to your anguished message. The LLM is always there, ready to fire back a "yes-and" improv-style response that drives you deeper and deeper into delusion:

https://pluralistic.net/2025/09/17/automating-gang-stalking-delusion/

It's possible that there are delusions that are even more rare than GSD or Morgellons that AI is surfacing. Imagine if you were prone to fleeting delusional beliefs (and whomst amongst us hasn't experienced the bedrock certainty that we put something down right here, only to find it somewhere else and not have any idea how that happened?). Under normal circumstances, these cognitive misfires might be fleeting moments of discomfort, quickly forgotten. But if you are already habituated to asking a chatbot to explain things you don't understand, it might well yes-and you into an internally consistent, entirely wrong belief – that is, a delusion.

Think of how often you noticed "42" after reading Hitchhiker's Guide to the Galaxy, or how many times "6-7" crops up once you've had a baseline of exposure to adolescents. Now imagine that an obsequious tale-spinner was sitting at your elbow, helpfully noting these coincidences and fitting them into a folie à deux mystery play that projected a grand, paranoid narrative onto the world. Every bit of confirming evidence is lovingly cataloged; all disconfirming evidence is discounted or ignored. It's fully automated luxury QAnon – a self-baking conspiracy that harnesses an AI in service to driving you deeper and deeper into madness.

That's the original "AI psychosis" that the term was coined to describe. As Sam Cole notes in her excellent "How to Talk to Someone Experiencing 'AI Psychosis,'" mental health practitioners are not entirely comfortable with the "psychosis" label:

https://www.404media.co/ai-psychosis-help-gemini-chatgpt-claude-chatbot-delusions/

"Psychosis" here is best understood as an analogy, not a diagnosis, and, as already noted, there is a large cohort of very persistent people who make it their business to eradicate analogies that make reference to medical or health-related phenomena. But these analogies are very hard to kill, because they do useful work in connecting unfamiliar, novel phenomena with things we already understand.

It's true that these analogies can be stigmatizing, but they needn't be. As someone with an autoimmune disorder, I am not bothered by people who would also describe ICE as an autoimmune disorder in which antibodies attack the host, threatening its very life. I am capable of understanding "autoimmune disorder" as referring to both a literal, medical phenomenon; and a figurative, political one. I have never found myself confusing one for the other.

"AI psychosis" is one of those very useful analogies, and you can tell, because "AI psychosis" has found even more metaphorical uses, describing other bad beliefs about AI. Today, I want to talk about three of these AI psychoses, and how they relate to one another: the investor AI delusion, the boss AI delusion, and the critic AI delusion.

Let's start with the investors' delusion. AI started as an investment project from the usual suspects: venture capitalists, private wealth funds, and tech monopolists with large cash reserves and ready access to loans during the cheap credit bubble. These entities are accustomed to making large, long-shot bets, and they were extremely motivated to find new markets to grow into and take over.

Growing companies need to keep growing, but not because they have "the ideology of a tumor." Growing companies' imperative to keep growing isn't ideological at all – it's material. Growth companies' stock trades at a high price-to-earnings (PE) multiple, which means they can use their stock like money when buying other companies and hiring key employees.

But once those companies' growth slows, investors revalue their shares at a much lower PE multiple, which makes individual executives at the company (who are primarily paid in stock) personally much poorer, prompting their departure, while simultaneously kneecapping the company's ability to grow through acquisition and hiring, because a company with a falling share price has to buy things with cash, not stock. Companies can make more of their own stock on demand, simply by typing zeroes into a spreadsheet – but they can only get cash by convincing a customer, creditor or investor to part with some of their own:

https://pluralistic.net/2025/03/06/privacy-last/#exceptionally-american
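To make the stock-as-currency point concrete, here's a toy Python sketch (all figures are hypothetical, invented for illustration): the same annual earnings, valued at a growth multiple versus a post-slowdown multiple, translate into very different acquisition budgets when the purchase is paid for in newly issued shares.

```python
# Hypothetical numbers: how a PE re-rating shrinks stock-denominated buying power.

def market_cap(earnings: float, pe_multiple: float) -> float:
    """Market capitalization implied by annual earnings and a PE multiple."""
    return earnings * pe_multiple

earnings = 10e9  # $10B in annual earnings, identical in both scenarios

as_growth = market_cap(earnings, pe_multiple=40)  # priced as a growth story
as_value = market_cap(earnings, pe_multiple=12)   # growth story over

# Suppose the company issues 5% of its shares to fund an acquisition:
print(f"Growth-story buying power: ${as_growth * 0.05 / 1e9:.0f}B")  # $20B
print(f"Re-rated buying power:     ${as_value * 0.05 / 1e9:.0f}B")   # $6B
```

Nothing about the business changed between the two lines – only the story investors believe about its future growth.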

Tech companies have absurdly large market shares – think of Google's 90% search dominance – and so they've spent 15+ years coming up with increasingly absurd gambits to convince investors that they will continue to grow by capturing other markets. At first, these companies claimed that they were on the verge of eating one another's lunches (Google would destroy Facebook with G+; Facebook would do the same to YouTube with the "pivot to video").

This has a real advantage in that one need not speculate about the potential value of Facebook's market – you only have to look at Facebook's quarterly reports. But the downside is that Facebook has its own ideas about whether Google is going to absorb its market, and they are prone to forcefully make the case that this won't happen.

After a few tumultuous years, tech giants switched to promoting growth via speculative new markets – metaverse, web3, crypto, blockchain, etc. Speculative new markets are speculative, and the weakness of that is that no one can say how big those markets might be. But that's also the strength of those markets, because if no one can say how big those markets might be, then who's to say that they won't be very big indeed?

There's a different advantage to confining your concerns to imaginary things: imaginary things don't exist, so they don't contest your public statements about them, nor do they make demands on you. Think of how the right concerns itself with imaginary children (unborn babies, children in Wayfair furniture, children in nonexistent pizza parlor basements, children undergoing gender confirmation surgery). These are very convenient children to advocate for, since, unlike real children (hungry children, children killed in the Gaza genocide, children whose parents have been kidnapped by ICE, children whom Matt Gaetz and Donald Trump trafficked for sex, children in cages at the US border, trans kids driven to self-harm and suicide after being denied care), nonexistent children don't want anything from you and they never make public pronouncements about whether you have their best interests at heart.

But as the AI project has required larger and larger sums to keep the wheels spinning, the usual suspects have started to run out of money, and now AI hustlers are increasingly looking to tap public markets for capital. They want you to invest your pension savings in their growth narrative machine, and they're relying on the fact that you don't understand the technology to trick you into handing over your money.

There's a name for this: it's called the "Byzantine premium" – that's the premium that an investment opportunity attracts by being so complicated and weird that investors don't understand it, making them easy to trick:

https://pluralistic.net/2022/03/13/the-byzantine-premium/

AI is a terrible economic phenomenon. It has lost more money than any other project in human history – $600-700b and counting, with trillions more demanded by the likes of OpenAI's Sam Altman. AI's core assets – data centers and GPUs – last 2-3 years, though AI bosses insist on depreciating them over five years, which is unequivocal accounting fraud, a way to obscure the losses the companies are incurring. But it doesn't actually matter whether the assets need to be replaced every two years, every three years, or every five years, because all the AI companies combined are claiming no more than $60b/year in revenue (that number is grossly inflated). You can't reach the $700b break-even point at $60b/year in two years, three years, or five years.
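The arithmetic here is worth making explicit. A minimal sketch using the article's own round numbers (estimates, not audited figures) shows why stretching the depreciation schedule flatters reported losses without changing the underlying break-even math:

```python
# The article's round numbers: ~$700B sunk into data centers and GPUs,
# ~$60B/yr in combined claimed revenue (which the article calls inflated).
capex = 700e9
revenue = 60e9

for useful_life in (2, 3, 5):
    annual_depreciation = capex / useful_life
    # A longer assumed asset life -> smaller annual expense -> smaller
    # reported loss, even though the cash is already spent.
    print(f"{useful_life}-year life: "
          f"${annual_depreciation / 1e9:.0f}B/yr depreciation vs "
          f"${revenue / 1e9:.0f}B/yr revenue")
```

Even under the most generous five-year schedule, annual depreciation alone (~$140B) dwarfs total claimed revenue – before counting power, staff, or inference costs.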

Now, some exceptionally valuable technologies have attained profitability after an extraordinarily long period in which they lost money, like the web itself. But these turnaround stories all share a common trait: they had good "unit economics." Every new web user reduced the amount of money the web industry was losing. Every time a user logged onto the web, they made the industry more profitable. Every generation of web technology was more profitable than the last.

Contrast this with AI: every user – paid or unpaid – that an AI company signs up costs them money. Every time that user logs into a chatbot or enters a prompt, the company loses more money. The more a user uses an AI product, the more money that product loses. And each generation of AI tech loses more money than the generation that preceded it.
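A toy unit-economics model (per-user figures invented purely for illustration) makes the contrast vivid: with web-style economics, growth shrinks losses and eventually flips to profit; with AI-style economics, growth digs the hole deeper.

```python
# Hypothetical per-user economics contrasting web-style and AI-style growth.

def annual_result(users: int, revenue_per_user: float,
                  cost_per_user: float, fixed_costs: float) -> float:
    """Annual profit (positive) or loss (negative) for a given user base."""
    return users * (revenue_per_user - cost_per_user) - fixed_costs

# Web-style: each user brings in more than they cost to serve.
web_small = annual_result(1_000_000, 20, 5, fixed_costs=50e6)   # -$35M
web_big = annual_result(10_000_000, 20, 5, fixed_costs=50e6)    # +$100M

# AI-style: serving each user costs more than they bring in.
ai_small = annual_result(1_000_000, 20, 35, fixed_costs=50e6)   # -$65M
ai_big = annual_result(10_000_000, 20, 35, fixed_costs=50e6)    # -$200M
```

Ten-fold growth turns the web-style business profitable and makes the AI-style business lose three times as much – which is the whole point about unit economics.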

To make AI look like a good investment, AI bosses and their pitchmen have to come up with a story that somehow addresses this phenomenon. Part of that story relies on the Byzantine premium: "Sure, you don't understand AI, but why would all these smart people commit hundreds of billions of dollars to AI if they weren't confident that they would make a lot of money from it?" In other words, "A pile of shit this big must have a pony underneath it somewhere!"

This is a great narrative trick, because it turns losing money into a virtue. If you've convinced a mark that the upside of the project is a multiple of the capital committed to it, then the more money you're losing, the better the investment seems.

So this is the first AI psychosis: the idea that we should bet the world's economy on these highly combustible GPUs and data centers with terrible unit economics and no path to break-even, much less profitability.

Investors' AI psychosis is cross-fertilized by our second form of AI psychosis, which is the bosses' AI psychosis: bosses' bottomless passion for firing workers and replacing them with automation.

Bosses are easy marks for anything that lets them fire workers. After all, the ideal firm is one that charges infinity for its outputs (hence the market's passion for monopolies) and pays nothing for its inputs (e.g. "academic publishing").

This means that the fact that a chatbot can't do your job isn't nearly as important as the fact that an AI salesman can convince your boss to fire you and replace you with a chatbot that can't do your job. Bosses keep replacing humans with defective chatbots, with catastrophic consequences, like Amazon's cloud service crashing:

https://www.techradar.com/pro/recent-aws-outages-blamed-on-ai-tools-at-least-two-incidents-took-down-amazon-services

Bosses are haunted by the ego-shattering knowledge that they aren't in the driver's seat: if the boss doesn't show up for work, everything continues to operate just fine. If the workers all stay home, the business grinds to a halt. In their secret hearts, bosses know that they're not in the driver's seat – they're in the back seat, playing with a Fisher Price steering wheel. AI dangles the possibility of wiring that toy steering wheel directly into the drive-train, so that the company's products go directly from the boss's imagination to the public without the boss having to ask people who know how to do things to execute their cockamamie schemes:

https://pluralistic.net/2026/01/05/fisher-price-steering-wheel/#billionaire-solipsism

This is a powerfully erotic proposition for bosses, the realization of the libidinal fantasy in which sky-high CEO salaries can be justified by the fact that everything that happens in the company is truly, directly attributable to the boss. Like the delusional person who can be led deeper and deeper into a fantasy world by a chatbot, a boss's delusion that they are worth thousands of times more than their workers makes them easy prey for a chatbot salesman that pushes them deeper and deeper into that delusion, until they bet the whole company on it.

Now we come to the third and final novel AI psychosis, the critics' psychosis, that AI is an abnormally terrible technology. This is a species of "criti-hype," which is when critics repeat the hyped-up claims of the companies they're targeting, but as criticism (think of all the people who believed and uncritically amplified the ad-tech industry's self-serving claims of being able to control our minds by "hacking our dopamine loops"):

https://peoples-things.ghost.io/youre-doing-it-wrong-notes-on-criticism-and-technology-hype/

AI is a normal technology. The people who made it, and the circumstances under which it was made, are normal. Its uses and abuses are normal. That doesn't make it good, but it does make it unexceptional:

https://www.normaltech.ai/p/a-guide-to-understanding-ai-as-normal

The exceptional part of AI isn't the technology, it's the bubble. There's nothing about AI per se that makes it exceptionally prone to devouring our natural resources, or endangering our jobs, or abetting war crimes. That's all because of the bubble, and the bubble relies on the idea that AI is exceptional, not normal. Repeating and amplifying claims about AI's exceptionalism helps the AI companies, because they rely on exceptionalism to keep the capital flowing and the bubble inflating.

AI is a normal technology. It's normal for a technology to be invented by unlikable and immoral people and institutions. Not every technology is invented by a shitty person, but shitty people and institutions are well represented (and possibly disproportionately represented) in the history of technology. Charles Babbage invented the idea of general purpose computers as a way of improving labor control on slave plantations:

https://logicmag.io/supa-dupa-skies/origin-stories-plantations-computers-and-industrial-control/

Ada Lovelace wasn't interested in making slavery more efficient, but neither was she driven by pure scientific inquiry. She invented programming to help her bet on the horses (it didn't work):

https://en.wikipedia.org/wiki/Ada_Lovelace

The silicon transistor was co-invented by William Shockley, one of history's great pieces of shit, a eugenicist who was so committed to exterminating all non-white people that he never managed to ship a commercial product:

https://pluralistic.net/2021/10/24/the-traitorous-eight-and-the-battle-of-germanium-valley/

IBM built the tabulators for Auschwitz. HP were the Pentagon's go-to contractors for any tech project that was so dirty no one else would touch it. We only got Unix because Bell Labs committed so many antitrust violations that they weren't allowed to productize it themselves.

It's not exceptional for AI companies to have terrible, piece-of-shit founders. It's not exceptional for these companies to participate in war crimes. It's not exceptional for these founders to want to pauperize workers. It's not exceptional for these companies to lie about their products, bankrupt naive investors through stock swindles, and pitch themselves to investors as a way for capital to win the class war.

None of this means that AI companies are good, it just means that they are not exceptional. And because they aren't exceptional, the same dynamics that govern other technologies apply to AI companies' products. Their utility is a function of what they do, not who made them or how they were sold. The utility of AI products is based on whether people find ways to use them that make them happy – not whether the people who made those technologies are good people, or whether the funding for the technology was fraudulent, or whether other people use the technology to harm others.

Automation comes in two flavors: there's automation that produces things more quickly (and hence more cheaply), and there's automation that makes better things. Generally, capital prefers to use automation to increase the pace at which things are made, while workers prefer to use automation to improve the quality of the things they make.

Think of a hobbyist who pines for an automated soldering machine. That hobbyist longs to make board-level repairs and modifications that require precision that humans struggle to match. The hobbyist is a centaur, using a machine to help achieve human goals.

Now think of a factory owner who invests in an assembly line of the same machines: that boss wants to fire a bunch of workers and make the survivors of the purge take up the slack. The boss wants to achieve corporate goals, to "sweat the assets," making maximum use of the soldering machines. The pace at which the line runs is set to be the maximum that the workers can match. The workers on the line are "reverse centaurs" – humans who are pressed into service as peripherals for machines, at a pace that is constantly at the very limit of their endurance.

Reverse centaurs are trapped in capital's automation plan – to make everything faster and cheaper. But that's the result of bosses. It's not the result of technology.

This is not to say that technology is apolitical. Only a fool would imagine that there are no politics embedded in technology. But you'd be a far greater fool if you asserted that the politics of a technology were simple, clear, and immutable.

Nor is this to say that when workers get to decide when and how to use technology, we will always make wise decisions. Perhaps the hobbyist who opts for an automated soldering machine will lose out on the opportunity to refine their hand-eye coordination in ways that will have many other benefits to their practice.

Or perhaps attempting to improve their hand-eye coordination to that point will wreck so many projects that they grow discouraged and give up altogether. Others' choices that seem unwise to you might have perfectly good explanations that aren't visible from your perspective. Ultimately, the world is a better place when workers get to decide which parts of their jobs they want to automate and which parts they want to lean into.

This is an extremely normal technological situation: for a new technology to be promoted and productized by shitty people who have grandiose goals that would be apocalyptic should they ever come to pass – and for some people to find uses of that technology that are nevertheless beneficial to them and their communities.

The belief that AI is an exceptionally bad technology (as opposed to an exceptionally bad economic bubble) drives AI critics into their own absurd culs-de-sac.

There are many, many skilled and reliable practitioners of technical and creative trades who've found extremely reasonable, normal ways in which AI has automated some part of their job. They aren't hyperventilating about how AI has changed everything forever and the world is about to end. They're not mistaking AI for god, or a therapist.

They're just treating AI like a normal technology, like a plugin. Programmers' tools have acquired useful automation plugins at regular intervals for decades – syntax checkers, advanced debuggers, automated wireframe utilities. For many programmers – including several of my acquaintance, whom I know to be both thoughtful and skilled – AI is another plugin, one they find useful enough to be modestly enthusiastic about.

It is nuts to deny the experiences these people are having. They're not vibe-coding mission-critical AWS modules. They're not generating tech debt at scale:

https://pluralistic.net/2026/01/06/1000x-liability/#graceful-failure-modes

They're just adding another automation tool to a highly automated practice, and using it when it makes sense. Perhaps they won't always choose wisely, but that's normal too. There are plenty of ways that pre-AI automation tools for software development led programmers astray. A skilled, centaur-configured programmer learns from experience which automation tools they should trust, and under which circumstances, and guides themselves accordingly.

It's only the belief that AI is exceptional – exceptionally wicked, but exceptional nevertheless – that leads critics to decide that they are a better judge of whether a skilled worker should or should not use certain automation tools, and to make that judgment not based on the quality of the work in question, but on the moral character of the tool itself.

AI is just normal. The bubble is what drives the environmental costs. If the only LLMs were a couple of big data-centers at Sandia National Labs, no one would be particularly exercised about the water and energy demands they represented. Big scientific endeavors – from NASA launches to the Large Hadron Collider – often come with immense material and energy needs. The bubble causes massive, wasteful, duplicative efforts that chase diminishing returns through farcical scale.

Nor are AI bros exceptional. The stock swindlers who've blown $700b (and counting) on AI aren't cyber-Svengalis with the power to cloud investors' minds. They're just running the same con that tech has been running ever since its returns started to taper off and survival became a matter of ginning up enthusiasm for speculative new ventures.

That doesn't mean those people aren't awful shits. Fuck those people. It just means that they're normal awful shits. We don't have to burnish their reputations by elevating them to the status of archdemons who taint everything they touch with unwashable sin. Sam Altman isn't Lex Luthor. He's just a conman:

https://open.substack.com/pub/garymarcus/p/breaking-sam-altmans-greed-and-dishonesty?r=8tdk6&utm_medium=ios

The fact that these bros are just normal assholes means that we don't have to treat everything they do as a sin. Scraping the entirety of human knowledge to make something new out of it isn't "stealing." Depending on why you're doing it, it can be archiving, or making a search engine:

https://pluralistic.net/2023/09/17/how-to-think-about-scraping/

Too many AI critics have started from the undeniable fact that these guys are odious creeps who boast about wanting to ruin the lives of workers and then worked backwards to find the sin. The sin isn't performing mathematical analysis on all the books ever written. That's actually kind of awesome. It's the kind of thing Aaron Swartz used to do – like when he ingested every law review article ever published and used it to trace the way that oil companies' donations to law schools resulted in profs writing articles about why Big Oil can't be held liable for trashing the planet:

https://web.archive.org/web/20111129181943/https://www.stanfordlawreview.org/print/article/punitive-damages-remunerated-research-and-legal-profession

AI bros' sin isn't making copies of published works. Hammering servers with badly behaved crawlers is a dick move and fuck them for doing it. But if these jerks made well-behaved scrapers that placed no abnormal demand on servers, it's not like their critics would say, "Oh, I guess it's fine, then."

AI bros' sin is running an economy-destroying, planet-wrecking stock swindle whose raison d'être is pauperizing every worker and transferring 100% of the dying world's wealth to a small cadre of morbidly wealthy, eminently guillotineable plutes. Making plugins? That's not exceptional. It's just normal.

The fact that something is normal doesn't make it good. There's a lot of normal things that I'd like to throw into the Sun. But we don't do ourselves any favors when we amplify our enemies' self-aggrandizing narratives by accusing them of being exceptional, even when we mean "exceptionally evil." They're normal assholes.

Fuck 'em.

(Image: ZeptoBars, CC BY 3.0, modified)


Hey look at this (permalink)



A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#15yrsago Notorious financier gets a “super-injunction” prohibiting the press from revealing that he is a banker https://www.telegraph.co.uk/finance/newsbysector/banksandfinance/8373535/Sir-Fred-Goodwin-former-RBS-chief-obtains-super-injunction.html

#10yrsago Shortly after her death, Harper Lee’s heirs kill cheap paperback edition of To Kill a Mockingbird https://newrepublic.com/article/131400/mass-market-edition-kill-mockingbird-dead

#10yrsago Web security company breached, client list (including KKK) dumped, hackers mock inept security https://arstechnica.com/information-technology/2016/03/after-an-easy-breach-hackers-leave-tips-when-running-a-security-company/

#10yrsago Microsoft spams corporate users with messages denigrating their IT departments https://web.archive.org/web/20160309195537/https://www.infoworld.com/article/3042397/microsoft-windows/admins-beware-domain-attached-pcs-are-sprouting-get-windows-10-ads.html

#10yrsago Cycle and Recycle: gorgeous photos of the European recycling process https://www.wired.com/2016/03/paul-bulteel-cycle-recyle-europe-recycles-tons-of-waste-and-its-pretty-gorgeous/

#10yrsago Fellowships for “Robin Hood” hackers to help poor people get access to the law https://web.archive.org/web/20160304221459/https://labs.robinhood.org/fellowship/

#10yrsago 3D printed battle-armor for cats https://web.archive.org/web/20160311224139/http://sinkhacks.com/making-3d-printed-cat-armor/

#10yrsago Great moments in the history of black science fiction https://web.archive.org/web/20160308034421/http://www.fantasticstoriesoftheimagination.com/a-crash-course-in-the-history-of-black-science-fiction/

#1yrago Daniel Pinkwater's "Jules, Penny and the Rooster" https://pluralistic.net/2025/03/11/klong-you-are-a-pickle-2/#martian-space-potato


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, pounding the podium.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Staehle covers.

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026
  • "Enshittification: Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), FirstSecond, 2026

  • "The Post-American Internet," a geopolitical sequel of sorts to Enshittification, Farrar, Straus and Giroux, 2027

  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2027

  • "The Memex Method," Farrar, Straus, Giroux, 2027



Colophon (permalink)

Today's top sources:

Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America (1081 words today, 48461 total)

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.
  • "The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.

  • A Little Brother short story about DIY insulin PLANNING


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Bluesky (no ads, possible tracking and data-collection):

https://bsky.app/profile/doctorow.pluralistic.net

Medium (no ads, paywalled):

https://doctorow.medium.com/

Twitter (mass-scale, unrestricted, third-party surveillance and advertising):

https://twitter.com/doctorow

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X

PDF

11.03.2026 à 20:11

Pluralistic: AI "journalists" prove that media bosses don't give a shit (11 Mar 2026)

Cory Doctorow

Texte intégral (5297 mots)


Today's links



A cutaway of a rocky underground, with a cylindrical brick cistern. Trapped in the prison is a 16th century drudge seated before a wheel on which rest a series of books that rotate along with the wheel.

AI "journalists" prove that media bosses don't give a shit (permalink)

Ed Zitron's a fantastic journalist, capable of turning a close read of AI companies' balance-sheets into an incandescent, exquisitely informed, eye-wateringly profane rant:

https://www.wheresyoured.at/the-ai-bubble-is-an-information-war/

That's "Ed, the financial sleuth." But Ed has another persona, one we don't get nearly enough of, which I delight in: "Ed the stunt journalist." For example, in 2024, Ed bought Amazon's bestselling laptop, "a $238 Acer Aspire 1 with a four-year-old Celeron N4500 Processor, 4GB of DDR4 RAM, and 128GB of slow eMMC storage" and wrote about the experience of using the internet with this popular, terrible machine:

https://www.wheresyoured.at/never-forgive-them/

It sucked, of course, but it sucked in a way that the median tech-informed web user has never experienced. Not only was this machine dramatically underpowered, but its defaults were set to accept all manner of CPU-consuming, screen-filling ad garbage and bloatware. If you or I had this machine, we would immediately hunt down all those settings and nuke them from orbit, but the kind of person who buys a $238 Acer Aspire from Amazon is unlikely to know how to do any of that and will suffer through it every day, forever.

Normally the "digital divide" refers to access to technology, but as access becomes less and less of an issue, the real divide is between people who know how to defend themselves from the cruel indifference of technology designers and people who are helpless before their enshittificatory gambits.

Zitron's stunt stuck with me because it's so simple and so apt. Every tech designer should be forced to use a stock configuration Acer Aspire 1 for a minimum of three hours/day, just as every aviation CEO should be required to fly basic coach at least one out of three flights (and one of two long-haul flights).

To that, I will add: every news executive should be forced to consume the news in a stock browser with no adblock, no accessibility plugins, no Reader View, none of the add-ons that make reading the web bearable:

https://pluralistic.net/2026/03/07/reader-mode/#personal-disenshittification

But in all honesty, I fear this would not make much of a difference, because I suspect that the people who oversee the design of modern news sites don't care about the news at all. They don't read the news, they don't consume the news. They hate the news. They view the news as a necessary evil within a wider gambit to deploy adware, malware, pop-ups, and auto-play video.

Rawdogging a Yahoo News article means fighting through a forest of pop-ups, pop-unders, autoplay video, interrupters, consent screens, modal dialogs, modeless dialogs – a blizzard of news-obscuring crapware that oozes contempt for the material it befogs. Irrespective of the words and icons displayed in these DOM objects, they all carry the same message: "The news on this page does not matter."

The owners of news services view the news as a necessary evil. They aren't a news organization: they are an annoying pop-up and cookie-setting factory with an inconvenient, vestigial news entity attached to it. News exists on sufferance, and if it was possible to do away with it altogether, the owners would.

That turns out to be the defining characteristic of work that is turned over to AI. Think of the rapid replacement of customer service call centers with AI. Long before companies shifted their customer service to AI chatbots, they shifted the work to overseas call centers where workers were prohibited from diverging from a script that made it all but impossible to resolve your problems:

https://pluralistic.net/2025/08/06/unmerchantable-substitute-goods/#customer-disservice

These companies didn't want to do customer service in the first place, so they sent the work to India. Then, once it became possible to replace Indian call center workers who weren't allowed to solve your problems with chatbots that couldn't resolve your problems, they fired the Indian call center workers and replaced them with chatbots. Ironically, many of these chatbots turn out to be call center workers pretending to be chatbots (as the Indian tech joke goes, "AI stands for 'Absent Indians'"):

https://pluralistic.net/2024/01/29/pay-no-attention/#to-the-little-man-behind-the-curtain

"We used an AI to do this" is increasingly a way of saying, "We didn't want to do this in the first place and we don't care if it's done well." That's why DOGE replaced the call center reps at US Customs and Immigration with a chatbot that tells you to read a PDF and then disconnects the call:

https://pluralistic.net/2026/02/06/doge-ball/#n-600

The Trump administration doesn't want to hear from immigrants who are trying to file their bewildering paperwork correctly. Incorrect immigration paperwork is a feature, not a bug, since it can be refined into a pretext to kidnap someone, imprison them in a gulag long enough to line the pockets of a Beltway Bandit with a no-bid contract to operate an onshore black site, and then deport them to a country they have no connection with, generating a fat payout for another Beltway Bandit with the no-bid contract to fly kidnapped migrants to distant hellholes.

If the purpose of a customer service department is to tell people to go fuck themselves, then a chatbot is obviously the most efficient way of delivering the service. It's not just that a chatbot charges less to tell people to go fuck themselves than a human being – the chatbot itself means "go fuck yourself." A chatbot is basically a "go fuck yourself" emoji. Perhaps this is why every AI icon looks like a butthole:

https://velvetshark.com/ai-company-logos-that-look-like-buttholes

So it's no surprise that media bosses are so enthusiastic about replacing writers with chatbots. They hate the news and want it to go away. Outsourcing the writing to AI is just another way of devaluing it, adjacent to the existing enshittification that sees the news buried in popups, autoplays, consent dialogs, interrupters and the eleventy-million horrors that a stock browser with default settings will shove into your eyeballs on behalf of any webpage that demands them:

https://pluralistic.net/2024/05/07/treacherous-computing/#rewilding-the-internet

Remember that summer reading list that Hearst distributed to newspapers around the country, which turned out to be stuffed with "hallucinated" titles? At first, the internet delighted in dunking on Marco Buscaglia, the writer whose byline the list ran under. But as 404 Media's Jason Koebler unearthed, Buscaglia had been set up to fail, tasked with writing most of a 64-page insert that would have normally been the work of dozens of writers, editors and fact checkers, all on his own:

https://www.404media.co/chicago-sun-times-prints-ai-generated-summer-reading-list-with-books-that-dont-exist/

When Hearst hires one freelancer to do the work of dozens, they are saying, "We do not give a shit about the quality of this work." It is literally impossible for any writer to produce something good under those conditions. The purpose of Hearst's syndicated summer guide was to bulk out the newspapers that had been stripmined by their corporate owners, slimmed down to a handful of pages that are mostly ads and wire-service copy. The mere fact that this supplement was handed to a single freelancer blares "Go fuck yourself" long before you clap eyes on the actual words printed on the pages.

The capital class is in the grips of a bizarre form of AI psychosis: the fantasy of a world without people, where any fool idea that pops into a boss's head can be turned into a product without having to negotiate its creation with skilled workers who might point out that your idea is pretty fucking stupid:

https://pluralistic.net/2026/01/05/fisher-price-steering-wheel/#billionaire-solipsism

For these AI boosters, the point isn't to create an AI that can do the work as well as a person – it's to condition the world to accept the lower-quality work that will come from a chatbot. Rather than reading a summer reading list of actual books, perhaps you could be satisfied with a summer reading list of hallucinated books that are at least statistically probable book-shaped imaginaries?

The bosses dreaming up use-cases for AI start from a posture of profound and proud ignorance of how workers who do useful things operate. They ask themselves, "If I was a ______, how would I do the job?" and then they ask an AI to do that, and declare the job done. They produce utility-shaped statistical artifacts, not utilities.

Take Grammarly, a company that offers statistical inferences about likely errors in your text. Grammar checkers aren't a terrible idea on their face, and I've heard from many people who struggle to express themselves in writing (either because of their communications style, or because they don't speak English as a first language) for whom apps like Grammarly are useful.

But Grammarly has just rolled out an AI tool that is so obviously contemptuous of writing that they might as well have called it "Go fuck yourself, by Grammarly." The new product is called "Expert Review," and it promises to give you writing advice "inspired" by writers whose writing they have ingested. I am one of these virtual "writing teachers" you can pay Grammarly for:

https://www.theverge.com/ai-artificial-intelligence/890921/grammarly-ai-expert-reviews

This is not how writing advice works. When I teach the Clarion Science Fiction and Fantasy Writers' workshop, my job isn't to train the students to produce work that is strongly statistically correlated with the sentence structure and word choices in my own writing. My job – the job of any writing teacher – is to try and understand the student's writing style and artistic intent, and to provide advice for developing that style to express that intent.

What Grammarly is offering isn't writing advice, it's stylometry, a computational linguistics technique for evaluating the likelihood that two candidate texts were written by the same person. Stylometry is a very cool discipline (as is adversarial stylometry, a set of techniques to obscure the authorship of a text):

https://en.wikipedia.org/wiki/Stylometry
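The core move in stylometry can be sketched in a few lines: reduce each text to a frequency profile over common "function words" (the classic signal in authorship attribution) and measure how similar the two profiles are. This is a toy illustration only – the word list and scoring are invented for the example, and no claim is made about how Grammarly's actual pipeline works:

```python
from collections import Counter
import math

# A deliberately tiny stand-in for the function-word lists used in
# real authorship-attribution work (hypothetical, for illustration).
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "it", "is", "for"]

def profile(text):
    """Relative frequency of each function word in the text."""
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine(u, v):
    """Cosine similarity between two frequency vectors (0.0 if either is empty)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    if norm_u == 0 or norm_v == 0:
        return 0.0
    return dot / (norm_u * norm_v)

def same_author_score(text_a, text_b):
    """Score in [0, 1]: how closely the two texts' function-word profiles match."""
    return cosine(profile(text_a), profile(text_b))
```

Note what this measures: word-choice statistics, nothing more. Two texts by the same author will tend to score high, but so will two texts that merely share surface habits – which is exactly why nudging a student's prose toward a target author's frequency profile teaches them nothing about that author's craft.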

But stylometry has nothing to do with teaching someone how to write. Even if you want to write a pastiche in the style of some writer you admire (or want to send up), word choices and sentence structure are only incidental to capturing that writer's style. To reduce "style" to "stylometry" is to commit the cardinal sin of technical analysis: namely, incinerating all the squishy qualitative aspects that can't be readily fed into a model and doing math on the resulting dubious quantitative residue:

https://locusmag.com/feature/cory-doctorow-qualia/

If you wanted to teach a chatbot to teach writing like a writer, you would – at a minimum – have to train that chatbot on the instruction that writer gives, not the material that writer has published. Nor can you infer how a writer would speak to a student by producing a statistical model of the finished work that writer has published. "Published work" has only an incidental relationship to "pedagogical communication."

Critics of Grammarly are mostly focused on the effrontery of using writers' names without their permission. But I'm not bothered by that, honestly. So long as no one is being tricked into thinking that I endorsed a product or service, you don't need my permission to say that I inspired it (even if I think it's shit).

What I find absolutely offensive about Grammarly is not that they took my name in vain, but rather, that they reduced the complex, important business of teaching writing to a statistical exercise in nudging your work into a word frequency distribution that hews closely to the average of some writer's published corpus. This is Grammarly's fraud: not telling people that they're being "taught by Cory Doctorow," but rather, telling people that they are being "taught" anything.

Reducing "teaching writing" to "statistical comparisons with another writer's published work" is another way of saying "go fuck yourself" – not to the writers whose identities Grammarly has hijacked, but to the customers they are tricking into using this terrible, substandard, damaging product.

Preying on aspiring writers is a grift as old as the publishing industry. The world is full of dirtbag "story doctors," vanity presses, fake literary agents and other flimflam artists who exploit people's natural desire to be understood to steal from them:

https://writerbeware.blog/

Grammarly is yet another company for whom "AI" is just a way to lower quality in the hopes of lowering expectations. For Grammarly, helping writers with their prose is an irritating adjunct to the company's main business of separating marks from their money.

In business theory, the perfect firm is one that charges infinity for its products and pays zero for its inputs (you know, "scholarly publishing"). For bosses, AI is a way to shift their firm towards this ideal.

In this regard, AI is connected to the long tradition of capitalist innovation, in which new production efficiencies are used to increase quantity at the expense of quality. This has been true since the Luddite uprising, in which skilled technical workers who cared deeply about the textiles they produced using complex machines railed against a new kind of machine that produced manifestly lower quality fabric in much higher volumes:

https://pluralistic.net/2023/09/26/enochs-hammer/#thats-fronkonsteen

It's not hard to find credible, skilled people who have stories about using AI to make their work better. Elsewhere, I've called these people "centaurs" – human beings who are assisted by machines. These people are embracing the socialist mode of automation: they are using automation to improve quality, not quantity.

Whenever you hear a skilled practitioner talk about how they are able to hand off a time-consuming, low-value, low-judgment task to a model so they can focus on the part that means the most to them, you are talking to a centaur. Of course, it's possible for skilled practitioners to produce bad work – some of my favorite writers have published some very bad books indeed – but that isn't a function of automation, that's just human fallibility.

A reverse centaur (a person conscripted to act as a peripheral to a machine) is trapped by the capitalist mode of automation: quantity over quality. Machines work faster and longer than humans, and the faster and harder a human can be made to work, the closer the firm can come to the ideal of paying zero for its inputs.

A reverse centaur works for a machine that is set to run at the absolute limit of its human peripheral's capability and endurance. A reverse centaur is expected to produce with the mechanical regularity of a machine, catching every mistake the machine makes. A reverse centaur is the machine's accountability sink and moral crumple-zone:

https://estsjournal.org/index.php/ests/article/view/260

AI is a normal technology, just another set of automation tools that have some uses for some users. The thing that makes AI signify "go fuck yourself" isn't some intrinsic factor of large language models or transformers. It's the capitalist mode of automation, increasing quantity at the expense of quality. Automation doesn't have to be a way to reduce expectations in the hopes of selling worse things for more money – but without some form of external constraint (unions, regulation, competition), that is inevitably how companies will wield any automation, including and especially AI.


Hey look at this (permalink)



A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#15yrsago History of the Disney Haunted Mansion’s stretching portraits https://longforgottenhauntedmansion.blogspot.com/2011/03/many-faces-ofthe-other-stretching.html

#15yrsago Readers Against DRM (logo) https://web.archive.org/web/20110311213843/https://readersbillofrights.info/RAD

#15yrsago Lost Souls: Audio adaptation of a classic vampire novel https://memex.craphound.com/2011/03/10/lost-souls-audio-adaptation-of-a-classic-vampire-novel/

#15yrsago Time's appraisal of the first WorldCon https://web.archive.org/web/20080906184034/https://time.com/time/magazine/article/0,9171,761661-1,00.html

#15yrsago Insipid thrift-store landscapes improved with monsters https://imgur.com/involuntary-collaborations-i-buy-other-peoples-landscape-paintings-yard-sales-goodwill-put-monsters-them-r-pics-2780-march-11-2011-Oujbl

#15yrsago Fight 8-track piracy with this 1976 record sleeve https://www.flickr.com/photos/supraterra/5516574440/in/pool-41894168726@N01

#15yrsago Michigan Republicans create “financial martial law”; appointees to replace elected local officials https://web.archive.org/web/20120409124750/http://www.dailytribune.com/articles/2011/03/10/news/doc4d78d0d4d764d009636769.txt

#10yrsago Lawsuit reveals Obama’s DoJ sabotaged Freedom of Information Act transparency https://web.archive.org/web/20160309183758/https://news.vice.com/article/it-took-a-foia-lawsuit-to-uncover-how-the-obama-administration-killed-foia-reform

#10yrsago If the FBI can force decryption backdoors, why not backdoors to turn on your phone’s camera? https://www.theguardian.com/technology/2016/mar/10/apple-fbi-could-force-us-to-turn-on-iphone-cameras-microphones

#10yrsago Disgruntled IS defector dumps full details of tens of thousands of jihadis https://web.archive.org/web/20160330061315/https://news.sky.com/story/1656777/is-documents-identify-thousands-of-jihadis

#10yrsago Using distributed code-signatures to make it much harder to order secret backdoors https://arstechnica.com/information-technology/2016/03/cothority-to-apple-lets-make-secret-backdoors-impossible/

#10yrsago Open Source Initiative says standards aren’t open unless they protect security researchers and interoperability https://web.archive.org/web/20190822053758/https://www.eff.org/deeplinks/2016/03/-are-only-open-if-they-protect-security-and-interoperability

#1yrago Eggflation is excuseflation https://pluralistic.net/2025/03/10/demand-and-supply/#keep-cal-maine-and-carry-on


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, pounding the podium.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Staehle covers.

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026
  • "Enshittification: Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), First Second, 2026
  • "The Post-American Internet," a geopolitical sequel of sorts to Enshittification, Farrar, Straus and Giroux, 2027
  • "Unauthorized Bread," a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, First Second, 2027
  • "The Memex Method," Farrar, Straus and Giroux, 2027



Colophon (permalink)

Today's top sources:

Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America (1031 words today, 47410 total)

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.
  • "The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.
  • A Little Brother short story about DIY insulin. PLANNING.


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Bluesky (no ads, possible tracking and data-collection):

https://bsky.app/profile/doctorow.pluralistic.net

Medium (no ads, paywalled):

https://doctorow.medium.com/

Twitter (mass-scale, unrestricted, third-party surveillance and advertising):

https://twitter.com/doctorow

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X


10.03.2026 à 16:23

Pluralistic: Ad-tech is fascist tech (10 Mar 2026)

Cory Doctorow

Full text (4,475 words)


Today's links



Times Square, lit up by night. Every ad sprouts a giant CCTV bubble. A green smoke crawls over the landscape.

Ad-tech is fascist tech (permalink)

A core tenet of the enshittification hypothesis is that all the terrible stuff we're subjected to in our digital lives today is the result of foreseeable (and foreseen) policy choices, which created the enshittogenic policy environment in which the worst people's worst ideas make the most money:

https://pluralistic.net/2025/09/10/say-their-names/#object-permanence

Take commercial surveillance. Google didn't have to switch from content-based ads (which chose ads based on your search terms and the contents of webpages) to surveillance-based ads (which used dossiers on your searches, emails, purchases and physical movements to target ads to you, personally). The content-based ads made Google billions, but the company made a gamble that surveillance-based ads would make them more money.

That gamble had two parts: the first was that advertisers would pay more for surveillance ads. This is the part we all focus on – the collusion between people who want to sell us stuff and companies willing to spy on us to help them do it.

But the other half of the bet is far more important: namely, whether spying on us would cost Google anything. Would they face fines? Would users collect massive civil judgments over these privacy violations? Would Google face criminal charges? These are the critical questions, because even if advertisers are willing to pay a premium for surveillance ads, it only makes sense to collect that premium if the excess profit it represents is larger than the anticipated penalties for committing surveillance crimes.
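The bet described above is plain expected-value arithmetic: spy if the surveillance premium exceeds the fine discounted by the odds of ever paying it. A minimal sketch (all dollar figures and probabilities here are invented for illustration, not real Google numbers):

```python
# Illustrative expected-value sketch of the surveillance-ads bet.
# Every figure below is made up for illustration only.

def surveillance_bet_pays(premium: float, fine: float, p_enforcement: float) -> bool:
    """True if the extra profit from surveillance ads exceeds the
    expected penalty for collecting the data."""
    expected_penalty = fine * p_enforcement
    return premium > expected_penalty

premium = 1_000_000_000   # hypothetical extra annual profit from surveillance targeting
fine = 5_000_000_000      # a headline-grabbing nominal fine...
weak_enforcement = 0.01   # ...that is almost never actually levied

print(surveillance_bet_pays(premium, fine, weak_enforcement))  # True: spy away
print(surveillance_bet_pays(premium, fine, 0.5))               # False: deterred
```

With enforcement this weak, even a fine five times the premium deters nothing; the variable that matters is the probability of being punished, which is exactly the variable policymakers control.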

What's more, advertisers and Google execs all work for their shareholders, in a psychotic "market system" in which the myth of "fiduciary duty" is said to require companies to hurt us right up to the point where the harms they inflict on the world cost them more than the additional profits those harms deliver:

https://pluralistic.net/2024/09/18/falsifiability/#figleaves-not-rubrics

But the policymakers who ultimately determine whether the fines, judgments and criminal penalties outstrip the profits from spying – they work for us. They draw their paychecks from the public purse in exchange for safeguarding our interests, and they have manifestly failed at this.

Why did Google decide to start spying on us? For the same reason your dog licks its balls: because they could. The last consumer privacy law to make it out of the US Congress was a 1988 bill that banned video-store clerks from disclosing your VHS rentals:

https://pluralistic.net/2025/10/31/losing-the-crypto-wars/#surveillance-monopolism

And yes, the EU did pass a comprehensive consumer privacy law, but then abdicated any duty to enforce the GDPR, because US Big Tech companies pretend to be Irish, and Ireland is a crime-haven that lets the tax-evaders who maintain the fiction of a Dublin HQ break any EU law they find inconvenient:

https://pluralistic.net/2025/12/01/erin-go-blagged/#big-tech-omerta

The most important question for Google wasn't "Will advertisers pay more for surveillance targeting?" It was "Will lawmakers clobber us for spying on the whole internet?" And the answer to that second question was a resounding no.

Why did policymakers fail us? It's not much of a mystery, I'm afraid. Policymakers failed us because cops and spies hate privacy laws and lobby like hell against them. Cops and spies love commercial surveillance, because the private sector's massive surveillance dossiers are an off-the-books trove of warrantless surveillance data that the government can't legally collect. What's more, even if the spying was legal, buying private sector surveillance data is much cheaper than creating a public sector surveillance apparatus to collect the same info:

https://pluralistic.net/2023/08/16/the-second-best-time-is-now/#the-point-of-a-system-is-what-it-does

The harms of mass commercial surveillance were never hard to foresee. 20 years ago, Radar magazine commissioned a story from me about "the day Google turned evil," and I turned in "Scroogled," which was widely shared and reprinted:

https://web.archive.org/web/20070920193501/https://radaronline.com/from-the-magazine/2007/09/google_fiction_evil_dangerous_surveillance_control_1.php/

Radar is long gone, though it's back in the news now, thanks to the revelation that it was financed via Jeffrey Epstein as part of his plan to both control and loot magazines and newspapers:

https://www.reddit.com/r/Epstein/comments/142bufo/radar_magazine_lines_up_financing_published_2004/

But the premise of "Scroogled" lives on. 20 years ago, I wrote a story in which the bloated, paranoid, lawless DHS raided ad-tech databases of behavioral data in order to target people for secret arrests, extraordinary rendition, and torture.

It took a minute, but today, the DHS is paying data-brokers and ad-tech giants like Google for commercial surveillance data that it is using to feed the systems that automatically decide who will be kidnapped, rendered and tortured by ICE:

https://www.theregister.com/2026/01/27/ice_data_advertising_tech_firms/

I want to be clear here: I'm not claiming any prescience – quite the reverse in fact. My point is that it just wasn't very hard to see what would happen if we let the surveillance advertising industry run wild. Our lawmakers were warned. They did nothing. They exposed us to this risk, which was both foreseeable and foreseen.

Nor did the ICE/ad-tech alliance drop out of the sky. The fascist mobilization of ad-tech data for a racist pogrom is the latest installment in a series of extremely visible, worsening weaponizations of commercial surveillance. Just last year, I testified before Biden's CFPB at hearings on a rule to kill the data-broker industry, where we heard from the Pentagon about ad-tech targeting of American military personnel with gambling problems for location-based ads that reached them in their barracks:

https://pluralistic.net/2025/02/20/privacy-first-second-third/#malvertising

Biden's CFPB passed the data-broker-killing rule, but Trump and DOGE nuked it before it went into effect. Trump officials didn't offer any rationale for this, despite the fact that the testimony in that hearing included a rep from the AARP who described how data brokers let advertisers target seniors with signs of dementia (a core Trump voter bloc). I don't know for sure, but I have a sneaking suspicion that the Stephen Miller wing of the Trump coalition wanted data brokers intact so that they could use them to round up and imprison/torture/murder/enslave non-white people and Trump's political enemies.

Despite this eminently foreseeable outcome of the ad-tech industry, many perfectly nice people who made extremely nice salaries working in ad-tech are rather alarmed by this turn of events:

https://quoteinvestigator.com/2017/11/30/salary/

On AdExchanger, ad-tech exec David Nyurenberg writes, "The Privacy ‘Zealots’ Were Right: Ad Tech’s Infrastructure Was Always A Risk":

https://www.adexchanger.com/data-driven-thinking/the-privacy-zealots-were-right-ad-techs-infrastructure-was-always-a-risk/

Nyurenberg opens with a very important point – not only is ad-tech dangerous, it's also just not very good at selling stuff. The claims for the efficacy of surveillance advertising are grossly overblown, and used to bilk advertisers out of high premiums for a defective product:

https://truthset.com/the-state-of-data-accuracy-form/

There's another point that Nyurenberg doesn't make, but which is every bit as important: many of ad-tech's fiercest critics have abetted ad-tech's rise by engaging in "criti-hype" (repeating hype claims as criticism):

https://peoples-things.ghost.io/youre-doing-it-wrong-notes-on-criticism-and-technology-hype/

The "surveillance capitalism" critics who repeated tech's self-serving mumbo-jumbo about "hacking our dopamine loops" helped ad-tech cast itself in the role of mind-controlling evil sorcerers, which greatly benefited these self-styled Cyber-Rasputins when they pitched their ads to credulous advertisers:

https://pluralistic.net/HowToDestroySurveillanceCapitalism

Nyurenberg points to European privacy activists like Johnny Ryan and Max Schrems, who have chased American surveillance advertising companies out of the Irish courts and into other EU territories and even Europe's federal court, noting that these two (and many others!) have long warned the world about the way that this data would be weaponized. Johnny Ryan famously called ad-tech's "realtime bidding" system "the largest data breach ever recorded":

https://committees.parliament.uk/writtenevidence/453/html/

Ryan is referring to the fact that you don't even have to buy an ad to amass vast databases of surveillance data about internet users. When you land on a webpage, every one of the little boxes where an ad will eventually show up gets its own high-speed auction in which your private data is dangled before anyone with an ad-tech account, who gets to bid on the right to shove an ad into your eyeballs. The losers of that auction are supposed to delete all your private data that they get to see through this process, but obviously they do not.
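The mechanics Ryan is describing can be caricatured in a few lines. This is a deliberately simplified sealed-bid sketch, not the real OpenRTB protocol (bidder names, dossier fields, and bid values below are invented); the point it illustrates is that every participant receives the user's dossier at bid time, winner and losers alike:

```python
# Caricature of realtime bidding: the user's dossier is broadcast to every
# bidder, whether or not they win the ad slot. Simplified; not the OpenRTB spec.

class Bidder:
    def __init__(self, name: str, base_bid: float):
        self.name = name
        self.base_bid = base_bid
        self.seen_dossiers = []   # "supposed to delete" -- but retained

    def bid(self, dossier: dict) -> float:
        # A real DSP would price off the dossier; a flat bid suffices here.
        return self.base_bid

def run_auction(user_dossier: dict, bidders: list) -> Bidder:
    bids = []
    for bidder in bidders:
        # The breach: every participant sees the dossier before bidding.
        bidder.seen_dossiers.append(user_dossier)
        bids.append((bidder.bid(user_dossier), bidder))
    bids.sort(key=lambda pair: pair[0], reverse=True)
    return bids[0][1]   # only the winner serves an ad; everyone kept the data

dossier = {"location": "barracks", "interests": ["gambling"]}
bidders = [Bidder("dsp-a", 0.50), Bidder("dsp-b", 1.25), Bidder("dsp-c", 0.80)]
winner = run_auction(dossier, bidders)
print(winner.name)                                    # dsp-b takes the slot
print(all(b.seen_dossiers for b in bidders))          # True: all three saw the data
```

Multiply that by every ad slot on every page you visit, and the "delete after losing" honor system is the only thing standing between your dossier and thousands of companies.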

And Max Schrems has hollered from the mountaintops for years about the inevitability of authoritarian governments helping themselves to ad-tech data in order to suppress dissent and terrorize their political opposition:

https://www.bipc.com/european-high-court-finds-eu-us-privacy-shield-invalid

Nyurenberg says his friends in ad-tech are really upset that these (eminently foreseeable) outcomes have come to pass, but (he says), ad-tech bosses claim they have no choice but to collaborate with the Trump regime. After all, we've seen what Trump does to companies that don't agree to help him commit crimes:

https://apnews.com/article/anthropic-trump-pentagon-hegseth-ai-104c6c39306f1adeea3b637d2c1c601b

Nyurenberg closes by upbraiding his ad-tech peers for refusing to engage with their critics during the decades in which it would have been possible to do something to prevent this outcome. Ad-tech insiders dismissed privacy activists as unrealistic extremists who wanted to end advertising itself, and who unfairly accused ad-tech execs of building a repressive state surveillance system. In reality, critics were just pointing out the entirely foreseeable repressive state surveillance that ad-tech would end up enabling.

I'm quite pleased to see Nyurenberg calling for a reckoning among his colleagues, but I think there's plenty of blame to spread around. Sure, the ad-tech industry built this fascist dragnet – but a series of governments around the world let them do it. There was nothing inevitable about mass commercial surveillance. It doesn't even work very well! Mass commercial surveillance is the public-private partnership from hell, where cops and spies shielded ad-tech companies from regulation in exchange for those ad-tech companies selling cops and spies unlimited access to their databases.

Our policymakers are supposed to work for us. They failed us. Don't let anyone tell you that the greed and depravity of ad-tech are the sole causes of Trump's use of ad-tech to decide who to kidnap and send to a Salvadoran slave-labor camp. Policymakers should have known. They did know. They had every chance to stop this. They did not.

(Image: Jakub Hałun, CC BY 4.0; Myotus, CC BY-SA 4.0; Lewis Clarke, CC BY-SA 2.0; modified)


Hey look at this (permalink)



A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#20yrsago Toronto transit fans to Commission: withdraw anagram map lawsuit threat https://web.archive.org/web/20060407230329/http://www.ttcrider.ca/anagram.php

#15yrsago BBC newsteam kidnapped, hooded and beaten by Gadaffi’s forces https://www.bbc.com/news/world-africa-12695077

#15yrsago Activists seize Saif Gadaffi’s London mansion https://web.archive.org/web/20110310091023/https://london.indymedia.org/articles/7766

#10yrsago Spacefaring and contractual obligations: who’s with me? https://memex.craphound.com/2016/03/09/spacefaring-and-contractual-obligations-whos-with-me/

#10yrsago Home Depot might pay up to $0.34 in compensation for each of the 53 million credit cards it leaked https://web.archive.org/web/20160310041148/https://www.csoonline.com/article/3041994/security/home-depot-will-pay-up-to-195-million-for-massive-2014-data-breach.html

#10yrsago How to make a tiffin lunch pail from used tuna fish cans https://www.instructables.com/Tiffin-Box-from-Tuna-Cans/

#10yrsago “Water Bar” celebrates the wonder and fragility of tap water https://www.minnpost.com/cityscape/2016/03/world-s-first-full-fledged-water-bar-about-open-minneapolis/

#10yrsago French Parliament votes to imprison tech execs for refusal to decrypt https://arstechnica.com/tech-policy/2016/03/france-votes-to-penalise-companies-for-refusing-to-decrypt-devices-messages/

#10yrsago Anti-censorship coalition urges Virginia governor to veto “Beloved” bill https://ncac.org/incident/coalition-to-virginia-governor-veto-the-beloved-bill

#10yrsago Washington Post: 16 negative stories about Bernie Sanders in 16 hours https://www.commondreams.org/views/2016/03/08/washington-post-ran-16-negative-stories-bernie-sanders-16-hours



Colophon (permalink)

Today's top sources:

Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America (1038 words today, 46380 total)


