Pluralistic

Cory Doctorow's blog

Doctorow is a science fiction author, activist and journalist

His latest book is ATTACK SURFACE, a standalone adult sequel to LITTLE BROTHER. He is also the author of HOW TO DESTROY SURVEILLANCE CAPITALISM, nonfiction about conspiracies and monopolies; of RADICALIZED and WALKAWAY, science fiction for adults; of a YA graphic novel called IN REAL LIFE; and of young adult novels like HOMELAND, PIRATE CINEMA and LITTLE BROTHER. His first picture book was POESY THE MONSTER SLAYER (Aug 2020). He maintains a daily blog at Pluralistic.net. He works for the Electronic Frontier Foundation, is an MIT Media Lab Research Affiliate, is a Visiting Professor of Computer Science at Open University, a Visiting Professor of Practice at the University of North Carolina's School of Library and Information Science, and co-founded the UK Open Rights Group. Born in Toronto, Canada, he now lives in Los Angeles.

Published on 15.05.2024 at 18:36

Pluralistic: Even if you think AI search could be good, it won't be good (15 May 2024)


Today's links



A cane-waving carny barker in a loud checked suit and straw boater. His mouth has been replaced with the staring red eye of HAL9000 from Kubrick's '2001: A Space Odyssey.' He stands on a backdrop composed of many knobs, switches and jacks. The knobs have all been replaced with HAL's eye, too. Above his head hovers a search-box and two buttons reading 'Google Search' and 'I'm feeling lucky.' The countertop he leans on has been replaced with a code waterfall effect as seen in the credit sequences of the Wachowskis' 'Matrix' movies. Standing to his right on the countertop is a cartoon mascot with white gloves and booties and the head of a grinning poop emoji. He is striped with the four colors of the Google logo. To his left is a cluster of old mainframe equipment in miniature.

Even if you think AI search could be good, it won't be good (permalink)

The big news in search this week is that Google is continuing its transition to "AI search" – instead of typing in search terms and getting links to websites, you'll ask Google a question and an AI will compose an answer based on things it finds on the web:

https://blog.google/products/search/generative-ai-google-search-may-2024/

Google bills this as "let Google do the googling for you." Rather than searching the web yourself, you'll delegate this task to Google. Hidden in this pitch is a tacit admission that Google is no longer a convenient or reliable way to retrieve information, drowning as it is in AI-generated spam, poorly labeled ads, and SEO garbage:

https://pluralistic.net/2024/05/03/keyword-swarming/#site-reputation-abuse

Googling used to be easy: type in a query, get back a screen of highly relevant results. Today, clicking the top links will take you to sites that paid for placement at the top of the screen (rather than the sites that best match your query). Clicking further down will get you scams, AI slop, or bulk-produced SEO nonsense.

AI-powered search promises to fix this, not by making Google search results better, but by having a bot sort through the search results and discard the nonsense that Google will continue to serve up, and summarize the high quality results.

Now, there are plenty of obvious objections to this plan. For starters, why wouldn't Google just make its search results better? Rather than building an LLM for the sole purpose of sorting through the garbage Google is either paid or tricked into serving up, why not just stop serving up garbage? We know that's possible, because other search engines serve really good results by paying for access to Google's back-end and then filtering the results:

https://pluralistic.net/2024/04/04/teach-me-how-to-shruggie/#kagi

Another obvious objection: why would anyone write the web if the only purpose for doing so is to feed a bot that will summarize what you've written without sending anyone to your webpage? Whether you're a commercial publisher hoping to make money from advertising or subscriptions, or – like me – an open access publisher hoping to change people's minds, why would you invite Google to summarize your work without ever showing it to internet users? Never mind how unfair that is, think about how implausible it is: if this is the way Google will work in the future, why wouldn't every publisher just block Google's crawler?

A third obvious objection: AI is bad. Not morally bad (though maybe morally bad, too!), but technically bad. It "hallucinates" nonsense answers, including dangerous nonsense. It's a supremely confident liar that can get you killed:

https://www.theguardian.com/technology/2023/sep/01/mushroom-pickers-urged-to-avoid-foraging-books-on-amazon-that-appear-to-be-written-by-ai

The promises of AI are grossly oversold, including the promises Google makes, like its claim that its AI had discovered millions of useful new materials. In reality, the number of useful new materials DeepMind had discovered was zero:

https://pluralistic.net/2024/04/23/maximal-plausibility/#reverse-centaurs

This is true of all of AI's most impressive demos. Often, "AI" turns out to be low-waged human workers in a distant call-center pretending to be robots:

https://pluralistic.net/2024/01/31/neural-interface-beta-tester/#tailfins

Sometimes, the AI robot dancing on stage turns out to literally be just a person in a robot suit pretending to be a robot:

https://pluralistic.net/2024/01/29/pay-no-attention/#to-the-little-man-behind-the-curtain

The AI video demos that represent "an existential threat to Hollywood filmmaking" turn out to be so cumbersome as to be practically useless (and vastly inferior to existing production techniques):

https://www.wheresyoured.at/expectations-versus-reality/

But let's take Google at its word. Let's stipulate that:

a) It can't fix search, only add a slop-filtering AI layer on top of it; and

b) The rest of the world will continue to let Google index its pages even if they derive no benefit from doing so; and

c) Google will shortly fix its AI, and all the lies about AI capabilities will be revealed to be premature truths that are finally realized.

AI search is still a bad idea. Because beyond all the obvious reasons that AI search is a terrible idea, there's a subtle – and incurable – defect in this plan: AI search – even excellent AI search – makes it far too easy for Google to cheat us, and Google can't stop cheating us.

Remember: enshittification isn't the result of worse people running tech companies today than in the years when tech services were good and useful. Rather, enshittification is rooted in the collapse of constraints that used to prevent those same people from making their services worse in service to increasing their profit margins:

https://pluralistic.net/2024/03/26/glitchbread/#electronic-shelf-tags

These companies always had the capacity to siphon value away from business customers (like publishers) and end-users (like searchers). That comes with the territory: digital businesses can alter their "business logic" from instant to instant, and for each user, allowing them to change payouts, prices and ranking. I call this "twiddling": turning the knobs on the system's back-end to make sure the house always wins:

https://pluralistic.net/2023/02/19/twiddler/

What changed wasn't the character of the leaders of these businesses, nor their capacity to cheat us. What changed was the consequences for cheating. When the tech companies merged to monopoly, they ceased to fear losing your business to a competitor.

Google's 90% search market share was attained by bribing everyone who operates a service or platform where you might encounter a search box to connect that box to Google. Spending tens of billions of dollars every year to make sure no one ever encounters a non-Google search is a cheaper way to retain your business than making sure Google is the very best search engine:

https://pluralistic.net/2024/02/21/im-feeling-unlucky/#not-up-to-the-task

Competition was once a threat to Google; for years, its mantra was "competition is a click away." Today, competition is all but nonexistent.

Then the surveillance business consolidated into a small number of firms. Two companies dominate the commercial surveillance industry: Google and Meta, and they collude to rig the market:

https://en.wikipedia.org/wiki/Jedi_Blue

That consolidation inevitably leads to regulatory capture: shorn of competitive pressure, the companies that dominate the sector can converge on a single message to policymakers and use their monopoly profits to turn that message into policy:

https://pluralistic.net/2022/06/05/regulatory-capture/

This is why Google doesn't have to worry about privacy laws. They've successfully prevented the passage of a US federal consumer privacy law. The last time the US passed a federal consumer privacy law was in 1988. It's a law that bans video store clerks from telling the newspapers which VHS cassettes you rented:

https://en.wikipedia.org/wiki/Video_Privacy_Protection_Act

In Europe, Google's vast profits let it fly an Irish flag of convenience, thus taking advantage of Ireland's tolerance for tax evasion and violations of European privacy law:

https://pluralistic.net/2023/05/15/finnegans-snooze/#dirty-old-town

Google doesn't fear competition, it doesn't fear regulation, and it also doesn't fear rival technologies. Google and its fellow Big Tech cartel members have expanded IP law to prevent third parties from reverse-engineering, hacking, or scraping their services. Google doesn't have to worry about ad-blocking, tracker blocking, or scrapers that filter out its lucrative, low-quality results:

https://locusmag.com/2020/09/cory-doctorow-ip/

Google doesn't fear competition, it doesn't fear regulation, it doesn't fear rival technology and it doesn't fear its workers. Google's workforce once enjoyed enormous sway over the company's direction, thanks to their scarcity and market power. But Google has outgrown its dependence on its workers, and lays them off in vast numbers, even as it increases its profits and pisses away tens of billions on stock buybacks:

https://pluralistic.net/2023/11/25/moral-injury/#enshittification

Google is fearless. It doesn't fear losing your business, or being punished by regulators, or being mired in guerrilla warfare with rival engineers. It certainly doesn't fear its workers.

Making search worse is good for Google. Reducing search quality increases the number of queries, and thus ads, that each user must make to find their answers:

https://pluralistic.net/2024/04/24/naming-names/#prabhakar-raghavan

If Google can make things worse for searchers without losing their business, it can make more money for itself. Without the discipline of markets, regulators, tech or workers, it has no impediment to transferring value from searchers and publishers to itself.

Which brings me back to AI search. When Google substitutes its own summaries for links to pages, it creates innumerable opportunities to charge publishers for preferential placement in those summaries.

This is true of any algorithmic feed: while such feeds are important – even vital – for making sense of huge amounts of information, they can also be used to play a high-speed shell-game that makes suckers out of the rest of us:

https://pluralistic.net/2024/05/11/for-you/#the-algorithm-tm

When you trust someone to summarize the truth for you, you become terribly vulnerable to their self-serving lies. In an ideal world, these intermediaries would be "fiduciaries," with a solemn (and legally binding) duty to put your interests ahead of their own:

https://pluralistic.net/2024/05/07/treacherous-computing/#rewilding-the-internet

But Google is clear that its first duty is to its shareholders: not to publishers, not to searchers, not to "partners" or employees.

AI search makes cheating so easy, and Google cheats so much. Indeed, the defects in AI give Google a readymade excuse for any apparent self-dealing: "we didn't tell you a lie because someone paid us to (for example, to recommend a product, or a hotel room, or a political point of view). Sure, they did pay us, but that was just an AI 'hallucination.'"

The existence of well-known AI hallucinations creates a zone of plausible deniability for even more enshittification of Google search. As Madeleine Clare Elish writes, AI serves as a "moral crumple zone":

https://estsjournal.org/index.php/ests/article/view/260

That's why, even if you're willing to believe that Google could make a great AI-based search, we can nevertheless be certain that they won't.

(Image: Cryteria, CC BY 3.0; djhughman, CC BY 2.0; modified)


Hey look at this (permalink)



A Wayback Machine banner.

This day in history (permalink)

#20yrsago Mayor dispatches cops to bust blogger-critic https://web.archive.org/web/20040605152433/https://www.loiclemeur.com/english/2004/05/a_french_blogge.html

#20yrsago England’s love affair with the utility bill https://web.archive.org/web/20040706124142/https://cede.blogspot.com/2004_05_01_cede_archive.html#108455091554008455

#20yrsago RIAA’s funny bookkeeping turns gains into losses https://web.archive.org/web/20040607052730/http://www.kensei-news.com/bizdev/publish/factoids_us/article_23374.shtml

#20yrsago Read this and understand the P2P wars https://papers.ssrn.com/sol3/papers.cfm?abstract_id=532882

#15yrsago Sarah Palin’s legal team doesn’t understand DNS https://www.huffpost.com/entry/crackhocom-sarah-palins-n_n_202417

#15yrsago Was 1971 the best year to be born a geek? https://www.raphkoster.com/2009/05/14/the-perfect-geek-age/

#15yrsago Charlie Stross on the future of gaming http://www.antipope.org/charlie/blog-static/2009/05/login_2009_keynote_gaming_in_t.html

#15yrsago UK chiropractors try to silence critic with libel claim https://gormano.blogspot.com/2009/05/two-things.html

#15yrsago The Yggyssey: Pinkwater takes on The Odyssey https://memex.craphound.com/2009/05/15/the-yggyssey-pinkwater-takes-on-the-odyssey/

#15yrsago Sony Pictures CEO: “Nothing good from the Internet, period.” https://wwd.com/feature/memo-pad-uniqlo-nabs-deyn-bad-internet-classic-martha-2136751-1496073/

#10yrsago FCC brings down the gavel on Net Neutrality https://www.eff.org/deeplinks/2014/05/prepare-take-action-defend-net-neutrality-heres-how-fcc-makes-its-rules

#10yrsago IETF declares war on surveillance https://www.rfc-editor.org/rfc/rfc7258.txt

#10yrsago Rob Ford: a night of drunk driving, racism, drugs, beating friends and demeaning his wife https://www.thestar.com/news/insight/rob-ford-one-wild-night-in-march/article_7167c4f4-2a92-5444-b0d1-54d16a6a3f5b.html

#10yrsago Aussie politician calls rival a “c*nt” in Parliament, gets away with it https://www.youtube.com/watch?v=5TsNL3uBw1g

#10yrsago Mozilla CAN change the industry: by adding DRM, they change it for the worse https://www.eff.org/deeplinks/2014/05/mozilla-and-drm

#10yrsago De-obfuscating Big Cable’s numbers: investment flat since 2000 https://www.techdirt.com/2014/05/14/cable-industrys-own-numbers-show-general-decline-investment-over-past-seven-years/

#10yrsago Nude closeups of people who are more than 100 years old https://web.archive.org/web/20140516055234/http://anastasiapottingerphotography.com/gallery/art/centenarians/

#10yrsago Cable lobbyists strong-arm Congresscritters into signing anti-Net Neutrality petition https://web.archive.org/web/20140527030122/http://www.freepress.net/blog/2014/05/12/tell-congress-dont-sign-cable-industry-letter-against-real-net-neutrality

#10yrsago London property bubble examined https://timharford.com/2014/05/when-a-man-is-tired-of-london-house-prices/

#5yrsago A year after Meltdown and Spectre, security researchers are still announcing new serious risks from low-level chip operations https://www.wired.com/story/intel-mds-attack-speculative-execution-buffer/

#5yrsago Jury awards $2b to California couple who say Bayer’s Roundup weedkiller gave them cancer https://www.cnn.com/2019/05/14/business/bayer-roundup-verdict/index.html

#5yrsago AT&T promised it would create 7,000 jobs if Trump went through with its $3B tax-cut, but they cut 23,000 jobs instead https://arstechnica.com/tech-policy/2019/05/att-promised-7000-new-jobs-to-get-tax-break-it-cut-23000-jobs-instead/

#5yrsago DOJ accuses Verizon and AT&T employees of participating in SIM-swap identity theft crimes https://www.vice.com/en/article/d3n3am/att-and-verizon-employees-charged-sim-swapping-criminal-ring

#5yrsago Collecting user data is a competitive disadvantage https://a16z.com/the-empty-promise-of-data-moats/

#5yrsago Three years after the Umbrella Revolution, Hong Kong has its own Extinction Rebellion chapter https://www.scmp.com/news/hong-kong/health-environment/article/3010050/hong-kongs-new-extinction-rebellion-chapter-looks

#5yrsago Lawyer involved in suits against Israel’s most notorious cyber-arms dealer targeted by its weapons, delivered through a terrifying Whatsapp vulnerability https://www.nytimes.com/2019/05/13/technology/nso-group-whatsapp-spying.html

#5yrsago The New York Times on Carl Malamud and his tireless battle to make the law free for all to read https://www.nytimes.com/2019/05/13/us/politics/georgia-official-code-copyright.html

#5yrsago Alex Stamos on the security problems of the platforms’ content moderation, and what to do about them https://memex.craphound.com/2019/05/15/alex-stamos-on-the-security-problems-of-the-platforms-content-moderation-and-what-to-do-about-them/

#5yrsago Axon makes false statements to town that bought its police bodycams, threatens to tase their credit-rating if they cancel the contract https://www.muckrock.com/news/archives/2019/may/09/algorithms-axon-fontana/

#5yrsago After retaliation against Googler Uprising organizers, a company-wide memo warns employees they can be fired for accessing "need to know" data https://www.buzzfeednews.com/article/carolineodonovan/google-execs-internal-email-on-data-leak-policy-rattles

#5yrsago Discovering whether your iPhone has been hacked is nearly impossible thanks to Apple's walled garden https://www.vice.com/en/article/pajkkz/its-almost-impossible-to-tell-if-iphone-has-been-hacked

#5yrsago Foxconn promised it would do something with the empty buildings it bought in Wisconsin, but they’re still empty (still no factory, either) https://www.theverge.com/2019/5/13/18565408/foxconn-wisconsin-innovation-centers-factories-empty-tax-subsidy

#1yrago Ireland's privacy regulator is a gamekeeper-turned-poacher https://pluralistic.net/2023/05/15/finnegans-snooze/#dirty-old-town

#1yrago Google’s AI Hype Circle https://pluralistic.net/2023/05/14/googles-ai-hype-circle/


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, holding a mic.



A grid of my books with Will Staehle covers.

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • Picks and Shovels: a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books, February 2025

  • Unauthorized Bread: a graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2025



Colophon (permalink)

Today's top sources:

Currently writing:

  • A Little Brother short story about DIY insulin PLANNING

  • Picks and Shovels, a Martin Hench noir thriller about the heroic era of the PC. FORTHCOMING TOR BOOKS JAN 2025

  • Vigilant, Little Brother short story about remote invigilation. FORTHCOMING ON TOR.COM

  • Spill, a Little Brother short story about pipeline protests. FORTHCOMING ON TOR.COM

Latest podcast: Precaratize Bosses https://craphound.com/news/2024/04/28/precaratize-bosses/


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Medium (no ads, paywalled):

https://doctorow.medium.com/

Twitter (mass-scale, unrestricted, third-party surveillance and advertising):

https://twitter.com/doctorow

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

Published on 13.05.2024 at 21:12

Pluralistic: AI "art" and uncanniness (13 May 2024)


Today's links



An old woodcut of a disembodied man's hand operating a Ouija board planchette. It has been modified to add an extra finger and thumb. It has been tinted green. It has been placed on a 'code waterfall' backdrop as seen in the credit sequences of the Wachowskis' 'Matrix' movies.

AI "art" and uncanniness (permalink)

When it comes to AI art (or "art"), it's hard to find a nuanced position that respects creative workers' labor rights, free expression, copyright law's vital exceptions and limitations, and aesthetics.

I am, on balance, opposed to AI art, but there are some important caveats to that position. For starters, I think it's unequivocally wrong – as a matter of law – to say that scraping works and training a model with them infringes copyright. This isn't a moral position (I'll get to that in a second), but rather a technical one.

Break down the steps of training a model and it quickly becomes apparent why it's technically wrong to call this a copyright infringement. First, the act of making transient copies of works – even billions of works – is unequivocally fair use. Unless you think search engines and the Internet Archive shouldn't exist, then you should support scraping at scale:

https://pluralistic.net/2023/09/17/how-to-think-about-scraping/

And unless you think that Facebook should be allowed to use the law to block projects like Ad Observer, which gathers samples of paid political disinformation, then you should support scraping at scale, even when the site being scraped objects (at least sometimes):

https://pluralistic.net/2021/08/06/get-you-coming-and-going/#potemkin-research-program

After making transient copies of lots of works, the next step in AI training is to subject them to mathematical analysis. Again, this isn't a copyright violation.

Making quantitative observations about works is a longstanding, respected and important tool for criticism, analysis, archiving and new acts of creation. Measuring the steady contraction of the vocabulary in successive Agatha Christie novels turns out to offer a fascinating window into her dementia:

https://www.theguardian.com/books/2009/apr/03/agatha-christie-alzheimers-research

Programmatic analysis of scraped online speech is also critical to the burgeoning formal analyses of the language spoken by minorities, producing a vibrant account of the rigorous grammar of dialects that have long been dismissed as "slang":

https://www.researchgate.net/publication/373950278_Lexicogrammatical_Analysis_on_African-American_Vernacular_English_Spoken_by_African-Amecian_You-Tubers

Since 1988, UCL's Survey of English Usage has maintained its "International Corpus of English," and scholars have plumbed its depths to draw important conclusions about the wide variety of Englishes spoken around the world, especially in postcolonial English-speaking countries:

https://www.ucl.ac.uk/english-usage/projects/ice.htm

The final step in training a model is publishing the conclusions of the quantitative analysis of the temporarily copied documents as software code. Code itself is a form of expressive speech – and that expressivity is key to the fight for privacy, because the fact that code is speech limits how governments can censor software:

https://www.eff.org/deeplinks/2015/04/remembering-case-established-code-speech/

Are models infringing? Well, they certainly can be. In some cases, it's clear that models "memorized" some of the data in their training set, making the fair use, transient copy into an infringing, permanent one. That's generally considered to be the result of a programming error, and it could certainly be prevented (say, by comparing the model to the training data and removing any memorizations that appear).
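One naive way to hunt for such memorizations is to check whether model output reproduces long verbatim runs from the training set. The sketch below is hypothetical (real pipelines use far more efficient matching over billions of documents), but it shows the shape of the check:

```python
# Hypothetical sketch of a memorization check: flag any model output that
# reproduces a verbatim run of at least min_len characters from a training doc.
def contains_memorized_run(output: str, training_doc: str, min_len: int = 20) -> bool:
    # Slide a min_len-character window over the training document and
    # see if any window appears verbatim in the model's output.
    for i in range(len(training_doc) - min_len + 1):
        if training_doc[i:i + min_len] in output:
            return True
    return False
```

A model maker could run checks like this against the training corpus and retrain or filter until no long verbatim runs survive.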

Not every seeming act of memorization is a memorization, though. While specific models vary widely, the amount of data from each training item retained by the model is very small. For example, Midjourney retains about one byte of information from each image in its training data. If we're talking about a typical low-resolution web image of, say, 300KB, that would be one three-hundred-thousandth (0.00033%) of the original image.
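The arithmetic is easy to check: one byte out of a roughly 300,000-byte file works out to about three millionths of the image, or 0.00033%:

```python
# Back-of-envelope: fraction of a ~300KB image retained if a model
# keeps about one byte of information per training image.
image_size_bytes = 300_000   # ~300KB low-resolution web image
retained_bytes = 1           # rough per-image estimate cited above

fraction = retained_bytes / image_size_bytes
print(f"{fraction * 100:.5f}%")  # 0.00033%
```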

Typically in copyright discussions, when one work contains 0.00033% of another work, we don't even raise the question of fair use. Rather, we dismiss the use as de minimis (short for de minimis non curat lex, or "the law does not concern itself with trifles"):

https://en.wikipedia.org/wiki/De_minimis

Busting someone who takes 0.00033% of your work for copyright infringement is like swearing out a trespassing complaint against someone because the edge of their shoe touched one blade of grass on your lawn.

But some works or elements of works appear many times online. For example, the Getty Images watermark appears on millions of similar images of people standing on red carpets and runways, so a model that takes even an infinitesimal sample of each one of those works might still end up being able to produce a whole, recognizable Getty Images watermark.

The same is true for wire-service articles or other widely syndicated texts: there might be dozens or even hundreds of copies of these works in training data, resulting in the memorization of long passages from them.

This might be infringing (we're getting into some gnarly, unprecedented territory here), but again, even if it is, it wouldn't be a big hardship for model makers to post-process their models by comparing them to the training set, deleting any inadvertent memorizations. Even if the resulting model had zero memorizations, this would do nothing to alleviate the (legitimate) concerns of creative workers about the creation and use of these models.

So here's the first nuance in the AI art debate: as a technical matter, training a model isn't a copyright infringement. Creative workers who hope that they can use copyright law to prevent AI from changing the creative labor market are likely to be very disappointed in court:

https://www.hollywoodreporter.com/business/business-news/sarah-silverman-lawsuit-ai-meta-1235669403/

But copyright law isn't a fixed, eternal entity. We write new copyright laws all the time. If current copyright law doesn't prevent the creation of models, what about a future copyright law?

Well, sure, that's a possibility. The first thing to consider is the possible collateral damage of such a law. The legal space for scraping enables a wide range of scholarly, archival, organizational and critical purposes. We'd have to be very careful not to inadvertently ban, say, the scraping of a politician's campaign website, lest we enable liars to run for office and renege on their promises, while they insist that they never made those promises in the first place. We wouldn't want to abolish search engines, or stop creators from scraping their own work off sites that are going away or changing their terms of service.

Now, onto quantitative analysis: counting words and measuring pixels are not activities that you should need permission to perform, with or without a computer, even if the person whose words or pixels you're counting doesn't want you to. You should be able to look as hard as you want at the pixels in Kate Middleton's family photos, or track the rise and fall of the Oxford comma, and you shouldn't need anyone's permission to do so.

Finally, there's publishing the model. There are plenty of published mathematical analyses of large corpuses that are useful and unobjectionable. I love me a good Google n-gram:

https://books.google.com/ngrams/graph?content=fantods%2C+heebie-jeebies&year_start=1800&year_end=2019&corpus=en-2019&smoothing=3

And large language models fill all kinds of important niches, like the Human Rights Data Analysis Group's LLM-based work helping the Innocence Project New Orleans extract data from wrongful conviction case files:

https://hrdag.org/tech-notes/large-language-models-IPNO.html

So that's nuance number two: if we decide to make a new copyright law, we'll need to be very sure that we don't accidentally crush these beneficial activities that don't undermine artistic labor markets.

This brings me to the most important point: passing a new copyright law that requires permission to train an AI won't help creative workers get paid or protect our jobs.

Getty Images pays photographers the least it can get away with. Publishers' contracts have transformed, by inches, into miles-long, ghastly rights grabs that take everything from writers but still shift legal risks onto them:

https://pluralistic.net/2022/06/19/reasonable-agreement/

Publishers like the New York Times bitterly oppose their writers' unions:

https://actionnetwork.org/letters/new-york-times-stop-union-busting

These large corporations already control the copyrights to gigantic amounts of training data, and they have means, motive and opportunity to license these works for training a model in order to pay us less, and they are engaged in this activity right now:

https://www.nytimes.com/2023/12/22/technology/apple-ai-news-publishers.html

Big games studios are already acting as though there was a copyright in training data, and requiring their voice actors to begin every recording session with words to the effect of, "I hereby grant permission to train an AI with my voice" and if you don't like it, you can hit the bricks:

https://www.vice.com/en/article/5d37za/voice-actors-sign-away-rights-to-artificial-intelligence

If you're a creative worker hoping to pay your bills, it doesn't matter whether your wages are eroded by a model produced without paying your employer for the right to do so, or whether your employer got to double dip by selling your work to an AI company to train a model, and then used that model to fire you or erode your wages:

https://pluralistic.net/2023/02/09/ai-monkeys-paw/#bullied-schoolkids

Individual creative workers rarely have any bargaining leverage over the corporations that license our copyrights. That's why copyright's 40-year expansion (in duration, scope, statutory damages) has resulted in larger, more profitable entertainment companies, and lower payments – in real terms and as a share of the income generated by their work – for creative workers.

As Rebecca Giblin and I write in our book Chokepoint Capitalism, giving creative workers more rights to bargain with against giant corporations that control access to our audiences is like giving your bullied schoolkid extra lunch money – it's just a roundabout way of transferring that money to the bullies:

https://pluralistic.net/2022/08/21/what-is-chokepoint-capitalism/

There's an historical precedent for this struggle – the fight over music sampling. 40 years ago, it wasn't clear whether sampling required a copyright license, and early hip-hop artists took samples without permission, the way a horn player might drop a couple bars of a well-known song into a solo.

Many artists were rightfully furious over this. The "heritage acts" (the music industry's euphemism for "Black people") who were most sampled had been given very bad deals and had seen very little of the fortunes generated by their creative labor. Many of them were desperately poor, despite having made millions for their labels. When other musicians started making money off that work, they got mad.

In the decades that followed, the system for sampling changed, partly through court cases and partly through the commercial terms set by the Big Three labels: Sony, Warner and Universal, who control 70% of all music recordings. Today, you generally can't sample without signing up to one of the Big Three (they are reluctant to deal with indies), and that means taking their standard deal, which is very bad, and also signs away your right to control your samples.

So a musician who wants to sample has to sign the bad terms offered by a Big Three label, and then hand $500 out of their advance to one of those Big Three labels for the sample license. That $500 typically doesn't go to another artist – it goes to the label, who share it around their executives and investors. This is a system that makes every artist poorer.

But it gets worse. Putting a price on samples changes the kind of music that can be economically viable. If you wanted to clear all the samples on an album like Public Enemy's "It Takes a Nation of Millions To Hold Us Back," or the Beastie Boys' "Paul's Boutique," you'd have to sell every CD for $150, just to break even:

https://memex.craphound.com/2011/07/08/creative-license-how-the-hell-did-sampling-get-so-screwed-up-and-what-the-hell-do-we-do-about-it/

Sampling licenses don't just make every artist financially worse off, they also prevent the creation of music of the sort that millions of people enjoy. But it gets even worse. Some older, sample-heavy music can't be cleared. Most of De La Soul's catalog wasn't available for 15 years, and even though some of their seminal music came back in March 2023, the band's frontman Trugoy the Dove didn't live to see it – he died in February 2023:

https://www.vulture.com/2023/02/de-la-soul-trugoy-the-dove-dead-at-54.html

This is the third nuance: even if we can craft a model-banning copyright system that doesn't catch a lot of dolphins in its tuna net, it could still leave artists worse off.

Back when sampling started, it wasn't clear whether it would ever be considered artistically important. Early sampling was crude and experimental. Musicians who trained for years to master an instrument were dismissive of the idea that clicking a mouse was "making music." Today, most of us don't question the idea that sampling can produce meaningful art – even musicians who believe in licensing samples.

Having lived through that era, I'm prepared to believe that maybe I'll look back on AI "art" and say, "damn, I can't believe I never thought that could be real art."

But I wouldn't give odds on it.

I don't like AI art. I find it anodyne, boring. As Henry Farrell writes, it's uncanny, and not in a good way:

https://www.programmablemutter.com/p/large-language-models-are-uncanny

Farrell likens the work produced by AIs to the movement of a Ouija board's planchette, something that "seems to have a life of its own, even though its motion is a collective side-effect of the motions of the people whose fingers lightly rest on top of it." This is "spooky-action-at-a-close-up," transforming "collective inputs … into apparently quite specific outputs that are not the intended creation of any conscious mind."

Look, art is irrational in the sense that it speaks to us at some non-rational, or sub-rational level. Caring about the tribulations of imaginary people or being fascinated by pictures of things that don't exist (or that aren't even recognizable) doesn't make any sense. There's a way in which all art is like an optical illusion for our cognition, an imaginary thing that captures us the way a real thing might.

But art is amazing. Making art and experiencing art makes us feel big, numinous, irreducible emotions. Making art keeps me sane. Experiencing art is a precondition for all the joy in my life. Having spent most of my life as a working artist, I've come to the conclusion that the reason for this is that art transmits an approximation of some big, numinous irreducible emotion from an artist's mind to our own. That's it: that's why art is amazing.

AI doesn't have a mind. It doesn't have an intention. The aesthetic choices made by AI aren't choices, they're averages. As Farrell writes, "LLM art sometimes seems to communicate a message, as art does, but it is unclear where that message comes from, or what it means. If it has any meaning at all, it is a meaning that does not stem from organizing intention" (emphasis mine).

Farrell cites Mark Fisher's The Weird and the Eerie, which defines "weird" in easy-to-understand terms ("that which does not belong") but really grapples with "eerie."

For Fisher, eeriness is "when there is something present where there should be nothing, or when there is nothing present when there should be something." AI art produces the seeming of intention without intending anything. It appears to be an agent, but it has no agency. It's eerie.

Fisher talks about capitalism as eerie. Capital is "conjured out of nothing" but "exerts more influence than any allegedly substantial entity." The "invisible hand" shapes our lives more than any person. The invisible hand is fucking eerie. Capitalism is a system in which insubstantial non-things – corporations – appear to act with intention, often at odds with the intentions of the human beings carrying out those actions.

So will AI art ever be art? I don't know. There's a long tradition of using random or irrational or impersonal inputs as the starting point for human acts of artistic creativity. Think of divination:

https://pluralistic.net/2022/07/31/divination/

Or Brian Eno's Oblique Strategies:

http://stoney.sb.org/eno/oblique.html

I love making my little collages for this blog, though I wouldn't call them important art. Nevertheless, piecing together bits of other peoples' work can make fantastic, important work of historical note:

https://www.johnheartfield.com/John-Heartfield-Exhibition/john-heartfield-art/famous-anti-fascist-art/heartfield-posters-aiz

Even though painstakingly cutting out tiny elements from others' images can be a meditative and educational experience, I don't think that using tiny scissors or the lasso tool is what defines the "art" in collage. If you can automate some of this process, it could still be art.

Here's what I do know. Creating an individual bargainable copyright over training will not improve the material conditions of artists' lives – all it will do is change the relative shares of the value we create, shifting some of that value from tech companies that hate us and want us to starve to entertainment companies that hate us and want us to starve.

As an artist, I'm foursquare against anything that stands in the way of making art. As an artistic worker, I'm entirely committed to things that help workers get a fair share of the money their work creates, feed their families and pay their rent.

I think today's AI art is bad, and I think tomorrow's AI art will probably be bad, but even if you disagree (with either proposition), I hope you'll agree that we should be focused on making sure art is legal to make and that artists get paid for it.

Just because copyright won't fix the creative labor market, it doesn't follow that nothing will. If we're worried about labor issues, we can look to labor law to improve our conditions. That's what the Hollywood writers did, in their groundbreaking 2023 strike:

https://pluralistic.net/2023/10/01/how-the-writers-guild-sunk-ais-ship/

Now, the writers had an advantage: they are able to engage in "sectoral bargaining," where a union bargains with all the major employers at once. That's illegal in nearly every other kind of labor market. But if we're willing to entertain the possibility of getting a new copyright law passed (that won't make artists better off), why not the possibility of passing a new labor law (that will)? Sure, our bosses won't lobby alongside us for more labor protection, the way they would for more copyright (think for a moment about what that says about who benefits from copyright versus labor law expansion).

But all workers benefit from expanded labor protection. Rather than going to Congress alongside our bosses from the studios and labels and publishers to demand more copyright, we could go to Congress alongside every kind of worker, from fast-food cashiers to publishing assistants to truck drivers to demand the right to sectoral bargaining. That's a hell of a coalition.

And if we do want to tinker with copyright to change the way training works, let's look at collective licensing, which can't be bargained away, rather than individual rights that can be confiscated at the entrance to our publisher, label or studio's offices. These collective licenses have been a huge success in protecting creative workers:

https://pluralistic.net/2023/02/26/united-we-stand/

Then there's copyright's wildest wild card: The US Copyright Office has repeatedly stated that works made by AIs aren't eligible for copyright, which is the exclusive purview of works of human authorship. This has been affirmed by courts:

https://pluralistic.net/2023/08/20/everything-made-by-an-ai-is-in-the-public-domain/

Neither AI companies nor entertainment companies will pay creative workers if they don't have to. But for any company contemplating selling an AI-generated work, the fact that it is born in the public domain presents a substantial hurdle, because anyone else is free to take that work and sell it or give it away.

Whether or not AI "art" will ever be good art isn't what our bosses are thinking about when they pay for AI licenses: rather, they are calculating that they have so much market power that they can sell whatever slop the AI makes, and pay less for the AI license than they would make for a human artist's work. As is the case in every industry, AI can't do an artist's job, but an AI salesman can convince an artist's boss to fire the creative worker and replace them with AI:

https://pluralistic.net/2024/01/29/pay-no-attention/#to-the-little-man-behind-the-curtain

They don't care if it's slop – they just care about their bottom line. A studio executive who cancels a widely anticipated film prior to its release to get a tax-credit isn't thinking about artistic integrity. They care about one thing: money. The fact that AI works can be freely copied, sold or given away may not mean much to a creative worker who actually makes their own art, but I assure you, it's the only thing that matters to our bosses.


Hey look at this (permalink)



A Wayback Machine banner.

This day in history (permalink)

#20yrsago Sofa of mousepads https://web.archive.org/web/20040531234157/http://www.engadget.com/entry/4721738962773695/

#20yrsago Open source games from 1978 https://www.atariarchives.org/basicgames/

#15yrsago Why RIAA lawsuits matter to the Free Software Foundation https://torrentfreak.com/the-war-on-sharing-why-the-fsf-cares-about-riaa-lawsuits-090513/

#15yrsago Jesse Ventura: I could make Chickenhawk Cheney confess to the Sharon Tate murders with a waterboard https://crooksandliars.com/heather/jesse-ventura-you-give-me-water-board-dick

#15yrsago French “three-strikes” copyright law passes — but may be dead anyway https://www.laquadrature.net/en/2009/05/13/solemn-burial-for-hadopi-in-french-national-assembly/

#15yrsago Business Software Alliance says that adopting copyright treaties doesn’t decrease piracy https://www.michaelgeist.ca/2009/05/wipo-and-bsa-data/

#15yrsago The Photographer: gripping graphic memoir about doctors in Soviet Afghanistan, accompanied by brilliant photos https://memex.craphound.com/2009/05/12/the-photographer-gripping-graphic-memoir-about-doctors-in-soviet-afghanistan-accompanied-by-brilliant-photos/

#10yrsago You are a Gmail user https://mako.cc/copyrighteous/google-has-most-of-my-email-because-it-has-all-of-yours

#10yrsago Forged certificates common in HTTPS sessions https://arstechnica.com/information-technology/2014/05/significant-portion-of-https-web-connections-made-by-forged-certificates/

#10yrsago McDonald’s Hot Coffee lawsuit: deliberate, corporatist urban legend https://priceonomics.com/how-a-lawsuit-over-hot-coffee-helped-erode-the-7th/

#10yrsago Spurious correlations: an engine for head-scratching coincidences http://www.tylervigen.com

#10yrsago NSA sabotaged exported US-made routers with backdoors https://www.theguardian.com/books/2014/may/12/glenn-greenwald-nsa-tampers-us-internet-routers-snowden

#10yrsago Podcast: Why it is not possible to regulate robots https://ia600209.us.archive.org/28/items/Cory_Doctorow_Podcast_272/Cory_Doctorow_Podcast_272_Why_it_is_not_possible_to_regulate_robots.mp3

#10yrsago Clapper’s ban on talking about leaks makes life difficult for crypto profs with cleared students https://twitter.com/mattblaze/status/464841668391624705

#10yrsago Ukip councillor sends cops to activist’s house, ask him to delete critical tweet https://www.theguardian.com/politics/2014/may/12/police-ask-blogger-remove-legitimate-tweet-ukip

#10yrsago Bletchley Park Trust erects “Berlin Wall” to cut off on-site computer history museum https://www.theguardian.com/technology/2014/may/12/bletchley-park-national-museum-computing-berlin-wall-restored-colossus-codebreaking

#5yrsago Vancouver’s housing bubble was driven by billions in laundered criminal proceeds https://www.seattletimes.com/business/billions-in-dirty-cash-helped-fuel-vancouver-b-c-s-housing-boom/

#5yrsago Supreme Court greenlights Apple customers’ lawsuit over App Store price-fixing https://www.wired.com/story/supreme-court-apple-decision-antitrust/

#5yrsago Amazon’s monopsony power: the other antitrust white meat https://www.promarket.org/2019/02/15/is-amazon-violating-the-sherman-act/

#5yrsago Trump supporters astonished to learn that the man they gave $20m to “build the wall” has nothing to show for it https://www.washingtonpost.com/nation/2019/05/11/group-raised-more-than-million-build-wall-now-some-supporters-want-answers/

#1yrsago Revenge of the Linkdumps https://pluralistic.net/2023/05/13/four-bar-linkage/#linkspittle


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, holding a mic.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Staehle covers.

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • Picks and Shovels: a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books, February 2025

  • Unauthorized Bread: a graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2025



Colophon (permalink)

Today's top sources:

Currently writing:

  • A Little Brother short story about DIY insulin PLANNING

  • Picks and Shovels, a Martin Hench noir thriller about the heroic era of the PC. FORTHCOMING TOR BOOKS JAN 2025

  • Vigilant, Little Brother short story about remote invigilation. FORTHCOMING ON TOR.COM

  • Spill, a Little Brother short story about pipeline protests. FORTHCOMING ON TOR.COM

Latest podcast: Precaratize Bosses https://craphound.com/news/2024/04/28/precaratize-bosses/


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Medium (no ads, paywalled):

https://doctorow.medium.com/

Twitter (mass-scale, unrestricted, third-party surveillance and advertising):

https://twitter.com/doctorow

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

Published 11.05.2024 at 12:06

Pluralistic: Algorithmic feeds are a twiddler's playground (11 May 2024)


Today's links



A complex control panel whose knobs have all been replaced with the menacing red eye of HAL9000 from Kubrick's '2001: A Space Odyssey.' A skeletal figure on one side of the image reaches out a bony finger to twiddle one of the knobs.

Algorithmic feeds are a twiddler's playground (permalink)

Like Oscar Wilde, "I can resist anything except temptation," and my slow and halting journey to adulthood is really just me grappling with this fact, getting temptation out of my way before I can yield to it.

Behavioral economists have a name for the steps we take to guard against temptation: a "Ulysses pact." That's when you take some possibility off the table during a moment of strength in recognition of some coming moment of weakness:

https://archive.org/details/decentralizedwebsummit2016-corydoctorow

Famously, Ulysses did this before he sailed into the Sea of Sirens. Rather than stopping his ears with wax to prevent his hearing the sirens' song, which would lure him to his drowning, Ulysses had his sailors tie him to the mast, leaving his ears unplugged. Ulysses became the first person to hear the sirens' song and live to tell the tale.

Ulysses was strong enough to know that he would someday be weak. He expressed his strength by guarding against his weakness. Our modern lives are filled with less epic versions of the Ulysses pact: the day you go on a diet, it's a good idea to throw away all your Oreos. That way, when your blood sugar sings its siren song at 2AM, it will be drowned out by the rest of your body's unwillingness to get dressed, find your keys and drive half an hour to the all-night grocery store.

Note that this Ulysses pact isn't perfect. You might drive to the grocery store. It's rare that a Ulysses pact is unbreakable – we bind ourselves to the mast, but we don't chain ourselves to it and slap on a pair of handcuffs for good measure.

People who run institutions can – and should – create Ulysses pacts, too. A company that holds the kind of sensitive data that might be subjected to "sneak-and-peek" warrants by cops or spies can set up a "warrant canary":

https://en.wikipedia.org/wiki/Warrant_canary

This isn't perfect. A company that stops publishing regular transparency reports might have been compromised by the NSA, but it's also possible that they've had a change in management and the new boss just doesn't give a shit about his users' privacy:

https://www.fastcompany.com/90853794/twitters-transparency-reporting-has-tanked-under-elon-musk

Likewise, a company making software it wants users to trust can release that code under an irrevocable free/open software license, thus guaranteeing that each release under that license will be free and open forever. This is good, but not perfect: the new boss can take that free/open code down a proprietary fork and try to orphan the free version:

https://news.ycombinator.com/item?id=39772562

A company can structure itself as a public benefit corporation and make a binding promise to elevate its stakeholders' interests over its shareholders' – but the CEO can still take a secret $100m bribe from cryptocurrency creeps and try to lure those stakeholders into a shitcoin Ponzi scheme:

https://fortune.com/crypto/2024/03/11/kickstarter-blockchain-a16z-crypto-secret-investment-chris-dixon/

A key resource can be entrusted to a nonprofit with a board of directors who are charged with stewarding it for the benefit of a broad community, but when a private equity fund dangles billions before that board, they can talk themselves into a belief that selling out is the right thing to do:

https://www.eff.org/deeplinks/2020/12/how-we-saved-org-2020-review

Ulysses pacts aren't perfect, but they are very important. At the very least, creating a Ulysses pact starts with acknowledging that you are fallible. That you can be tempted, and rationalize your way into taking bad action, even when you know better. Becoming an adult is a process of learning that your strength comes from seeing your weaknesses and protecting yourself and the people who trust you from them.

Which brings me to enshittification. Enshittification is the process by which platforms betray their users and their customers by siphoning value away from each until the platform is a pile of shit:

https://en.wikipedia.org/wiki/Enshittification

Enshittification is a spectrum that can be applied to many companies' decay, but in its purest form, enshittification requires:

a) A platform: a two-sided market with business customers and end users who can be played off against each other;

b) A digital back-end: a market that can be easily, rapidly and undetectably manipulated by its owners, who can alter search-rankings, prices and costs on a per-user, per-query basis; and

c) A lack of constraint: the platform's owners must not fear a consequence for this cheating, be it from competitors, regulators, workforce resignations or rival technologists who use mods, alternative clients, blockers or other "adversarial interoperability" tools to disenshittify the platform's products and sever its relationship with its users.

The founders of tech platforms don't generally set out to enshittify them. Rather, they are constantly seeking some equilibrium between delivering value to their shareholders and turning value over to end users, business customers, and their own workers. Founders are consummate rationalizers; like parenting, founding a company requires continuous, low-grade self-deception about the amount of work involved and the chances of success. A founder, confronted with the likelihood of failure, is absolutely capable of talking themselves into believing that nearly any compromise is superior to shuttering the business: "I'm one of the good guys, so the most important thing is for me to live to fight another day. Thus I can do any number of immoral things to my users, business customers or workers, because I can make it up to them when we survive this crisis. It's for their own good, even if they don't know it. Indeed, I'm doubly moral here, because I'm volunteering to look like the bad guy, just so I can save this business, which will make the world over for the better":

https://locusmag.com/2024/05/cory-doctorow-no-one-is-the-enshittifier-of-their-own-story/

(En)shit(tification) flows downhill, so tech workers grapple with their own version of this dilemma. Faced with constant pressure to increase the value flowing from their division to the company, they have to balance different, conflicting tactics, like "increasing the number of users or business customers, possibly by shifting value from the company to these stakeholders in the hopes of making it up in volume"; or "locking in my existing stakeholders and squeezing them harder, safe in the knowledge that they can't easily leave the service, provided the abuse is subtle enough." The bigger a company gets, the harder it is for it to grow, so the biggest companies realize their gains by locking in and squeezing their users, not by improving their service:

https://pluralistic.net/2023/07/28/microincentives-and-enshittification/

That's where "twiddling" comes in. Digital platforms are extremely flexible, which comes with the territory: computers are the most flexible tools we have. This means that companies can automate high-speed, deceptive changes to the "business logic" of their platforms – what end users pay, how much of that goes to business customers, and how offers are presented to both:

https://pluralistic.net/2023/02/19/twiddler/
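The mechanics of twiddling are trivially easy to sketch in code. The example below is a hypothetical illustration, not any real platform's logic: every name, field and markup figure in it is invented, but it shows how the same catalog query can return different rankings and different prices per user, per query.

```python
# Hypothetical sketch of "twiddling": the same catalog query returns
# different rankings and prices depending on who is asking.
# All names, fields and the 15% markup are invented for illustration.

def twiddled_results(query, user, catalog):
    # Match the query against the catalog.
    results = [item for item in catalog if query in item["name"]]
    # Rank "boosted" (paid) placements above organic matches.
    results.sort(key=lambda item: (not item.get("boosted", False), item["price"]))
    # Quietly mark up prices for users the platform predicts won't comparison-shop.
    markup = 1.15 if user.get("price_insensitive") else 1.0
    return [{**item, "price": round(item["price"] * markup, 2)} for item in results]

catalog = [
    {"name": "toaster deluxe", "price": 40.00, "boosted": True},
    {"name": "toaster basic", "price": 25.00},
]
print(twiddled_results("toaster", {"price_insensitive": True}, catalog))
```

Nothing here is sophisticated – which is the point: the manipulation is a few lines of business logic, invisible to the user, and trivially changed from one query to the next.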

This kind of fraud isn't particularly sophisticated, but it doesn't have to be – it just has to be fast. In any shell-game, the quickness of the hand deceives the eye:

https://pluralistic.net/2024/03/26/glitchbread/#electronic-shelf-tags

Under normal circumstances, this twiddling would be constrained by counterforces in society. Changing the business rules like this is fraud, so you'd hope that a regulator would step in and extinguish the conduct, fining the company that engaged in it so hard that they saw a net loss from the conduct. But when a sector gets very concentrated, its mega-firms capture their regulators, becoming "too big to jail":

https://pluralistic.net/2022/06/05/regulatory-capture/

Thus the tendency among the giant tech companies to practice the one lesson of the Darth Vader MBA: dismissing your stakeholders' outrage by saying, "I am altering the deal. Pray I don't alter it any further":

https://pluralistic.net/2023/10/26/hit-with-a-brick/#graceful-failure

Where regulators fail, technology can step in. The flexibility of digital platforms cuts both ways: when the company enshittifies its products, you can disenshittify it with your own countertwiddling: third-party ink-cartridges, alternative app stores and clients, scrapers, browser automation and other forms of high-tech guerrilla warfare:

https://www.eff.org/deeplinks/2019/10/adversarial-interoperability

But tech giants' regulatory capture has allowed them to expand "IP rights" to prevent this self-help. By carefully layering overlapping IP rights around their products, they can criminalize the technology that lets you wrestle back the value they've claimed for themselves, creating a new offense of "felony contempt of business model":

https://locusmag.com/2020/09/cory-doctorow-ip/

A world where users must defer to platforms' moment-to-moment decisions about how the service operates, without the protection of rival technology or regulatory oversight, is a world where companies face a powerful temptation to enshittify.

That's why we've seen so much enshittification in platforms that algorithmically rank their feeds, from Google and Amazon search to Facebook and Twitter feeds. A search engine is always going to be making a judgment call about what the best result for your search should be. If a search engine is generally good at predicting which results will please you best, you'll return to it, automatically clicking the first result ("I'm feeling lucky").

This means that if a search engine slips in the odd paid result at the top of the results, they can exploit your trusting habits to shift value from you to their investors. The configurability of a digital service means that they can sprinkle these frauds into their services on a random schedule, making them hard to detect and easy to dismiss as lapses. Gradually, this acquires its own momentum, and the platform becomes addicted to lowering its own quality to raise its profits, and you get modern Google, which cynically lowered search quality to increase search volume:

https://pluralistic.net/2024/04/24/naming-names/#prabhakar-raghavan

And you get Amazon, which makes $38 billion every year accepting bribes to replace its best search results with paid results for products that cost more and are of lower quality:

https://pluralistic.net/2023/11/06/attention-rents/#consumer-welfare-queens

Social media's enshittification followed a different path. In the beginning, social media presented a deterministic feed: after you told the platform who you wanted to follow, the platform simply gathered up the posts those users made and presented them to you, in reverse-chronological order.
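A deterministic feed is simple enough to express in a few lines. This is a minimal sketch of the idea described above – collect the posts from the accounts you follow and present them newest-first, with no ranking model and no boosts; all names and fields are invented:

```python
# Minimal sketch of a deterministic, reverse-chronological feed:
# gather posts from followed accounts, sort newest-first, done.
# All names and fields here are invented for illustration.

def reverse_chrono_feed(follows, posts_by_author):
    feed = []
    for author in follows:
        feed.extend(posts_by_author.get(author, []))
    # Sort purely by timestamp, descending -- no ranking, no paid placement.
    return sorted(feed, key=lambda post: post["ts"], reverse=True)

posts = {
    "alice": [{"ts": 3, "text": "third"}, {"ts": 1, "text": "first"}],
    "bob": [{"ts": 2, "text": "second"}],
}
print([p["text"] for p in reverse_chrono_feed(["alice", "bob"], posts)])
```

Because the ordering rule is fixed and legible, there's no knob for a platform to twiddle without users noticing – which is exactly why it resists enshittification.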

This presented few opportunities for enshittification, but it wasn't perfect. For users who were well-established on a platform, a reverse-chrono feed was an ungovernable torrent, where high-frequency trivialities drowned out the important posts – the occasional missives that mattered were buried ten screens down in the updates since your last login.

For new users who didn't yet follow many people, this presented the opposite problem: an empty feed, and the sense that you were all alone while everyone else was having a rollicking conversation down the hall, in a room you could never find.

The answer was the algorithmic feed: a feed of recommendations drawn from both the accounts you followed and strangers alike. Theoretically, this could solve both problems, by surfacing the most important materials from your friends while keeping you abreast of the most important and interesting activity beyond your filter bubble. For many of us, this promise was realized, and algorithmic feeds became a source of novelty and relevance.

But these feeds are a profoundly tempting enshittification target. The critique of these algorithms has largely focused on "addictiveness" and the idea that platforms would twiddle the knobs to increase the relevance of material in your feed to "hack your engagement":

https://www.theguardian.com/technology/2018/mar/04/has-dopamine-got-us-hooked-on-tech-facebook-apps-addiction

Less noticed – and more important – was how platforms did the opposite: twiddling the knobs to remove things from your feed that you'd asked to see or that the algorithm predicted you'd enjoy, to make room for "boosted" content and advertisements:

https://www.reddit.com/r/Instagram/comments/z9j7uy/what_happened_to_instagram_only_ads_and_accounts/

Users were helpless before this kind of twiddling. On the one hand, they were locked into the platform – not because their dopamine had been hacked by evil tech-bro wizards – but because they loved the friends they had there more than they hated the way the service was run:

https://locusmag.com/2023/01/commentary-cory-doctorow-social-quitting/

On the other hand, the platforms had such an iron grip on their technology, and had deployed IP so cleverly, that any countertwiddling technology was instantaneously incinerated by legal death-rays:

https://techcrunch.com/2022/10/10/google-removes-the-og-app-from-the-play-store-as-founders-think-about-next-steps/

Newer social media platforms, notably Tiktok, dispensed entirely with deterministic feeds, defaulting every user into a feed that consisted entirely of algorithmic picks; the people you follow on these platforms are treated as mere suggestions by their algorithms. This is a perfect breeding-ground for enshittification: different parts of the business can twiddle the knobs to override the algorithm for their own parochial purposes, shifting the quality:shit ratio by unnoticeable increments, temporarily toggling the quality knob when your engagement drops off:

https://www.forbes.com/sites/emilybaker-white/2023/01/20/tiktoks-secret-heating-button-can-make-anyone-go-viral/
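The "quality knob" is easy to picture as code. The sketch below is purely illustrative – every name, score and weight is invented – but it shows how a ranking that mixes predicted relevance with a business-controlled boost weight lets one team override the algorithm's organic picks by increments:

```python
# Hedged sketch of an algorithmic feed whose ranking blends a relevance
# score with a business-controlled "boost" knob. All names, scores and
# weights are invented for illustration.

def rank_feed(candidates, boost_weight):
    # boost_weight = 0 ranks purely on predicted relevance; raising it
    # lets paid or "heated" items quietly displace organic picks.
    def score(item):
        return item["relevance"] + boost_weight * item.get("boost", 0.0)
    return sorted(candidates, key=score, reverse=True)

candidates = [
    {"id": "organic", "relevance": 0.9},
    {"id": "heated", "relevance": 0.5, "boost": 1.0},
]
print([i["id"] for i in rank_feed(candidates, boost_weight=0.0)])
print([i["id"] for i in rank_feed(candidates, boost_weight=0.5)])
```

Turn the knob up when engagement sags, turn it down when users grumble: from the outside, the feed is a black box, and no single increment is large enough to spark a revolt.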

All social platforms want to be Tiktok: nominally, that's because Tiktok's algorithmic feed is so good at hooking new users and keeping established users hooked. But tech bosses also understand that a purely algorithmic feed is the kind of black box that can be plausibly and subtly enshittified without sparking user revolts:

https://pluralistic.net/2023/01/21/potemkin-ai/#hey-guys

Back in 2004, when Mark Zuckerberg was coming to grips with Facebook's success, he boasted to a friend that he was sitting on a trove of emails, pictures and Social Security numbers for his fellow Harvard students, offering this up for his friend's idle snooping. The friend, surprised, asked "What? How'd you manage that one?"

Infamously, Zuck replied, "People just submitted it. I don't know why. They 'trust me.' Dumb fucks."

https://www.esquire.com/uk/latest-news/a19490586/mark-zuckerberg-called-people-who-handed-over-their-data-dumb-f/

This was a remarkable (and uncharacteristic) self-aware moment from the then-nineteen-year-old Zuck. Of course Zuck couldn't be trusted with that data. Whatever Jiminy Cricket voice told him to safeguard that trust was drowned out by his need to boast to pals, or participate in the creepy nonconsensual rating of the fuckability of their female classmates. Over and over again, Zuckerberg would promise to use his power wisely, then break that promise as soon as he could do so without consequence:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3247362

Zuckerberg is a cautionary tale. Aware from the earliest moments that he was amassing power that he couldn't be trusted with, he nevertheless operated with only the weakest of Ulysses pacts, like a nonbinding promise never to spy on his users:

https://web.archive.org/web/20050107221705/http://www.thefacebook.com/policy.php

But the platforms have learned the wrong lesson from Zuckerberg. Rather than treating Facebook's enshittification as a cautionary tale, they've turned it into a roadmap. The Darth Vader MBA rules high-tech boardrooms.

Algorithmic feeds and other forms of "paternalistic" content presentation are necessary and even desirable in an information-rich environment. In many instances, decisions about what you see must be largely controlled by a third party whom you trust. The audience in a comedy club doesn't get to insist on knowing the punchline before the joke is told, just as RPG players don't get to order the Dungeon Master to present their preferred challenges during a campaign.

But this power is balanced against the ease of the players replacing the Dungeon Master or the audience walking out on the comic. When you've got more than a hundred dollars sunk into a video game and an online-only friend-group you raid with, the games company can do a lot of enshittification without losing your business, and they know it:

https://www.theverge.com/2024/5/10/24153809/ea-in-game-ads-redux

Even if they sometimes overreach and have to retreat:

https://www.eurogamer.net/sony-overturns-helldivers-2-psn-requirement-following-backlash

A tech company that seeks your trust for an algorithmic feed needs Ulysses pacts, or it will inevitably yield to the temptation to enshittify. From strongest to weakest, these are:

  • Not showing you an algorithmic feed at all;

https://joinmastodon.org/

  • "Composable moderation" that lets multiple parties provide feeds:

https://bsky.social/about/blog/4-13-2023-moderation

  • Offering an algorithmic "For You" feed alongside a reverse-chrono "Friends" feed, defaulting to friends;

https://pluralistic.net/2022/12/10/e2e/#the-censors-pen

  • As above, but defaulting to "For You".

Maturity lies in being strong enough to know your weaknesses. Never trust someone who tells you that they will never yield to temptation! Instead, seek out people – and service providers – with the maturity and honesty to know how tempting temptation is, and who act before temptation strikes to make it easier to resist.

(Image: Cryteria, CC BY 3.0; djhughman, CC BY 2.0; modified)


Hey look at this (permalink)



A Wayback Machine banner.

This day in history (permalink)

#20yrsago Sony’s entertainment business is killing its electronics business https://memex.craphound.com/2004/05/10/sonys-entertainment-business-is-killing-its-electronics-business/

#20yrsago Pixel-counting can un-redact government docs https://cryptome.org/cia-decrypt.htm

#20yrsago MPAA’s Bizarro-world logic https://web.archive.org/web/20060508195757/http://www.foxnews.com/story/0,2933,119414,00.html

#20yrsago Stanislaw Lem is cranky! https://web.archive.org/web/20040513235656/http://www.mosnews.com/interview/2004/04/06/lem.shtml

#20yrsago Internet Archive’s Petabox: a 1,000 terabyte array https://archive.org/web/petabox.php

#15yrsago Pinkwater’s Neddiad: awesome YA novel with ghosts, fat alien cops, shamans, circus animals, triplanes, swordfighting, etc https://memex.craphound.com/2009/05/11/pinkwaters-neddiad-awesome-ya-novel-with-ghosts-fat-alien-cops-shamans-circus-animals-triplanes-swordfighting-etc/

#15yrsago Selling fiber broadband by inviting users to dig their own trenches https://arstechnica.com/tech-policy/2009/05/norwegian-isp-dig-your-own-fiber-trench-save-400/

#15yrsago Free ebooks’ effects on book-sales https://web.archive.org/web/20090515095615/http://bloggasm.com/did-random-houses-free-online-book-releases-affect-sales

#15yrsago Pirate Bay founder proposes to pay his fine with tiny, expensive-to-receive payments https://web.archive.org/web/20090514014403/http://www.blogpirate.org/2009/05/10/pirate-bay-founder-crafts-distributed-denial-of-dollars-attack/

#15yrsago Canadian MPs don’t want Parliament videos in the hands of citizens https://web.archive.org/web/20090512225709/http://www.thestar.com/sciencetech/article/632164

#15yrsago Cornell says no to restrictions on public domain materials https://web.archive.org/web/20090515100044/http://news.library.cornell.edu/com/news/PressReleases/Cornell-University-Library-Removes-All-Restrictions-on-Use-of-Public-Domain-Reproductions.cfm

#15yrsago Antifascist collages that made Hitler crazy https://web.archive.org/web/20090513034540/http://www.quazen.com/Arts/Visual-Arts/The-Extraordinary-Anti-Nazi-Photomontages-of-John-Heartfield.702053

#10yrsago Amazon patents taking pictures of stuff on a white background https://www.diyphotography.net/can-close-studio-amazon-patents-photographing-seamless-white/

#5yrsago Delta targets its workers with anti-union apps that push deceptive memes https://memex.craphound.com/2019/05/10/delta-targets-its-workers-with-anti-union-apps-that-push-deceptive-memes/

#5yrsago Ever, an “unlimited photo storage app,” secretly fed its users’ photos to a face-recognition system pitched to military customers https://www.nbcnews.com/tech/security/millions-people-uploaded-photos-ever-app-then-company-used-them-n1003371

#5yrsago A former college admissions dean explains the mundane reverse affirmative action that lets the rich send their kids to the front of the line https://www.vox.com/the-highlight/2019/5/1/18311548/college-admissions-secrets-myths

#5yrsago Sanders and AOC team up for an anti-loansharking bill that will replace payday lenders with post-office banking https://www.nakedcapitalism.com/2019/05/why-you-should-back-the-sanders-aoc-plan-to-cap-credit-card-interest-rates-at-15-re-launch-the-postal-savings-bank.html

#5yrsago Frontier receives $283.4m/year in taxpayer money, neglects network, rips off customers — and Trump’s FCC won’t investigate https://arstechnica.com/tech-policy/2019/05/ajit-pai-refuses-to-investigate-frontiers-horrible-telecom-service/

#5yrsago Google mistakenly handed out a reporter’s cellphone number to people searching for Facebook tech support https://www.vice.com/en/article/zmpm43/google-thought-my-phone-number-was-facebooks-and-it-ruined-my-life

#5yrsago After elderly tenant was locked in his apartment by his landlord’s stupid “smart lock,” tenants win right to use actual keys to enter their homes https://www.cnet.com/home/smart-home/tenants-win-rights-to-physical-keys-over-smart-locks-from-landlords/

#5yrsago Co-founder of Facebook calls for breakup of Facebook https://www.nytimes.com/2019/05/09/opinion/sunday/chris-hughes-facebook-zuckerberg.html

#5yrsago Bipartisan groups call on Congress to reinstate the Office of Technology Assessment, which Gingrich killed in 1995 https://www.techdirt.com/2019/05/10/broad-coalition-tells-congress-to-bring-back-office-technology-assessment/

#5yrsago Beto O’Rourke just hired a “senior advisor” who used to lobby for Keystone XL, Seaworld and private prisons https://theintercept.com/2019/05/11/beto-orourke-campaign-staff-lobbyist-keystone-xl/

#5yrsago Facebook’s “celebration” and “memories” algorithms are auto-generating best-of-terror-recruiting pages for extremist groups https://www.securityweek.com/whistleblower-says-facebook-generating-terror-content/

#5yrsago Chelsea Manning’s statement on the occasion of her release https://www.youtube.com/watch?v=TDZGRRk4MnM

#1yrago 'We buy ugly houses' is code for 'we steal vulnerable peoples' homes' https://pluralistic.net/2023/05/11/ugly-houses-ugly-truth/#homevestor

#1yrago Two principles to protect internet users from decaying platforms https://pluralistic.net/2023/05/10/soft-landings/#e2e-r2e


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, holding a mic.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Staehle covers.

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • Picks and Shovels: a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books, February 2025

  • Unauthorized Bread: a graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2025



Colophon (permalink)

Today's top sources:

Currently writing:

  • A Little Brother short story about DIY insulin PLANNING

  • Picks and Shovels, a Martin Hench noir thriller about the heroic era of the PC. FORTHCOMING TOR BOOKS JAN 2025

  • Vigilant, Little Brother short story about remote invigilation. FORTHCOMING ON TOR.COM

  • Spill, a Little Brother short story about pipeline protests. FORTHCOMING ON TOR.COM

Latest podcast: Precaratize Bosses https://craphound.com/news/2024/04/28/precaratize-bosses/


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Medium (no ads, paywalled):

https://doctorow.medium.com/

Twitter (mass-scale, unrestricted, third-party surveillance and advertising):

https://twitter.com/doctorow

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

Published on 09.05.2024 at 10:59

Pluralistic: AI is a WMD (09 May 2024)


Today's links



A lonely mud-brick well in a brown desert. It has been modified to add a 'caganar' - a traditional Spanish figure of a man crouching down and defecating - perched on the edge of the well. The caganar's head has been replaced with the menacing red eye of HAL9000 from Kubrick's '2001: A Space Odyssey.' The sky behind this scene has been blended with a 'code waterfall' effect as seen in the credit sequences of the Wachowskis' 'Matrix' movies.

AI is a WMD (permalink)

Fun fact: "The Tragedy Of the Commons" is a hoax created by the white nationalist Garrett Hardin to justify stealing land from colonized people and moving it from collective ownership, "rescuing" it from the inevitable tragedy by putting it in the hands of a private owner, who will care for it properly, thanks to "rational self-interest":

https://pluralistic.net/2023/05/04/analytical-democratic-theory/#epistocratic-delusions

Get that? If control over a key resource is diffused among the people who rely on it, then (Hardin claims) those people will all behave like selfish assholes, overusing and undermaintaining the commons. It's only when we let someone own that commons and charge rent for its use that (Hardin says) we will get sound management.

By that logic, Google should be the internet's most competent and reliable manager. After all, the company used its access to the capital markets to buy control over the internet, spending billions every year to make sure that you never try a search-engine other than its own, thus guaranteeing it a 90% market share:

https://pluralistic.net/2024/02/21/im-feeling-unlucky/#not-up-to-the-task

Google seems to think it's got the problem of deciding what we see on the internet licked. Otherwise, why would the company flush $80b down the toilet with a giant stock-buyback, and then do multiple waves of mass layoffs, from last year's 12,000 person bloodbath to this year's deep cuts to the company's "core teams"?

https://qz.com/google-is-laying-off-hundreds-as-it-moves-core-jobs-abr-1851449528

And yet, Google is overrun with scams and spam, which find their way to the very top of the first page of its search results:

https://pluralistic.net/2023/02/24/passive-income/#swiss-cheese-security

The entire internet is shaped by Google's decisions about what shows up on that first page of listings. When Google decided to prioritize shopping site results over informative discussions and other possible matches, the entire internet shifted its focus to producing affiliate-link-strewn "reviews" that would show up on Google's front door:

https://pluralistic.net/2024/04/24/naming-names/#prabhakar-raghavan

This was catnip to the kind of sociopath who a) owns a hedge-fund and b) hates journalists for being pain-in-the-ass, stick-in-the-mud sticklers for "truth" and "facts" and other impediments to the care and maintenance of a functional reality-distortion field. These dickheads started buying up beloved news sites and converting them to spam-farms, filled with garbage "reviews" and other Google-pleasing, affiliate-fee-generating nonsense.

(These news-sites were vulnerable to acquisition in large part thanks to Google, whose dominance of ad-tech lets it cream 51 cents off every ad dollar and whose mobile OS monopoly lets it steal 30 cents off every in-app subscriber dollar):

https://www.eff.org/deeplinks/2023/04/saving-news-big-tech

Now, the spam on these sites didn't write itself. Much to the chagrin of the tech/finance bros who bought up Sports Illustrated and other venerable news sites, they still needed to pay actual human writers to produce plausible word-salads. This was a waste of money that could be better spent on reverse-engineering Google's ranking algorithm and getting pride-of-place on search results pages:

https://housefresh.com/david-vs-digital-goliaths/

That's where AI comes in. Spicy autocomplete absolutely can't replace journalists. The planet-destroying, next-word-guessing programs from OpenAI and its competitors are incorrigible liars that require so much "supervision" that they cost more than they save in a newsroom:

https://pluralistic.net/2024/04/29/what-part-of-no/#dont-you-understand

But while a chatbot can't produce truthful and informative articles, it can produce bullshit – at unimaginable scale. Chatbots are the workers that hedge-fund wreckers dream of: tireless, uncomplaining, compliant and obedient producers of nonsense on demand.

That's why the capital class is so insatiably horny for chatbots. Chatbots aren't going to write Hollywood movies, but studio bosses hyperventilated at the prospect of a "writer" that would accept your brilliant idea and diligently turn it into a movie. You prompt an LLM in exactly the same way a studio exec gives writers notes. The difference is that the LLM won't roll its eyes and make sarcastic remarks about your brainwaves like "ET, but starring a dog, with a love plot in the second act and a big car-chase at the end":

https://pluralistic.net/2023/10/01/how-the-writers-guild-sunk-ais-ship/

Similarly, chatbots are a dream come true for a hedge fundie who ends up running a beloved news site, only to have to fight with their own writers to get the profitable nonsense produced at a scale and velocity that will guarantee a high Google ranking and millions in "passive income" from affiliate links.

One of the premier profitable nonsense companies is Advon, which helped usher in an era in which sites from Forbes to Money to USA Today create semi-secret "review" sites that are stuffed full of badly researched top-ten lists for products from air purifiers to cat beds:

https://housefresh.com/how-google-decimated-housefresh/

Advon swears that it only uses living humans to produce nonsense, and not AI. This isn't just wildly implausible, it's also belied by easily uncovered evidence, like its own employees' Linkedin profiles, which boast of using AI to create "content":

https://housefresh.com/wp-content/uploads/2024/05/Advon-AI-LinkedIn.jpg

It's not true. Advon uses AI to produce its nonsense, at scale. In an excellent, deeply reported piece for Futurism, Maggie Harrison Dupré brings proof that Advon replaced its miserable human nonsense-writers with tireless chatbots:

https://futurism.com/advon-ai-content

Dupré describes how Advon's ability to create botshit at scale contributed to the enshittification of clients from Yoga Journal to the LA Times, "Us Weekly" to the Miami Herald.

All of this is very timely, because this is the week that Google finally bestirred itself to commence downranking publishers who engage in "site reputation abuse" – creating these SEO-stuffed fake reviews with the help of third parties like Advon:

https://pluralistic.net/2024/05/03/keyword-swarming/#site-reputation-abuse

(Google's policy only forbids site reputation abuse with the help of third parties; if these publishers take their nonsense production in-house, Google may allow them to continue to dominate its search listings):

https://developers.google.com/search/blog/2024/03/core-update-spam-policies#site-reputation

There's a reason so many people believed Hardin's racist "Tragedy of the Commons" hoax. We have an intuitive understanding that commons are fragile. All it takes is one monster to start shitting in the well where the rest of us get our drinking water and we're all poisoned.

The financial markets love these monsters. Mark Zuckerberg's key insight was that he could make billions by assembling vast dossiers of compromising, sensitive personal information on half the world's population without their consent, but only if he kept his costs down by failing to safeguard that data and the systems for exploiting it. He's like a guy who figures out that if he accumulates enough oily rags, he can extract so much low-grade oil from them that he can grow rich, but only if he doesn't waste money on fire-suppression:

https://locusmag.com/2018/07/cory-doctorow-zucks-empire-of-oily-rags/

Now Zuckerberg and the wealthy, powerful monsters who seized control over our commons are getting a comeuppance. The weak countermeasures they created to maintain the minimum levels of quality needed to keep their platforms viable as going concerns are being overwhelmed by AI. This was a totally foreseeable outcome: the history of the internet is a story of bad actors who upended the assumptions built into our security systems by automating their attacks, transforming an assault that wouldn't be economically viable into a global, high-speed crime wave:

https://pluralistic.net/2022/04/24/automation-is-magic/

But it is possible for a community to maintain a commons. This is something Hardin could have discovered by studying actual commons, instead of inventing imaginary histories in which commons turned tragic. As it happens, someone else did exactly that: Nobel Laureate Elinor Ostrom:

https://www.onthecommons.org/magazine/elinor-ostroms-8-principles-managing-commmons/

Ostrom described how commons can be wisely managed, over very long timescales, by self-governing communities. Part of her work concerns how users of a commons must have the ability to exclude bad actors from their shared resources.

When that breaks down, commons can fail – because there's always someone who thinks it's fine to shit in the well rather than walk 100 yards to the outhouse.

Enshittification is the process by which control over the internet moved from self-governance by members of the commons to acts of wanton destruction committed by despicable, greedy assholes who shit in the well over and over again.

It's not just the spammers who take advantage of Google's lazy incompetence, either. Take "copyleft trolls," who post images using outdated Creative Commons licenses that allow them to terminate the CC license if a user makes minor errors in attributing the images they use:

https://pluralistic.net/2022/01/24/a-bug-in-early-creative-commons-licenses-has-enabled-a-new-breed-of-superpredator/

The first copyleft trolls were individuals, but these days, the racket is dominated by a company called Pixsy, which pretends to be a "rights protection" agency that helps photographers track down copyright infringers. In reality, the company is committed to helping copyleft trolls entrap innocent Creative Commons users into paying hundreds or even thousands of dollars to use images that are licensed for free use. Just as Advon upends the economics of spam and deception through automation, Pixsy has figured out how to send legal threats at scale, robolawyering demand letters that aren't signed by lawyers; the company refuses to say whether any lawyer ever reviews these threats:

https://pluralistic.net/2022/02/13/an-open-letter-to-pixsy-ceo-kain-jones-who-keeps-sending-me-legal-threats/

This is shitting in the well, at scale. It's an online WMD, designed to wipe out the commons. Creative Commons has allowed millions of creators to produce a commons with billions of works in it, and Pixsy exploits a minor error in the early versions of CC licenses to indiscriminately manufacture legal land-mines, wantonly blowing off innocent commons-users' legs and laughing all the way to the bank:

https://pluralistic.net/2023/04/02/commafuckers-versus-the-commons/

We can have an online commons, but only if it's run by and for its users. Google has shown us that any "benevolent dictator" who amasses power in the name of defending the open internet will eventually grow too big to care, and will allow our commons to be demolished by well-shitters:

https://pluralistic.net/2024/04/04/teach-me-how-to-shruggie/#kagi

(Image: Cryteria, CC BY 3.0; Catherine Poh Huay Tan, Laia Balagueró, CC BY 2.0; modified)


Hey look at this (permalink)



A Wayback Machine banner.

This day in history (permalink)

#20yrsago Japan jails academic for writing P2P app https://web.archive.org/web/20040512194433/http://straitstimes.asia1.com.sg/latest/story/0,4390,250207,00.html https://stopdesign.com/journal/2004/05/09/blogger.html

#20yrsago TheyRule: applying information design to corporate directorships https://theyrule.net

#20yrsago Don’t just protect the unconceived: protect the inanimate! https://fafblog.blogspot.com/2004_05_02_fafblog_archive.html#108411098508640046

#15yrsago Brit MP saw undercover cops egging crowd to riot at G20 https://www.theguardian.com/politics/2009/may/10/g20-policing-agent-provacateurs

#15yrsago Elsevier has an entire division dedicated to publishing fake advertorial “peer-reviewed” journals https://science.slashdot.org/story/09/05/09/1514235/more-fake-journals-from-elsevier

#15yrsago New York Times webteam nukes the careers of many journalists https://web.archive.org/web/20090511024122/http://www.thomascrampton.com/newspapers/reporter-to-ny-times-publisher-you-erased-my-career/

#15yrsago It’s Useful to Have a Duck/It’s Useful to Have a Boy: great board-book tells the story from two points of view https://memex.craphound.com/2009/05/08/its-useful-to-have-a-duck-its-useful-to-have-a-boy-great-board-book-tells-the-story-from-two-points-of-view/

#10yrsago Fast food workers around the world to strike on May 15 http://america.aljazeera.com/articles/2014/5/7/fast-food-workersuniteactivistsannounceglobalprotest.html

#10yrsago Former NSA boss defends breaking computer security (in the name of national security) https://www.wired.com/2014/05/alexander-defends-use-of-zero-days/

#10yrsago Tor: network security for domestic abuse survivors https://web.archive.org/web/20140509221534/http://betaboston.com/news/2014/05/07/as-domestic-abuse-goes-digital-shelters-turn-to-counter-surveillance-with-tor/

#10yrsago The Oversight: conspiracies, magic, and the end of the world https://memex.craphound.com/2014/05/08/the-oversight-conspiracies-magic-and-the-end-of-the-world/

#10yrsago Charlie Stross on NSA network sabotage https://www.antipope.org/charlie/blog-static/2014/05/the-snowden-leaks-a-meta-narra.html

#10yrsago Peter “brokep” Sunde launches campaign for Finnish Pirate Party MEP https://www.youtube.com/watch?v=fModmx3U8HI

#10yrsago Against the instrumental argument for surveillance https://www.theguardian.com/technology/blog/2014/may/09/cybersecurity-begins-with-integrity-not-surveillance

#10yrsago Congressmen ask ad companies to pretend SOPA is law, violate antitrust https://www.eff.org/deeplinks/2014/05/pols-ad-networks-pretend-we-passed-sopa-and-never-mind-about-antitrust

#10yrsago Japanese man arrested for 3D printing and firing guns https://kotaku.com/japanese-man-arrested-for-having-guns-made-with-a-3d-pr-1573358490

#5yrsago Americans with diabetes are forming caravans to buy Canadian insulin at 90% off https://www.cbc.ca/news/canada/nova-scotia/americans-diabetes-cross-canada-border-insulin-1.5125988

#5yrsago Big Tech is deleting evidence needed to prosecute war crimes, and governments want them to do more of it https://www.theatlantic.com/ideas/archive/2019/05/facebook-algorithms-are-making-it-harder/588931/

#5yrsago Buried in Uber’s IPO, an aggressive plan to destroy all public transit https://48hills.org/2019/05/ubers-plans-include-attacking-public-transit/

#5yrsago Test your understanding of evolutionary psychology with this rigorous quiz https://www.currentaffairs.org/2019/05/evolutionary-psychology-quiz

#5yrsago Why “collapse” (not “rot”) is the way to think about software problems https://hal.science/hal-02117588/document

#5yrsago Human Rights Watch reverse-engineered the app that the Chinese state uses to spy on people in Xinjiang https://www.hrw.org/video-photos/interactive/2019/05/02/china-how-mass-surveillance-works-xinjiang

#5yrsago Google will now delete your account activity on a rolling basis https://myactivity.google.com/activitycontrols/webandapp?view=item&otzr=1&pli=1

#5yrsago Charter’s new way to be terrible: no more prorated cancellations https://arstechnica.com/information-technology/2019/05/charter-squeezes-more-money-out-of-internet-users-with-new-cancellation-policy/

#1yrago KPMG audits the nursing homes it advises on how to beat audits https://pluralistic.net/2023/05/09/dingo-babysitter/#maybe-the-dingos-ate-your-nan

#1yrago California to smash prison e-profiteers https://pluralistic.net/2023/05/08/captive-audience/#good-at-their-jobs


Colophon (permalink)

Today's top sources: Jon Christian (https://futurism.com/).


Published on 07.05.2024 at 09:22

Pluralistic: The disenshittified internet starts with loyal "user agents" (07 May 2024)


Today's links



A huge, menacing demon holds a hand-tinted old-timey urchin in a clawed hand, looking at him with malign fascination. The background is a Matrix credits reel-style 'code waterfall.' In the foreground are various figures attending to control panels associated with the era of electromechanical computers. The demon's eyes have been replaced with the menacing red eyes of HAL 9000 from Kubrick's '2001: A Space Odyssey.'

The disenshittified internet starts with loyal "user agents" (permalink)

There's one overwhelmingly common mistake that people make about enshittification: assuming that the contagion is the result of the Great Forces of History, or that it is the inevitable end-point of any kind of for-profit online world.

In other words, they class enshittification as an ideological phenomenon, rather than as a material phenomenon. Corporate leaders have always felt the impulse to enshittify their offerings, shifting value from end users, business customers and their own workers to their shareholders. The decades of largely enshittification-free online services were not the product of corporate leaders with better ideas or purer hearts. Those years were the result of constraints on the mediocre sociopaths who would trade our wellbeing and happiness for their own, constraints that forced them to act better than they do today, even if they were not any better:

https://pluralistic.net/2024/04/24/naming-names/#prabhakar-raghavan

Corporate leaders' moments of good leadership didn't come from morals, they came from fear. Fear that a competitor would take away a disgruntled customer or worker. Fear that a regulator would punish the company so severely that all gains from cheating would be wiped out. Fear that a rival technology – alternative clients, tracker blockers, third-party mods and plugins – would emerge that permanently severed the company's relationship with their customers. Fear that key workers in their impossible-to-replace workforce would leave for a job somewhere else rather than participate in the enshittification of the services they worked so hard to build:

https://pluralistic.net/2024/04/22/kargo-kult-kaptialism/#dont-buy-it

When those constraints melted away – thanks to decades of official tolerance for monopolies, which led to regulatory capture and victory over the tech workforce – the same mediocre sociopaths found themselves able to pursue their most enshittificatory impulses without fear.

The effects of this are all around us. In This Is Your Phone On Feminism, the great Maria Farrell describes how audiences at her lectures profess both love for their smartphones and mistrust for them. Farrell says, "We love our phones, but we do not trust them. And love without trust is the definition of an abusive relationship":

https://conversationalist.org/2019/09/13/feminism-explains-our-toxic-relationships-with-our-smartphones/

I (re)discovered this Farrell quote in a paper by Robin Berjon, who recently co-authored a magnificent paper with Farrell entitled "We Need to Rewild the Internet":

https://www.noemamag.com/we-need-to-rewild-the-internet/

The new Berjon paper is narrower in scope, but is still packed with material examples of the way the internet goes wrong and how it can be put right. It's called "The Fiduciary Duties of User Agents":

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3827421

In "Fiduciary Duties," Berjon focuses on the technical term "user agent," which is how web browsers are described in formal standards documents. This notion of a "user agent" is a holdover from a more civilized age, when technologists tried to figure out how to build a new digital space where technology served users.

A web browser that's a "user agent" is a comforting thought. An agent's job is to serve you and your interests. When you tell it to fetch a web-page, your agent should figure out how to get that page, make sense of the code embedded in it, and render the page in a way that represents its best guess at how you'd like to see it.

For example, the user agent might judge that you'd like it to block ads. More than half of all web users have installed ad-blockers, constituting the largest consumer boycott in human history:

https://doc.searls.com/2023/11/11/how-is-the-worlds-biggest-boycott-doing/

Your user agent might judge that the colors on the page are outside your visual range. Maybe you're colorblind, in which case, the user agent could shift the gamut of the colors away from the colors chosen by the page's creator and into a set that suits you better:

https://dankaminsky.com/dankam/
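DanKam does this with a live camera feed, and its real algorithm is more sophisticated. But as a toy illustration of the idea – not DanKam's actual method – a user agent could rotate hues out of the red-green confusion band so a deuteranope can tell them apart:

```python
import colorsys

def shift_confusable_hue(r, g, b, shift=0.15):
    """Toy sketch of hue remapping for red-green colorblindness.

    Hues in the red-to-green band (roughly the first third of the hue
    wheel) get rotated toward yellows, which are easier to distinguish.
    The band boundary and shift amount here are illustrative guesses,
    not DanKam's tuned values.
    """
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    if 0.0 <= h < 0.33:  # the red-green confusion band
        h = (h + shift) % 1.0
    r2, g2, b2 = colorsys.hsv_to_rgb(h, s, v)
    return round(r2 * 255), round(g2 * 255), round(b2 * 255)
```

Blues pass through untouched; reds come out visibly shifted. The point isn't the specific transform – it's that the remapping happens in *your* agent, at *your* request, regardless of what the page's creator intended.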

Or maybe you (like me) have a low-vision disability that makes low-contrast type difficult to impossible to read, and maybe the page's creator is a thoughtless dolt who's chosen light grey-on-white type, or maybe they've fallen prey to the absurd urban legend that not-quite-black type is somehow more legible than actual black type:

https://uxplanet.org/basicdesign-never-use-pure-black-in-typography-36138a3327a6
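A loyal user agent could catch this automatically. WCAG 2 defines a contrast ratio over the relative luminance of foreground and background colors; a minimal sketch of that check (the function names are mine, the formula is WCAG's):

```python
def _linear(channel):
    """Convert an sRGB channel (0-255) to linear light, per WCAG 2."""
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(hexcolor):
    """Relative luminance of a '#rrggbb' color."""
    r, g, b = (int(hexcolor[i:i + 2], 16) for i in (1, 3, 5))
    return 0.2126 * _linear(r) + 0.7152 * _linear(g) + 0.0722 * _linear(b)

def contrast_ratio(fg, bg):
    """WCAG contrast ratio, from 1:1 (identical) to 21:1 (black on white)."""
    hi, lo = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (hi + 0.05) / (lo + 0.05)
```

Light grey (#999999) on white comes out under 3:1, well short of WCAG AA's 4.5:1 minimum for body text – exactly the kind of failure a fiduciary user agent could detect and silently repair by darkening the type.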

The user agent is loyal to you. Even when you want something the page's creator didn't consider – even when you want something the page's creator violently objects to – your user agent acts on your behalf and delivers your desires, as best as it can.

Now – as Berjon points out – you might not know exactly what you want. Like, you know that you want the privacy guarantees of TLS (the difference between "http" and "https") without really understanding the cryptographic mysteries involved. Your user agent might detect evidence of shenanigans indicating that your session isn't secure, and choose not to show you the web-page you requested.

This is only superficially paradoxical. Yes, you asked your browser for a web-page. Yes, the browser defied your request and declined to show you that page. But you also asked your browser to protect you from security defects, and your browser made a judgment call and decided that security trumped delivery of the page. No paradox needed.

But of course, the person who designed your user agent/browser can't anticipate all the ways this contradiction might arise. Like, maybe you're trying to access your own website, and you know that the security problem the browser has detected is the result of your own forgetful failure to renew your site's cryptographic certificate. At that point, you can tell your browser, "Thanks for having my back, pal, but actually this time it's fine. Stand down and show me that webpage."

That's your user agent serving you, too.
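In code terms, that judgment call and its override might look like the following sketch, using Python's standard `ssl` module (the `user_override` flag is a hypothetical bit of UI, not any real browser's API):

```python
import ssl

def make_tls_context(user_override=False):
    """Build the TLS context a faithful user agent would use.

    By default, certificate verification failures stop the page load:
    security trumps delivery. Passing user_override=True is the user
    saying "stand down and show me that webpage" -- e.g., for their
    own site with a forgotten, expired certificate.
    """
    ctx = ssl.create_default_context()  # verifies certs and hostnames
    if user_override:
        ctx.check_hostname = False       # skip hostname matching
        ctx.verify_mode = ssl.CERT_NONE  # accept an expired/self-signed cert
    return ctx
```

Note the ordering: `check_hostname` has to be disabled before `verify_mode` can be relaxed – Python's `ssl` module refuses the unsafe combination otherwise, itself a small example of a tool defaulting to your interests while still letting you override it.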

User agents can be well-designed or they can be poorly made. The fact that a user agent is designed to act in accord with your desires doesn't mean that it always will. A software agent, like a human agent, is not infallible.

However – and this is the key – if a user agent thwarts your desire due to a fault, that is fundamentally different from a user agent that thwarts your desires because it is designed to serve the interests of someone else, even when that is detrimental to your own interests.

A "faithless" user agent is utterly different from a "clumsy" user agent, and faithless user agents have become the norm. Indeed, as crude early internet clients progressed in sophistication, they grew increasingly treacherous. Most non-browser tools are designed for treachery.

A smart speaker or voice assistant routes all your requests through its manufacturer's servers and uses this to build a nonconsensual surveillance dossier on you. Smart speakers and voice assistants even secretly record your speech and route it to the manufacturer's subcontractors, whether or not you're explicitly interacting with them:

https://www.sciencealert.com/creepy-new-amazon-patent-would-mean-alexa-records-everything-you-say-from-now-on

By design, apps and in-app browsers seek to thwart your preferences regarding surveillance and tracking. An app will even try to figure out if you're using a VPN to obscure your location from its maker, and snitch you out with its guess about your true location.

Mobile phones assign persistent tracking IDs to their owners and transmit them without permission (to its credit, Apple recently switched to an opt-in system for transmitting these IDs) (but to its detriment, Apple offers no opt-out from its own tracking, and actively lies about the very existence of this tracking):

https://pluralistic.net/2022/11/14/luxury-surveillance/#liar-liar

An Android device running Chrome and sitting inert, with no user interaction, transmits location data to Google every five minutes. This is the "resting heartbeat" of surveillance for an Android device. Ask that device to do any work for you and its pulse quickens, until it is emitting a nearly continuous stream of information about your activities to Google:

https://digitalcontentnext.org/blog/2018/08/21/google-data-collection-research/

These faithless user agents both reflect and enable enshittification. The locked-down nature of the hardware and operating systems for Android and iOS devices means that manufacturers – and their business partners – have an arsenal of legal weapons they can use to block anyone who gives you a tool to modify the device's behavior. These weapons are generically referred to as "IP rights" which are, broadly speaking, the right to control the conduct of a company's critics, customers and competitors:

https://locusmag.com/2020/09/cory-doctorow-ip/

A canny tech company can design its products so that any modification that puts the user's interests above its shareholders' is illegal – a violation of its copyright, patent, trademark, trade secrets, contracts, terms of service, nondisclosure, noncompete, most favored nation, or anticircumvention rights. Wrap your product in the right mix of IP, and its faithless betrayals acquire the force of law.

This is – in Jay Freeman's memorable phrase – "felony contempt of business model." While more than half of all web users have installed an ad-blocker, thus overriding the manufacturer's defaults to make their browser a more loyal agent, no app users have modified their apps with ad-blockers.

The first step of making such a blocker, reverse-engineering the app, creates criminal liability under Section 1201 of the Digital Millennium Copyright Act, with a maximum penalty of five years in prison and a $500,000 fine. An app is just a web-page skinned in sufficient IP to make it a felony to add an ad-blocker to it (no wonder every company wants to coerce you into using its app, rather than its website).

If you know that increasing the invasiveness of the ads on your web-page could trigger mass installations of ad-blockers by your users, it becomes irrational and self-defeating to ramp up your ads' invasiveness. The possibility of interoperability acts as a constraint on tech bosses' impulse to enshittify their products.

The shift to platforms dominated by treacherous user agents – apps, mobile ecosystems, walled gardens – weakens or removes that constraint. As your ability to discipline your agent so that it serves you wanes, the temptation to turn your user agent against you grows, and enshittification follows.

This has been tacitly understood by technologists since the web's earliest days and has been reaffirmed even as enshittification increased. Berjon quotes extensively from "The Internet Is For End-Users," AKA Internet Architecture Board RFC 8890:

Defining the user agent role in standards also creates a virtuous cycle; it allows multiple implementations, allowing end users to switch between them with relatively low costs (…). This creates an incentive for implementers to consider the users' needs carefully, which are often reflected into the defining standards. The resulting ecosystem has many remaining problems, but a distinguished user agent role provides an opportunity to improve it.

https://datatracker.ietf.org/doc/html/rfc8890

And the W3C's Technical Architecture Group echoes these sentiments in "Web Platform Design Principles," which articulates a "Priority of Constituencies" that is supposed to be central to the W3C's mission:

User needs come before the needs of web page authors, which come before the needs of user agent implementors, which come before the needs of specification writers, which come before theoretical purity.

https://w3ctag.github.io/design-principles/

But the W3C's commitment to faithful agents is contingent on its own members' commitment to these principles. In 2017, the W3C finalized "EME," a standard for blocking mods that interact with streaming videos. Nominally aimed at preventing copyright infringement, EME also prevents users from choosing to add accessibility add-ons that go beyond the ones the streaming service permits. These services may support closed captioning and additional narration of visual elements, but they block tools that adapt video for color-blind users or prevent strobe effects that trigger seizures in users with photosensitive epilepsy.

The fight over EME was the most contentious struggle in the W3C's history, in which the organization's leadership had to decide whether to honor the "priority of constituencies" and make a standard that allowed users to override manufacturers, or whether to facilitate the creation of faithless agents specifically designed to thwart users' desires on behalf of manufacturers:

https://www.eff.org/deeplinks/2017/09/open-letter-w3c-director-ceo-team-and-membership

This fight was settled in favor of a handful of extremely large and powerful companies, over the objections of a broad collection of smaller firms, nonprofits representing users, academics and other parties agitating for a web built on faithful agents. This coincided with the W3C's operating budget becoming entirely dependent on the very large sums its largest corporate members paid.

W3C membership is on a sliding scale, based on a member's size. Nominally, the W3C is a one-member, one-vote organization, but when a highly concentrated collection of very high-value members flexed their muscles, W3C leadership seemingly perceived an existential risk to the organization, and opted to sacrifice the faithfulness of user agents in service to the anti-user priorities of its largest members.

For W3C's largest corporate members, the fight was absolutely worth it. The W3C's EME standard transformed the web, making it impossible to ship a fully featured web-browser without securing permission – and a paid license – from one of the cartel of companies that dominate the internet. In effect, Big Tech used the W3C to secure the right to decide who would compete with them in future, and how:

https://blog.samuelmaddock.com/posts/the-end-of-indie-web-browsers/

Enshittification arises when the everyday mediocre sociopaths who run tech companies are freed from the constraints that act against them. When the web – and its browsers – were a big, contented, diverse, competitive space, it was harder for tech companies to collude to capture standards bodies like the W3C to secure even more dominance. As the web turned into Tom Eastman's "five giant websites filled with screenshots of text from the other four," that kind of collusion became much easier:

https://pluralistic.net/2023/04/18/cursed-are-the-sausagemakers/#how-the-parties-get-to-yes

In arguing for faithful agents, Berjon associates himself with the group of scholars, regulators and activists who call for user agents to serve as "information fiduciaries." Mostly, information fiduciaries come up in the context of user privacy, with the idea that entities that hold a user's data would have the obligation to put the user's interests ahead of their own. Think of a lawyer's fiduciary duty in respect of their clients, to give advice that reflects the client's best interests, even when that conflicts with the lawyer's own self-interest. For example, a lawyer who believes that settling a case is the best course of action for a client is required to tell them so, even if keeping the case going would generate more billings for the lawyer and their firm.

For a user agent to be faithful, it must be your fiduciary. It must put your interests ahead of the interests of the entity that made it or operates it. Browsers, email clients, and other internet software that serve as fiduciaries would do things like automatically block tracking (which most email clients don't do, especially webmail clients made by companies like Google, which also sell advertising and tracking).
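What would that look like in practice? Here's a hedged sketch of one fiduciary behavior: an email client that strips images matching common tracking-pixel heuristics. The heuristics below (1×1 or zero-size images, `display:none` styling) are illustrative, not any real client's rules:

```python
from html.parser import HTMLParser

class PixelStripper(HTMLParser):
    """Drop <img> tags that look like tracking pixels; keep everything else."""

    def __init__(self):
        super().__init__()
        self.out = []

    def _is_tracker(self, attrs):
        a = dict(attrs)
        return (a.get("width") in ("0", "1")
                or a.get("height") in ("0", "1")
                or "display:none" in a.get("style", "").replace(" ", ""))

    def handle_starttag(self, tag, attrs):
        if tag == "img" and self._is_tracker(attrs):
            return  # swallow the tracker; the sender never learns you opened this
        self.out.append(self.get_starttag_text())

    def handle_startendtag(self, tag, attrs):
        self.handle_starttag(tag, attrs)  # treat self-closing <img .../> the same way

    def handle_endtag(self, tag):
        self.out.append(f"</{tag}>")

    def handle_data(self, data):
        self.out.append(data)

def strip_trackers(html):
    parser = PixelStripper()
    parser.feed(html)
    return "".join(parser.out)
```

A fiduciary client runs something like this by default – the message still reads fine, but the sender's "did they open it, when, from where" beacon never fires.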

Berjon contemplates a legally mandated fiduciary duty, citing Lindsey Barrett's "Confiding in Con Men":

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3354129

He describes a fiduciary duty as a remedy for the enforcement failures of the EU's GDPR, a solidly written, and dismally enforced, privacy law. A legally backstopped duty for agents to be fiduciaries would also help us distinguish good and bad forms of "innovation" – innovation in ways of thwarting a user's will is always bad.

Now, the tech giants insist that they are already fiduciaries, and that when they thwart a user's request, that's more like blocking access to a page where the encryption has been compromised than like HAL9000's "I can't let you do that, Dave." For example, when Louis Barclay created "Unfollow Everything," he (and his enthusiastic users) found that automating the process of unfollowing every account on Facebook made their use of the service significantly better:

https://slate.com/technology/2021/10/facebook-unfollow-everything-cease-desist.html

When Facebook shut the service down with blood-curdling legal threats, they insisted that they were simply protecting users from themselves. Sure, this browser automation tool – which just automatically clicked links on Facebook's own settings pages – seemed to do what the users wanted. But what if the user interface changed? What if so many users added this feature to Facebook without Facebook's permission that they overwhelmed Facebook's (presumably tiny and fragile) servers and crashed the system?

These arguments have lately resurfaced with Ethan Zuckerman and Knight First Amendment Institute's lawsuit to clarify that "Unfollow Everything 2.0" is legal and doesn't violate any of those "felony contempt of business model" laws:

https://pluralistic.net/2024/05/02/kaiju-v-kaiju/

Sure, Zuckerman seems like a good guy, but what if he makes a mistake and his automation tool does something you don't want? You, the Facebook user, are also a nice guy, but let's face it, you're also a naive dolt and you can't be trusted to make decisions for yourself. Those decisions can only be made by Facebook, whom we can rely upon to exercise its authority wisely.

Other versions of this argument surfaced in the debate over the EU's decision to mandate interoperability for end-to-end encrypted (E2EE) messaging through the Digital Markets Act (DMA), which would let you switch from, say, Whatsapp to Signal and still send messages to your Whatsapp contacts.

There are some good arguments that this could go horribly awry. If it is rushed, or internally sabotaged by the EU's state security services who loathe the privacy that comes from encrypted messaging, it could expose billions of people to serious risks.

But that's not the only argument that DMA opponents made: they also argued that even if interoperable messaging worked perfectly and had no security breaches, it would still be bad for users, because this would make it impossible for tech giants like Meta, Google and Apple to spy on message traffic (if not its content) and identify likely coordinated harassment campaigns. This is literally the identical argument the NSA made in support of its "metadata" mass-surveillance program: "Reading your messages might violate your privacy, but watching your messages doesn't."

This is obvious nonsense, so its proponents need an equally obviously intellectually dishonest way to defend it. When called on the absurdity of "protecting" users by spying on them against their will, they simply shake their heads and say, "You just can't understand the burdens of running a service with hundreds of millions or billions of users, and if I even tried to explain these issues to you, I would divulge secrets that I'm legally and ethically bound to keep. And even if I could tell you, you wouldn't understand, because anyone who doesn't work for a Big Tech company is a naive dolt who can't be trusted to understand how the world works (much like our users)."

Not coincidentally, this is also literally the same argument the NSA makes in support of mass surveillance, and there's a very useful name for it: scalesplaining.

Now, it's totally true that every one of us is capable of lapses in judgment that put us, and the people connected to us, at risk (my own parents gave their genome to the pseudoscience genetic surveillance company 23andme, which means they have my genome, too). A true information fiduciary shouldn't automatically deliver everything the user asks for. When the agent perceives that the user is about to put themselves in harm's way, it should throw up a roadblock and explain the risks to the user.

But the system should also let the user override it.

This is a contentious statement in information security circles. Users can be "socially engineered" (tricked), and even the most sophisticated users are vulnerable to this:

https://pluralistic.net/2024/02/05/cyber-dunning-kruger/#swiss-cheese-security

The only way to be certain a user won't be tricked into taking a course of action is to forbid that course of action under any circumstances. If there is any means by which a user can flip the "are you very sure?" circuit-breaker back on, then the user can be tricked into using that means.

This is absolutely true. As you read these words, all over the world, vulnerable people are being tricked into speaking the very specific set of directives that cause a suspicious bank-teller to authorize a transfer or cash withdrawal that will result in their life's savings being stolen by a scammer:

https://www.thecut.com/article/amazon-scam-call-ftc-arrest-warrants.html

We keep making it harder for bank customers to make large transfers, but so long as it is possible to make such a transfer, the scammers have the means, motive and opportunity to discover how the process works, and they will go on to trick their victims into invoking that process.

Beyond a certain point, making it harder for bank depositors to harm themselves creates a world in which people who aren't being scammed find it nearly impossible to draw out a lot of cash for an emergency and where scam artists know exactly how to manage the trick. After all, non-scammers only rarely experience emergencies and thus have no opportunity to become practiced in navigating all the anti-fraud checks, while the fraudster gets to run through them several times per day, until they know them even better than the bank staff do.

This is broadly true of any system intended to control users at scale – beyond a certain point, additional security measures are trivially surmounted hurdles for dedicated bad actors and are nearly insurmountable hurdles for their victims:

https://pluralistic.net/2022/08/07/como-is-infosec/

At this point, we've had a couple of decades' worth of experience with technological "walled gardens" in which corporate executives get to override their users' decisions about how the system should work, even when that means reaching into the users' own computer and compelling it to thwart the user's desire. The record is inarguable: while companies often use those walls to lock bad guys out of the system, they also use the walls to lock their users in, so that they'll be easy pickings for the tech company that owns the system:

https://pluralistic.net/2023/02/05/battery-vampire/#drained

This is neatly predicted by enshittification's theory of constraints: when a company can override your choices, it will be irresistibly tempted to do so for its own benefit, and to your detriment.

What's more, the mere possibility that you can override the way the system works acts as a disciplining force on corporate executives, forcing them to reckon with your priorities even when these are counter to their shareholders' interests. If Facebook is genuinely worried that an "Unfollow Everything" script will break its servers, it can solve that by giving users an unfollow-everything button of its own design. But so long as Facebook can sue anyone who makes an "Unfollow Everything" tool, it has no reason to give users such a button, because that would concede more control over the Facebook experience to users – including the controls needed to use Facebook less.

It's been more than 20 years since Seth Schoen and I got a demo of Microsoft's first "trusted computing" system, with its "remote attestations," which would let remote servers demand and receive accurate information about what kind of computer you were using and what software was running on it.

This could be beneficial to the user – you could send a "remote attestation" to a third party you trusted and ask, "Hey, do you think my computer is infected with malicious software?" Since the trusted computing system produced its report on your computer using a sealed, separate processor that the user couldn't directly interact with, any malicious code you were infected with would not be able to forge this attestation.

But this remote attestation feature could also be used to allow Microsoft to block you from opening a Word document with Libreoffice, Apple Pages, or Google Docs, or it could be used to allow a website to refuse to send you pages if you were running an ad-blocker. In other words, it could transform your information fiduciary into a faithless agent.

Seth proposed an answer to this: "owner override," a hardware switch that would allow you to force your computer to lie on your behalf, when that was beneficial to you, for example, by insisting that you were using Microsoft Word to open a document when you were really using Apple Pages:

https://web.archive.org/web/20021004125515/http://vitanuova.loyalty.org/2002-07-05.html
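The mechanics can be sketched in a toy model. A real trusted-computing system seals an asymmetric key inside a TPM; the shared secret and function names here are illustrative stand-ins, chosen only to show why owner override matters:

```python
import hashlib
import hmac

# Stand-in for a key sealed in hardware, unreadable by the OS or its owner.
DEVICE_KEY = b"sealed-in-hardware"

def attest(real_stack, owner_override=None):
    """The sealed processor reports a signed digest of the running software.

    With Seth Schoen's "owner override," the owner can make the device
    attest to a stack of their choosing -- i.e., lie on their behalf.
    """
    stack = owner_override if owner_override is not None else real_stack
    digest = hashlib.sha256("\n".join(stack).encode()).hexdigest()
    sig = hmac.new(DEVICE_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return digest, sig

def remote_verify(digest, sig, demanded_stack):
    """A remote server refuses service unless you attest to the exact stack it demands."""
    expected = hashlib.sha256("\n".join(demanded_stack).encode()).hexdigest()
    sig_ok = hmac.compare_digest(
        sig, hmac.new(DEVICE_KEY, digest.encode(), hashlib.sha256).hexdigest())
    return sig_ok and digest == expected
```

Without the override, a server demanding Word can lock out anyone running Pages; with it, your computer tells the server whatever serves *you*. That asymmetry – who the attestation ultimately answers to – is the whole fight.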

Seth wasn't naive. He knew that such a system could be exploited by scammers and used to harm users. But Seth calculated – correctly! – that the risks of having a key to let yourself out of the walled garden were less than being stuck in a walled garden where some corporate executive got to decide whether and when you could leave.

Tech executives never stopped questing after a way to turn your user agent from a fiduciary into a traitor. Last year, Google toyed with the idea of adding remote attestation to web browsers, which would let services refuse to interact with you if they thought you were using an ad blocker:

https://pluralistic.net/2023/08/02/self-incrimination/#wei-bai-bai

The reasoning for this was incredible: by adding remote attestation to browsers, they'd be creating "feature parity" with apps – that is, they'd be making it as practical for your browser to betray you as it is for your apps to do so (note that this is the same justification that the W3C gave for creating EME, the treacherous user agent in your browser – "streaming services won't allow you to access movies with your browser unless your browser is as enshittifiable and authoritarian as an app").

Technologists who work for giant tech companies can come up with endless scalesplaining explanations for why their bosses, and not you, should decide how your computer works. They're wrong. Your computer should do what you tell it to do:

https://www.eff.org/deeplinks/2023/08/your-computer-should-say-what-you-tell-it-say-1

These people can kid themselves that they're only taking away your power and handing it to their boss because they have your best interests at heart. As Upton Sinclair told us, it's impossible to get someone to understand something when their paycheck depends on them not understanding it.

The only way to get a tech boss to consistently treat you well is to ensure that if they stop, you can quit. Anything less is a one-way ticket to enshittification.

(Image: Cryteria, CC BY 3.0, modified)


Hey look at this (permalink)



A Wayback Machine banner.

This day in history (permalink)

#20yrsago US torturers made screensavers out of atrocity photos https://www.salon.com/2007/07/23/torture/

#20yrsago Floppy RAID https://web.archive.org/web/20040202110812/http://ohlssonvox.8k.com/fdd_raid.htm

#15yrsago Chinese provincial government orders local officials to smoke more https://www.telegraph.co.uk/news/newstopics/howaboutthat/5271376/Chinese-ordered-to-smoke-more-to-boost-economy.html

#15yrsago San Francisco Muni begins to enforce imaginary no-photos policy https://web.archive.org/web/20090510023205/http://www.whatimseeing.com/2009/05/06/what-is-munis-photography-policy/

#15yrsago MPAA to teachers: don’t rip DVDs, just record your television with a camcorder https://vimeo.com/4520463

#15yrsago End of Overeating: the science of junk-food cravings https://memex.craphound.com/2009/05/07/end-of-overeating-the-science-of-junk-food-cravings/

#10yrsago Imagineer Rolly Crump on the 1964 NY World’s Fair: audio memoir https://itskindofacutestory.com/?p=135

#10yrsago Vi Hart explains Net Neutrality https://www.youtube.com/watch?v=NAxMyTwmu_M

#10yrsago Kids are mostly sexually solicited online by classmates, peers, teens https://www.zephoria.org/thoughts/archives/2014/05/05/sexual-predators.html

#5yrsago danah boyd explains the connection between the epistemological crisis and the rise of far-right conspiratorial thinking https://web.archive.org/web/20190427233128/https://points.datasociety.net/agnotology-and-epistemological-fragmentation-56aa3c509c6b

#5yrsago “Steering With the Windshield Wipers”: why nothing we’re doing to fix Big Tech is working https://locusmag.com/2019/05/cory-doctorow-steering-with-the-windshield-wipers/

#5yrsago Facebook hands hundreds of contractors in India access to its users’ private messages and private Instagram posts in order to help train an AI https://www.reuters.com/article/us-facebook-ai/facebook-labels-posts-by-hand-posing-privacy-questions-idUSKCN1SC01T/

#5yrsago People with diabetes are scouring the internet for a discontinued insulin pump that can be reprogrammed as an “artificial pancreas” https://www.theatlantic.com/science/archive/2019/04/looping-created-insulin-pump-underground-market/588091/

#5yrsago App lets you auction your San Francisco parking spot https://web.archive.org/web/20140506133800/http://blog.sfgate.com/techchron/2014/05/05/sell-your-s-f-street-parking-spot-for-20/

#5yrsago How the diverse internet became a monoculture https://www.canadaland.com/podcast/276-20-years-after-napster-cory-doctorow-on-what-went-wrong-2/

#5yrsago Apple’s growth strategy is a textbook case of antitrust abuse https://www.theverge.com/2019/5/6/18531570/apple-company-purchases-startups-tim-cook-buy-rate

#1yrago Don’t Curb Your Enthusiasm https://pluralistic.net/2023/05/07/dont-curb-your-enthusiasm/


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, holding a mic.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Stahle covers.

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • Picks and Shovels: a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books, February 2025

  • Unauthorized Bread: a graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2025



Colophon (permalink)

Today's top sources: Danny O'Brien.

Currently writing:

  • A Little Brother short story about DIY insulin PLANNING

  • Picks and Shovels, a Martin Hench noir thriller about the heroic era of the PC. FORTHCOMING TOR BOOKS JAN 2025

  • Vigilant, Little Brother short story about remote invigilation. FORTHCOMING ON TOR.COM

  • Spill, a Little Brother short story about pipeline protests. FORTHCOMING ON TOR.COM

Latest podcast: Precaratize Bosses https://craphound.com/news/2024/04/28/precaratize-bosses/


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Medium (no ads, paywalled):

https://doctorow.medium.com/

Twitter (mass-scale, unrestricted, third-party surveillance and advertising):

https://twitter.com/doctorow

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

Published on 06.05.2024 at 12:13

Pluralistic: Amazon illegally interferes with an historic UK warehouse election (06 May 2024)


Today's links



A hand depositing a ballot in a perspex ballot box on a black background. The box is full of yellow-green piss and the ballot features an angry robot made from Amazon boxes and the phrase 'I am not a robot.' The box has an Amazon logo across its top.

Amazon illegally interferes with an historic UK warehouse election (permalink)

Amazon is very good at everything it does, including being very bad at the things it doesn't want to do. Take signing up for Prime: nothing could be simpler. The company has built a greased slide from Prime-curiosity to Prime-confirmed that is the envy of every UX designer.

But unsubscribing from Prime? That's a fucking nightmare. Somehow, the company that makes it effortless to sign up for a service is totally baffled when it comes to making it just as easy to leave. Now, there are two possibilities here: either Amazon's UX competence is a kind of erratic freak tide that sweeps in at unpredictable intervals and hits these unbelievable high-water marks, or the company just doesn't want to let you leave.

To investigate this question, let's consider a parallel: Black Flag's Roach Motel. This is an icon of American design, a little brown cardboard box that is saturated in irresistibly delicious (to cockroaches, at least) pheromones. These powerful scents make it admirably easy for all the roaches in your home to locate your Roach Motel and enter it.

But the interior of the Roach Motel is also coated in a sticky glue. Once roaches enter the motel, their legs and bodies brush up against this glue and become hopelessly mired in it. A roach can't leave – not without tearing off its own legs.

It's possible that Black Flag made a mistake here. Maybe they wanted to make it just as easy for a roach to leave as it is to enter. If that seems improbable to you, well, you're right. We don't even have to speculate, we can just refer to Black Flag's slogan for Roach Motel: "Roaches check in, but they don't check out."

It's intentional, and we know that because they told us so.

Back to Amazon and Prime. Was it some oversight that caused the company to make it so marvelously painless to sign up for Prime, but such a titanic pain in the ass to leave? Again, no speculation is required, because Amazon's executives exchanged a mountain of internal memos in which this was identified as a deliberate strategy: they chose to trick people into signing up for Prime and then hid the means of leaving it. Prime is a Roach Motel: users check in, but they don't check out:

https://pluralistic.net/2023/09/03/big-tech-cant-stop-telling-on-itself/

When it benefits Amazon, they are obsessive – "relentless" (Bezos's original name for the company) – about user friendliness. They value ease of use so highly that they even patented "one-click checkout" – the incredibly obvious idea that a company that stores your shipping address and credit card could let you buy something with a single click:

https://en.wikipedia.org/wiki/1-Click#Patent

But when it benefits Amazon to place obstacles in our way, they are even more relentless in inventing new forms of fuckery, spiteful little landmines they strew in our path. Just look at how Amazon deals with unionization efforts in its warehouses.

Amazon's relentless union-busting spans a wide diversity of tactics. On the one hand, they cook up media narratives to smear organizers, invoking racist dog-whistles to discredit workers who want a better deal:

https://www.theguardian.com/technology/2020/apr/02/amazon-chris-smalls-smart-articulate-leaked-memo

On the other hand, they collude with federal agencies to make workers afraid that their secret ballots will be visible to their bosses, exposing them to retaliation:

https://www.nbcnews.com/tech/tech-news/amazon-violated-labor-law-alabama-union-election-labor-official-finds-rcna1582

They hold Cultural Revolution-style forced indoctrination meetings where they illegally threaten workers with punishment for voting in favor of their union:

https://www.nytimes.com/2023/01/31/business/economy/amazon-union-staten-island-nlrb.html

And they fire Amazon tech workers who express solidarity with warehouse workers:

https://www.cbsnews.com/news/amazon-fires-tech-employees-workers-criticism-warehouse-climate-policies/

But all this is high-touch, labor-intensive fuckery. Amazon, as we know, loves automation, and so it automates much of its union-busting: for example, it created an employee chat app that refused to deliver any message containing words like "fairness" or "grievance":

https://pluralistic.net/2022/04/05/doubleplusrelentless/#quackspeak

Amazon also invents implausible corporate fictions that allow it to terminate entire sections of its workforce for trying to unionize, by maintaining the tormented pretense that these workers, who wear Amazon uniforms, drive Amazon trucks, deliver Amazon packages, and are tracked by Amazon down to the movements of their eyeballs, are, in fact, not Amazon employees:

https://www.wired.com/story/his-drivers-unionized-then-amazon-tried-to-terminate-his-contract/

These workers have plenty of cause to want to unionize. Amazon warehouses are sources of grueling torment. Take "megacycling," a ten-hour shift running from 1:20AM to 11:50AM that workers are plunged into without warning or the right to refuse. This isn't just a night shift – it's a night shift that makes it impossible to care for your children or maintain any kind of normal life.

Then there's Jeff Bezos's war on his workers' kidneys. Amazon warehouse workers and drivers notoriously have to pee in bottles, because they are monitored by algorithms that dock their pay for taking bathroom breaks. The road to Amazon's warehouse in Coventry, England is littered with sealed bottles of driver piss, defenestrated by drivers before they reach the depot inspection site.

There's so much piss on the side of the Coventry road that the prankster Oobah Butler was able to collect it, decant it into bottles, and market it on Amazon as an energy beverage called "Bitter Lemon Release Energy," where it briefly became Amazon's bestselling energy drink:

https://pluralistic.net/2023/10/20/release-energy/#the-bitterest-lemon

(Butler promises that he didn't actually ship any bottled piss to people who weren't in on the gag – but let's just pause here and note how weird it is that a guy who hates our kidneys as much as Jeff Bezos built and flies a penis-shaped rocket.)

Butler also secretly joined the surge of 1,000 workers that Amazon hired for the Coventry warehouse in advance of a union vote, with the hope of diluting the yes side of that vote and forestalling the union. Amazon displayed more of its famously selective competence here, spotting Butler and firing him in short order, while totally failing to notice that he was marketing bottles of driver piss as a bitter lemon drink on Amazon's retail platform.

After a long fight, Amazon's Coventry workers are finally getting their union vote, thanks to the GMB union's hard-fought battle at the Central Arbitration Committee:

https://www.foxglove.org.uk/2024/04/26/amazon-warehouse-workers-in-coventry-will-vote-on-trade-union-recognition/

And right on schedule, Amazon has once again discovered its incredible facility for ease-of-use. The company has blanketed its shop floor with radioactively illegal "one click to quit the union" QR codes. When a worker aims their phone at the code and clicks the link, the system auto-generates a letter resigning the worker from their union.

As noted, this is totally illegal. English law bans employers from "making an offer to an employee for the sole or main purpose of inducing workers not to be members of an independent trade union, take part in its activities, or make use of its services."

Now, legal or not, this may strike you as a benign intervention on Amazon's part. Why shouldn't it be easy for workers to choose how they are represented in their workplaces? But the one-click system is only half of Amazon's illegal union-busting: the other half is delivered by its managers, who have cornered workers on the shop floor and ordered them to quit their union, threatening them with workplace retaliation if they don't.

This is in addition to more forced "captive audience" meetings where workers are bombarded with lies about what life in a union shop is like.

Again, the contrast couldn't be more stark. If you want to quit a union, Amazon makes this as easy as joining Prime. But if you want to join a union, Amazon makes that even harder than quitting Prime. Amazon has the same attitude to its workers and its customers: they see us all as a resource to be extracted, and have no qualms about tricking or even intimidating us into doing what's best for Amazon, at the expense of our own interests.

The campaigning law-firm Foxglove is representing five of Amazon's Coventry workers. They're doing the Lord's work:

https://www.foxglove.org.uk/2024/05/02/legal-challenge-to-amazon-uks-new-one-click-to-quit-the-union-tool/

All this highlights the increasing divergence between the UK and the US when it comes to labor rights. Under the Biden Administration, NLRB General Counsel Jennifer Abruzzo has promulgated a rule that grants a union automatic recognition if the boss does anything to interfere with a union election:

https://pluralistic.net/2023/09/06/goons-ginks-and-company-finks/#if-blood-be-the-price-of-your-cursed-wealth

In other words, if Amazon tries these tactics in the USA now, their union will be immediately recognized. Abruzzo has installed an ultra-sensitive tilt-sensor in America's union elections, and if Bezos or his class allies so much as sneeze in the direction of their workers' democratic rights, they automatically lose.

(Image: Isabela.Zanella, CC BY-SA 4.0, modified)


Hey look at this (permalink)



A Wayback Machine banner.

This day in history (permalink)

#20yrsago EFF’s cognitive radio comments to the FCC https://web.archive.org/web/20040707154407/https://www.eff.org/IP/Video/HDTV/EFF-ET03-108.pdf

#20yrsago RIAA: Control your P2P kids! https://craphound.com/areyourkids.txt

#20yrsago Command-line pizza-orderator https://web.archive.org/web/20040508090713/http://www.beigerecords.com/cory/pizza_party/

#15yrsago London cops catch and search a potential terrorist every three minutes https://news.bbc.co.uk/2/hi/uk_news/england/london/8034315.stm

#15yrsago EU kills “3-strikes” Internet rule, affirms Internet is a fundamental right https://www.laquadrature.net/en/2009/05/06/amendment-138-46-adopted-again/

#15yrsago EFF sues Obama administration for promised access to secret copyright treaty documents https://www.eff.org/press/archives/2009/05/06

#15yrsago Homemade Hollywood: book about fan-films and the obsessives who make them https://memex.craphound.com/2009/05/06/homemade-hollywood-book-about-fan-films-and-the-obsessives-who-make-them/

#10yrsago Comic strip etched into a human hair https://www.youtube.com/watch?v=urxflpkY8n8

#10yrsago Review: This One Summer https://memex.craphound.com/2014/05/06/review-this-one-summer/

#5yrsago Evil Clippy: a tool for making undetectable malicious Microsoft Office docs https://www.outflank.nl/blog/2019/05/05/evil-clippy-ms-office-maldoc-assistant/

#5yrsago Big Tech lobbyists and “open for business” Tories killed Ontario’s Right-to-Repair legislation https://www.vice.com/en/article/9kxayy/right-to-repair-bill-killed-after-big-tech-lobbying-in-ontario

#5yrsago Twitter users answer the question: “When did you become radicalized by the U.S. health care non-system?” https://memex.craphound.com/2019/05/05/twitter-users-answer-the-question-when-did-you-become-radicalized-by-the-u-s-health-care-non-system/

#1yrago Look at all the great stuff we lost because of inflation scare-talk https://pluralistic.net/2023/05/05/wmds-two-point-oh/#or-your-lying-ears

#1yrago On the Media on the enshittification (pt 1) https://pluralistic.net/2023/05/06/people-are-not-disposable/#otm

#1yrago Hollywood is the single best example of mature labor power in America https://pluralistic.net/2023/05/06/people-are-not-disposable/#union-strong


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, holding a mic.




A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Stahle covers.

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • Picks and Shovels: a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books, February 2025

  • Unauthorized Bread: a graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2025



Colophon (permalink)

Today's top sources:

Currently writing:

  • A Little Brother short story about DIY insulin PLANNING

  • Picks and Shovels, a Martin Hench noir thriller about the heroic era of the PC. FORTHCOMING TOR BOOKS JAN 2025

  • Vigilant, Little Brother short story about remote invigilation. FORTHCOMING ON TOR.COM

  • Spill, a Little Brother short story about pipeline protests. FORTHCOMING ON TOR.COM

Latest podcast: Precaratize Bosses https://craphound.com/news/2024/04/28/precaratize-bosses/


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Medium (no ads, paywalled):

https://doctorow.medium.com/

Twitter (mass-scale, unrestricted, third-party surveillance and advertising):

https://twitter.com/doctorow

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

Published 04.05.2024 at 14:13

Pluralistic: Rosemary Kirstein's "The Steerswoman" (04 May 2024)


Today's links



Rosemary Kirstein's cover for her novel 'The Steerswoman.'

Rosemary Kirstein's "The Steerswoman" (permalink)

For decades, scammy "book doctors" and vanity presses spun a tale about how Big Publishing was too conservative and risk-averse for really really adventurous books, and the only way to get your visionary work published was to pay them to fill your garage with badly printed books that you'd spend the rest of your life trying to get other people to read:

https://pluralistic.net/2021/07/04/self-publishing/

Like all successful grifts, this one worked because it wasn't entirely untrue. No, mainstream publishing isn't filled with corporate gatekeepers who relish the idea of keeping your brilliance from reaching its audience.

But.

But editors sometimes make bad calls. They reject books because of quirks of taste, or fleeting inattentiveness, or personal bias. In a healthy publishing industry – one with dozens of equal-sized presses, all commanding roughly comparable market-share – good books would never slip through the cracks. One publisher's misstep would be another's opportunity.

But after decades of mergers, the population of major publishers has dwindled to a mere Big Five (it was almost four, but the DOJ blocked Penguin Random House's acquisition of Simon & Schuster):

https://www.justice.gov/opa/pr/justice-department-sues-block-penguin-random-house-s-acquisition-rival-publisher-simon

This means that some good books definitely can't find a home in Big Publishing. If you miss with five editors, you can exhaust all your chances with the Big Five.

There's a second tier of great publishers, from data-driven juggernauts like Sourcebooks to boutique presses like Verso and Beacon Press, who publish wonderful books and are very good to their authors (I've published with four of the Big Five and half a dozen of the smaller publishers).

But even with these we-try-harder boutique publishers in the mix, there's a lot of space for amazing books that just don't fit with a "trad" publisher's program. These books are often labors of love by their creators, and that love is reciprocated by their readers. You can have my unbelievably gigantic Little Nemo in Slumberland collection when you pry my cold, dead fingers off of it:

https://memex.craphound.com/2006/09/25/gigantic-little-nemo-book-does-justice-to-the-loveliest-comic-ever/

And don't even think of asking to borrow my copy of Jack Womack's Flying Saucers are Real!:

https://memex.craphound.com/2016/10/03/flying-saucers-are-real-anthology-of-the-lost-saucer-craze/

I will forever cherish my Crad Kilodney chapbooks:

https://pluralistic.net/2024/02/19/crad-kilodney-was-an-outlier/#intermediation

Then there's last year's surprise smash hit, Shift Happens, a two-volume, 750-page slipcased book recounting the history of the keyboard. I own one. It's fantastic:

https://glennf.medium.com/how-we-crowdfunded-750-000-for-a-giant-book-about-keyboard-history-c30e24c4022e

Then there's the whole world of indie Kindle books pitched at incredibly voracious communities of readers, especially the very long tail of very niche sub-sub-genres radiating off the woefully imprecise category of "paranormal romance." These books are landing at precisely the right spot for their readers, despite some genuinely weird behind-the-scenes feuds between their writers:

https://www.theverge.com/2018/7/16/17566276/cockygate-amazon-kindle-unlimited-algorithm-self-published-romance-novel-cabal

But as Sturgeon's Law has it: "90% of everything is shit." Having read slush – the pile of unsolicited manuscripts sent to publishers – I can tell you that a vast number of books get rejected from trad publishers because they aren't good books. I say this without intending any disparagement towards their authors and the creative impulses that drive them. But a publisher's job isn't merely to be good to writers – it's to serve readers, by introducing them to works they are likely to enjoy.

The vast majority of books that publishers pass on are not books that you will want to read, so it follows that the vast majority of self-published works offered on self-serve platforms like Kindle or pitched by hopeful writers at street fairs and book festivals are just not very good.

But sometimes you find someone's indie book and it's brilliant, and you get the double thrill of falling in love with a book and of fishing a glittering needle out of an unimaginably gigantic haystack.

(If you want to read an author who beautifully expresses the wonder of finding an obscure, self-published book that's full of unsuspected brilliance, try Daniel Pinkwater, whose Alan Mendelsohn, The Boy From Mars is eleven kinds of brilliant, but is also a marvelous tale of the wonders of weird used book stores with titles like KLONG! You Are a Pickle!):

https://en.wikipedia.org/wiki/Alan_Mendelsohn,_the_Boy_from_Mars

I also write books, and I am, in fact, presently in the midst of a long book-tour for my novel The Bezzle. Last month, I did an event in Cambridge, Mass with Randall "XKCD" Munroe that went great. We had a full house, and even after the venue caught fire (really!), everyone followed us across the street to another building, up five flights of stairs, and into another auditorium where we wrapped up the gig:

https://www.youtube.com/watch?v=ulnlSRbH80Y

Afterwards, our hosts from Harvard Berkman-Klein took us to a campus pizza joint/tiki bar for dinner and drinks, and we had a great chat about a great many things. Naturally, we talked about books we loved, and Randall said, "Hey, have you ever read Rosemary Kirstein's Steerswoman novels?"

(I hadn't.)

"They're incredible. All these different people kept recommending them to me, and they kept telling me that I would love them, but they wouldn't tell me what they were about because there's this huge riddle in them that's super fun to figure out for yourself:"

https://www.rosemarykirstein.com/the-books/

"The books were published in the eighties by Del Rey, and the cover of the first one had a huge spoiler on it. But the author got the rights back and she's self-published it" (WARNING: the following link has a HUGE SPOILER!):

https://www.rosemarykirstein.com/2010/12/the-difference/

"I got it and it was pretty rough-looking, but the book was so good. I can't tell you what it was about, but I think you'll really like it!"

How could I resist a pitch like that? So I ordered a copy:

https://bookshop.org/p/books/the-steerswoman-rosemary-kirstein/7900759

Holy moly is this a good novel! And yeah, there's a super interesting puzzle in it that I won't even hint at, except to say that even the book's genre is a riddle that you'll have enormous great fun solving.

Randall wasn't kidding about the book's package. The type looks to be default Microsoft fonts, the spine is printed slightly off-register, the typesetting has lots of gonks, and it's just got that semi-disposable feel of a print-on-demand title.

Without Randall's recommendation, I never would have even read this book closely enough to notice the glowing cover endorsement from Jo Walton, nor the fact that it was included in Damien Broderick and Paul Di Filippo's "101 Best Science Fiction Novels 1985-2010."

But I finished reading the first volume just a few minutes ago and I instantly ordered the next three in the series (it's planned for seven volumes, and the author says she plans on finishing it – I can't wait).

This book is such an unexpected marvel, a stunner of a novel filled with brilliant world-building, deft characterizations, a hard-driving plot and a bunch of great surprises. The fact that such a remarkable tale comes in such an unremarkable package makes it even more of a treasure, like a geode: unremarkable on the outside, a glittering blaze within.


Hey look at this (permalink)



A Wayback Machine banner.

This day in history (permalink)

#20yrsago Disney buries Moore’s new movie to save its tax-breaks https://www.nytimes.com/2004/05/05/us/disney-is-blocking-distribution-of-film-that-criticizes-bush.html

#15yrsago Woman accuses cop neighbor of forging “Come get all my stuff for free” ad on Craigslist https://web.archive.org/web/20090507065346/https://www.dallasnews.com/sharedcontent/dws/news/city/arlington/stories/DN-craigslistcop_02met.ART.State.Edition2.4a690aa.html

#10yrsago How to Talk to Your Children About Mass Surveillance https://locusmag.com/2014/05/cory-doctorow-how-to-talk-to-your-children-about-mass-surveillance/

#10yrsago Straczynski: “The New Aristocracy” https://www.facebook.com/permalink.php?story_fbid=760992300602302&id=139652459402959

#1yrago Ostromizing democracy https://pluralistic.net/2023/05/04/analytical-democratic-theory/#epistocratic-delusions



 
