SAN FRANCISCO, CA —
Editors yanked AI summaries in 24 hours. Here’s why that matters.
Last June the Wikimedia Foundation announced a trial of “Simple Article Summaries”: short, clearly labeled AI-generated overviews at the top of long entries, shown only to mobile users who opted in. The idea was small and sensible: give readers a quick orientation before they dive into dense, Britannica-style pages. The catch? Volunteer editors, worried about AI hallucinations and the erosion of editorial control, pushed back so hard that the Foundation paused the experiment almost immediately.
Why the freakout isn’t just technophobia
We get the fear. Wikipedia’s credibility was built on human curation, citations, and hours spent chasing down sources. AI summaries that are machine-generated and unverified feel like a fast lane to subtle misinformation. But rejecting a simple, opt-in summary outright? That’s a reflexive veto, not a conversation. It looks less like protecting readers and more like protecting an institutional gate.
History repeats: VisualEditor, Media Viewer, and the same drama
This isn’t new. VisualEditor’s crash-prone debut alienated the editors it was meant to attract and deepened fears about a shrinking editor pool. Media Viewer stirred an uprising. Even a proposed image filter was torpedoed by community vetoes. Those fights ended in messy compromise. This time, the shutdown was fast enough to leave questions: is the community getting less willing to experiment, or is the Foundation getting worse at coalition-building?
The bigger problem: sustainability and extraction
Here’s the brutal part: Wikipedia’s unpaid editors are aging, and tech companies are mining the encyclopedia to train profitable AI without compensating the commons. The result? AI that answers users’ questions without sending them back to the source, breaking the reader→editor→donor feedback loop that funds Wikipedia. That’s not theoretical: it’s already shrinking the pipeline that has kept the site healthy for nearly a quarter century.
What actually needs to happen
We don’t need a binary choice between “no change” and “unvetted AI everywhere.” Practical fixes: incremental, opt-in summaries with strict provenance and visible citations; deeper community involvement in design and rollout; and real commercial licensing for companies using Wikipedia data (yes, Wikimedia Enterprise or comparable paid access). Also consider controlled LLMs trained only on verified Wikimedia content and updated licensing to reflect AI-era realities.
The Editor’s Take: the Wikipedia community’s refusal to try even small, community-governed experiments is a slow-motion problem. The Foundation botched engagement; editors overreacted. We need careful, opt-in tools, stronger provenance, and payment from companies that build billion-dollar products on volunteer labor. Otherwise the site risks becoming a museum of knowledge rather than the go-to place for it.
Source: IEEE Spectrum

