The Economist, “AI is killing the web. Can anything save it?”
tl;dr version:
The article exposes a fundamental shift in how people navigate the internet—AI is delivering answers directly, fundamentally altering the web’s economic ecosystem.
The unraveling of the “open web” bargain
The traditional web model depends on websites offering free content in exchange for search-driven traffic and advertising revenue. But now, AI-driven query tools (like ChatGPT, Perplexity, Google’s AI‑Mode and OpenAI’s upcoming “Operator” browser) provide users with synthesized answers instead of links. This means users increasingly never visit the original sources, choking off traffic to content sites.
Similarweb data shows a ~15% drop in global search-generated human traffic over the last year. The hardest hit: reference sites (−15%), health (−31%), education, and science platforms. Publishers like Dotdash Meredith—already receiving over 60% of their traffic from Google—have seen that figure plummet to the mid‑30s.
Stack Overflow’s CEO warns that AI “is basically choking off traffic to most content sites,” explaining a noticeable decline in community contributions. Even Wikipedia warns that AI-generated summaries without citations are “blocking pathways” for people to discover and contribute.
New models to monetize AI-generated value
In response, content creators are negotiating licensing deals or suing AI firms. News Corp has deals with OpenAI; The New York Times works with Amazon and is litigating against OpenAI; Reddit struck a $60m/year licensing deal with Google. However, legal rulings in California have so far sided with AI firms, affirming their fair-use defense.
Moreover, most websites are too small to negotiate with big AI platforms, and unilaterally blocking bots merely ensures total obscurity.
That has driven new technical and economic innovations:
- Cloudflare offers bot-control services where site owners choose the conditions under which AI gathers their content, including pay‑per‑crawl options.
- Tollbit enables sites to impose “bot paywalls,” varying fees depending on the content, with early uptake among publishers like AP and Newsweek.
- ProRata (via its Gist.ai engine) redistributes ad revenue from AI answers back to source sites, offering one possible long-term model.
Despite these efforts, scaling up remains challenging—antitrust law limits collective action, and judges have yet to rule in favor of widespread licensing frameworks.
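For context on the blocking option mentioned above: the blunt, standards-based way a site opts out today is a robots.txt file naming the documented AI crawler user agents. A minimal sketch (the agent names below are the publicly documented ones; whether a crawler honors them is entirely voluntary):

```
# robots.txt: ask AI crawlers to stay out while leaving ordinary search alone
User-agent: GPTBot          # OpenAI's training crawler
Disallow: /

User-agent: Google-Extended # Google's opt-out token for AI training
Disallow: /

User-agent: CCBot           # Common Crawl, a common training-data source
Disallow: /

# Regular search crawlers remain welcome
User-agent: *
Allow: /
```

Which is the article’s point: blocking is trivially easy, but it only works against crawlers that choose to comply, and it trades training exposure for the “total obscurity” the piece warns about.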
A broader and perhaps necessary evolution
Google maintains optimism: as search becomes more AI‑centric, the volume of content indexed has grown by 45% in two years. More sites are being “read” by AI, even if by bot rather than human eyes. Google claims outbound clicks haven’t dramatically dropped, citing other factors like social media and podcasts.
The article concludes with a realistic outlook: the open web’s traditional ad‑and‑traffic‑driven model is under existential threat. Content creators must adapt—employing licensing agreements, exploring bot-led paywalls or revenue sharing, and diversifying with newsletters, apps, events, audio/video offerings and enterprise products.
Ultimately, while AI may represent the most significant disruption to date, The Economist suggests it’s a pivot point, not the web’s death. The future internet may coalesce around new “agentic” protocols, collective licensing, revenue-sharing infrastructure, and open standards—balancing creator incentives with AI‑driven efficiency.
In short, AI-driven search is reshaping the internet’s economic model. While content creators scramble to monetize their work via deals, bot paywalls, or revenue sharing, the survival of the open web hinges on whether these new systems can scale. Without them, we risk losing serendipity, diversity, and the core value of human‑driven content—even as AI offers unprecedented access and efficiency.
The above is from ChatGPT, with the most basic prompt imaginable: a request for a 500-word summary. It’s pretty good—and considerably better than it would have been when I first started playing with the LLM a year and change ago. It also recommends the FT article “We need a new deal for the web” and another from Business Insider titled “Thanks to ChatGPT, the pure internet is gone. Did anyone save a copy?”
It’s noteworthy that neither the FT nor the BI piece is paywalled—but the Economist article is.
Another prompt, “Provide the key points of those three articles in bullet form,” produces this:
1. The Economist – “AI is killing the web. Can anything save it?”
- The traditional web model relies on free content attracting traffic via search engines, monetized through ads.
- AI tools like ChatGPT, Perplexity, and Google’s SGE answer queries directly, bypassing source websites.
- This has led to a significant drop in web traffic, especially for reference, health, and education sites.
- Content creators are losing incentives to produce content, which threatens the sustainability of the open web.
- Some publishers are striking licensing deals with AI firms (e.g., News Corp, Reddit), but many smaller sites lack leverage.
- Tools like Tollbit, Cloudflare’s bot-control, and ProRata aim to monetize or control AI access to content.
- Lawsuits challenging AI use of scraped data have largely failed so far, with courts siding with AI companies.
- Google insists AI hasn’t greatly reduced clicks and says more content is being indexed than ever before.
- AI companies may need to share value with creators to preserve a healthy content ecosystem.
- The web may shift to a “post-search” world relying on bots, agents, and new compensation models.
2. Financial Times – “We need a new deal for the web”
- The web’s original promise of openness and access is being undercut by the rise of AI and platform dominance.
- AI threatens to extract value from the web without fairly compensating creators.
- Large tech firms have built monopolistic platforms that siphon value but contribute little original content.
- The FT calls for a new “deal” to protect journalism, knowledge, and public discourse from being hollowed out.
- This deal should include legal reform, revenue-sharing models, and enforceable transparency for AI systems.
- Content producers need stronger collective bargaining power to negotiate with AI firms.
- Regulation should require AI systems to attribute and compensate sources.
- The current path could lead to a digital world dominated by derivative content with no incentives to create new work.
- Europe’s efforts (e.g., the EU AI Act) are a step forward but need to go further.
- Preserving a rich, open, and diverse web requires rebuilding the economic model around trust and value creation.
3. Business Insider – “Thanks to ChatGPT, the pure internet is gone. Did anyone save a copy?”
- AI tools are rapidly consuming and rewriting internet content, altering its original structure and accessibility.
- This creates a risk of “data collapse” where AI models recycle AI-generated content, degrading quality over time.
- Some technologists are racing to preserve “clean” versions of the internet, such as through the “Low Background Steel” project.
- There’s growing concern about the provenance of data used by AI—what’s real versus AI-generated.
- Cloudflare and others are helping sites manage AI bots and control scraping, but it’s an uphill battle.
- The internet is becoming more polluted by synthetic content, reducing discoverability of original thought.
- Critics warn that without intervention, AI will erode the “signal” of genuine human expression.
- Some archivists are trying to preserve a version of the web that predates AI dominance.
- Others call for labeling systems or watermarks to distinguish AI output from human-generated content.
- The article paints a sobering picture: the age of a “pure,” decentralized web is fading, and we may not be able to reverse it.
Again, pretty good and done in seconds.
It then asks, “Let me know if you’d like a comparison or synthesis of these.”
The comparison:
Shared Concerns
- AI is Disrupting the Web’s Economic Model
All three articles agree that AI tools (like ChatGPT, Perplexity, and Google’s SGE) are undermining the traditional web structure by intercepting user queries and providing answers directly, thereby bypassing source websites and cutting off revenue streams for content creators.
- Loss of Incentives for Content Creation
They emphasize that creators—journalists, academics, developers—are losing motivation to produce quality content, knowing they might not receive traffic, attribution, or compensation. This threatens the sustainability and quality of the internet.
- Urgency for New Legal and Economic Frameworks
Each article suggests the current legal and economic systems are ill-equipped to protect content creators from exploitation by AI. They call for regulatory intervention, licensing systems, or new technical standards to preserve the open web.
Differences in Emphasis
| Theme | The Economist | Financial Times | Business Insider |
| --- | --- | --- | --- |
| Focus | Web traffic decline & monetization crisis | Policy and rights-based call for reform | Data purity, archiving, and existential risk |
| Tone | Analytical, structural | Normative, advocacy-driven | Alarmist, preservationist |
| Proposed Solution | Licensing, bot paywalls, revenue sharing via new tech | Legal regulation, collective bargaining, enforceable transparency | Archiving the web, labeling AI content, preserving “pure” data |
| Threat Framed As | Economic unsustainability for creators | Collapse of public discourse and media independence | Irreversible degradation of internet quality and provenance |
Unique Contributions
- The Economist introduces the idea of “bot paywalls” and revenue-sharing mechanisms (e.g., Tollbit, Cloudflare, ProRata) as experimental models for creators to monetize interactions with AI.
- The Financial Times frames the problem as a political and moral crisis, urging governments and regulators to enforce accountability, transparency, and fairness from AI platforms—especially to protect journalism and democratic discourse.
- Business Insider raises the more philosophical and archival issue of the web’s transformation: the internet as a record of human knowledge is being contaminated by synthetic content, possibly leading to a “data collapse” where future AI only feeds on its own outputs.
Points of Convergence
- AI scraping content without consent is an urgent issue.
- Compensation models and/or regulation are needed to ensure human creators are not driven out of existence.
- AI firms are benefiting disproportionately from a content ecosystem they did not create but now dominate.
- Without systemic change, the result could be a less diverse, less trustworthy, and less useful web.
Conclusion
These three articles complement each other to paint a comprehensive picture: the web is undergoing a structural transformation due to generative AI. While The Economist focuses on the economic fallout, the FT highlights the moral and democratic implications, and Business Insider warns of a technological degradation of data integrity. All converge on the need for urgent innovation, regulation, and collaboration to save the web from being hollowed out by the very AI tools that depend on it.
The synthesis:
AI Is Reshaping the Internet’s Core Structure
The rise of AI tools that deliver synthesized answers—such as ChatGPT, Perplexity, and Google’s AI Overviews—is fundamentally changing how users engage with information online. Rather than directing users to source websites, these systems increasingly serve as gatekeepers, offering final answers without requiring users to click through. This shift undermines the basic economic model that has sustained the open web for decades: free content in exchange for ad-supported traffic.
The Consequences: Economic, Epistemic, and Existential
This disruption is multi-dimensional:
- Economic: As AI displaces traditional search engines, web traffic is collapsing, especially for knowledge-intensive sites—reference, health, education, science. As creators lose visibility and ad revenue, they are scaling back or shutting down. Licensing deals with major AI firms (e.g., Reddit, News Corp) offer some relief, but smaller sites are left behind, unable to negotiate or monetize their content access.
- Epistemic: The Financial Times emphasizes that this process erodes democratic discourse and journalism, turning content creation into a one-sided extraction game that benefits tech platforms but hollows out the knowledge economy. Without reliable revenue, trusted institutions may fade, replaced by opaque AI outputs lacking attribution, transparency, or accountability.
- Existential: Business Insider introduces the notion of “data collapse,” where AIs are increasingly trained on AI-generated content, creating a feedback loop of synthetic information. The original, human-authored internet is being polluted—its provenance unclear, its trustworthiness diminished. Some technologists now treat the internet as a cultural artifact that must be preserved before it’s overwritten by machine-made content.
Emerging Responses, But Fragmented
Solutions are emerging, but are still nascent, fragmented, and underpowered:
- Technical fixes: New tools like bot paywalls (Tollbit), bot access control (Cloudflare), and revenue-sharing systems (ProRata) allow websites to charge or regulate AI access. But adoption is low, and many creators fear that blocking AI simply ensures invisibility.
- Legal and regulatory reform: The Financial Times argues that governments must intervene, creating frameworks that enforce attribution, mandate compensation, and support collective bargaining by content creators. Yet courts have largely sided with AI firms, reinforcing the doctrine of “fair use” in scraping content.
- Cultural and archival efforts: Business Insider highlights efforts to preserve a “clean” internet—archiving web content untouched by AI, and proposing labeling systems to help users distinguish human-generated versus AI-synthesized content.
A Crossroads for the Web
Together, the three articles suggest that we are at a critical juncture. The web is moving from an open, human-centric ecosystem to a closed loop of machine-to-machine content creation and consumption. If unchecked, this transformation could undermine the internet’s diversity, authenticity, and value as a public good.
Yet it is not inevitable. A combination of technical innovation, legal reform, and cultural awareness could help build a new web economy—one where AI works with, not against, human creators. To get there, platforms must start sharing the value they extract, regulators must recognize the stakes, and society must decide what kind of digital future it wants to preserve.
In short: AI is not just changing the web; it’s threatening to consume it. Whether the internet remains a vibrant space of human knowledge or becomes a flattened mirror of machine outputs will depend on choices we make now—about economics, policy, and ethics.
So, aside from making paywalls essentially irrelevant—making monetizing web content still harder—it makes creating (generating?) content easier than ever before.
Producing quality, original content is hard. The Economist, FT, New York Times, Washington Post, or another elite outlet may well pay seasoned reporters six-figure salaries and provide extensive budgets for research and travel. For years, aggregation sites have paid staffers peanuts to crank out stories, often under more clickbait headlines, that simply regurgitate the reporting of others. The more scrupulous ones acknowledge their sources, but most just pretend that they’re the ones who got the quotes.
But generative AI—and I’m just using the free version of ChatGPT and haven’t spent a lot of time playing with it—can now do that in seconds. One suspects we’ll get to the point where even the prestige outlets rely on it more heavily. Heck, I’m already seeing that happening in academia—and not just with the students.
The nature of my academic training is such that I prefer to at least feed in source materials that I trust. But most will simply use ChatGPT and competitors the way most of us have used Google over the years. We’ve already seen the erosion of the distinction that was drilled into me as far back as high school between primary, secondary, and tertiary sources and between authoritative and non-authoritative ones. That will inevitably escalate rapidly in this new environment.
And, of course, if people can’t make a living creating quality content, they won’t create it.
Author: James Joyner
Originally published at https://www.outsidethebeltway.com.