
Artificial intelligence is rewriting the rules of global publishing at unprecedented speed: while Axel Springer lays off the entire Italian editorial staff of Upday to replace it with ChatGPT, Italy's Il Foglio has recorded a 60% increase in sales thanks to an insert written entirely by AI. But behind the scenes, a more complex truth emerges: many “revolutionary AI solutions” hide operational realities that oscillate between genuine innovation and systematic manipulation of the information ecosystem.
The phenomenon, which researchers have ironically dubbed “fauxtomation” (i.e., pseudo-automation), reveals how the tech industry often uses low-quality automation passed off as advanced artificial intelligence.
NewsGuard's research captures an explosive phenomenon: over 1,200 automated pseudo-information sites spread across 16 languages. An ecosystem riding a market set to quadruple in six years, from $26 billion today to nearly $100 billion in 2030.
The difference between those who thrive and those who succumb? The ability to transform AI from an existential threat into a competitive advantage through a new key skill: knowing what to ask the machine—when the machine is really a machine.
The Great Divide: Who's Hiring and Who's Firing in the Age of AI
The two-year period 2024-2025 marked a decisive turning point in the publishing industry. On the one hand, waves of layoffs hit historic newsrooms: Business Insider cut 21% of its staff, The Messenger closed, leaving 300 journalists out of work, while CNN and the Los Angeles Times eliminated hundreds of positions. The roles most affected are copywriters, junior editors, and translators, all easily automatable functions.
On the other hand, a new ecosystem of opportunities is emerging. The Washington Post created the first position of “Senior Editor for AI Strategy and Innovation,” while Newsweek launched a team dedicated to AI for breaking news that helped reach 130 million monthly sessions. The job market has seen a 124% increase in offers for AI roles in the media sector, with salaries reaching up to $335,000 per year for senior prompt engineers.
The key to this transformation lies in the strategic approach. Thomson Reuters invests over $100 million per year in AI, using different models for specific tasks: OpenAI for content generation, Google Gemini for analyzing complex legal documents, and Anthropic Claude for highly sensitive workflows. This multi-vendor approach has allowed the company to optimize costs and performance while maintaining control over editorial quality.
The Art of Dialogue with Artificial Intelligence: The New Grammar of Journalism
“Knowing what to ask the machine” is not a slogan, but an emerging professional skill that is redefining the craft of journalism. A survey of 134 information professionals in the US, UK, and Germany reveals that verifying AI content “sometimes takes longer than writing by hand.” This seemingly paradoxical finding hides a fundamental truth: AI does not replace journalists, but requires new forms of editorial supervision.
The Evolution of Skills: Tradition and Innovation
Traditional skills do not disappear, but evolve into more sophisticated forms. Relationships with sources, editorial judgment, and contextualization remain irreplaceable. As a British newsroom manager points out: “I don't want to be BuzzFeed or CNET, which put out junk. We have to do things right.”
Formulating effective questions for artificial intelligence goes beyond simply requesting information. It requires an understanding of algorithmic biases, the ability to structure complex requests, and the skill to iterate to obtain increasingly accurate results. A productive conversation with AI must: provide context by supplying the necessary background information, specify the desired format, set ethical parameters by requiring transparency in sources, and calibrate the tone to suit the target audience.
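The four elements above can be sketched as a reusable prompt template. This is a hypothetical illustration; the function and field names are invented for the example, not drawn from any newsroom's actual tooling:

```python
def build_prompt(context: str, task: str, output_format: str,
                 ethics: str, tone: str) -> str:
    """Assemble a structured editorial prompt from the four elements
    described above: context, desired format, ethical parameters, tone."""
    return "\n\n".join([
        f"CONTEXT: {context}",
        f"TASK: {task}",
        f"FORMAT: {output_format}",
        f"ETHICAL PARAMETERS: {ethics}",
        f"TONE: {tone}",
    ])

# Example use: a hypothetical morning-briefing request
prompt = build_prompt(
    context="Local elections in Milan; turnout data attached",
    task="Summarize the three main storylines for a morning briefing",
    output_format="Three bullet points, max 40 words each",
    ethics="Cite every source; flag any claim you cannot verify",
    tone="Neutral, suitable for a general news audience",
)
print(prompt)
```

The point of the structure is iteration: each field can be tightened independently when the model's output drifts, which is exactly the supervisory skill the survey respondents describe.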
Verification as a New Frontier
Paradoxically, the AI era has made fact-checking even more crucial. Journalists are developing new methodologies for assisted fact-checking, where artificial intelligence becomes both the object and the tool of verification. The challenge is no longer just to distinguish between true and false, but also to evaluate the quality of automatic summaries, identify significant omissions, and recognize when AI introduces subtle biases into the narrative.
The responsible use of artificial intelligence requires constant ethical reflection. Transparency with the public about the use of AI becomes a pillar of editorial credibility. In this scenario, a new hybrid figure emerges: the journalist-orchestrator, capable of conducting a symphony of human and digital sources to produce superior quality information.
The AdVon Case: The Evolution from Content Farm to Enterprise Solution
The story of AdVon Commerce perfectly illustrates the evolution of technologies from controversial practices to legitimate business solutions. At the center of the Sports Illustrated and USA Today scandals, AdVon turned “automated journalism” into a million-dollar business. The numbers were impressive: 90,000 articles published through its system for hundreds of publications, using completely fabricated journalists with AI-generated profile photos.
An internal training video obtained by Futurism revealed the operational reality: employees who “generated an AI-written article and refined it.” The strategy was simple but effective: initially use contractors to write product reviews, then use this material to train language models, evolving toward full automation. It is an example of the transition from human labor to “Actual AI,” with human workers training the machines in a process of gradual replacement.
The Positive Transformation
AdVon is now part of Flywheel Digital (acquired by Omnicom) and presents itself as a provider of “SEO & user-centric content solutions powered by AI” for Fortune 500 companies. The shift from controversial content farming for news outlets to enterprise tools for e-commerce represents a typical evolution of tech startups: same technology, different markets, different ethics.
The AdVon case also demonstrates that the same technologies can serve legitimate markets (e-commerce) and problematic practices (fake journalism) simultaneously. The evolution of the model—from content farm to enterprise software—shows how technological innovation can find more ethical applications over time.
The Google Paradox: When Company Divisions Don't Talk to Each Other
The most emblematic case of the complexity of large tech companies emerges from the timeline: on March 5, 2024, Google announces measures against “scaled content abuse”; on April 1, 2024, Google Cloud announces a partnership with AdVon to launch AdVonAI. When Futurism asked for clarification, Google responded with total silence.
The most likely explanation lies in the organizational structure: Google Cloud operates as a separate division with its own commercial objectives, and AdVonAI is positioned as a B2B tool for retailers such as Target and Walmart, not for journalistic content farming. As Karl Bode of Techdirt observes: “Incompetent executives continue to treat AI not as a way to improve journalism, but as a shortcut to creating an automated advertising engagement machine.”
CNET: The Anatomy of a Disaster Foretold
CNET provided one of the first large-scale examples of how NOT to implement AI in journalism, becoming a perfect case study on the risks of “fauxtomation.” The prominent tech site used an “internal AI engine” to write 77 stories published since November 2022, representing about 1% of the total content published during that time.
The Disastrous Results
CNET had to correct errors in 41 of the 77 AI-generated stories, more than half of the automated content. One article on compound interest claimed that a $10,000 deposit earning 3% annual interest would yield $10,300 in the first year; the correct figure is $300 in interest, a roughly 3,333% overstatement that would have badly misled anyone who acted on the advice.
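The arithmetic behind that error is easy to check. A minimal sketch, using only the amounts from the example above:

```python
principal = 10_000
rate = 0.03

# First-year simple interest on the deposit: $10,000 * 3% = $300
correct_interest = principal * rate

# The figure CNET's AI reported as the interest earned
claimed_interest = 10_300

# Relative size of the mistake: (10,300 - 300) / 300 ≈ 33.3x, i.e. ~3,333%
error_pct = (claimed_interest - correct_interest) / correct_interest * 100
print(f"Correct: ${correct_interest:.0f}, claimed: ${claimed_interest}, "
      f"error: {error_pct:.0f}%")
```

The AI apparently conflated the end-of-year balance ($10,300) with the interest earned ($300), the kind of plausible-sounding slip that only expert review catches.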
Subsequent investigations also revealed evidence of structural plagiarism with articles previously published elsewhere. Jeff Schatten, a professor at Washington and Lee University, after reviewing numerous examples, called the bot's behavior “clearly” plagiarism. “If a student turned in an essay with a comparable number of similarities to existing documents without attribution, they would be sent to the student ethics board and, given the repeated nature of the behavior, would almost certainly be expelled from the university.”
The Systemic Consequences
The CNET case reveals how the logic of content farms is also penetrating historic publications. As reported by The Verge, the primary strategy of Red Ventures (owner of CNET) was to publish massive amounts of content, carefully designed to rank high on Google and loaded with lucrative affiliate links. CNET had become an “AI-powered money-making SEO machine.”
The key lesson: AI has a “notorious tendency to produce biased, harmful, and factually incorrect content,” requiring expert human oversight, not just superficial editing.
The New Generation: Fully Automated Content Farms 2.0
Meanwhile, an even more sophisticated generation of fully automated content farms is emerging. NewsGuard has identified sites that “operate with little or no human oversight and publish articles written largely or entirely by bots,” with generic names such as iBusiness Day, Ireland Top News, and Daily Time Update.
The Numerical Explosion
The numbers are alarming: from April 2023, when NewsGuard identified 49 sites, the number exploded to over 1,000 in August 2024.
Given the simultaneous decline of genuine local news outlets around the world, the odds that a news website claiming to cover local news is fake are greater than 50%.
Concrete Examples of Degeneration
OkayNWA (Arkansas): The first fully automated “local newspaper” with “AI reporters” with surreal names like “Benjamin Business” and “Sammy Streets.” The site scrapes the web for local events and republishes them under fake AI identities, representing the final evolution of the AdVon model.
Celebritydeaths.com: Falsely claimed that President Biden had died and that Vice President Harris had taken over his duties, an example of how uncontrolled automation can create dangerous misinformation.
Hong Kong Apple Daily: The domain of the former pro-democracy newspaper, forced to close in 2021, was taken over by a Serbian businessman and filled with AI-generated content, a particularly cynical case of digital appropriation.
The Economics of Creative Destruction
The Devastating Impact on Traditional Markets
AI-generated sites typically have no paywalls and do not bear the costs of hiring real journalists, so they can attract programmatic advertising revenue more easily. This creates a devastating vicious cycle: as these sites siphon off advertising revenue, local news organizations struggle even more to sustain themselves, leading to further cuts in staff and resources.
NewsGuard found that Google serves 90% of the ads on these sites. When Voice of America asked for clarification, Google said it could not verify the finding because NewsGuard does not share its list of sites (which NewsGuard understandably would not share, as the list is its main commercial asset).
The Numbers of Transformation
The economic data tells a story of profound disruption:
Global AI market in media: 24.2% annual growth (almost five times the average economic growth)
Investments in AI startups: $209 billion in 2024 (46.4% of total venture capital)
Italian print advertising revenue: -13.7% in the first months of 2024
ROI of AI implementations: up to 210% for those who invest correctly
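The growth figures above are consistent with the market projection in the opening section: compounding the reported 24.2% annual rate over six years roughly reproduces the jump from $26 billion to nearly $100 billion. A back-of-the-envelope check, not a forecast:

```python
market_2024 = 26.0   # market size in billions of dollars, per the figure cited earlier
cagr = 0.242         # annual growth rate reported for AI in media

# Compound the annual rate over the six years from 2024 to 2030
projection_2030 = market_2024 * (1 + cagr) ** 6
print(f"Projected 2030 market: ${projection_2030:.1f}B")  # ≈ $95B, near the "nearly $100 billion" cited
```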
The impact on wages is just as dramatic. Roles requiring AI skills command salary premiums of up to 25% in the US. An AI Content Manager at Amazon can earn between $62,000 and $95,000, while senior prompt engineers can earn salaries of $335,000. In contrast, 58% of journalists are self-training in AI without any company support.
The Illuminating Contrast: Il Foglio's Transparent Experiment
Against this backdrop of systemic deception and hidden automation, Il Foglio's experiment shines as an example of radical transparency. The newspaper published a supplement written entirely by AI for an entire month, achieving a 60% increase in sales on the first day and international media coverage.
Claudio Cerasa, editor-in-chief of the newspaper, openly admits the limitations: “This is one of the cases where AI performs poorly” in terms of originality, but he emphasizes the fundamental lesson: “The key thing is to understand what more can be done, not less.”
Il Foglio's success takes on even greater significance when compared to the reality of content farms. While Cerasa conducts a transparent and ethical experiment, declaring every aspect of AI use to readers, thousands of sites around the world hide their automated nature behind false journalistic identities.
Cases of Responsible Innovation: When Artificial Intelligence Truly Serves Journalism
News Corp Australia: The Transparent Industrial Model
News Corp Australia already produces 3,000 AI articles per week through the Data Local project, but with one crucial difference: structured editorial oversight and full disclosure. The industrial but transparent approach demonstrates that automation can be implemented on a large scale while maintaining ethical standards.
EXPRESS.de: Collaborative Artificial Intelligence
The case of EXPRESS.de in Germany illustrates how AI can become a genuine partner for journalists. Their “Klara” system now contributes to 11% of articles and during seasonal peaks accounts for 8-12% of overall traffic, mainly thanks to its effective headline generation.
The impact is measurable: this human-AI partnership has led to a significant 50-80% increase in click-through rates when AI curates articles based on user interests. Employees act as supervisors, reviewing each piece, verifying sources, and ensuring journalistic integrity.
RCS MediaGroup: The Italian Strategic Approach
Fabio Napoli, Business Digital Director at RCS, highlights how the company plans to expand its AI-driven offerings by developing new thematic apps and improving existing platforms such as L'Economia. The goal is to use AI and data analytics to deliver more personalized content, ensuring that readers engage more deeply and spend more time on RCS platforms.
The Regulatory Framework: From Wilderness to Control
The EU AI Act and Its Implications
The EU AI Act, which came into force in August 2024, represents the first systematic attempt to regulate AI on a continental scale. The act imposes a labeling requirement for AI-generated content, laying the legal groundwork for distinguishing between human and automated content.
The Paris Charter on AI and Journalism
The Paris Charter on AI and Journalism, chaired by Nobel Prize winner Maria Ressa, has defined 10 fundamental principles for ethical AI in journalism. The document emphasizes that “technological innovation does not inherently lead to progress: it must be guided by ethics.”
Key principles include: transparency in the use of AI, mandatory human oversight for sensitive content, protection of source diversity, and clear editorial responsibility. Organizations such as the IFJ and EFJ are fighting to ensure fair compensation for content used in AI training and algorithmic transparency.
Spines and the Debate on Automated Publishing
Among the cases dividing the publishing community is Spines, an Israeli startup that offers automated publishing services, reducing turnaround times from 6-18 months to three weeks, with prices ranging from $1,200 to $5,000, allowing authors to retain 100% of their rights.
The platform uses AI for editing, proofreading, cover design, and formatting, while still assigning a human project manager to oversee each book. Critics focus on quality (“Artificial intelligence is notoriously untalented as a writer”), while supporters emphasize the democratization of access to previously expensive services.
The startup has attracted $22.5 million from reputable investors, and CEO Yehuda Niv has a solid track record. The model represents the industrialization of existing services, not necessarily “revolutionary” but potentially important for the accessibility of publishing.
Future Scenarios: Utopia, Dystopia, or Something in Between?
The “AI in Journalism Futures” Project
The scenarios for 2025-2030 outlined by the “AI in Journalism Futures” project range from radical transformation to continuity. The “Machines in the Middle” scenario envisions AI as essentially the newsroom, processing and distributing most journalistic information.
Experts predict a “post-link reality” where users will no longer visit publishers' sites, accessing news through AI agents that summarize content. This scenario would lead to further centralization of information control in the hands of large tech companies.
Emerging Organizational Models
Successful newsrooms are adopting “two-speed” models that allow for experimentation while maintaining traditional workflows. “Federalized” structures are emerging with autonomous teams supported by centralized AI infrastructure. The key is the balance between technological efficiency and journalistic values: accuracy, fairness, accountability, and public service.
The Unexpected Resilience of the Market
However, a comforting truth emerges from the comments of the editorial community: markets have natural antibodies against scams. As one industry veteran observes, “There are always scams, but I've never seen one that has had a lasting impact.”
The reason is simple but powerful: discovery algorithms (which are, ironically, true AI) reward reader engagement and satisfaction. Content farms may flood the market, but quality always emerges. Readers don't read beyond the first page of low-quality content, whether it's produced by humans or AI.
Conclusions: The Revolution That Requires Evolution
AI is not the future of journalism—it is its turbulent and contradictory present. The ongoing transformation reveals an even deeper bifurcation than initially imagined: it is not just about replacing journalists with machines, but about the battle between ethical automation and predatory “fauxtomation.”
The contrast between Il Foglio and the thousands of automated content farms is emblematic. On the one hand, a transparent experiment that openly declares its use of AI, invests in human supervision, and uses technology to question the future of the profession. On the other, an industrial system of deception that pollutes the information ecosystem with low-quality content masquerading as authentic journalism.
The Automation of Trust
Success in the age of AI-publishing requires five fundamental elements:
Serious investment in training—not the improvised self-learning that characterizes 58% of the industry
Strict ethical governance—not the “move fast and break things” approach of content farms
Total transparency on processes—not masking automation behind false identities
Understanding that AI amplifies both excellence and mediocrity
Trust in the market's ability to distinguish authentic value from noise
The newsrooms that thrive are those that, like Il Foglio, use AI to free journalists from repetitive tasks and challenge them to focus on what machines cannot do: build trusting relationships, contextualize complexity, and tell stories that touch the human soul.
The Final Paradox
The paradox is devastating but also liberating: in the age of maximum automation, honesty becomes revolutionary. Knowing what to ask of the machine is not just a technical skill—it is an act of resistance against an ecosystem that rewards systematic deception.
But as the wisdom of the publishing community and the resilience of the markets demonstrate, readers can tell the difference. Italian newsrooms are faced with a choice that goes beyond technology: they can join the race to the bottom of automated content farms, or they can follow Il Foglio's example and use transparency as a competitive weapon.
In this era of “fauxtomation,” authentic journalism becomes the ultimate form of automation that no machine can ever replicate: the automation of trust. And trust, as every good journalist has always known, is earned one story at a time, one reader at a time, one truth at a time.
The difference between survival and prosperity is not in the adoption of AI—it's in the ability to maintain integrity while everyone around you pretends that their automation is more sophisticated than it really is. The future belongs to those who know how to turn technology into a tool of truth, not deception.
Welcome to Electe’s Newsletter - English