CNET's AI Controversy

And what marketers can learn from their mistakes.

Happy “Friday Junior,” Collaborators!

The marketing world is ablaze with the news that Google may be changing its stance toward AI-generated content. In its “Helpful Content” guidance, Google replaced “written by people” with “created for people.”

The change encourages marketers to create content that is genuinely useful for readers, whether they use AI or their own words.

This week, we’re looking at how a media company used AI to boost its Google Search rankings and sparked a massive backlash in the process. Most importantly, we’ll talk about how we can avoid their mistakes.

Let’s get ready for this new era of Google Search!


CNET’s AI Controversy

Our story begins with an AI-generated resignation letter.

On her last day as a senior editor in cybersecurity and privacy at CNET, Rae Hodge logged into her email. She generated a resignation letter with AI and sent a screenshot of it to hundreds of colleagues.

Switching to her own words, Hodge claimed that the company was sending AI-generated content to cybersecurity newsletter subscribers, with errors that “could cause direct harm to readers.”

The Backlash

The concerns Hodge outlined would soon spread.

In January, Futurism and The Verge reported that CNET had been using AI to generate stories without disclosing its use.

Even worse, the AI-written content contained real mistakes. A CNET article on compound interest read: “if you deposit $10,000 into a savings account that earns 3% interest compounding annually, you'll earn $10,300 at the end of the first year.” (Actually, while you would have $10,300 in the account at the end of the year, you would only earn $300 in interest.)

All writers, human and AI alike, make mistakes. However, these errors can cause real harm, particularly in financial and health advice articles.
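The article’s error is easy to check with a few lines of arithmetic. Here’s a minimal sketch (the `compound` helper is ours, not anything from CNET’s tool) showing the difference between the account balance and the interest actually earned:

```python
def compound(principal: float, rate: float, years: int) -> float:
    """Return the account balance after compounding annually."""
    return principal * (1 + rate) ** years

principal = 10_000
balance = compound(principal, 0.03, 1)
earnings = balance - principal

print(f"Balance after 1 year: ${balance:,.2f}")   # $10,300.00
print(f"Interest earned:      ${earnings:,.2f}")  # $300.00
```

The balance is $10,300, but the earnings are only $300 — the two figures the article conflated.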

The basic inaccuracies in CNET’s AI-generated articles raised larger concerns about the publication’s AI workflow and oversight. Staff at CNET told The Verge that they didn’t know which articles were written by AI and which were written by humans.

The backlash quickly became a media firestorm.

The Company

The charges were surprising for an outlet that was once one of the most trusted in the tech industry.

In the Dot Com era, CNET dominated the tech news and reviews space. However, as media business models shifted and a new generation of media disruptors emerged, CNET fell on hard times. After being acquired by CBS Interactive for $1.8 billion in 2008, the company sold to Red Ventures for a mere $500 million in 2020.

The New York Times itself described Red Ventures as “the biggest digital media company” that “you’ve never heard of.” Red Ventures is a private equity firm that owns some of the biggest franchises in digital media, including CNET, Bankrate, Healthline, Lonely Planet, Greatist, and The Points Guy.

You may notice that many of those sites focus on advice. That is no accident. Red Ventures’ business model centers on affiliate marketing. They use SEO to bring in consumers who are already shopping, advise them on what to buy, and get a cut of the purchase.

In the SEO affiliate model, reducing the cost of creating basic, search-optimized content maximizes profits. After acquiring CNET, Red Ventures conducted three rounds of layoffs, and shifted the site’s content toward its core SEO strategy.

Red Ventures was drawn to AI. AI could write basic, SEO-bait articles like “What is compound interest?” faster and cheaper than human writers. The company could fill the AI articles with lucrative affiliate links to financial products.

The AI

Red Ventures built proprietary AI writing software to create its content, according to VP of Content Lance Davis. Editors at CNET could combine AI-generated text with their own work and pull data from specific domains and domain-level sections to create stories. The tool was one of several AI tools the company used for different purposes.

By October 2022, the tool had spent months in development and internal testing. Leaders at Red Ventures convened to discuss its progress.

The company’s tool wrote faster than humans, but editors had to spend more time revising AI copy than human copy. More concerningly, internal tests found that the AI sometimes added incorrect details and plagiarized its sources.

The executives discussed these challenges.

Three months later, Rae Hodge wrote her resignation letter.

The Fallout

In response to the backlash against its undisclosed AI stories, Red Ventures paused AI reporting across CNET, Bankrate, and other publications. The company promised to disclose its use of AI moving forward.

The company also created a new AI policy:

“If and when we use generative AI to create content, that content will be sourced from our own data, our own previously published work, or carefully fact-checked by a CNET editor to ensure accuracy and appropriately cited sources.”

- CNET AI Policy: How We Will Use Artificial Intelligence at CNET

The SEO Results

SEO expert Kevin Indig used Ahrefs to estimate the impact of Red Ventures’ AI content. As of January 2023, CNET generated ~20K organic visits per month from AI content, while its sister site Bankrate brought in ~125K visits per month.

Lessons for Marketers

Taken alone, CNET’s AI site traffic numbers are enviable. But they came at an avoidable cost to public trust.

Whether you are a journalist or a marketer, publishing inaccurate or plagiarized content comes with legal, ethical, and business risks.

Here are a few tips to help you reduce the risks of using AI content:

Human oversight is crucial

We’ve said it before and we’ll say it again: a (diligent) human should edit and fact-check all your AI-generated content.

LLMs are incredible writing assistants, but they cannot operate independently yet. They sometimes make up facts and present them so authoritatively that you can go down a 30-minute internet rabbit hole trying to figure out where the model got its information.

You should also run a plagiarism checker on all of your AI content. Notably, CNET’s team claimed that the checker they used didn’t catch all of the “unoriginal content” in their articles, so consider using multiple plagiarism checkers.

Don’t rush AI tools to implementation

Executives at Red Ventures allegedly knew that their proprietary tool made mistakes and copied material from its training sources, but they greenlit a broader content test anyway.

Before you start publishing AI content, take time to test multiple content generators. Make sure that you have a plan in place to catch and fix errors before the content is published.

Most importantly, if a tool is not providing content that is up to your standards, take the time to improve its output or find a new tool. Don’t rush to implement a tool that doesn’t meet your needs.

Quality first

With Google’s change to their AI guidance, a number of new companies will start using generative AI to create SEO-bait articles.

The landscape of search is always changing. There are thousands of updates to Google’s algorithm per year. Through all those changes, one thing has remained consistent: quality matters.

Is your AI content genuinely helpful to your readers? Is it engaging? Is it free of errors that can hurt them?

If you’re not sure how to create engaging content with AI, check your email next Thursday. We’ll be talking about how another media organization creatively implemented AI to delight its readers. Can you guess which outlet it is?