These are not the answers you're looking for

How to reduce inaccuracies in AI-generated content.

Good morning, Collaborators!

Yesterday, Anthropic released Claude 2.0, its ChatGPT competitor, to the public. Initial reviews are positive, but Claude has a problem with giving incorrect answers.

When I asked what happened to the Titan sub, Claude told me that it sank - in 1999.

Claude included a breathtaking amount of detail.

Claude didn’t let facts get in the way of the story.

It was all fake.

You may have heard other stories about “AI hallucinations,” or times that generative AI gives false answers.

These cases range from funny to alarming.

If you are worried about fake “facts” in AI content, you are in good company. Accuracy and quality are the biggest concerns marketers have about using AI.

Content accuracy is essential: it builds trust and convinces people to buy from your brand. Beyond the business risk, inaccuracies can carry legal, financial, and ethical consequences.

This week, we’re tackling this concern head-on with tips to fact-check your AI-generated content.


How to generate accurate content faster

Here are a few strategies to help you fact-check AI content thoroughly and save time:

1. Give AI the facts yourself.

One way to reduce inaccuracies is to do the background research for your post yourself. Provide your AI tool with an outline of the content that you want it to write.

AI may still insert incorrect details, but knowing what you want to say in advance makes the finished article easier to fact-check.
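If you work with an AI tool through code, you can bake this grounding into the prompt itself. The sketch below is a minimal illustration in Python; the function name and wording are my own, and you would pass the resulting prompt to whichever AI client library you actually use:

```python
# A minimal sketch of "give AI the facts yourself": the verified facts and
# outline are supplied by you, so the model writes around them instead of
# inventing its own. Send the resulting prompt to your AI tool of choice.

def build_grounded_prompt(topic: str, facts: list[str], outline: list[str]) -> str:
    """Combine human-verified facts and an outline into one prompt."""
    fact_lines = "\n".join(f"- {fact}" for fact in facts)
    outline_lines = "\n".join(f"{i}. {point}" for i, point in enumerate(outline, 1))
    return (
        f"Write a blog post about {topic}.\n\n"
        f"Use ONLY these verified facts:\n{fact_lines}\n\n"
        f"Follow this outline:\n{outline_lines}\n\n"
        "If a fact you need is not listed above, say so instead of guessing."
    )

prompt = build_grounded_prompt(
    topic="fact-checking AI content",
    facts=["Accuracy is marketers' top concern about AI-generated content."],
    outline=["Why accuracy matters", "How to supply your own facts"],
)
print(prompt)
```

The explicit "say so instead of guessing" instruction will not eliminate hallucinations, but it gives you a prompt whose factual claims you have already checked.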

2. Ask AI to cite its sources.

For chat-based applications, ask the AI “what evidence supports your answer?” Sometimes, the tool will admit that the data it used to generate its response is shaky or non-existent.

Claude backpedaled quickly.

You can also ask AI to use information from reputable sources (here is a great guide from David Gewirtz on making ChatGPT cite its sources).

The AI tool may still cite the wrong sources or draw the wrong conclusions. The sources AI provides should be the first ones you check, not the last.

3. Keep people involved in fact-checking.

AI is not yet at the point where it can be left on its own. It is crucial for a human to review AI content before you publish it.

Here are a few fact-checking tips:

  • Ask an expert to review the content.

  • Check multiple sources of information.

  • Look for reputable sources.

  • Check the publication date. In fast-moving fields, more recent sources tend to be more accurate.

  • Find the original source.

4. Check content for plagiarism.

The laws around AI content, copyright infringement, and “fair use” are still taking shape.

Before publishing AI content, use a plagiarism checker or a reverse image search to catch content that has been copied from another source.
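At its core, a text plagiarism checker looks for long runs of words that a draft shares verbatim with existing sources. The Python sketch below illustrates that idea with a simple n-gram overlap test; it is a toy example for intuition, not a substitute for a real checker, which compares against a huge index of published content:

```python
# A rough sketch of how text plagiarism detection works: flag any run of
# n consecutive words that a draft shares word-for-word with a known source.
# Real checkers compare against massive indexes; this only shows the idea.

def shared_ngrams(draft: str, source: str, n: int = 6) -> set[tuple[str, ...]]:
    """Return word n-grams that appear in both texts (case-insensitive)."""
    def ngrams(text: str) -> set[tuple[str, ...]]:
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
    return ngrams(draft) & ngrams(source)

draft = "Generative AI can sometimes copy entire sentences from its training data without attribution."
source = "Critics note that generative AI can sometimes copy entire sentences from its training data."
matches = shared_ngrams(draft, source)
print(len(matches))  # any overlapping 6-word runs suggest copied text
```

Any non-empty result is a signal to investigate; an empty result does not prove originality, since paraphrased copying slips past exact-match tests.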


Handling inaccuracies in high-risk content

False information about the chronological order of Star Wars movies is far less harmful than bad advice on managing mental health conditions.

If you work in a high-risk industry like healthcare or finance, make sure that an expert approves AI-generated content before it is published.