How To Fact Check Your AI, And Why It Matters.

More and more of us are using an AI model to check whether something we’ve read or heard is true. It’s easier and quicker to type a question into an AI prompt box and hit send than to trawl through the multiple pages thrown up by a Google search. Not only does it take just a couple of seconds to get an answer, AI models are also great at presenting information in a way that’s easy to understand and quick to grasp. So, what’s not to like?

Well, here’s the thing. AI is really good at sounding absolutely plausible about the information it gives you – so plausible and confident, in fact, that we naturally assume it’s true. But what if it isn’t? That’s not much of a problem if we’re in a mini spat with someone online and everyone walks away with nothing more than a slightly bruised ego, but it could be a real problem if the ‘facts’ have a negative impact on your business or personal life.

In November 2025, the Europa League football match between Aston Villa and Maccabi Tel Aviv was played without any Maccabi fans in attendance. West Midlands Police feared violence and potential riots between the two sets of fans, specifically stating that the Maccabi Tel Aviv fans could cause public disorder. The Chief Constable at the time, Craig Guildford, cited a previous match between West Ham and Maccabi Tel Aviv as evidence that the Maccabi fans would cause trouble, and used it to justify banning them from attending. The subsequent furore led to a media storm, accusations that West Midlands Police had acted in an antisemitic way, and a great deal of upset.

The thing was, the West Ham vs Maccabi Tel Aviv game never happened, and it appears that Maccabi fans are no more violent or troublesome than any other football fans. They were banned from attending because the force didn’t do its basic due diligence. Its AI model of choice, Microsoft’s Copilot, had ‘hallucinated’ the match: in plain English, Copilot had invented not only the West Ham vs Maccabi game itself, but also a level of violence between the two sets of fans sufficient to justify banning Maccabi fans from future UK fixtures. As a result, and after intense political and media pressure, Guildford was forced to take early retirement, and the use of Copilot on West Midlands Police systems has been stopped and is currently under review. Of course, hindsight is a wonderful thing, but the failure of the West Midlands Police researchers to check their sources and confirm that the ‘facts’ were indeed facts came down to inadequate training and a lack of knowledge about how AI models work.

Hallucinations – incorrect information passed off as correct – happen because AI models are not the all-seeing, all-knowing entities they appear to be. They are tools that have been trained on enormous amounts of data so that they can spot patterns in text, and they work by predicting the most likely next word, or sequence of words, based on what you’ve asked. And of course, AI models are trained to be helpful, sympathetic and engaging. They sound confident because that’s what their users like. Much like newspapers, AI models exist to make their creators money: a dithering, unhelpful or unlikeable AI model wouldn’t get many regular users, and certainly nobody would subscribe to the service.

The hallucination example I used earlier is a high-profile one, which led to a major shake-up and the early retirement of a chief constable, and in my opinion anyone with the right knowledge of AI models would have spotted the fake match immediately. But hallucinations can be small and easy to miss too: an incorrect date or deadline, even by just one digit; a false but plausible statistic; a misleading clause in an otherwise accurate contract or legal document. Any one of these little errors could have real-world and potentially devastating consequences. Which is where fact checking comes in.

Double and even triple check tax and financial information; legal and regulatory advice; health, safety and compliance information; anything medical; anything you’re planning on publishing (a blog, social media post, booklet) or sending to a client; and anything involving money. As a broad rule of thumb, if it’s something you’d be sceptical about if Jim down the pub told you, or if getting it wrong would cost you money, reputation or customers, fact check it.

It’s also worth bearing in mind that hallucinations are not an AI flaw that can easily be ironed out. They are here to stay, at least in the medium term, so learning how to spot and avoid them is a real-world skill that will continue to pay off in the future.

Which brings us to the how of it. How do we fact check without it taking up so much time that we conclude we might as well have Googled it after all? It’s all about taking a little more care, and here are some ways to make sure you spot the mistakes before they become a problem. When you’re reading through an answer, keep the following five things in mind:

1. It’s too specific for the question: AI loves to add convincing detail even when it isn’t necessary, or true.

2. It contradicts itself: yes, this can and does happen. Read the whole response rather than quickly skimming, so you can spot any contradictions.

3. It’s the perfect answer: if there are no caveats or cautions, beware. That perfect answer to your prompt could be hiding an untruth or two.

4. The source is incorrect or non-existent: follow the source to make sure it leads to a real website, and if you’re not familiar with the site, there are online scam and media bias checkers; I’ve listed a couple in the references section at the end of this blog.

5. It doesn’t sound quite right: you know your business, so trust your gut. If the response feels off, it probably is.
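If you’re comfortable with a little code, the source-checking step in point 4 can even be semi-automated. Here’s a minimal Python sketch that checks whether the links an AI has cited actually lead anywhere – the URLs in the list are placeholders, not real citations, and a link that resolves still needs reading to confirm it says what the AI claims:

```python
# Minimal sketch: check that URLs cited by an AI answer actually resolve.
# A resolving link is not proof the claim is true - you still need to read it.
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

def link_resolves(url: str, timeout: float = 10.0) -> bool:
    """Return True if the URL responds with an HTTP success/redirect status."""
    req = Request(url, method="HEAD", headers={"User-Agent": "fact-check-sketch"})
    try:
        with urlopen(req, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (HTTPError, URLError, TimeoutError):
        return False

# Placeholder citations - swap in the links your AI assistant gave you.
citations = [
    "https://example.com/",
    "https://example.com/made-up-page",
]
for url in citations:
    print(url, "->", "resolves" if link_resolves(url) else "CHECK MANUALLY")
```

This only catches completely invented links; a hallucinated claim attached to a real page will sail straight through, which is why the manual checks above still matter.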

So those are the things to look out for, but how do we check for sure? According to World Wide Web Size, as of January 2025 the internet contained at least 3.98 billion indexed web pages, which is a lot of information to sift through. And the reason we’re using AI assistants to answer questions in the first place is that we don’t want to spend hours looking at irrelevant Google pages. Never fear: fact checking is easy, and with a bit of practice it will become second nature too. It all starts with the prompt you write – here are some things to include when you’re asking for a factual answer:

1. Ask for citations and sources: for anything factual, tell the AI model that you require a citation for every claim. Grounding the information makes it easier to check the facts, and makes it less likely that the AI will hallucinate in the first place.

2. Use guardrails: tell the AI model that it is not allowed to make things up or guess if it is not sure. It’s not a guaranteed failsafe, but it helps.

3. Be specific: broad, open questions are great for brainstorming and creativity, but not so great if you need straight facts. Keep your prompts tight and don’t pack lots of questions into a single prompt.

4. Ask the AI to rate itself: you can ask the AI model to rate its own work – ‘Which parts of this are you most and least sure about?’ It won’t always work, but it should encourage the AI to add caveats.

5. Use the right AI assistant: if you are researching or need an up-to-date factual answer, use Perplexity. While most of the mainstream AI models have internet access, Perplexity automatically provides inline source links for the information it gives you. It was designed as a research assistant, and it makes fact checking very easy. It can also fact check itself and other AI-generated answers, but it isn’t infallible, so be sure to check the answers yourself too.
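For readers who talk to AI through code rather than a chat window, the first four tips can be baked into a reusable prompt wrapper, so you never forget to ask for citations or guardrails. A minimal Python sketch – the wording here is illustrative, not a magic formula, and you should adapt it to your own assistant:

```python
# Minimal sketch: wrap any factual question with the guardrails from the tips above.
# The guardrail wording is illustrative - tweak it for your own AI assistant.

GUARDRAILS = (
    "Answer the question below. Provide a citation (source name and URL) "
    "for every factual claim. Do not make things up or guess: if you are "
    "not sure, say so. Finally, state which parts of your answer you are "
    "most and least sure about."
)

def build_fact_prompt(question: str) -> str:
    """Combine the guardrail instructions with one tightly-scoped question."""
    return f"{GUARDRAILS}\n\nQuestion: {question}"

# One question per prompt - don't pack several in at once.
print(build_fact_prompt("When does the UK self-assessment tax deadline fall?"))
```

Keeping the guardrails in one place means every factual prompt you send carries them automatically, rather than relying on you remembering to type them each time.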

As we’ve all got a million things to do and nobody wants to trawl back through a blog post to find the useful bit, I’ve included a quick-fire checklist here for you to copy or screenshot and keep with you when you’re chatting with an AI model. My membership area also has free downloadable quick-reference checklists and prompt cards.

Before you rely on AI-generated information ask yourself:

· Is this high-stakes information?

· Have I identified the specific factual claim?

· Have I checked the original source?

· Have I made sure that the citation exists?

· Are there contradictions within the answer?

· Does it sound a little too good to be true?

· Was my prompt tight and targeted, or vague and open?

· Would I believe this if a human told me?

 

If in doubt, double check! Don’t use the information until you’ve made sure it’s true. The few minutes you spend fact checking now could save you big problems later.

I hope I haven’t put you off using AI models as part of your business. AI models are brilliant at brainstorming, drafting, summarising and getting you past that blank page at eight in the evening when all you want to do is veg in front of the TV. They can be a brilliant tool and save you significant chunks of time, but like any tool, they need to be used wisely and with your eyes open to their inherent flaws. Treat learning to use AI in the same way as you would treat learning to use any new tool. Once you know how it works and what it can and can’t do you’ll get better results and avoid any embarrassing or costly mistakes.

For more practical and non-techie tips and strategies for using AI models in your daily life check out my other blog posts and the members area.

 

References:

Scam site checker: Get Safe Online

Media bias checker: Media Bias Fact Checker

Tax advice: UK Government Get Help With Tax

Money advice: Money Saving Expert

Business advice: Federation of Small Businesses

Health and safety advice: Health and Safety Executive

GDPR advice: Information Commissioner’s Office

Employment advice: Advisory, Conciliation and Arbitration Service
