
AI Misinformation: A Guide to Fact-Checking AI in 2025


Navigating the digital world now requires new skills to combat AI misinformation. Due to the risk of AI hallucinations and the spread of disinformation, learning to fact-check AI-generated content effectively is crucial. This guide provides essential steps for verifying information, helping organisations and individuals develop the digital literacy skills needed to protect themselves from online misinformation and ensure the accuracy of their content.

Did you know that the average person falls for a fake news story 25% of the time? 

There was a study I read at university that has stuck with me ever since; it was a survey on fact-checking and false news. A large majority of people, around 80%, agreed that misinformation was a serious problem and that everyone should always fact-check the news they consume. However, when asked if they fact-checked what they read or watched themselves, only about 35% actually did.

Why the huge gap? Well, according to a recent study (Lyons et al. 2021), 73% of people overestimate their ability to distinguish between real and false information. That same study found that 90% of people believed they were "above average" at spotting misinformation. In reality, the average person falls for a fake story 25% of the time.


My point is simple: we all generally see misinformation as a problem, yet we also tend to think we're too clever to fall for it ourselves. And because of that very overconfidence, we, the average person, you and I, fall for a fake story one time in four, which is a hell of a lot. 

This is where AI comes in. As a society, as we increasingly rely on AI, we're all more susceptible to being misled by it. Even if you don't think you're using AI, you might be. Have you noticed that sometimes when you Google something, the first result is an AI overview of the subject? 

 

(Image: AI on Google search)

That is just one way AI is telling you things, whether you realise it or not, and this new layer of information makes accidentally falling for misinformation all the easier. 

So today, let’s talk about misinformation from artificial intelligence specifically: how AI spreads misinformation and what you can personally do to make sure you don’t fall victim to it. 

You may have noticed that I’m citing sources in this article. While I always research and use proper sources in Member Jungle articles, I don’t usually cite them this formally, as that’s just not a common formatting choice in blog articles. However, as we are talking about misinformation and fact-checking, it seemed only right to be a little over the top with my citations. 

Glossary Of Terms That Are Important To AI Misinformation

First, let’s briefly cover a few terms that I’ll be using today to ensure we are all on the same page about them. 

Misinformation: Incorrect information, spread with or without the intent to mislead 

Disinformation: Incorrect information deliberately designed to mislead people

Propaganda: Biased, misleading or incorrect information designed to mislead people and push a certain agenda, usually political

Fake News: A political buzzword that mixes all of the above terms, and is often slapped as an insult on any information someone doesn’t like 

Why This Is So Important 

You may wonder why you should care about AI-related issues, particularly the importance of fact-checking AI. The reality is that the stakes are high, ranging from massive societal impacts, such as altering election results and affecting the global economy, to the erosion of faith in institutions (Karas, 2024). But the issue can also be much closer to home. Imagine simply Googling "When is my NFP’s tax return due?" and receiving an AI-generated result that, without you even realising it, contains inaccurate information. Suddenly, you're operating on a foundation of bad information, and the consequences could be significant for your organisation.

The truth is that artificial intelligence is contributing to the rise of misinformation online. The threat isn't just from chatbots; it can also come from AI-generated pictures, videos, and posts, as well as the AI overviews that now sit at the top of search results, above the human-written pages. This new layer of information makes it all the easier to accidentally fall for misinformation (Monteith et al., 2023). As AI becomes more advanced, being able to spot AI-generated content and knowing how to verify its accuracy is a skill we all need to develop.

You might feel like you barely use AI today, but that almost doesn't matter because the AI revolution is poised to be bigger than the internet revolution itself. AI will inevitably become a part of your life, whether you like it or not. To stay on top of this, you'll need the skills to fact-check and verify information, ensuring you don't fall victim to misinformation as this technology becomes more ingrained in our world.

Why AI Sometimes Shares Incorrect Information 

So, the first question you’re probably asking is why AI sometimes spreads misinformation. Well, there are a few reasons, so let’s check them out.

The AI Was Fed Poor Information

Artificial intelligence models are trained by feeding them vast amounts of data, typically sourced directly from the internet, which they sift through to extract information, enabling them to “understand” the world. They then use this information to fulfil their programmers' wishes, which is usually to help users with their queries. The issue is that bad information in equals bad information out. 

If I say that zebras can fly despite not having wings, and that they use this ability to swoop down and pluck leopards from trees to eat them, nothing happens. But if we all start claiming it, putting it in writing and even slipping it into academic papers, then before long AI will start believing it and repeating it to unwitting people who ask about zebras. 

The point is that an AI is only as good as the information it receives, and since that information is essentially the entire internet, it will inevitably contain both good and bad information. 

The AI Just Got Confused

One common reason for AI to spread misinformation is that artificial intelligence can get confused and “hallucinate”. This is where an AI simply invents an answer to a question it doesn’t know the answer to. That could look like an AI telling you something that is flat-out false, citing a source that doesn’t exist, or just generally muddling its facts. 

The extent to which this happens depends on the AI engine in question, and all AI companies are working to reduce the likelihood of their models hallucinating. However, according to 2025 testing by OpenAI, the company behind ChatGPT, its two newest models hallucinated 33% and 48% of the time (Moore, 2025). The thinking here is that while newer, more powerful AI models are more capable, this “appears to come at the cost of more inaccurate hallucinations” (Moore, 2025). This is a crucial point, because these newer, more frequent hallucinations are often subtle or creative, and as such can be much harder for the average person to spot. That adds another layer of danger to how easily people can be misled.

Lack Of Understanding

AI engines don’t actually think the way we do; they are essentially sophisticated pattern-matching machines. When you ask an AI for the best way to bake a vegan chocolate cake, it isn't "thinking" about the recipe in a human sense. Instead, it quickly scans its data banks to determine the most probable sequence of words that will answer your prompt.

Think of it as a highly advanced digital assistant that has processed every recipe, cookbook, and cooking show transcript on the internet. It doesn't understand the ingredients or the baking process, but it can predict with incredible accuracy what words should follow to create a well-structured and delicious recipe. This is also why it can occasionally make a mistake or hallucinate, presenting you with incorrect information. It's simply choosing the wrong word sequence from its immense library of learned patterns.
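
If you're curious what "predicting the most probable next word" actually looks like, here is a deliberately tiny sketch in Python. It is purely illustrative: real AI engines use neural networks trained on billions of documents, not a hand-written table, and every probability below is made up. The core idea of choosing the statistically likeliest continuation, though, is the same.

```python
# A toy "next word predictor". Real models learn these probabilities
# from enormous amounts of internet text; this table and its numbers
# are invented purely for illustration.

next_word_probabilities = {
    ("vegan", "chocolate"): {"cake": 0.82, "mousse": 0.11, "biscuits": 0.07},
    ("chocolate", "cake"): {"recipe": 0.64, "ingredients": 0.21, "tin": 0.15},
}

def predict_next_word(previous_words):
    """Pick the most probable next word. Nothing here 'understands'
    baking; it is pure pattern matching."""
    options = next_word_probabilities.get(previous_words)
    if not options:
        return None  # no pattern learned for this context
    return max(options, key=options.get)

print(predict_next_word(("vegan", "chocolate")))  # -> cake
```

Notice that there is no fact-checking step anywhere in that process. If the training data had been full of flying-zebra claims, the table would simply assign a high probability to the wrong words, and the model would repeat them with total confidence.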

AI Can Be Instructed To Spread Disinformation 

AI being told to deliberately spread disinformation and propaganda is currently an extremely rare occurrence; however, there have been reported cases of AIs allegedly being instructed by their programmers to spread false information about certain topics (Kerr, 2025). 

The issue here isn’t whether or not this has happened yet; it’s the future potential of it. A single person with a powerful AI model can now create and disseminate a misinformation campaign on an unprecedented scale with very little effort. In the past, an effective campaign required a huge, coordinated bot farm; today, a single bad actor can run a wide-scale disinformation operation, making AI a powerful propaganda tool in the wrong hands.

So, now we know why we need to be wary of AI misinformation and why AI makes mistakes in the first place. Let's talk about what we can do about it.  

How To Fact Check AI

So finally, let’s look at how you can effectively and quickly fact-check any story or piece of information, AI-generated or not. To make this easier to follow, let’s use the example of asking ChatGPT, "What is the revenue limit at which an Australian not-for-profit legally has to abide by the Australian Privacy Act?"

This was ChatGPT’s answer.

 

(Screenshot: ChatGPT’s answer)

It looks nice and thorough, but is it correct? Let’s take this through our fact-checking steps and find out. 

Question 1 - Do I Need To Fact-Check This?

A good rule of thumb is that if a piece of information is important or is something you are likely to share with someone, you need to check it. My personal rule is that if I haven't checked a piece of information, I will assume it is false and will never act on it or repeat it to anyone under any circumstances. Basically, false until proven true.  

Our example of whether an NFP falls under the Privacy Act is important, so we need to check it. 

Question 2 - What Is The Source?

Now that we have decided to check this piece of information, our first step should be to work out the source, so we can see where this information came from. The key is looking for a primary source: we aren’t looking for a secondary source like an AI, magazine, newspaper or podcast talking about it; we are looking for the primary source where the information originated. That will likely be an academic source or, in this case, a government website. 

You may have noticed that, under the result, ChatGPT has a little option labelled “Source”. If your chatbot doesn’t have this, you can simply ask it to share a link to its source. 

When I clicked source, it cited several websites, from government ones to private industry, even a WordPress website. So, let's check the government source it listed, as that will likely be a primary source. ChatGPT appears to have gotten most of its information from a page on the government website of the Office of the Australian Information Commissioner.

 

(Screenshot: Office of the Australian Information Commissioner website)

Do not make the mistake of stopping here; because we are talking about something important, we need to double-check this source too. 

Question 3 - Who Else Is Saying This? 

Don't rely on a single source, even if it seems reputable. Cross-reference the information with at least two other independent and reliable sources; if you find conflicting information, that's a red flag. So, let’s Google "revenue limit for the Australian Privacy Act" and see what comes up. 

The results included several government, business, and law websites, plus more AI summaries, all of which back up ChatGPT's original claim. 

The key thing here isn’t just that other people are reporting this, but who they are. Seeing that Business.gov and the Australian Law Reform Commission are also reporting it means a lot more than if some random online newspaper were. 

If you are unsure about the reliability of a particular news source, sites like Media Bias Fact Check and Ground News can help you check the reporting history of particular outlets and stories. 

Question 4 - What Is The Real Answer?

By now, we can see that ChatGPT’s answer was correct, and we can take it as fact. However, if we had kept getting different answers and were struggling to find reliable primary sources, the next step would be to seek out a different answer and repeat the process until we found one that holds water.  

This can be as simple as telling ChatGPT that you think it got it wrong and to double-check. 
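
If it helps to see the whole process in one place, here is the four-question workflow above condensed into a short Python sketch. The function, its inputs, and the "at least two reliable sources" check are just my own illustrative encoding of the steps, not a real fact-checking tool.

```python
def fact_check(claim, is_important, primary_source, other_sources):
    """A sketch of the four fact-checking questions above.
    'other_sources' is a list of (name, is_reliable) tuples."""

    # Question 1: Do I need to fact-check this?
    # Rule of thumb: unchecked information is treated as false.
    if not is_important:
        return "Unverified: don't act on it or repeat it."

    # Question 2: What is the source? We want a primary source,
    # not an AI, magazine, or podcast talking about it.
    if primary_source is None:
        return "No primary source found: treat the claim as false."

    # Question 3: Who else is saying this? Cross-reference with at
    # least two other independent, reliable sources.
    reliable = [name for name, is_reliable in other_sources if is_reliable]
    if len(reliable) < 2:
        return "Not enough independent confirmation: keep digging."

    # Question 4: What is the real answer? If everything lines up,
    # we can finally take the claim as fact.
    return f"Verified: '{claim}' holds up."

# Our worked example: ChatGPT's answer about the Privacy Act threshold.
print(fact_check(
    claim="ChatGPT's Privacy Act revenue threshold answer",
    is_important=True,
    primary_source="oaic.gov.au",
    other_sources=[("business.gov.au", True),
                   ("Australian Law Reform Commission", True)],
))
```

The exact order matters less than the habit: decide whether the stakes warrant a check, trace the claim to a primary source, and confirm it independently before you act on it.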

I know our example here was correct, but AI does get it wrong, and some really clever people fall for it, because they just assume they’d notice a fake statement. A few examples of this include: 

  • Multiple lawyers in Australia, Canada, and the US have filed court documents containing references to non-existent legal cases that AI had invented. 
  • One AI chatbot cited a clinical study in the New England Journal of Medicine, complete with precise statistics. The issue was that the AI had made the entire study up. 
  • One AI chatbot used in a legal study invented false sexual harassment allegations about a real law professor. 
  • Another chatbot, asked simply to edit submissions to a scientific journal, added unrelated and incorrect references to the text.    

 

The point is that AIs make mistakes; they are only as good as the people who train them and the data they are given. By following these steps, you can become a more critical and informed consumer of information, protecting yourself and others from the dangers of AI-generated misinformation.

Navigating a New Digital Landscape

I'm not trying to scare you off from using AI. AI is an incredibly useful tool that can really streamline a lot of the things you do. What I'm saying is that you need to be careful with it and take anything it says with a grain of salt. If you do that, AI can be a wonderful tool to really help your organisation thrive. Just fact-check things before you accept them. 

If you want to know some more ways you can use AI to help your organisation, check out How You Can Use AI To Help Your Club Or Association In 2025.

If you’re looking for ways to embrace our ever-changing future and ensure you’re ready for whatever it throws at you, have a look at How to Future-Proof Your Club: A Guide to Building a Resilient Organisation.

References 

Karas, Z. (2024) ‘Effects of AI-generated misinformation and disinformation on the economy’, Düzce Üniversitesi Bilim ve Teknoloji Dergisi, 12(4), pp. 2349–2360. doi:10.29130/dubited.1537268. 

Kerr, D. (2025) Musk’s AI Grok bot rants about ‘white genocide’ in South Africa in unrelated chats, The Guardian.

Lyons, B.A. et al. (2021) ‘Overconfidence in news judgments is associated with false news susceptibility’, Proceedings of the National Academy of Sciences, 118(23). doi:10.1073/pnas.2019527118. 

Monteith, S. et al. (2023) ‘Artificial Intelligence and increasing misinformation’, The British Journal of Psychiatry, 224(2), pp. 33–35. doi:10.1192/bjp.2023.136. 

Moore, R. (2025) AI hallucinates more frequently as it gets more advanced - is there any way to stop it from happening, and should we even try?, LiveScience. 

 
