How close are we to an accurate AI fake news detector?


Professor Magda Osman is a Visiting Professor of Research Impact at Leeds University Business School.

A digital face scanning a device with a hand above the device

This article originally appeared on The Conversation.

<p>In the ambitious pursuit to tackle the harms from false content on <a href="https://link.springer.com/content/pdf/10.1007/s13278-023-01028-5.pdf">social media</a> and <a href="https://www.sciencedirect.com/science/article/pii/S266682702100013X">news websites</a>, data scientists are getting creative. </p>

<p>While still on their training wheels, the <a href="https://doi.org/10.1038/s42256-024-00881-z">large language models (LLMs)</a> used to create chatbots like ChatGPT are being recruited to spot <a href="https://doi.org/10.3390/fi16080298">fake news</a>. With better detection, AI fake news checking systems may be able to warn of, and ultimately counteract, serious harms from <a href="https://arxiv.org/pdf/2102.04458">deepfakes</a>, <a href="https://dl.acm.org/doi/full/10.1145/3613904.3642805">propaganda</a>, <a href="https://ieeexplore.ieee.org/abstract/document/9750122">conspiracy theories</a> and <a href="https://doi.org/10.1007/s11042-023-17470-8">misinformation</a>.</p>
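<p>As a minimal sketch of the idea (and not one of the systems described in the papers linked above), an off-the-shelf language model can be pressed into service as a crude fake news flagger via zero-shot classification. The sketch below assumes the Hugging Face <code>transformers</code> library and the publicly available <code>facebook/bart-large-mnli</code> model; the labels and the 0.8 threshold are arbitrary choices for illustration.</p>

<pre><code># A toy illustration, not a production fact-checker: a general-purpose
# language model used as a zero-shot "fake news" flagger.
# Assumes the Hugging Face `transformers` library and the public
# facebook/bart-large-mnli model; labels and threshold are arbitrary.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

headline = "Scientists confirm the moon is made of cheese"
result = classifier(headline, candidate_labels=["credible news", "fake news"])

# result["labels"] is sorted by descending score.
top_label, top_score = result["labels"][0], result["scores"][0]
if top_label == "fake news" and top_score > 0.8:
    print(f"Flagged as likely fake ({top_score:.0%} confidence)")
else:
    print(f"Not flagged ({top_label}: {top_score:.0%})")
</code></pre>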

<p>The next level of AI tools will personalise the detection of false content as well as protect us against it. For this ultimate leap into user-centred AI, data science needs to look to behavioural science and neuroscience.</p>

<p>Recent work suggests we might <a href="https://doi.org/10.1016/j.chb.2020.106633">not always consciously know</a> that we are encountering fake news. Neuroscience is helping to discover what is going on unconsciously. Biomarkers such as <a href="https://ieeexplore.ieee.org/abstract/document/9304909">heart rate</a>, <a href="https://dl.acm.org/doi/abs/10.1145/3382507.3418857">eye movements</a> and <a href="https://ieeexplore.ieee.org/abstract/document/9277701">brain activity</a> appear to change subtly in response to fake and real content. In other words, these biomarkers may be “tells” that indicate whether we have been taken in or not.</p>

<p>For instance, when humans look at faces, eye-tracking data shows that we scan for rates of blinking and <a href="https://doi.org/10.1016/j.jvcir.2024.104263">changes in skin colour</a> caused by blood flow. If such elements seem unnatural, it can help us decide that we’re looking at a deepfake. This knowledge can give AI an edge – we can train it to mimic what humans look for, among other things.</p>
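<p>As a toy illustration of those two cues (not a real detector), the sketch below scores a face video on blink rate and cheek-colour variation. It assumes a hypothetical upstream face tracker has already produced per-frame blink flags and a mean cheek-redness value per frame; the function name, inputs and “natural” thresholds are all invented for illustration.</p>

<pre><code># A hand-rolled sketch of the blink-rate and blood-flow cues described above.
# Assumes an upstream face tracker has already extracted per-frame blink flags
# and a mean cheek-redness value; thresholds are illustrative, not published norms.
import numpy as np

def deepfake_cues(blinks, cheek_redness, fps=30.0):
    """Score a clip on blink rate and blood-flow colour variation."""
    duration_min = len(blinks) / fps / 60.0
    blink_rate = blinks.sum() / duration_min  # blinks per minute
    # Blood flow causes subtle periodic colour change; an unnaturally
    # flat cheek-redness signal is a deepfake cue.
    colour_variation = float(np.std(cheek_redness))
    return {
        "blink_rate_per_min": blink_rate,
        "colour_variation": colour_variation,
        # Illustrative cut-offs: humans blink roughly 15-20 times a minute.
        "suspicious": blink_rate < 5 or colour_variation < 0.0005,
    }

# Hypothetical 10-second clip at 30 fps: two blinks, near-flat colour signal.
blinks = np.zeros(300)
blinks[[40, 200]] = 1
cheek = 0.52 + np.random.default_rng(0).normal(0, 0.0001, 300)
print(deepfake_cues(blinks, cheek))
</code></pre>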

<p>The personalisation of an AI fake news checker takes shape by using findings from <a href="https://dl.acm.org/doi/abs/10.1145/3382507.3418857">human eye movement data</a> and <a href="https://ieeexplore.ieee.org/abstract/document/9277701">electrical brain activity</a> that show which types of false content have the greatest impact neurally, psychologically and emotionally, <a href="https://doi.org/10.1016/j.chb.2022.107307">and for whom</a>.</p>

<p>Knowing our specific interests, personality and <a href="https://doi.org/10.1080/0960085X.2023.2224973">emotional reactions</a>, an AI fact-checking system could detect and anticipate which content would trigger the most severe reaction in us. This could help establish when people are taken in, and what sort of material fools people most easily.</p>

<h2>Counteracting harms</h2>

<p>What comes next is customising the safeguards. Protecting us from the harms of fake news also requires building systems that can intervene: some sort of <a href="https://doi.org/10.1027/1864-1105/a000407">digital countermeasure to fake news</a>. There are several ways to do this, such as warning labels, links to expert-validated credible content, and even asking people to consider different perspectives when they read something.</p>

<p>Our own personalised AI fake news checker could be designed to give each of us one of these countermeasures <a href="https://journals.sagepub.com/doi/full/10.1177/1529100620946707">to cancel out the harms from false content online</a>. </p>
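<p>A minimal sketch of what that matching step might look like. The user-profile fields and rules below are invented purely for illustration; they are not drawn from the studies cited in this article.</p>

<pre><code># Illustrative only: profile fields and matching rules are invented for
# this sketch, not taken from the research linked above.
from dataclasses import dataclass

@dataclass
class UserProfile:
    reacts_emotionally: bool   # strong emotional responses to content
    trusts_experts: bool       # responsive to authoritative sources
    open_to_reflection: bool   # willing to engage with other viewpoints

def pick_countermeasure(user: UserProfile) -> str:
    """Map a (hypothetical) profile to one of the three countermeasures above."""
    if user.reacts_emotionally:
        return "warning label"  # interrupt before the content lands
    if user.trusts_experts:
        return "link to expert-validated credible content"
    if user.open_to_reflection:
        return "prompt to consider a different perspective"
    return "warning label"      # conservative default

print(pick_countermeasure(UserProfile(False, True, True)))
# -> link to expert-validated credible content
</code></pre>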

<p>Such technology is already being trialled. Researchers in the US have studied how people interact with <a href="https://dl.acm.org/doi/pdf/10.1145/3544548.3581219">a personalised AI fake news checker of social media posts</a>, which learned to filter a news feed down to the posts it deemed true. <a href="https://www.frontiersin.org/journals/big-data/articles/10.3389/fdata.2019.00011/full">As a proof of concept</a>, another study tailored additional news content to each social media post to encourage users to view alternative perspectives.</p>

<h2>Accurate detection of fake news</h2>

<p>Whether this all sounds impressive or dystopian, it might be worth asking some basic questions before we get carried away.</p>

<p>Much, if not all, of the work on <a href="https://journals.sagepub.com/doi/pdf/10.1177/20563051221150412">fake news, deepfakes, disinformation</a> and <a href="https://journals.sagepub.com/doi/pdf/10.1177/17456916221141344">misinformation</a> highlights the same problem that any lie detector would face.</p>

<p>There are many types of lie detector, not just the polygraph test. Some rely exclusively on linguistic analysis. Others are systems designed to read people’s faces and detect whether they are leaking micro-emotions that give away that they are lying. By the same token, there are AI systems designed to detect whether a face is genuine or a deepfake.</p>

<p>Before the detection begins, we all need to agree on what a lie looks like if we are to spot it. In fact, <a href="https://doi.org/10.1177/09637214231173095">deception research</a> shows this can be easier in experiments, because you can instruct people when to lie and when to tell the truth. That gives you a way of knowing the ground truth before you <a href="https://doi.org/10.1080/00909880305377">train a human</a> or a <a href="https://doi.org/10.1016/j.actpsy.2020.103250">machine</a> to tell the difference, because they are provided with examples on which to base their judgements.</p>

<p>How good an expert lie detector is depends on how often they call out a lie when there is one (a hit), and on how rarely they mistake a liar for someone telling the truth (a miss). They also need to recognise the truth when they see it (a correct rejection), without accusing someone of lying when they were telling the truth (a false alarm). This is signal detection, and the same logic applies to <a href="https://doi.org/10.1177/1745691620986135">fake news detection</a>.</p>

<p>For an AI fake news detector to be highly accurate, the hit rate needs to be very high (say 90%), which means the miss rate is very low (say 10%), and the false alarm rate needs to stay low too (say 10%), so that real news isn’t labelled fake. If an AI fact-checking system, or a human one, is recommended to us, signal detection gives us a way to judge how good it is.</p>
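<p>The arithmetic behind this is straightforward. The sketch below turns the four counts into a hit rate, a false alarm rate and the standard sensitivity index d′ using only Python’s standard library; the counts are made up to match the percentages above.</p>

<pre><code># Signal detection arithmetic for a fake news checker. "Signal" here means
# the item really is fake. Example counts are made up to match the text.
from statistics import NormalDist

hits, misses = 90, 10                        # fake items: caught vs missed
false_alarms, correct_rejections = 10, 90    # real items: wrongly vs rightly judged

hit_rate = hits / (hits + misses)                             # 0.90
fa_rate = false_alarms / (false_alarms + correct_rejections)  # 0.10

# d' (sensitivity): separation between the signal and noise distributions,
# in standard deviation units. Higher means a sharper detector.
z = NormalDist().inv_cdf
d_prime = z(hit_rate) - z(fa_rate)

print(f"hit rate {hit_rate:.0%}, false alarms {fa_rate:.0%}, d' = {d_prime:.2f}")
# -> hit rate 90%, false alarms 10%, d' = 2.56
</code></pre>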

<p>There are likely to be cases, as reported in a recent <a href="https://www.mdpi.com/2673-5172/5/2/50/pdf">survey</a>, where the news content is neither completely false nor completely true, but partially accurate. We know this because the speed of news cycles means that what is considered accurate at one time may later <a href="https://doi.org/10.1080/13669877.2022.2049623">be found to be inaccurate</a>, or vice versa. So a fake news checking system has its work cut out.</p>

<p>If we knew in advance what was fake and what was real news, how accurately would biomarkers indicate, unconsciously, which is which? The answer is: not very. Neural activity <a href="https://ieeexplore.ieee.org/iel7/9851848/9851959/09851990.pdf?casa_token=M5v1Y02PojMAAAAA:vcoUqhoCXi8F9R0cyq49HEAvMpWjFw6UND5vMTrR2TQ8NSgRobKeUT-7GvUZlVo4r_DHSFmYzA">is most often the same</a> when we come across real and fake news articles.</p>

<p>When it comes to eye-tracking studies, it is worth knowing that there are different types of data collected with eye-tracking techniques (for example, the length of time our eyes fixate on an object, or how frequently our eyes move across a visual scene).</p>
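<p>One common way to separate those two measures is a simple velocity threshold: gaze samples that stay close together count towards a fixation, while fast jumps count as saccades. The sketch below applies this idea to made-up gaze coordinates; the threshold and data are illustrative only.</p>

<pre><code># A toy velocity-threshold classifier for eye-tracking samples: small
# frame-to-frame movements count as fixation, large jumps as saccades.
# Gaze coordinates and the 50-pixel threshold are made up for illustration.
import math

gaze = [(100, 100), (101, 99), (102, 101), (250, 180), (251, 181), (252, 180)]
THRESHOLD = 50  # pixels per sample

saccades = 0
fixation_samples = 0
for (x0, y0), (x1, y1) in zip(gaze, gaze[1:]):
    if math.dist((x0, y0), (x1, y1)) > THRESHOLD:
        saccades += 1          # a rapid jump between fixations
    else:
        fixation_samples += 1  # the eye is dwelling on one spot

print(f"fixation samples: {fixation_samples}, saccades: {saccades}")
# -> fixation samples: 4, saccades: 1
</code></pre>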

<p>So, depending on what is analysed, some studies show that <a href="https://dl.acm.org/doi/pdf/10.1145/3517031.3529619?casa_token=H_djGz0jSMUAAAAA:qOJuvnWT1ER05kzEYreuK1YC2hDzsF0SdyHtDdeS3pRxOA4L5vReqXHpLBSfRO2_v1JYWpBIBnWUBw">we direct more attention</a> to false content than to true content, while others show the <a href="https://dl.acm.org/doi/pdf/10.1145/3397271.3401221?casa_token=yuYm20sEGgEAAAAA:LxvBqml_pS0hi8ojlM7vLdITFGJvSrOwsOm56_zyudAll89DKUGzmLA4y1lrQW7GD1yWOUF_7US5TQ">opposite</a>.</p>

<h2>Are we there yet?</h2>

<p>AI fake news detection systems on the market are already using insights from behavioural science to help <a href="https://doi.org/10.1111/jasp.12959">flag and warn us against fake news</a> content. So it won’t be a stretch for the same AI systems to start appearing in our news feeds with customised protections for our unique user profile. The problem is that we still have a lot of basic ground to cover: knowing what is working, but also checking <a href="https://doi.org/10.48550/arXiv.2308.10800">whether we want this</a>.</p>

<p>In the worst-case scenario, we see fake news as a problem only when it is online, as an excuse to solve it using <a href="https://books.google.co.uk/books/about/Smart_Until_It_s_Dumb.html?id=rfuizwEACAAJ&redir_esc=y">AI</a>. But false and inaccurate content is everywhere, and gets discussed <a href="https://www.csap.cam.ac.uk/media/uploads/files/1/offline-vs-online-sharing.pdf">offline</a>. Not only that, we don’t believe all fake news by default; sometimes we use it in discussions to <a href="https://doi.org/10.3390/journalmedia5020050">illustrate bad ideas</a>.</p>

<p>In an imagined best-case scenario, data science and behavioural science are confident about the scale of the various harms fake news might cause. But even here, AI applications combined with scientific wizardry might still be very poor substitutes for less sophisticated but more effective solutions.</p>

  <p><span><a href="https://theconversation.com/profiles/magda-osman-708478">Magda Osman</a>, Professor of Policy Impact, <em><a href="https://theconversation.com/institutions/university-of-leeds-1122">University of Leeds</a></em></span></p>

  <p>This article is republished from <a href="https://theconversation.com">The Conversation</a> under a Creative Commons license. Read the <a href="https://theconversation.com/how-close-are-we-to-an-accurate-ai-fake-news-detector-242309">original article</a>.</p>
 


The views expressed in this article are those of the author and may not reflect the views of Leeds University Business School or the University of Leeds.