

Why voting in a fact-checking void should worry you
March 28, 2025
Australian voters heading to the polls need to be aware there’s little standing between them and potential manipulation of information by vested interests.
The loss of Australia’s go-to political fact-checker and the rise of AI tools has created a crisis for political accountability just as the nation’s voters prepare to go to the polls.
Professional fact-checkers have never been under more pressure and social media users face a complex and fast-evolving misinformation landscape.
It’s crucial voters understand the situation in the lead-up to the vote.
This federal election will be the first without ABC RMIT Fact Check, which published its first fact checks during the 2013 Rudd-Abbott election.
It will also be Australia’s first federal election since the release of ChatGPT and other generative AI tools that have heralded a new normal of AI-generated political advertising and propaganda.
This erosion of accountability is a win for vested interests in Australia's political and media systems, giving them even more scope to manipulate information for their own benefit rather than the public good.
Australia needs political parties to commit to the ethical use of AI in their campaigning, and bipartisan support for improved human-AI detection tools, created by and for fact-checkers and journalists, to strengthen media information integrity systems.
What happened to political fact-checking
Independent fact-checking has faced a public legitimacy crisis in the past few years, mirroring similar crises of trust in news.
The crisis is driven in part by politicians denigrating online investigative research, reflecting a broader distrust of the fact-checking movement among far-right politicians and their allies around the world.
In Australia, the end of fact-checking arrangements between the ABC and RMIT University came in 2024, following a media furore in the lead-up to the 2023 Voice to Parliament referendum. Conservative media depicted RMIT Fact Lab, another entity under RMIT's professional fact-checking wing, as grossly biased.
Claims of fact-checkers’ political bias hinge on observations that right-leaning voices tend to share news content that diverges from established consensus more often, resulting in a relatively high proportion of their claims being fact-checked.
The suspension of RMIT Fact Lab’s membership of Meta’s third-party fact-checking program cast a long shadow over the credibility of fact-checking, reflecting similar questions to those recently posed in the United States about the role of truth in politics.
Australia still has two locally-owned fact-checking units – ABC's in-house fact-checker ABC News Verify and the Australian Associated Press's fact-checking service – as well as AFP Australia, the local division of Agence France-Presse's fact-checking operation.
Australian fact-checkers have been part of a push for political accountability and depolarisation. They have responded to Australians' concerns about the interplay between private interests in politics and media and the public interest, including the role of big tech in moderating information online.
There have been calls for greater accountability and transparency in news reporting, but fact-checkers worldwide have experienced setbacks.
At the start of the year, Meta announced it was ending its third-party fact-checking program in the United States and changing its content-moderation policies. The changes would amplify political content and permit content targeting vulnerable minorities that the platform had previously treated as contentious and divisive.
This move signalled a crisis for professional fact-checkers, journalists and misinformation researchers.
Meta boss Mark Zuckerberg, under pressure from Donald Trump and other conservative critics of Meta’s third-party fact-checking program, claimed US fact-checking was akin to censorship. That echoed accusations of partisan censorship in Europe, the Philippines and Australia.
Alternatives to independent fact-checking
Zuckerberg claims the answers to Meta's information integrity problems will be found in a Community Notes-style program, modelled on the crowd-sourced system first developed at Twitter and now used on Elon Musk's X.
While such an approach could provide some value in terms of contextualising misleading content, it does little to address complex online harms.
Recent studies have found independent fact-checkers are frequently cited in Community Notes, and that successful community moderation relies on professional fact-checking.
Human-AI approaches are also on the rise, with X's Community Notes employing a bridging algorithm that surfaces a correction only when contributors who typically disagree with one another both rate it helpful.
However, there are flaws in that system.
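The bridging idea can be sketched with a toy matrix-factorisation model similar in spirit to the approach X has open-sourced for Community Notes: each rating is modelled as a global mean plus rater and note intercepts plus a viewpoint-factor product, with intercepts regularised harder than factors. A note's intercept then stays high only when raters across the viewpoint spectrum agree it is helpful. The sketch below uses entirely synthetic data and arbitrary hyperparameters; it is illustrative, not the production system.

```python
import numpy as np

# Synthetic ratings: rows = raters, cols = notes. 1 = "helpful", 0 = "not helpful".
# Raters 0-3 lean one way, raters 4-7 the other (a hypothetical split).
# Note 0 is rated helpful by both camps; note 1 only by the first camp.
R = np.array([
    [1, 1], [1, 1], [1, 1], [1, 1],
    [1, 0], [1, 0], [1, 0], [1, 0],
], dtype=float)
n_users, n_notes = R.shape

mu = 0.0
b_u = np.zeros(n_users)                             # rater intercepts
b_n = np.zeros(n_notes)                             # note intercepts = "bridged helpfulness"
f_u = np.where(np.arange(n_users) < 4, 0.1, -0.1)   # seed opposite viewpoint factors
g_n = np.full(n_notes, 0.1)                         # note viewpoint factors

# Intercepts are regularised harder than factors, so partisan disagreement
# is absorbed by the factor term rather than inflating a note's intercept.
lr, lam_i, lam_f = 0.05, 0.15, 0.03
for _ in range(2000):
    pred = mu + b_u[:, None] + b_n[None, :] + np.outer(f_u, g_n)
    err = pred - R
    mu  -= lr * err.mean()
    b_u -= lr * (err.mean(axis=1) + lam_i * b_u)
    b_n -= lr * (err.mean(axis=0) + lam_i * b_n)
    f_u -= lr * ((err * g_n[None, :]).mean(axis=1) + lam_f * f_u)
    g_n -= lr * ((err * f_u[:, None]).mean(axis=0) + lam_f * g_n)

print(b_n)  # note 0, endorsed across both camps, gets the higher intercept
```

The key design choice is that a note rated helpful by only one camp has its pattern explained by the viewpoint factors, so its intercept (the score used for ranking) stays low; cross-camp agreement is the only route to a high score.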
Professional fact-checking has been notoriously difficult to scale, so fact-checkers have also been experimenting with AI-based approaches, though these efforts are constrained by time and resources.
What this means for Australia
There are already signs of problematic AI use in political communication, including footage of politicians edited with AI to engage audiences, often at the expense of other candidates or parties.
Such content carries unsanctioned or uncharacteristic messaging to attack particular policies or politicians, or to sow confusion around them, through parody as well as outright deception.
While Meta recently committed to labelling content that it identifies as being generated with AI, evidence suggests that labelling content as generated does little to reduce its perceived credibility. In other words, the power of AI for political communication is not just its ability to deceive, but to persuade – both cheaply and at scale.
These practices could deceive or manipulate voters, erode faith in institutional systems, and even see authentic evidence discredited.
The defunding and delegitimisation of professional fact-checkers threatens their ability to provide context and explanation, and impedes the investigative work that helps make sense of a problematic media landscape.
The end of platform-supported fact-checking in the United States also sets a precedent for digital platforms to enter into covert agreements with elected officials, furthering individual political or economic agendas, instead of creating policies that serve the public interest.
In Australia, there is the potential for future political dealmaking between influencers or power brokers, platform owners such as Musk and Zuckerberg, and segments of the Australian elite, which would cause more public confusion and disillusionment.
Republished from 360info, 24 March 2025