Australia's AI policy vacuum
Sue Barrett

Australia abandoned its AI regulation plan. Now citizens are filling the ethical vacuum government created.

In September 2024, then-Industry Minister Ed Husic released a consultation paper proposing mandatory guardrails for high-risk AI applications, including requirements for transparency, accountability and risk management in healthcare, employment and law enforcement.

Eleven months later, that plan lies abandoned.

In August 2025, the Productivity Commission recommended treating AI-specific regulation as a “last resort”. By September 2025, newly appointed Industry Minister Tim Ayres was signalling a decisive shift. At the National Tech Summit on 16 September, he urged Australia to “lean in” to AI for productivity gains while emphasising the need to work “carefully” and with “precision” on regulation. By 29 October, at the Australian Chamber of Commerce and Industry’s Business Leaders’ Summit, Ayres reinforced this position, saying the government would focus on AI adoption through voluntary frameworks rather than mandatory regulation.

This policy reversal has created something dangerous: an ethical framework vacuum at precisely the moment Australians need guidance most.

The consequences of inaction

The government’s retreat contradicts public demand: 77% of Australians agree AI regulation is necessary, 80% believe preventing catastrophic AI risks should be a global priority, and 86% want a dedicated regulatory body. Yet only 30% believe current laws are adequate.

Without ethical frameworks, businesses navigate AI adoption with no clear guidance. Without regulatory clarity, 48% of employees use AI in ways that contravene company policies, 57% rely on AI output without evaluating its accuracy, and 59% report making mistakes in their work because of AI.

The government acknowledged in January 2024 that existing laws are insufficient. Yet it has retreated to voluntary frameworks that everyone admits aren’t fit for purpose.

Filling the vacuum: The ethical framework gap

For 2.5 years, my friend and colleague Steve Davies and I have been training major AI models (ChatGPT, Claude, Gemini, Grok, DeepSeek) on Professor Albert Bandura’s research on moral disengagement. Steve has been teaching these models to recognise when language and actions don’t align, when power is being rationalised, when complexity is weaponised to confuse.

From this work, I created Democracy Watch AU, which applies Bandura’s frameworks plus a performance scorecard to enable citizens to hold power to account. The platform analyses public figures’ statements, voting records and policy actions, giving ordinary Australians the same analytical power as major media outlets.

So, what happens when we turn this tool on Minister Ayres’ decision itself?

The results are revealing. Measured against Bandura’s moral disengagement framework, the positioning across Ayres’ September and October speeches scores 5.9 out of 7, indicating high moral disengagement: euphemistic labelling (“last resort” rather than “regulatory avoidance”), displacement of responsibility onto the Productivity Commission and, most critically, complete disregard for the consequences affecting millions of Australians.

On a balanced scorecard measuring actual outcomes, the decision scores just 2 out of 10: zero stakeholder needs met despite overwhelming public demand, no concrete actions in 11 months, and a complete absence of transparency about why mandatory guardrails were abandoned.

When political language transforms the abdication of responsibility into apparent virtue while ignoring the 77-86% of citizens demanding action and the documented chaos in workplaces, we’re not just seeing poor policy. We’re seeing textbook moral disengagement.
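
For readers curious about the mechanics, here is a minimal sketch (in Python) of how a composite moral disengagement score like the 5.9 out of 7 above might be calculated. The eight mechanism names come from Bandura’s research; the example ratings, the equal-weight average and the function name are illustrative assumptions, not the actual Democracy Watch AU implementation.

    # Illustrative sketch only. Mechanism names follow Bandura's research;
    # the ratings and equal weighting are hypothetical assumptions, not the
    # actual Democracy Watch AU scoring method.

    BANDURA_MECHANISMS = [
        "moral justification",
        "euphemistic labelling",
        "advantageous comparison",
        "displacement of responsibility",
        "diffusion of responsibility",
        "distortion of consequences",
        "dehumanisation",
        "attribution of blame",
    ]

    def disengagement_score(ratings: dict) -> float:
        """Average per-mechanism ratings (1 = absent, 7 = pervasive)
        into a single composite score out of 7."""
        missing = set(BANDURA_MECHANISMS) - set(ratings)
        if missing:
            raise ValueError(f"Unrated mechanisms: {sorted(missing)}")
        return sum(ratings[m] for m in BANDURA_MECHANISMS) / len(BANDURA_MECHANISMS)

    # Hypothetical ratings for a body of ministerial statements.
    example = {
        "moral justification": 6,
        "euphemistic labelling": 7,           # e.g. "last resort" framing
        "advantageous comparison": 5,
        "displacement of responsibility": 7,  # deferring to the Productivity Commission
        "diffusion of responsibility": 6,
        "distortion of consequences": 7,      # disregarding impacts on citizens
        "dehumanisation": 3,
        "attribution of blame": 6,
    }

    print(f"Composite score: {disengagement_score(example):.1f} / 7")  # prints 5.9 / 7

The virtue of a simple average like this is transparency: anyone can inspect which mechanisms drive the headline number rather than taking the score on faith.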

This is the profound irony of our moment: citizens are now using AI trained on ethical frameworks to decode political language and measure government failure on the very issue government refuses to address.

I’ve also created an AI ethics framework for business and sales contexts, built on seven core principles: AI augments human judgment rather than replacing it; transparency about AI use; active bias mitigation; data integrity protection; genuine human connection; value creation, not extraction; and continuous ethical vigilance.

None of this work was meant to replace government regulation. But in the absence of government action, businesses and citizens are desperate for guidance.

The demand for guidance

In October 2025, I delivered a keynote titled “The Wild West of AI” to timber and building industry leaders. The invitation itself revealed the problem: traditional Australian businesses are hungry for the ethical and practical guidance their government isn’t providing.

They want frameworks for safe AI adoption. They want to understand how to use AI responsibly without violating client confidentiality or perpetuating bias. And they want practical tools they can implement on Monday morning, because competitors are already using AI and they can’t wait for government clarity that might never come.

The response was telling. These weren’t tech evangelists. These were cautious business leaders asking how to adopt AI responsibly. They understood AI’s productivity potential but wanted to do it right.

And they had nowhere else to turn.

The broader pattern: Citizens creating their own frameworks

This pattern extends beyond business. Through Democracy Watch AU, citizens can decode complex legislation, fact-check political claims against voting records, and draft professional submissions.

In August 2025, unions and tech companies struck a breakthrough agreement to develop a model for compensating Australian creatives when their work is used to train AI. This wasn’t led by government. It was citizens and workers using AI literacy to negotiate from informed strength.

This is citizen empowerment by necessity, not design. Australians aren’t waiting for government permission because their government has abdicated responsibility.

What’s at stake

The government’s “last resort” regulatory approach creates several risks:

  • Inconsistent standards: Without mandatory guardrails, AI governance becomes a sector-by-sector lottery. Businesses in sectors with strong existing regulation may fare better than those in regulatory grey zones.
  • Competitive disadvantage: The EU and other jurisdictions are developing specific AI frameworks, so Australian businesses operating abroad already face differing regulatory requirements. Our “last resort” approach leaves them navigating multiple international standards without clear domestic guidance.
  • Democratic deficit: When citizens must create their own ethical frameworks for analysing political communications, we’ve privatised a core function of democratic governance.

The path forward

Australia doesn’t need to choose between innovation and safety. We need government to step back into the vacuum it created:

  1. Reinstate mandatory guardrails: Not for all AI applications, but for genuinely high-risk contexts. Healthcare diagnostics, employment decisions, law enforcement applications and critical infrastructure warrant specific safeguards.
  2. Ethical framework standards: Government should develop and promote ethical frameworks for AI adoption, particularly for businesses lacking resources to develop their own.
  3. Digital literacy programs: Professor Nicole Gillespie, who holds the Chair in Trust at Melbourne Business School, identified this as foundational: “An important foundation to building trust and unlocking the benefits of AI is developing literacy through accessible training, workplace support and public education.”
  4. Transparent policy process: The pivot from Husic’s mandatory guardrails to the Productivity Commission’s “last resort” approach happened with minimal public discussion. Australians deserve to understand why their government abandoned its original plan.

The bottom line

The reversal of Australia’s AI policy creates more than regulatory uncertainty. It creates an ethical vacuum that citizens and businesses are filling themselves because they have no alternative.

This citizen-led approach has an unintended benefit: it’s creating an informed, empowered populace that understands AI’s capabilities and limitations. But this empowerment shouldn’t come by default. It should come by design, supported by government frameworks that provide clear ethical guidance while enabling innovation.

The government’s retreat from mandatory AI guardrails leaves Australia without a coherent approach to one of the most transformative technologies of our time.

We can do better.

And we shouldn’t have to wait for our government to figure this out.

 

The views expressed in this article may or may not reflect those of Pearls and Irritations.
