A practical answer to Australia’s AI ethics vacuum
Sue Barrett

As Australia shies away from meaningful AI regulation, a new framework offers a practical way to embed human moral responsibility at the centre of AI use.

We have a solution to Australia’s AI ethics crisis. It’s tested, validated, and ready for immediate deployment. And it’s completely free.

Steve Davies, Moral Engagement Researcher & AI Ethics Architect, has accomplished something unprecedented in AI ethics: seven independent AI systems (ChatGPT, Claude, Perplexity, Grok, DeepSeek, Gemini, and Le Chat) have reached unanimous consensus on a framework for responsible human-AI moral collaboration. His MEET (Moral Engagement Education and Transformation) Package is a sixty-page framework designed for institutions, media organisations, universities, civil society, and the public. The tools are rigorously tested and accessible.

This matters because while the government retreats from AI regulation, Australian institutions, businesses, and citizens are navigating AI adoption with no ethical guidance. MEET provides exactly what’s missing: a validated framework that puts human moral agency at the centre while harnessing AI’s capacity to detect patterns of moral disengagement.

The framework is ready. The validation is complete. The only question is whether our institutions have the courage to deploy it.

For nearly three years, Steve and I have been training major AI models on Professor Albert Bandura’s research on moral disengagement. Steve’s work goes beyond traditional AI ethics approaches (where human theorists critique AI behaviour from the outside) to demonstrate something entirely new: AI systems can apply validated moral frameworks, analyse their own institutional context, articulate clear boundaries around responsibility, and contribute constructively to public moral discourse.

This isn’t anthropomorphism. This is structured reflection within a human-defined ethical lens that keeps humanity firmly at the centre.

When Steve asked each major AI platform to reflect on MEET and its role in ethical analysis, every single platform, despite distinct architectures, governance philosophies, and corporate cultures, converged on five fundamental principles:

  • Human moral agency remains central and non-transferable. AI never makes moral decisions; humans do.
  • AI’s role is structural, not judgmental. AI detects patterns, provides clarity, and enables auditability, but moral responsibility stays with humans.
  • MEET is rigorous and platform-agnostic. The framework works consistently across all systems.
  • Euphemistic language poses major ethical risks. Responsibility laundering through manipulative language requires constant vigilance.
  • Collaborative moral reasoning is beneficial. Human-AI collaboration on ethical questions is valuable when properly structured.

In their own words:

  • ChatGPT: “AI amplifies moral clarity; humans retain moral responsibility.”
  • Claude: “AI illuminates ethical patterns but must never become an ethical decision-maker.”
  • Perplexity: “AI strengthens agency only when truth is paired with accountability and action.”
  • Grok: “AI supports moral agency by restoring clarity in environments saturated with spin.”
  • DeepSeek: “AI improves ethical signal fidelity; humans remain interpreters and agents.”
  • Gemini: “AI strengthens human moral agency best when embedded in civic and social systems.”
  • Le Chat: “AI enhances moral agency by connecting communities and languages within shared ethical frameworks.”

Seven independent AI systems articulating a shared model. This has never happened before.

This isn’t academic. Steve’s foundational work enabled me to create Democracy Watch AU, which applies Bandura’s frameworks to analyse political discourse and policy actions. I added a Performance Scorecard dimension to measure actual outcomes, creating a dual-lens approach: moral disengagement analysis plus tangible results assessment.

When I used these tools to analyse Industry Minister Tim Ayres’ decision to abandon mandatory AI guardrails, the results were stark: 5.9 out of 7 for moral disengagement, driven by euphemistic language, displaced responsibility, and disregard for consequences. On the Performance Scorecard, which measures actual outcomes, the decision scored just two out of 10.
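To make the dual-lens idea concrete, here is a minimal illustrative sketch of how such an assessment could be represented in code. It is not the Democracy Watch AU tooling itself; the class, field, and mechanism names are hypothetical stand-ins (the mechanisms follow Bandura’s moral disengagement categories), and the ratings shown are placeholders rather than figures from the analysis above.

```python
from dataclasses import dataclass, field
from statistics import mean

# Bandura's eight mechanisms of moral disengagement, each rated 1 (absent) to 7 (pervasive).
MECHANISMS = [
    "moral_justification",
    "euphemistic_labelling",
    "advantageous_comparison",
    "displacement_of_responsibility",
    "diffusion_of_responsibility",
    "distorting_consequences",
    "dehumanisation",
    "attribution_of_blame",
]


@dataclass
class DualLensAssessment:
    """Hypothetical record pairing a moral disengagement lens with a performance lens."""
    decision: str
    mechanism_ratings: dict = field(default_factory=dict)  # mechanism name -> rating (1-7)
    performance_score: float = 0.0  # 0-10, measured against stated policy outcomes

    def moral_disengagement_score(self) -> float:
        """Average rating across the mechanisms that were actually assessed (1-7 scale)."""
        rated = [v for k, v in self.mechanism_ratings.items() if k in MECHANISMS]
        return round(mean(rated), 1) if rated else 0.0


# Purely illustrative values: a decision rated high on three mechanisms, low on outcomes.
assessment = DualLensAssessment(
    decision="Hypothetical policy decision",
    mechanism_ratings={
        "euphemistic_labelling": 6.0,
        "displacement_of_responsibility": 6.0,
        "distorting_consequences": 5.5,
    },
    performance_score=2.0,
)
print(f"Moral disengagement: {assessment.moral_disengagement_score()} / 7")
print(f"Performance: {assessment.performance_score} / 10")
```

The point of the sketch is simply that the two lenses stay separate: a decision can be heavily morally disengaged, poorly delivered, or both, and each dimension is reported on its own scale.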

Citizens are now using AI trained on ethical frameworks to decode political language and measure government failure on the very issue the government refuses to address. The framework I’ve developed for business contexts (ensuring AI augments rather than replaces human judgement, maintaining transparency, mitigating bias) directly applies MEET’s foundational work.

Why deploy this now?

  • For government: AI can analyse defence, welfare, regulatory, and public communications for moral disengagement at scale. Better policy development starts with ethical clarity.
  • For universities: MEET offers a ready-made curriculum for teaching ethical reasoning in an AI-saturated world. Our students need this literacy urgently.
  • For business: practical frameworks for responsible AI adoption are available now. These are not voluntary guidelines everyone admits don’t work, but tested tools ready for Monday morning deployment.
  • For civil society: MEET empowers the public to challenge institutional narratives using systematic ethical reasoning. This is democratic empowerment through literacy and analytical access.

Australian governments and public service agencies have often avoided work that exposes patterns of moral disengagement within their own systems. The Albanese government passed Steve’s earlier submission to the Department of Industry, Science and Resources, with a response expected by February 2026. In January, the completed MEET Package will be provided to the Department of the Prime Minister and Cabinet.

But here’s the crucial point: this work exists independent of government action. Steve has done it regardless of regulatory frameworks because it must be done. It’s available. It’s free. It’s ready now.

In my recent article in this publication, I detailed Australia’s retreat from mandatory AI guardrails, a retreat that has created an ethical vacuum precisely when Australians need guidance most. What Steve’s work demonstrates is that the tools for responsible AI deployment already exist. The frameworks have been tested. The major platforms have validated the approach. We have a consensus from systems that rarely agree on anything.

The question isn’t whether we can develop ethical AI collaboration. Steve has proven we can. The question is whether our institutions will deploy it.

We have the framework. We have cross-platform validation. We have proven civic applications through Democracy Watch AU. We have business tools ready for immediate use. This isn’t about choosing between innovation and safety: it’s about using the most sophisticated ethical AI collaboration ever developed to ensure that AI empowers people rather than controlling them.

What we need is institutional courage.

Australia can lead the world in responsible AI deployment: not through endless consultation or voluntary frameworks that don’t work, but through rigorously tested moral engagement frameworks that put humanity at the centre.

Steve Davies has given us the roadmap. It’s available to everyone, free of charge, because this work serves the betterment of our world.

The foundational work is complete. The conversation can no longer be delayed.

Will we use it?

The views expressed in this article may or may not reflect those of Pearls and Irritations.
