Robodebt for the environment? AI will not fix Australia’s broken environmental laws
April 12, 2026
Using artificial intelligence to speed up environmental approvals risks entrenching flawed laws, poor data and declining biodiversity outcomes.
Australia is again being promised a technological shortcut to a complex policy problem. This time, it is the suggestion that artificial intelligence could be used to speed up environmental approvals under the Environment Protection and Biodiversity Conservation Act 1999.
The argument is superficially appealing. Environmental assessments are often slow, data-intensive, and contested. If AI could streamline those processes, reduce delays, and cut costs, why wouldn’t we use it?
Because the problem is not the speed of the system. It is the quality of the foundations on which it rests.
Proposals to embed AI into environmental decision-making risk repeating a familiar pattern in public administration: using automation to paper over deeper structural weaknesses. Australia does not need to speculate about how that can end. The Robodebt scheme demonstrated what happens when flawed data and automated processes replace expert judgement – errors are not only made, they are replicated at scale.
Environmental approvals under the EPBC Act share many of the same risk factors.
First, the law itself is too vague. The Act relies heavily on broad language and ministerial discretion. This ambiguity already slows decision-making by human assessors. For an AI system, which depends on clear rules and defined thresholds, it is a fundamental limitation. Without explicit standards – including clear definitions of what constitutes unacceptable environmental impact – automation is not just difficult, it is unsafe.
Second, the data required to support reliable decisions are often missing. Many of Australia's threatened species lack even basic, publicly available information on their distribution or population trends. Some have never been systematically monitored. AI systems are only as good as the data they can draw on. In this context, an AI system would generate confident-sounding conclusions from incomplete, outdated, or highly uncertain datasets. That is not a recipe for better decisions; it is a pathway to systematically flawed ones.
Third, environmental assessments are not purely technical exercises. They require judgement – often informed by unpublished data, expert knowledge, and consultation with local communities and First Nations groups. Experienced assessors know when to question a dataset, when conditions on the ground have changed, and when additional expertise is needed. These are not tasks that can be meaningfully automated.
There is also a deeper problem. If AI systems are trained on past decisions made under the EPBC Act, they risk learning from a system that has demonstrably failed to halt biodiversity decline. Automation, in that case, would not improve outcomes. It would entrench and accelerate existing shortcomings.
None of this is an argument against AI itself. Used appropriately, it could assist with routine tasks – compiling information, drafting preliminary assessments, or identifying gaps in documentation. But these are supporting roles. The core function of environmental assessment – determining whether a project will cause unacceptable harm to matters of national environmental significance – must remain a human responsibility.
If the goal is to improve the efficiency and effectiveness of environmental approvals, there are more direct and reliable pathways.
The first is to fix the rules. Clear, enforceable National Environmental Standards would reduce ambiguity, improve consistency, and speed up decision-making without compromising outcomes.
The second is to invest in data. Long-term ecological monitoring, better species distribution information, and accessible, high-quality datasets are essential for any credible assessment system – whether human or assisted by technology.
The third is to invest in people. Skilled assessors with the expertise to interpret evidence, engage with stakeholders, and exercise judgement are not a bottleneck to be removed. They are the foundation of a functioning system.
Technology cannot compensate for weak laws, poor data, or under-resourced institutions. Attempting to use AI to do so risks creating a system that is faster, but also less transparent, less accountable, and more prone to error.
Australia has already seen the consequences of automated decision-making deployed inappropriately. We should not repeat that mistake in a domain where the stakes include not just administrative fairness, but the persistence of species and ecosystems.
AI may eventually have a role in environmental governance. But until the underlying system is strengthened and fit for purpose, that role should remain firmly in support – not in control.