AI needs governance, not a 'plan for a plan'

John H Howard

Australia’s National AI Plan prioritises infrastructure and adoption, but leaves governance and liability unresolved, creating uncertainty and risk, especially for smaller firms.

When Tamworth switched on its first electric street lights in 1888, the public was terrified. It symbolised ambition, but also hesitation. Gas lighting was familiar and physical; electricity was new, invisible, and dangerous. We seem to be at a similar juncture with Artificial Intelligence.

The Government’s recent _National AI Plan_, released on 2 December 2025, makes bold moves on infrastructure, but its regulatory stance remains tentative. A “plan for a plan” buys time, yet it risks creating a two-speed safety landscape in which smaller firms delay essential governance work. To avoid a fragile experiment, Australia must move from broad roadmaps to strengthening the institutions that turn risky technologies into dependable infrastructure.

Artificial intelligence is expanding faster than Australia’s governance arrangements can manage. The challenge is building confidence in systems that operate at scale and rely on highly complex data flows. Addressing this challenge is foundational for public trust.

When electricity was introduced, accidents were frequent, and infrastructure quality varied. Unlike steam power, which advertised hazards through heat and noise, electricity’s danger was invisible. It required new systems of assurance built on technical expertise and coordinated regulation.

Today, AI raises a similar but deeper issue. People struggle to understand how digital models produce their outputs, especially in high-stakes settings such as national and global finance. The discomfort reflects the unease of delegating decisions to systems that are hard to observe, and anecdotes about errors and hallucinations amplify it. As with electrification, policymakers must build trust through clear expectations and competent oversight.

However, the current policy landscape indicates hesitation to intervene. There is fear in Canberra that regulation might stifle innovation, as with the nineteenth-century ‘Red Flag Acts,’ which required cars to be preceded by a person carrying a warning flag. But history teaches that safety is an enabler, not a brake. Institutional strength, standards, and the clarification of liability made electricity investable.

The National AI Plan prioritises adoption and infrastructure over comprehensive regulation. It relies on existing privacy and consumer laws, supplemented by voluntary guidance.

It outlines future intentions and establishes an AI Safety Institute that initially lacks enforcement powers. While this avoids the rigidity of the European Union’s legislative model, it creates significant transitional risks.

Uncertainty makes risk difficult to price. Without clear compliance targets, organisations face inconsistent expectations between domestic guidance and binding international rules. The problem is acute for Australian firms exporting digital services: they operate in a global market where compliance is mandatory abroad while remaining voluntary at home.

For small and medium enterprises (SMEs), uncertainty creates a dangerous incentive to delay. Without clear rules, many firms will hesitate to invest in auditing tools. They may adopt a ‘wait and see’ approach, deferring governance work until regulations are finalised. This accumulates ‘technical debt’ in safety and ethics, which will be expensive to rectify later.

The National AI Plan shows genuine strategic weight in its treatment of physical infrastructure. A key feature is the energy co-requisite for hyperscale data centres. Operators must now contribute to renewable energy generation to support new capacity.

This aligns the exponential growth of compute power with national net-zero objectives. For the first time, digital infrastructure is explicitly tied to the physical constraints of the energy grid. This introduces a new industrial geography. Previously, innovation hubs followed talent. Now, AI compute is tethered to energy availability and transmission capacity.

This returns us to an industrial logic similar to the early 20th century. It creates complex coordination demands. Commonwealth agencies, state energy authorities, and local planning bodies must align their approval processes. Issues such as data centre siting, water cooling, and land-use permissions now require integrated planning rather than siloed assessment.

Electrification became reliable because governance aligned across levels of government. Local authorities managed installation, states shaped utility markets, and national bodies developed standards. Fragmentation would have produced uneven safety.

AI presents a parallel challenge. Privacy, health regulation, and planning systems all influence deployment. These responsibilities span commonwealth, state, and territory governments. Without coordination, developers face variable rules, and users receive different levels of protection depending on the jurisdiction.

Furthermore, liability remains unresolved. As electrical systems expanded, courts had to determine who was responsible for accidents or errors. The balance reached between strict liability and negligence created predictable conditions for industry. Firms that met recognised standards could operate with confidence.

AI is entering this phase. Policymakers must decide how responsibility is assigned among developers, deployers, and users. A liability framework linked to compliance with standards would support a stable market. Investors currently prioritise infrastructure-heavy projects because the risks are tangible. Applications relying on complex AI models face tighter funding conditions because liability questions remain open.

Electricity became safe through institutional design, not bans. Its early controversies gave way to predictable routines that enabled innovation. This evolution was driven by the interaction of engineering practice, insurance requirements, and legal clarity.

Standards Australia, established in 1922, helped define common expectations about wiring. Insurers reinforced these by linking coverage to compliance. A similar process is emerging with AI. Insurers and legal advisers are already asking whether systems have been audited. These market signals are shaping behaviour before formal regulation is mature.

However, relying solely on market signals is insufficient. The government must formalise these emerging norms into a coherent system. The AI Safety Institute will need to move quickly from building capability to establishing authority.

Australia’s goal must be to ensure technology becomes reliable, predictable, and well understood. The history of electricity shows this is achievable when public policy focuses on practical institutions.

A “plan for a plan” serves as a helpful signal, but it must eventually yield to concrete operational reality. The current regulatory hesitation leaves SMEs exposed and creates a vacuum that international regulators will likely fill. To secure Australia’s digital safety, we must complete the rewiring of our governance and build the architecture of assurance that enables AI to transition from a fascinating experiment to a dependable utility.


