Deep thinking needed on AI, not shallow predictions
February 23, 2026
Confident predictions about artificial intelligence dominate public debate – but history suggests forecasting technological futures is a poor guide for policy. What matters more are the conditions that shape how AI is actually used.
Every week brings another confident prediction about artificial intelligence. Technology executives promise transformation. Consulting firms project massive productivity gains. Union leaders warn of job losses. AI researchers debate existential risk.
Each prediction generates headlines, and each contradicts the others.
Policymakers tasked with regulating AI, investing in infrastructure, or preparing the workforce are left to navigate this noise. Which expert should they believe? The honest answer is that nobody knows what AI will do. The better question is whether prediction is even the right approach.
The history of technological forecasting should give us pause. In the 1960s, experts predicted nuclear-powered cars and moon bases by 2000. In the 1990s, the paperless office was just around the corner. More recently, self-driving cars were supposed to be ubiquitous by now. Technologies that were expected to transform society often disappointed, while technologies dismissed as toys became foundational infrastructure.
The pattern repeats because prediction requires knowing things that cannot be known: how technologies will evolve, how organisations will adapt, how regulations will develop, how users will respond. Confident forecasts about AI’s trajectory offer the comforting illusion that the future can be known and planned for. That comfort is false.
There is a different approach. Instead of asking what AI will do, we might ask what determines whether AI creates value or causes harm in any given context. That question can be answered, and the answer provides guidance that remains useful regardless of how predictions resolve.
The evidence from organisations that have deployed AI points consistently to the same conclusion: the technology itself is rarely what matters most. What matters is what the technology combines with. Data quality, workforce skills, management capability, organisational processes, and governance arrangements determine whether AI delivers benefit or expensive disappointment.
This explains a puzzle that has troubled economists. AI capabilities have advanced remarkably, but productivity statistics show only modest gains. The gap exists because capability does not automatically translate into outcomes. The organisations that have invested in AI but not in the supporting conditions have not captured the benefits. The investment shows up in the statistics; the returns do not.
The same pattern appeared with previous technologies. The economist Robert Solow observed in 1987 that computers were everywhere except the productivity statistics. That paradox eventually resolved, but only after decades of investment in software, training, and business process redesign. The productivity benefits from information technology arrived roughly 15 years after the investments were made.
If AI follows the same pattern – and the evidence suggests it will – then the binding constraints are not technological. They are organisational, institutional, and human. The question for policy is not how to accelerate AI capability but how to build the conditions that allow AI capability to translate into productivity, security, and wellbeing.
This reframing has practical implications. Consider workforce policy. The prediction-focused debate asks whether AI will displace 40 per cent of jobs or create new ones. The evidence-focused question asks what determines whether AI augments workers or replaces them. The answer points to skill levels, task design, management choices, and bargaining power. These are factors that policy can influence.
Or consider infrastructure investment. Australia has committed billions to data centre construction, with major technology companies announcing substantial projects. The prediction debate asks whether this investment will pay off. The more useful question asks what conditions must be present for infrastructure investment to generate broad economic benefit rather than value captured primarily by global technology firms.
The February 2026 market correction, which wiped trillions from technology valuations, suggests investors are beginning to ask similar questions. The gap between AI infrastructure spending and demonstrated revenue has become too large to ignore. Studies finding that most enterprise AI pilots fail to deliver measurable returns are accumulating. The transformation narrative is meeting implementation reality.
None of this means AI is overhyped. The capabilities are real and advancing. What it means is that capabilities alone do not determine outcomes. The same AI system can transform one organisation and fail completely in another, depending on data quality, workforce readiness, management practice, and governance arrangements. Understanding this is more useful than any prediction about AI’s trajectory.
For Australian policymakers, the implications are significant. Our productivity performance has been subdued for years. Management capability in adopting advanced technologies is uneven. Pathways from research excellence to industry application remain narrower than they should be.
AI will not correct these structural issues by itself. If anything, it will magnify existing strengths and weaknesses.
The countries and organisations that succeed in the AI era may not be those with the most advanced technology. They may be those that build the supporting conditions most effectively: skilled workforces, capable management, quality data infrastructure, effective governance. These are not glamorous investments. They do not generate headlines. But they are what separates AI potential from AI performance.
The prediction industry will continue. Technology companies will promise transformation because that is what generates investment. Consulting firms will project productivity gains because that is what sells services. The noise will not diminish.
What policymakers need is not another prediction to add to the cacophony. They need ways of thinking about AI that remain useful when experts disagree and the stakes are high. They need frameworks that connect technology to capability, governance, and economic performance. They need guidance that helps them make decisions under uncertainty rather than promising to resolve it.
AI trajectories are not determined by the technology. They are shaped by decisions about what we invest in, how we govern, which industries we prioritise, and how we balance openness with sovereignty. Human agency, exercised through policy, investment, and institutional design, can shape the direction of AI. We are not spectators. We are participants, and our choices matter.
These issues are addressed in the forthcoming book Making Sense of AI: Nine Conversations, One Framework (Acton Institute for Policy Research and Innovation, 2026).
John H. Howard is Executive Director of the Acton Institute for Policy Research and Innovation and an Honorary Visiting Professor at the University of Technology Sydney. He can be contacted at john@actoninstitute.au