AI policy is stuck on productivity – and democracy is paying the price

John H Howard

Artificial intelligence is increasingly framed in terms of efficiency and growth. But that framing sidelines harder questions about power, choice and democratic governance.

The policy conversation around artificial intelligence has become dominated by two metrics: productivity gains and innovation outputs. These framings are understandable. Governments seeking to justify AI investment want measurable returns. Businesses want efficiency improvements they can quantify. Researchers want evidence that their work delivers impact. Yet the economic framing, for all its practical appeal, obscures questions that matter for democratic societies.

The conflation of productivity and innovation in policy discourse warrants careful attention. Productivity concerns efficiency. Innovation concerns novelty. AI contributes to both, but through different mechanisms and with different implications for how benefits and risks are distributed across the economy and society.

In the application of AI, there is a distinction between automation and augmentation. This is much more than a technical classification: it reflects competing theories of value creation with divergent distributional consequences.

Automation logic focuses on task substitution. It asks: which human activities can AI perform more cheaply, quickly or accurately? The economic case rests on labour cost reduction and throughput increases. In research contexts, automation enables high-throughput screening, automated literature review, routine data processing, and systematic replication of experimental protocols. The gains are real but bounded by the scope of tasks amenable to algorithmic execution.

Augmentation logic asks how AI can enable humans to do things they could not otherwise do, or to operate at levels of complexity and scale previously inaccessible. This framing positions AI as cognitive infrastructure that amplifies human judgment, creativity and insight rather than replacing it. In research contexts, augmentation enables researchers to explore hypothesis spaces too large for manual investigation, to perceive patterns in data beyond human perceptual limits, and to iterate through conceptual alternatives at speeds that compress the discovery cycle.

Most AI deployments involve both dynamics, but organisations and systems usually emphasise one logic over the other. Those emphasising automation typically achieve faster, more measurable returns but may encounter limits as automatable tasks are exhausted. Those emphasising augmentation often face longer adoption curves and more diffuse benefits, but may achieve more durable competitive advantage and capability growth.

For science and research systems, the automation-augmentation distinction has structural implications that extend well beyond laboratory efficiency; accelerated discovery is only the most visible impact.

The productivity paradox applies here. Despite transformative potential, aggregate productivity effects in research systems remain contested. Part of this reflects measurement challenges: how do you gauge the productivity of a research system whose outputs include intangible knowledge assets and long-latency innovations? There are also genuine implementation lags, since realising AI’s potential requires complementary investments in data infrastructure, skills, and institutional adaptation.

The automation-augmentation distinction reflects competing interests and distributional stakes that deserve explicit policy attention.

Automation narratives are usually advanced by people and organisations positioned to capture efficiency gains, including large corporations seeking labour cost reduction, technology vendors selling automation solutions, and consultancies marketing transformation services. This framing emphasises inevitability and competitive necessity: automation is coming regardless, and those who fail to adopt will be left behind. Such a picture can constrain policy space by presenting automation as an exogenous force to be accommodated rather than a choice open to deliberation.

Augmentation narratives appeal to professionals seeking to enhance rather than defend their roles, to smaller organisations lacking the scale for full automation, and to those concerned with maintaining human agency in consequential decisions. This framing emphasises choice and design: how AI is developed and deployed can be shaped to extend human capability.

Policy discourse that accepts automation as the default trajectory tends to focus on managing consequences: reskilling displaced workers, redistributing gains through taxation, and creating safety nets. Policy discourse that treats the automation-augmentation balance as a choice opens space for shaping trajectories through procurement, regulation, research funding, and institutional design.

A democratic governance perspective introduces legitimacy requirements that technocratic and market-based approaches do not address.

Economic framings of AI productivity typically adopt aggregate measures: output per hour worked, total factor productivity, GDP growth. These framings embed assumptions about what counts as valuable output, whose labour is being measured, and how gains and losses are distributed.
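
To see what these aggregates actually capture, consider their textbook constructions (a stylised sketch, not the author's own formulation): labour productivity divides output by hours worked, and total factor productivity is the Solow residual of an assumed Cobb-Douglas production function.

\[
\text{Labour productivity} = \frac{Y}{H},
\qquad
\text{TFP} = A = \frac{Y}{K^{\alpha} L^{1-\alpha}}
\]

Here \(Y\) is aggregate output, \(H\) hours worked, \(K\) the capital stock, \(L\) labour input and \(\alpha\) the capital share of income. Nothing in either expression records who supplied the hours, who owns the capital, or how growth in \(Y\) is divided, which is precisely the information a governance perspective asks for.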

A governance perspective asks distributional questions that aggregate measures obscure. Productivity gains from AI-enabled automation may increase measured output while concentrating returns among capital owners and highly skilled workers, displacing middle-skill employment, and hollowing out the economic base of particular regions and communities.

Decisions about AI adoption in public services involve questions of accountability, transparency and procedural fairness that efficiency metrics cannot resolve. When an AI system denies a benefit, recommends a sentence, or triages a patient, democratic principles require that affected individuals can understand the basis for decisions, contest errors, and hold decision-makers accountable.

These requirements often sit in tension with the opacity of complex AI systems and the speed and scale at which they operate.

The way policymakers frame the AI challenge shapes the interventions they consider appropriate. A competitiveness frame emphasises national positioning in a global technology race, prioritising speed of adoption and removal of barriers. A growth frame emphasises aggregate productivity improvement, prioritising efficiency gains and distributional policies to manage displacement. A capability frame emphasises AI’s potential to extend what individuals, organisations and societies can achieve, prioritising augmentation applications and broad access. And a democratic frame emphasises accountability and citizen voice, prioritising transparency requirements and constraints on AI in high-stakes public decisions.

These frames establish different priorities and direct attention to different problems. Current policy discourse in most jurisdictions is dominated by competitiveness and growth frames, with capability and democratic frames remaining marginal. For innovation ecosystems, this imbalance has consequences.

Ecosystems with strong governance capacity may be better positioned to pursue augmentation-oriented strategies and to ensure productivity gains translate into broadly shared benefits. Ecosystems where governance is weak or captured by narrow interests may default to automation-dominant trajectories shaped primarily by incumbent commercial actors.

The policy question is how to govern the choices that AI adoption entails. Will those choices be made through democratic deliberation, with meaningful voice for citizens affected by AI deployment? Or will they be left to market dynamics and technocratic judgment?

The answer to that question will shape whether AI serves as infrastructure for broadly shared capability, or as a mechanism for concentrating gains among those already positioned to capture them. Democratic societies deserve governance arrangements equal to the significance of that choice.
