Trust governments with AI? Perhaps not, when there is a revenue stream involved
Feb 2, 2024

International Monetary Fund (IMF) managing director Kristalina Georgieva was recently quoted in the Guardian (Tuesday 16/1/24) saying that “in most scenarios artificial intelligence (AI) would probably worsen overall inequality across the global economy and could stoke social tensions without political intervention”.
Australia’s vulnerability to such AI-induced inequality appears to be high, while our chances of any mitigating political intervention appear to be very low, based on current and past examples of the way state and federal governments have sought to use AI.
Too often we find that the seduction of increased revenue, sugar-coated by alleged positive outcomes (such as saving lives, or nailing thieves and cheats), has led to the roll-out of programs and schemes that have had the reverse effect. Rather than check for errors or undertake rectification, the initial tendency has been to double down and attack critics or the innocent victims of these programs.
The Queensland Government appears to be providing the latest example. Catching speeding drivers performs a social good (making roads safer), but also provides a revenue stream for state governments. Along with speeding, in recent years using a mobile phone while driving has been identified as hazardous and, where detected, leads to fines. To police mobile phone use, governments are increasingly turning to fixed camera technologies that can identify not just a phone held to the ear, but a phone placed in the lap. As this technology can also detect whether seat belts are worn, Queensland added camera-detected seatbelt infringements in 2022, and other States are now following this lead.
In Queensland, fines for not wearing a seat belt have jumped from $391 in 2018 to $1161 today. (In other states, fines are still around $400 for this offence.) Miscreants are advised by mail of the offence and the fine to be paid, backed up by a photo showing the absence of a seat belt. Anyone wishing to challenge the fine must attend court and risk the amount doubling when court costs are added. If the defence relates to the quality of the photograph, the offender will be warned that the prosecution will call expert witnesses to back up the photograph technology and that the cost of these witnesses could add thousands in extra costs for the offender.
Although most people would admit to speeding or using a mobile phone at some point in their driving career, and would probably say “fair cop” and pay up if detected, seat belts are a different matter. After fifty years of compulsory seat belt laws, putting on a seat belt is automatic for the vast majority of the population. Indeed, in 2020 the Victorian Transport Accident Commission (TAC) estimated that 98% of Australian drivers buckle up every time they drive.
Given the rapidly increasing number of seatbelt infringement notices being issued in Queensland (52,079 issued by cameras in 2022), there is growing concern about the reliability of the seat belt detection technology. The photos supplied are far from compelling evidence: often grainy black-and-white shots with obscure details, which lends weight to the claims of those who say they were actually wearing their belts when photographed. The technology uses AI to detect non-compliance with the seat belt law and, if there is any doubt, a “trained operator” checks the image before a fine is issued.
To date, the prospect of incurring a much higher fine has probably discouraged those who claim ‘innocence’ from contesting their fine, but how long will this last? The size of the fine imposed, plus the fact that administration of the scheme sits neither with the Queensland Police nor with the Department of Transport, but with Treasury, is adding to growing suspicion that this is more about the money than saving lives. Interestingly, Queensland speeding fines start at just $287 and go up on a sliding scale, only reaching the level of seat belt fines when speeds are more than 30 km/h over the limit.
And it’s not as if governments are without form here.
In 2004, the Victorian Government was forced to repay $25.8 million to 90,000 drivers wrongly fined or who had received licence cancellations for alleged speeding detected by cameras. Then Victorian Premier Bracks said the government would explore legal options against Poltech International which was contracted to install, maintain and repair the cameras. Bracks blamed errors on the company, saying the faulty readings “were due to poor installation and maintenance, degradation of sensors and electromagnetic interference” (Sydney Morning Herald 16 May 2004).
More recently we have had the much more spectacular failure of the Turnbull/Morrison Government’s Robodebt scheme. Initially conceived as a process to make targeted early interventions to help citizens break cycles of welfare dependency – by more sophisticated mining and manipulation of the vast stores of Centrelink data – the program eventually metastasized into Robodebt. The objective had changed from helping people get off welfare, to using algorithms and automation to catch “welfare cheats” and contribute substantially to the Budget bottom line.
In the end, apart from damaging many lives (including a number of suicides) it cost the Government hundreds of millions and possibly contributed to the demise of the Morrison Government.
In both these instances, it was eventually established that citizens were not speeding, nor cheating the welfare system. How likely is it that those falsely accused, prosecuted and punished through faulty technology will happily accept a much greater role for AI in, and control over, their lives? And how likely are they to vote for governments specifically identified with these schemes? Beyond moving their votes, it is probable that many of those affected will find comfort in anti-science and anti-“big government/deep state” movements.
Nothing screams inequality more than an incapacity to find redress for unjust accusations.
“Computer says no” was a hilarious skit on a British comedy show a decade or so ago. Today, it is an increasingly common and very unfunny reality for many.
To gain acceptance for an AI future, governments will have to be much more responsive to technology issues when they arise, and better able to resist the seduction of automated systems as alternative streams of income. Further, they will need to develop the backup of a human-based public service that can be trusted to evaluate any AI-based program carefully before installation, monitor its operation, and deal with any problems that arise. And we haven’t even mentioned the issue of protecting the security of data.