Will AI become our servant or our master?

Nov 18, 2024

AI is already showing dangerous signs of delivering more harm than good. The motivations behind its creation show why.

Alessandra Pucci (Pearls & Irritations, October 27) has raised several important points that need careful investigation. But because of the way AI interacts with the collective human brain, those points are unlikely to be investigated adequately.

As Alessandra has shown, AI can provide great benefits in studying processes like protein folding. But, as she suggests, every advance in AI (as with every technological advance) diminishes our human skills to think about and manage our interactions with our surroundings.

First, as AI is a product of human brains, will it be able to avoid our human faults and frailties?

These include our lust for wealth and power, and the many irrational and destructive activities large groups of humans regularly indulge in.

Israel, for example, is using AI to identify where terrorists in Gaza are, where they might be, or where they are predicted to be. Because targets are likely to relocate at any moment, a target marked for killing must be killed immediately. In most cases they are killed without any further investigation of whether they are innocent. AI has been told they are terrorists, and since AI has been instructed to believe what it is told, terrorists they are.

This brings to mind science fiction writer Isaac Asimov’s Three Laws of Robotics (1942), the first of which stipulates that “a robot may not injure a human being or, through inaction, allow a human being to come to harm.” Asimov’s fiction dated the laws to 2058, but largely through human greed and frailty our “progress” has beaten him to the punch.

We have already built human faults into AI, and it would be illogical to expect that those whose motives installed these faults will see any need to remove them, much less have the ability to do so.

If we look at the motive for creating AI, its potential faults become evident. The billions of dollars already poured into it are evidence enough. This investment is supposedly there to “make money” or to create wealth, but it will almost certainly only transfer wealth from users to investors.

AI’s capacity to generate (rather than transfer) wealth is yet to be proven, although the protein-folding example could generate wealth – if it is possible to truly generate wealth in a closed economic system.

Another unknown for AI is whether it can escape the bounds of the language it uses. Whether this language is English, mathematics, Python or any other programming language, or the language of chemistry or economics, AI has yet to demonstrate that it has the human capacity to innovate.

Much of innovation is trial and error, often with more error than success, sometimes involving thousands of people over centuries. Consider our attempts to understand the world around us, the laws of physics for example: how long have we been puzzling over gravity? After a few millennia Newton told us what it did, but he had no idea why or how.

Then Einstein scrubbed Newton’s idea and proposed his space-time theory of Special Relativity, later extended to General Relativity. Now perhaps a thousand theoreticians and experimentalists have blamed mass, if not gravity, on the Higgs boson, which was seen, or rather inferred, for only a minuscule fraction of a second. Meanwhile Dark Matter has been postulated but never directly observed, yet it threatens to throw all our theories into disarray.

And, like Newton, Einstein could not say why gravity works, however well he described what it does. That raises all the philosophical imponderables that have bothered us since we learned to talk. Can AI tell us anything new that will resolve the debate between free will and determinism? What about the wave-particle duality that has confounded physicists for so long?

Will a relatively small number of AI operators be able to find answers to these continuing mysteries, or discover and repair similar errors before damage is done and we are led astray?

If AI proves to be genuinely generative, it might be able to solve these and other problems. Even so, it seems the possibility of AI promoting ethical behaviour is far more remote than its capacity to promote conflict. Human nature could be AI’s inescapable and destructive flaw.

Writing in The Weekend Australian (October 26), Alan Dupont quotes Thomas Hajdu of the University of Adelaide saying that generative AI is a seismic shift “that will reshape the landscape of cognitive work” and that as AI becomes more sophisticated “our ability to discern when to trust its judgments, and when to apply human insight, will be crucial.” In other words, we will not always be able to rely on it. So when will we be able to trust it, and when not?

Dupont adds that, “Solving the discernment problem – the ability to distinguish quality, relevance, and ethical implications – becomes paramount.” He finishes by quoting from Stephen Hawking: “[AI] will either be the best thing that’s ever happened to us, or it will be the worst thing. If we are not careful, it very well may be the last thing.”

Will AI ever warn us that “the fault, dear Brutus, is not in our stars, but in ourselves, that we are underlings”?

The people shaping AI are the same ones who stampeded over Asimov’s Three Laws, and unless we ask AI to consider him, Shakespeare will be ignored too. So we need to tread carefully if we are not to become underlings, ruled over by a smart but possibly deranged monster of our own creation.
