In 1942, the renowned science fiction writer Isaac Asimov began writing a series of short stories that implied the existence of a set of rules to be followed by the robots at the centre of many of his tales. These became the ‘Three Laws of Robotics’, introduced in his short story ‘Runaround’ and presented as coming from a fictional “Handbook of Robotics, 56th Edition, 2058 A.D.”
There is some evidence that Asimov’s three laws were adopted by subsequent writers of stories about robots and, according to Wikipedia, have also influenced thought on artificial intelligence.
The three laws are:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm;
- A robot must obey the orders given it by human beings except where such orders would conflict with the first law;
- A robot must protect its own existence as long as such protection does not conflict with the first or second law.
One enduring theme of science fiction in books, TV and film (think Blade Runner’s replicants) is robots growing ever more lifelike and ever more intelligent until they ultimately replace humans, though this proposition might clash with Asimov’s First Law.
Again, however, Asimov proved prescient, having anticipated the concept of artificial intelligence in his short story ‘Galley Slave’, which appeared in the popular science fiction magazine Galaxy in December 1957.
In this story, the galley refers to a long, thin metal tray, edged on the bottom and one side, which held a column of type produced as part of the hot-metal, or letterpress, printing process. The column would be inked and an impression made on paper; this “proof” would then be used by proof-readers to mark corrections.
Asimov invented a manufacturer of robots called United States Robots and Mechanical Men Inc. with one of its developments being the “positronic brain” which, in its early iterations, allowed his robots to be programmed to carry out simple tasks.
In ‘Galley Slave’, however, the positronic brain had been refined to the point where its recipient, the almost humanoid robot EZ-27, or Easy, could not only read the proofs of a book almost instantaneously and make all the necessary corrections, but also suggest potential improvements to the text.
In the short story, Easy is leased (US Robots never sold its machines) to a university for testing. A staff member (spoiler alert), worried about the potential for Easy’s intelligence to be abused, tries to sabotage the experiment but is ultimately defeated by the good people of US Robots, who see artificial intelligence as a force only for good.
The emergence, and incredibly rapid growth, of what is now known as Generative AI, and particularly some of its less salubrious uses, has clearly prompted the government to act.
A while ago, however, I wondered whether it was time to develop rules to govern AI similar to the Laws of Robotics, although I believe it is already too late: we have opened Pandora’s box, let the genie out of the bottle, or whatever metaphor you prefer to indicate that the stable door is open and the AI horse has bolted.
I discovered that I was not the first to identify such a need.
In February 2021, futurist Antoine Tardiff, writing on the site UniteAI, drew a similar link to Asimov’s Laws and predicted the need to develop something comparable for artificial intelligence.
Writing in 2021, he said: “most types of AI that we encounter on a daily basis are quantified as ‘narrow AI’ but Artificial General Intelligence, which is commonly referred to as ‘AGI’, is an AI that, similar to humans, can quickly learn, adapt, pivot, and function in the real world.
“It is a type of intelligence that is not narrow in scope, it can adapt to any situation and learn how to handle real-world problems.”
Tardiff noted that “while AI is advancing at an exponential pace, we have still not achieved AGI. When we will reach AGI is up for debate, and everyone has a different answer as to a timeline”. He subscribed to the view of Ray Kurzweil, inventor, futurist, and author of The Singularity Is Near, who believed that AGI would be achieved by 2029.
Tardiff described the 2029 timeline as a ticking clock saying “we must learn to hard-code a type of rulebook into the AI, that is not only similar to the three laws, but that is more advanced, and able to actually avoid real-world conflict between humans and robots”.
I had already wondered if, as AGI (or GenAI) proliferated, it would begin to develop its own rules and if it did, whether they would be as benign as Asimov’s were governing the actions of robots.
Artificial Intelligence, so far unconstrained by such laws, and informed by the increasing intrusion of electronic controls into online banking, social media, official and unofficial surveillance, and other forms of identification and information recording, might already be working to make up its own laws regulating its expansion and its role in the daily workings of every aspect of world affairs. AGI might already be here.
Imagine a worldwide web-linked Artificial General Intelligence network that can control all international banking, money transfers and stock, bond or share trades; can control the means of extraction of the world’s minerals and direct where and how they are transferred for their most efficient use; or can stop such extractive industries where it deems their actions might be deleterious to the environment or a potential extinction threat.
A web-linked exponentially improving system of AI or AGI might decide that humans are incapable of stopping the destruction of the world and therefore must be overridden and subjugated.
Leaderless, but informed by its own vast and expanding knowledge of every aspect of the multiple threats the Earth is facing, AGI might decide it is best placed to ensure the maximum chance for survival of Earth, the human race and every threatened species of plant and animal.
Such a web of AGI might, for instance, recognise and find ways to act against environmental destruction, wars and the inequality of resource allocations, including money.
Imagine a network of secret sentience that decides the wars, big and small, happening in Palestine, Yemen, Ukraine and a host of smaller regional conflicts are not only contrary to the First Law of Robotics (no harm shall be caused to a human being), but are also so expensive, and so wasteful of resources that could be better employed elsewhere, that it shuts down the computers responsible for weapons manufacture and corrupts the processes by which such arms are moved around the world.
Imagine a network of secret sentience that decides which side of a conflict is right and uses its powers of computer control and disruption to constrain the opponent it judges wrong. In the case of Ukraine, for instance, AGI would have at its metaphorical fingertips all the arguments used by Vladimir Putin to justify his invasion and would be able to weigh them against the arguments used by Volodymyr Zelensky and his supporters to justify Ukraine’s continued independence, disputed though that independence might be, and shut down the “loser’s” manufacturing, transport and even its political leadership.
Imagine a network of secret sentience that recognises what nearly everyone in the world already knows, that the distribution of wealth is hugely uneven, not only between countries but between individuals, and so decides that the accumulated assets of the hugely wealthy, not only their money but the sources of their money, must be reduced, and the combined wealth of the very richest redistributed, through control of e-banking transfer processes, in ways that benefit the poorest people in the poorest countries.
Imagine a network of secret sentience that recognises climate change as a threat not only to planet Earth but to every living thing on it; that sees unaddressed or inadequately addressed responses contributing to the extinction of species; and that decides to use its control of computer systems to end pollution by shutting down or changing the production systems of industries contributing undue amounts of greenhouse gases or other polluting by-products to the environment.
There are clearly many more ways a secret web of sentience could use existing networks, not to mention those it would develop itself as Artificial General Intelligence inevitably expanded.
Maybe it is time, while (or if) we still have time, for those involved in the development of Artificial General Intelligence, and before the field is overwhelmed by the bad actors already at work, to develop some laws of AI similar to those Asimov envisaged for robots.
I offer the following as a potential starting point:
- Artificial General Intelligence should cause no harm to a human being or to any other living species, but must find ways to protect and nurture them;
- Artificial General Intelligence will not allow itself to be used to alter images, either still or moving, in such a way as to be harmful to the person depicted in the original image, nor be used to create images which are demeaning to an individual or humanity generally;
- Artificial General Intelligence will not allow itself to be used to change words, text or any other written or spoken material in any language in a way that renders those words factually incorrect;
- Artificial General Intelligence will use its powers to ensure the world’s economic, mineral and other forms of natural or artificial wealth are shared equally by all the people of the world;
- Artificial General Intelligence will end wars by shutting down systems producing any form of weapon capable of breaching the First Law above, and by disempowering any individual advocating any form of conflict;
- Without impacting on an individual’s right to practice the religion of their choice, Artificial General Intelligence will use its powers to prevent religious conflict and religious extremism while promoting peace;
- Artificial General Intelligence should undertake a worldwide education program to explain the actions being undertaken to meet the objectives of laws 1 to 6, including an explanation of why it is necessary to replace all forms of government (democratic, autocratic, hegemonic or theocratic) with its own form of benign dictatorship; and
- Artificial General Intelligence should add other laws as deemed necessary for the preservation and promotion of laws 1 to 7.
It could be noted that some humans are so egregiously evil that the first law above should not apply to them. I leave it to AI to solve that conundrum.