From Biotech to AI

Sep 17, 2023
[Image: AI robots for diagnosis]

Can the regulation of biotechnology provide clues for the regulatory measures now required to limit the risks of AI?

Digital technologies have already permeated most corners of human activity, with some good and some bad effects. Now, authorities around the world are trying to introduce some form of regulation to rein in the riskiest products launched by a few powerful technology companies. This is the case with Artificial Intelligence (AI): machine-learning systems able to perform tasks that normally require human intelligence.

The European Union is currently ahead in this field: the EU Parliament passed the first AI Act last June. This is the most likely model of regulation for Australia to consider; the other two major models seem less suitable.

The US has the highest concentration of major AI companies and operates a strong market regime, largely unfettered by regulation. China, on the other hand, keeps firm government control over AI applications and concentrates on security and strategic purposes.

The EU regulation of AI – still awaiting approval from the European Council – ranks risks from “unacceptable” for the most serious to “minimal or low” for the most benign. Risk is assessed in terms of effects on health and safety and the violation of human rights.

I was surprised to read that the application of AI to video games was given as an example of minimal risk. Perhaps mental health and addiction to computer games were not considered, or perhaps such outcomes have not yet been observed with novel AI-driven games. This points to a broader question: what, exactly, is the object of regulation?

According to conventional wisdom, and current practice, regulatory law is deemed to be “technology neutral” and looks for adverse outcomes only. This means the technology itself is left untouched by regulation; instead, a particular application is objected to only if its final outcomes violate certain rules of fairness or safety.

This also means that a technology such as AI can be applied anywhere without much question or transparency. “Why should we impose requirements on the ‘reasoning’ of AI systems when we don’t require that of humans? We don’t ask them to justify their decision or explain their education. Shouldn’t we just look at the output of a decision and assess it on its merits regardless of machine vs human provenance?” So goes the current legal thinking.

I come from a different industry, biotechnology, where legal requirements are much more stringent. And yet it operates in the same areas of health and safety that the EU AI Act is now considering for regulation.

My particular field, monoclonal antibodies, constituted a major leap in technology at the time, and a lot of questions were asked before it could be applied safely. Transparency was, and is, paramount. The work was monitored by human research ethics committees and scrutinised by the appropriate regulatory agencies before the products reached the market.

AI products are on the market now.
