Summary: Effectively regulating artificial intelligence on a global scale will require countries to agree on a framework they can all stick to. Efforts to police research in nuclear technology could provide a good starting point.
Original author and publication date: Simon Chesterman – August 4, 2021
Futurizonte Editor’s Note: Who will regulate the International AI Agency? Who watches the Watchers?
From the article:
Earlier this year, the European Union proposed a draft regulation to protect the fundamental rights of its citizens from certain applications of artificial intelligence. In the USA last month, the Biden administration launched a taskforce seeking to spur AI innovation. On the same day, China adopted its new Data Security Law, asserting a stronger role for the state in controlling the data that fuels AI.
These three approaches – rights, markets, sovereignty – highlight the competing priorities as governments grapple with how to reap the benefits of AI while minimising harm.
A cornucopia of proposals now offers to fill the policy void. For the most part, however, the underlying problem is misconceived as being either too hard or too easy. Too hard, in that great effort has gone into generating ethical principles and frameworks that are unnecessary or irrelevant, since most essentially argue that AI should obey the law or be ‘good’. Too easy, in that it is assumed that existing structures will be able to apply those rules to entities that operate with speed, autonomy, and opacity.
Personally, I blame Isaac Asimov and his frequently quoted three laws of robotics. They make for good science fiction, but if they had actually worked, his literary career would have been brief.
The future of regulating AI will rely on laws developed by states and standards developed by industry. Unless there is some global coordination, however, the benefits of AI – and its risks – will not be equitably or effectively distributed.
Useful lessons can be taken here from another technology that was at the cutting edge when Asimov started publishing his robot stories – nuclear energy.
First, it is a technology with enormous potential for good and ill that has, for the most part, been used positively. Observers from the dark days of the Cold War would have been pleasantly surprised to learn that nuclear weapons were not used in conflict after 1945 and that only a handful of states possess them the better part of a century later.
The international regime that helped ensure this offers us a possible model for the future global regulation of AI. The grand bargain at the heart of President Eisenhower’s 1953 ‘Atoms for Peace’ speech and the creation of the International Atomic Energy Agency (IAEA) was that nuclear energy’s beneficial purposes could be shared with a mechanism to ensure that it was not weaponised.
The equivalent weaponisation of AI – either narrowly, through the development of autonomous weapon systems, or broadly, in the form of a general AI or superintelligence that might threaten humanity – is today beyond the capacity of most states. For weapon systems at least, that technical gap will not last long.