Leading AI experts have published a paper calling on governments to take proactive measures to manage the risks stemming from the rapid advancement of AI technologies. The paper emphasises the need for regulatory frameworks that address the extreme risks posed by the most sophisticated AI systems, which could facilitate large-scale criminal or terrorist activity. The authors argue that immediate action is required from both national bodies and international governance entities to establish and enforce standards that prevent recklessness and misuse of the technology.
A central focus of the paper is the allocation of resources: it calls for at least one-third of AI research and development funding to be dedicated to ensuring the safety and ethical use of AI systems. This financial commitment, the authors stress, is crucial given that progress in AI technology has far outpaced the establishment of adequate safety measures.
Yoshua Bengio, a pre-eminent AI researcher often referred to as the 'godfather of AI,' underlined the pressing nature of these investments in AI safety. Bengio's concerns revolve around the pace of AI development, which, he argues, is significantly outstripping the precautions being taken to safeguard against potential risks.
These calls for increased safety measures come amid warnings from academics and industry leaders, including Elon Musk, about the potential risks associated with AI. Some companies, meanwhile, have raised concerns about compliance costs and disproportionate liability risks. British computer scientist Stuart Russell dismissed those objections, noting that there are more regulations on sandwich shops than on AI companies.
At present, no country has comprehensive regulation focused solely on AI safety. Proposed regulations in the UK and the European Union have yet to become law, as lawmakers continue to work through several outstanding issues. This regulatory gap is a significant concern: recent state-of-the-art AI models are, in the view of some experts, too powerful and too consequential to be developed without proper precautions and oversight.