A 91-page white paper about regulating artificial intelligence, issued Wednesday by the United Kingdom's Secretary of State for Science, Innovation and Technology, and presented to the British parliament, hinges on the notion that AI is mostly defined by its uncertainty and unpredictability.
"To regulate AI effectively, and to support the clarity of our proposed framework, we need a common understanding of what is meant by 'artificial intelligence'," the white paper states.
"There is no general definition of AI that enjoys widespread consensus," and, "that is why we have defined AI by reference to the two characteristics that generate the need for a bespoke regulatory response."
That definition focuses on just one corner of AI, the most successful to date: machine learning, or neural networks. It zeroes in on two characteristics of such programs, namely, that it is often unclear why they function as they do, and that they can produce unexpected output.
Britain's Secretary of State for Science, Innovation and Technology proposed a framework for regulating AI that would be industry-friendly while also, it says, building public trust in the technology.
The white paper states:
- The 'adaptivity' of AI can make it difficult to explain the intent or logic of the system's outcomes:
  - AI systems are 'trained' (once or continually) and operate by inferring patterns and connections in data which are often not easily discernible to humans.
  - Through such training, AI systems often develop the ability to perform new forms of inference not directly envisioned by their human programmers.
- The 'autonomy' of AI can make it difficult to assign responsibility for outcomes:
  - Some AI systems can make decisions without the express intent or ongoing control of a human.
The white paper proposes that, because of this uncertainty, regulators should not regulate the technology itself but rather selectively regulate the outcomes of its use.
"Our framework is context-specific," it states. "We will not assign rules or risk levels to entire sectors or technologies.
"Instead, we will regulate based on the outcomes AI is likely to generate in particular applications.
The intent, the report makes clear, is that different uses have different consequences and differing degrees of seriousness:
For example, it would not be proportionate or effective to classify all applications of AI in critical infrastructure as high risk. Some uses of AI in critical infrastructure, like the identification of superficial scratches on machinery, can be relatively low risk. Similarly, an AI-powered chatbot used to triage customer service requests for an online clothing retailer should not be regulated in the same way as a similar application used as part of a medical diagnostic process.
The white paper, titled "A pro-innovation approach to AI regulation," is at pains to reassure industry it will not squelch development of the technology. The paper emphasizes throughout the need to keep Britain commercially competitive by not taking a heavy hand in regulation, while trying to promote trust of AI among the public.
"Industry has warned us that regulatory incoherence could stifle innovation and competition by causing a disproportionate amount of smaller businesses to leave the market," is one of many concerns mentioned by the report.
The preface by Secretary Michelle Donelan declares "this white paper will ensure we are putting the UK on course to be the best place in the world to build, test and use AI technology." The report notes that surveys have ranked Britain "third in the world for AI research and development," an important position to preserve, Donelan suggests.
"A future AI-enabled country is one in which our ways of working are complemented by AI rather than disrupted by it," writes Donlan, citing the prospect not only of medical breakthroughs such as AI-aided diagnoses, but also the prospect that AI can automate menial workplace tasks.
The paper suggests regulation adopt a light touch, at least initially. Regulators are to be encouraged within their areas of speciality to try things and see what works, and not to be burdened with "statutory" rules for AI.
"The principles will be issued on a non-statutory basis and implemented by existing regulators," it states. "This approach makes use of regulators' domain-specific expertise to tailor the implementation of the principles to the specific context in which AI is used."
At a later date, the report says, "when parliamentary time allows, we anticipate introducing a statutory duty on regulators requiring them to have due regard to the principles," although it leaves open the prospect such statutory rules may not be necessary if regulators are successful in exercising their own judgment.
The white paper is informed by earlier government publications. A direct precursor is the policy paper released last July by the Secretary of State for Digital, Culture, Media and Sport, titled "Establishing a pro-innovation approach to regulating AI."
A companion paper released this month by Sir Patrick Vallance, the Government Chief Scientific Adviser, titled "Pro-innovation Regulation of Technologies Review," makes several recommendations that are incorporated into the Secretary of State's white paper.
Chief among them is the proposal for a business-friendly "sandbox": a facility overseen by regulators that would incubate AI technologies, where companies could try out new AI programs under relaxed rules that allow greater experimentation.
The paper emphasizes that as other countries move forward with regulatory proposals, there is an urgency for Britain not to be left behind. There is a "short time frame for government intervention to provide a clear, pro-innovation regulatory environment in order to make the UK one of the top places in the world to build foundational AI companies," it states.
Many concerns raised by the use of AI, including the ethics of the carbon footprint produced by training so-called large language models, are left out of the report.
For example, an open letter published this week by the think tank the Future of Life Institute, signed by more than 1,300 individuals, including scientists and tech industry members, calls for a moratorium on developing large language models, warning that proper care is not being taken against dangers such as letting "machines flood our information channels with propaganda and untruth."
The UK paper makes no such sweeping recommendations, which it says are outside the scope of British regulation:
The proposed regulatory framework does not seek to address all of the wider societal and global challenges that may relate to the development or use of AI. This includes issues relating to access to data, compute capability, and sustainability, as well as the balancing of the rights of content producers and AI developers. These are important issues to consider, especially in the context of the UK's ability to maintain its place as a global leader in AI, but they are outside of the scope of our proposals for a new overarching framework for AI regulation.