The European Union is about to pass the world's first comprehensive AI law. A final text of the legislation, known as the AI Act, could be agreed upon this week. To become enforceable law early next year, it must then be formally adopted by the EU parliament. The law has been in development for four years and will govern the use of AI in the EU based on a classification of the risks that AI systems pose to users: the higher the risk, the stricter the rules.
AI systems that negatively affect safety or fundamental rights will be considered high-risk.
One area of disagreement that remains is the use of live facial recognition. Member states want to preserve its use for border security and public order, but members of parliament argue that deploying it in public spaces is an invasion of privacy.
The AI Act will require developers of AI 'foundation models' to register in an EU database and to 'guarantee robust protection of fundamental rights.' Tech companies will also have to regularly disclose the energy consumed in training their models, both to increase transparency and to guide public policy.
The EU AI Act is a significant step towards regulating AI and its impact on society. Generative AI foundation models, like the one behind OpenAI's ChatGPT, will have to disclose when content is AI-generated, prevent the generation of illegal content, and disclose the copyrighted material their developers used to train the system.
The legislation aims to provide safeguards for how AI technology develops and will give Brussels the power to ban AI applications and services that could harm EU citizens. The effect could be felt beyond the EU's borders, with a 'Brussels effect' similar to the extraterritorial influence of the General Data Protection Regulation. The final touches on the EU AI Act come days before a packed international AI agenda: the global AI safety summit opens in the UK, and at the White House, President Biden hosts a "Safe, Secure, and Trustworthy AI" event to unveil his long-awaited AI executive order.