The European Union has finalised its AI Act, a significant regulatory framework governing the use of AI within its member states. Published in the EU's Official Journal, the law will officially come into effect on 1 August, with implementation phased over the next several years to accommodate staggered compliance deadlines. By mid-2026, most provisions are expected to be fully applicable.
Under the AI Act, different obligations are imposed on AI developers based on the perceived risk of their applications. Low-risk uses of AI will generally remain unregulated, while high-risk applications, such as biometric uses in law enforcement and critical infrastructure, will face stringent requirements around data quality and anti-bias measures. The law also introduces transparency requirements for developers of general-purpose AI models, like OpenAI's GPT, ensuring that the most powerful AI systems undergo systemic risk assessments.
The phased approach begins with a list of prohibited AI uses becoming effective six months after the law's enactment, in early 2025. These include bans on practices such as social credit scoring and the untargeted compilation of facial recognition databases. Codes of practice for AI developers will then follow nine months after the law takes effect, guiding compliance with the new regulations. Concerns have been raised about the influence of AI industry players in shaping these guidelines, prompting efforts to ensure an inclusive drafting process overseen by the newly established EU AI Office.
By August 2025, transparency requirements will apply to general-purpose AI models, while some high-risk AI systems will have until 2027 to comply. These measures reflect the EU's effort to balance innovation with robust regulation, fostering a competitive AI landscape while safeguarding societal values and interests.