G7 leaders have called for the development and adoption of technical standards to ensure the trustworthiness of AI. While acknowledging that different approaches may be taken to achieve trustworthy AI, the leaders stressed that regulation of digital technologies, including AI, should align with shared democratic values. They also expressed concern that the governance of AI has not kept pace with its rapid advancement. The leaders agreed that ministers would discuss the technology under the 'Hiroshima AI Process' and deliver results by the end of the year, as outlined during a working lunch.
In recent weeks, global attention has turned to the regulation of AI. In the EU, the AI Act has received approval from the Civil Liberties and Internal Market committees of the European Parliament. This significant step means the proposal will now progress to plenary adoption in June, ahead of the final stage of the legislative process: negotiations with the EU Council and Commission.
US regulators are approaching AI regulation with greater caution, and the debate remains heated, with no definitive steps taken so far. Last week, global media attention focused on Sam Altman's testimony before the US Congress, where the CEO of OpenAI expressed concerns about AI and called for regulatory measures. Altman outlined his plan for regulating AI, proposing the formation of a new government agency responsible for licensing large AI models, with authority to revoke licences from companies that fail to meet government standards. He emphasised the importance of establishing safety standards for AI models, including evaluating their dangerous capabilities.
In the Far East, China has adopted a more limited approach to AI regulation, releasing draft laws that align with its socialist ideals.