Days after OpenAI CEO Sam Altman said the company might have to cease operations in Europe if the EU's AI Act regulations passed in their current form, he has seemingly walked back his comments.
Despite recently telling US lawmakers he was in favor of regulating AI, Altman told reporters in the UK earlier this week that he had "many concerns" about the EU's AI Act, even accusing the bloc of "over-regulating."
OpenAI is the Microsoft-backed firm that has developed the groundbreaking but somewhat controversial ChatGPT generative AI system.
"We will try to comply, but if we can't comply we will cease operating," said Altman, according to a report from The Financial Times. The act is currently being debated by representatives of the EU's Parliament, Council and Commission, and is due to be finalized next year.
However, in a tweet posted on Friday morning, Altman appeared to dial down the rhetoric, writing: "very productive week of conversations in europe about how to best regulate AI! we are excited to continue to operate here and of course have no plans to leave."
His earlier comments had angered lawmakers in Europe, with a number of politicians arguing that the level of regulation proposed by the EU was necessary to address the concerns around generative AI.
"Let's be clear, our rules are put in place for the security and well-being of our citizens and this cannot be bargained," EU Commissioner Thierry Breton told Reuters.
"Europe has been ahead of the curve designing a solid and balanced regulatory framework for AI which tackles risks related to fundamental rights or safety, but also enables innovation for Europe to become a frontrunner in trustworthy AI," he said.
Speaking at a Senate Judiciary subcommittee hearing on privacy, technology, and the law earlier this month, Altman told US lawmakers that regulation would be "wise" because people need to know whether they're talking to an AI system or looking at content (images, videos or documents) generated by a chatbot.
When asked during the hearing whether citizens should be concerned that elections could be gamed by large language models (LLMs) such as GPT-4 and its chatbot application ChatGPT, Altman said that it was one of his "areas of greatest concern."
"The more general ability of these models to manipulate, persuade, to provide one-on-one interactive disinformation - given we're going to face an election next year and these models are getting better, I think this is a significant area of concern," he said.
"I think we'll also need rules and guidelines about what is expected in terms of disclosure from a company providing a model that could have these sorts of abilities we're talking about. So, I'm nervous about it."