
OpenAI assembles team of experts to fight 'catastrophic' AI risks - including nuclear war

October 27, 2023 Hi-network.com
Yuichiro Chino/Getty Images

As AI continues to revolutionize how we interact with technology, there's no denying that it's going to have an incredible impact on our future. There's also no denying that AI has some pretty serious risks if left unchecked. 

Enter a new team of experts assembled by OpenAI. 

Also: Google expands bug bounty program to include rewards for AI attack scenarios

Designed to help fight what it calls "catastrophic" risks, the team of experts at OpenAI -- called Preparedness -- plans to evaluate current and projected future AI models for several risk factors. Those include individualized persuasion (tailoring the content of a message to what the recipient wants to hear), overall cybersecurity, autonomous replication and adaptation (that is, an AI changing itself on its own), and even extinction-level threats like chemical, biological, radiological, and nuclear attacks.

If AI starting a nuclear war seems a little far-fetched, remember that it was just earlier this year that a group of top AI researchers, engineers, and CEOs including Google DeepMind CEO Demis Hassabis ominously warned, "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

How could AI possibly cause a nuclear war? Computers are ever-present in determining when, where, and how military strikes happen these days, and AI will most certainly be involved. But AI is prone to hallucinations and doesn't necessarily weigh decisions the way a human would. In short, AI might decide it's time for a nuclear strike when it's not.

Also: Organizations are fighting for the ethical adoption of AI. Here's how you can help

"We believe that frontier AI models, which will exceed the capabilities currently present in the most advanced existing models," a statement from OpenAI read, "have the potential to benefit all of humanity. But they also pose increasingly severe risks."

To help keep AI in check, OpenAI says, the team will focus on three main questions: 

  • When purposefully misused, just how dangerous are the frontier AI systems we have today and those coming in the future?
  • If frontier AI model weights were stolen, what exactly could a malicious actor do?
  • How can a framework that monitors, evaluates, predicts, and protects against the dangerous capabilities of frontier AI systems be built?

Heading this team is Aleksander Madry, Director of the MIT Center for Deployable Machine Learning and a faculty co-lead of the MIT AI Policy Forum.

Also: The ethics of generative AI: How we can harness this powerful technology

To expand its research, OpenAI also launched what it's calling the "AI Preparedness Challenge" for catastrophic misuse prevention. The company is offering up to $25,000 in API credits to up to 10 top submissions that present probable, but potentially catastrophic, misuses of OpenAI's models.

