Cadastre-se agora para um orçamento mais personalizado!

NOTÍCIAS QUENTES

Tech bigwigs: Hit the brakes on AI rollouts

13 de novembro de 2023 Hi-network.com

More than 1,100 technology luminaries, leaders,andscientists have issued a warning against labs performing large-scale experiments with artificial intelligence (AI) more powerful than ChatGPT, saying the technology poses a grave threat to humanity.

In an open letter published by Future of Life Institute, a nonprofit organization with the mission to reduce global catastrophic and existential risks to humanity, Apple co-founder Steve Wozniak and SpaceX and Tesla CEO Elon Musk joined other signatories in agreeing AI poses "profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs."

The petition called for a six-month pause on upgrades to generative AI platforms, such as GPT-4, which is the large language model (LLM) powering the popular ChatGPT natural language processing chatbot. The letter, in part, depicted a dystopian future reminiscent of those created by artificial neural networks in science fiction movies, such asThe Terminator and The Matrix. The letter pointedly questions whether advanced AI could lead to a "loss of control of our civilization."

The missive also warns of political disruptions "especially to democracy" from AI: chatbots acting as humans could flood social media and other networks with propaganda and untruths. And it warned that AI could "automate away all the jobs, including the fulfilling ones."

The letter called on civic leaders - not the technology community - to take charge of decisions around the breadth of AI deployments.

Policymakers should work with the AI community to dramatically accelerate development of robust AI governance systems that, at a minimum, include new AI regulatory authorities, oversight, and tracking of highly capable AI systems and large pools of computational capability. The letter also suggested provenance and watermarking systems be used to help distinguish real from synthetic content and to track model leaks, along with a robust auditing and certification ecosystem.

"Contemporary AI systems are now becoming human-competitive at general tasks," the letter said. "Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders."

(The UK government today published a white paper outlining plans to regulate general-purpose AI, saying it would "avoid heavy-handed legislation which could stifle innovation," and instead rely on existing laws.)

Avivah Litan, a vice president and distinguished analyst at Gartner Research, said the warning from tech leaders is spot on, and currently there is no technology to ensure authenticity or accuracy of the information being generated by AI tools such as GPT-4.

The greater concern, she said, is that OpenAI already plans to release GPT-4.5 in about six months, and GPT-5 about six months after that. "So, I'm guessing that's the six-month urgency mentioned in the letter," Litan said. "They're just moving full steam ahead."

The expectation of GPT-5 is it will be an artificial general intelligence, or AGI, where the AI becomes sentient and can start thinking for itself. At that point, it continues to grow exponentially smarter over time. 

"Once you get to AGI, it's like game over for human beings, because once the AI is as smart as a human, it's as smart as [Albert] Einstein, then once it becomes as smart as Einstein, it becomes as smart as 100 Einsteins in a year," Litan said. "It escalates completely out of control once you get to AGI. So that's the big fear. At that point, humans have no control. It's just out of our hands."

Anthony Aguirre, a professor of physics at UC Santa Cruz and executive vice president of Future of Life, said only the labs themselves know what computations they are running.

"But the trend is unmistakable," he said in an email reply toComputerworld. "The largest-scale computations are increasing size by about 2.5 times per year. GPT-4's parameters were not disclosed by OpenAI, but there is no reason to think this trend has stopped or even slowed."

The Future of Life Institute argued that AI labs are locked in an out-of-control race to develop and deploy "ever more powerful digital minds that no one - not even their creators - can understand, predict, or reliably control."

Signatories included scientists at DeepMind Technologies, a British AI research lab and a subsidiary Google parent firm Alphabet. Google recently announced Bard, an AI-based conversational chatbot it developed using the LaMDA family of LLMs.

LLMs are deep learning algorithms - computer programs for natural language processing - that can produce human-like responses to queries. The generative AI technology can also produce computer code, images, video and sound.

Microsoft, which has invested more than$10 billion in ChatGPT and GPT-4 creator OpenAI, said it had no comment at this time. OpenAI and Google also did not immediately respond to a request for comment.

Jack Gold, principal analyst with industry resarch firm J. Gold Associates, believes the biggest risk is training the LLMs with biases. So, for example, a developer could purposely train a model with bias against "wokeness," or against conservatism, or make it socialist friendly or support white supremacy.

"These are extreme examples, but it certainly is possible (and probable) that the models will have biases," Gold said in an email reply toComputerworld. "I see that as a bigger short-to-middle-term risk than job loss - especially if we assume the Gen AI is accurate and to be trusted. So the fundamental question around trusting the model is, I think, critical to how to use the outputs."

Andrzej Arendt, CEO of IT consultancy Cyber Geeks, said while generative AI tools are not yet able to deliver the highest quality software as a final product on their own, "their assistance in generating pieces of code, system configurations or unit tests can significantly speed up the programmer's work.

"Will it make the developers redundant? Not necessarily - partly because the results served by such tools cannot be used without question; programmer verification is necessary," Arendt continued. "In fact, changes in working methods have accompanied programmers since the beginning of the profession. Developers' work will simply shift to interacting with AI systems to some extent."

The biggest changes will come with the introduction of full-scale AI systems, Arendt said, which can be compared to the industrial revolution in the 1800s that replaced an economy based on crafts, agriculture, and manufacturing.

"With AI, the technological leap could be just as great, if not greater. At present, we cannot predict all the consequences," he said.

Vlad Tushkanov, lead data scientist at Moscow-based cybersecurity firm Kaspersky, said integrating LLM algorithms into more services can bring new threats. In fact, LLM technologists, are already investigating attacks, such as prompt injection, that can be used against LLMs and the services they power.

"As the situation changes rapidly, it is hard to estimate what will happen next and whether these LLM peculiarities turn out to be the side effect of their immaturity or if they are their inherent vulnerability," Tushkanov said. "However, businesses might want to include them into their threat models when planning to integrate LLMs into consumer-facing applications."

That said, LLMs and AI technologies are useful and already automating an enormous amounts of "grunt work" that is needed but neither enjoyable nor interesting for people to do. Chatbots, for example, can sift through millions of alerts, emails, probable phishing web pages and potentially malicious executables daily.

"This volume of work would be impossible to do without automation," Tushkanov said. "...Despite all the advances and cutting-edge technologies, there is still an acute shortage of cybersecurity talent. According to estimates, the industry needs millions more professionals, and in this very creative field, we cannot waste the people we have on monotonous, repetitive tasks."

Generative AI and machine learning won't replace all IT jobs, including people who fight cybersecurity threats, Tushkanov said. Solutions for those threats are being developed in an adversarial environment, where cybercriminals work against organizations to evade detection.

"This makes it very difficult to automate them, because cybercriminals adapt to every new tool and approach," Tushkanov said. "Also, with cybersecurity precision and quality are very important, and right now large language models are, for example, prone to hallucinations (as our tests show, cybersecurity tasks are no exception)." 

The Future of Life Institute said in its letter that with guardrails, humanity can enjoy a flourishing future with AI. 

"Engineer these systems for the clear benefit of all, and give society a chance to adapt," the letter said. "Society has hit pause on other technologies with potentially catastrophic effects on society. We can do so here. Let's enjoy a long AI summer, not rush unprepared into a fall."

tag-icon Tags quentes : Inteligência artificial Segurança A Microsoft Google Chatbots Tecnologia Emergente Privacidade

Copyright © 2014-2024 Hi-Network.com | HAILIAN TECHNOLOGY CO., LIMITED | All Rights Reserved.
Our company's operations and information are independent of the manufacturers' positions, nor a part of any listed trademarks company.