
Senate hearings see a clear and present danger from AI - and opportunities

November 13, 2023 Hi-network.com

There are vital national interests in advancing artificial intelligence (AI) to streamline public services and automate mundane tasks performed by government employees. But the government lacks both the IT talent and the systems needed to support those efforts.

"The federal government as a whole continues to face barriers in hiring, managing, and retaining staff with advanced technical skills - the very skills needed to design, develop, deploy, and monitor AI systems," said Taka Ariga, chief data scientist at the US Government Accountability Office.

Daniel Ho, associate director of the Institute for Human-Centered Artificial Intelligence (HAI) at Stanford University, agreed, saying that by one estimate the federal government would need to hire about 40,000 IT workers to address the cybersecurity issues posed by AI.

Artificial intelligence tools were the subject of two separate hearings on Capitol Hill. Before the Homeland Security and Governmental Affairs Committee, a panel of five AI experts testified that while adoption of AI technology is inevitable, removing human oversight of it poses enormous risks. And at a hearing of the Senate Judiciary subcommittee on privacy, technology, and the law, OpenAI CEO Sam Altman was joined by IBM executive Christina Montgomery and New York University professor emeritus Gary Marcus in giving testimony.

The overlapping hearings covered a variety of issues and concerns about the rapid rise and evolution of AI-based tools. Beyond the need for more skilled workers in the US government, officials raised concerns about agencies contending with biases introduced when AI algorithms are built on faulty or corrupt data, fears about election disinformation, and the need for better transparency about how AI tools - and the underlying large language models - actually work.

In opening remarks, Homeland Security and Governmental Affairs committee Chairman Sen. Gary Peters (D-MI) said the US must take the global lead in AI development and regulation by setting standards that can "address potential risks and harms."

One of the most obvious threats? The data used by AI chatbots such as OpenAI's ChatGPT to produce answers is often inaccessible to anyone outside the vendor community - and even engineers who design AI systems don't always understand how the systems reach conclusions.

In other words, AI systems can be black boxes using proprietary technology often backed by bad data to produce flawed results.

Bad data in, bad results out?

Peters pointed to a recent study by Stanford University that uncovered a flawed Internal Revenue Service AI algorithm used to determine who should be audited. The system selected Black taxpayers at five times the rate of taxpayers of other races.
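For readers unfamiliar with how such a disparity is quantified, here is a minimal sketch in Python. The figures are fabricated for illustration; the actual Stanford study analyzed IRS audit records, and its methodology was far more involved.

```python
# Hypothetical illustration of measuring an audit-selection disparity:
# compare each group's selection rate and take the ratio.
from collections import defaultdict

# (group, was_audited) pairs; made-up records, not real IRS data
records = [("A", True)] * 50 + [("A", False)] * 950 \
        + [("B", True)] * 10 + [("B", False)] * 990

counts = defaultdict(lambda: [0, 0])  # group -> [audited, total]
for group, audited in records:
    counts[group][0] += int(audited)
    counts[group][1] += 1

rates = {g: audited / total for g, (audited, total) in counts.items()}
print(rates)                    # {'A': 0.05, 'B': 0.01}
print(rates["A"] / rates["B"])  # 5.0 -> group A selected at 5x group B's rate
```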

Peters also referenced AI-driven systems deployed by at least a dozen states to determine eligibility for disability benefits, "which resulted in the system denying thousands of recipients this critical assistance that helps them live independently," Peters said.

Because the disability benefits system was considered "proprietary technology" by the states, citizens were unable to learn why they were denied benefits or to appeal the decision, according to Peters. Privacy laws that kept the data and process hidden weren't designed to handle AI applications and issues.

"As agencies use more AI tools, they need to ensure they're securing and appropriately using any data inputs to avoid accidental disclosures or unintended uses that harm Americans' rights or civil liberties," Peters said.

Richard Eppink, a lawyer with the American Civil Liberties Union of Idaho Foundation, noted that a class action lawsuit has been brought by the ACLU representing about 4,000 Idahoans with developmental and intellectual disabilities who were denied funds by the state's Medicaid program because of an AI-based system. "We can't allow proprietary AI to hold due process rights hostage," Eppink said.

At the other hearing on AI, Altman was asked whether citizens should be concerned that elections could be gamed by large language models (LLMs) such as GPT-4 and its chatbot application, ChatGPT.

"It's one of my areas of greatest concern," he said. "The more general ability of these models to manipulate, persuade, to provide one-on-one interactive disinformation - given we're going to face an election next year and these models are getting better, I think this is a significant area of concern."

Regulation, Altman said, would be "wise" because people need to know if they're talking to an AI system or looking at content - images, videos or documents - generated by a chatbot. "I think we'll also need rules and guidelines about what is expected in terms of disclosure from a company providing a model that could have these sorts of abilities we're talking about. So, I'm nervous about it."

People, however, will adapt quickly, he added, pointing to Adobe's Photoshop software as something that at first fooled many until its capabilities were realized. "And then pretty quickly [people] developed an understanding that images might have been Photoshopped," Altman said. "This will be like that, but on steroids."

Watermarks to designate AI content

Lynne Parker, director of the AI Tennessee Initiative at the University of Tennessee, said one method of identifying content generated by AI tools is to include watermarks. The technology would allow users to understand the content's provenance, or where it came from.
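As a minimal sketch of the idea, the hypothetical Python below attaches an HMAC-signed provenance record to generated text and verifies it later. This is only an illustration under assumed names (SECRET_KEY, tag_content, verify_content); deployed schemes, such as C2PA-style content credentials or statistical token watermarks, are far more sophisticated and are designed to survive edits, which this naive tag does not.

```python
# Hypothetical metadata-style provenance watermark: bind text to its
# generating model with an HMAC, so tampering or forgery is detectable.
import base64
import hashlib
import hmac
import json

SECRET_KEY = b"vendor-signing-key"  # assumed vendor-held secret


def tag_content(text: str, model: str) -> dict:
    """Attach a provenance record binding the text to its generator."""
    record = {"model": model,
              "sha256": hashlib.sha256(text.encode()).hexdigest()}
    payload = json.dumps(record, sort_keys=True).encode()
    mac = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()
    record["mac"] = base64.b64encode(mac).decode()
    return record


def verify_content(text: str, record: dict) -> bool:
    """Check the record is authentic and the text is unmodified."""
    claimed = {"model": record["model"], "sha256": record["sha256"]}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = base64.b64encode(
        hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()).decode()
    return (hmac.compare_digest(expected, record["mac"])
            and claimed["sha256"] == hashlib.sha256(text.encode()).hexdigest())


answer = "Generated answer text."
tag = tag_content(answer, model="example-llm")
print(verify_content(answer, tag))        # True: provenance checks out
print(verify_content(answer + "!", tag))  # False: content was altered
```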

Committee member Sen. Maggie Hassan (D-NH) said there would be a future hearing on the topic of watermarking AI content.

Altman also suggested the US government follow a three-point AI oversight plan:

  • Form a government agency charged with licensing large AI models and revoking licenses for models that don't meet government standards.
  • Create LLM safety standards that include evaluations of whether models are dangerous. As with other products, LLMs would have to pass safety tests, such as demonstrating they can't "self-replicate," go rogue, and start acting on their own.
  • Create an independent AI-audit framework overseen by independent experts.

Altman, however, didn't address concerns about transparency into how LLMs are trained - a step Sen. Marsha Blackburn (R-TN) and other committee members have suggested.

Parker, too, called for federal action - guidelines that would allow the US government to responsibly leverage AI. She then listed 10 such guidelines, including the protection of citizen rights, the use of established frameworks such as NIST's AI Risk Management Framework, and the creation of a federal AI council.

Onerous or heavy-handed oversight that hinders the development and deployment of AI systems isn't needed, Parker argued. Instead, existing proposed guidelines, such as the Office of Science and Technology Policy's Blueprint for an AI Bill of Rights, would address high-risk issues.

Defining the responsible use of AI is also important, something for which agencies like the Office of Management and Budget should be given responsibility.

One concern: vendors of chatbots and other AI technologies are working hard to obtain public information, such as cell phone records and citizen addresses, from state and federal agencies to assist in developing new applications. Those applications could track people and their online habits to better market to them.

China makes an AI push

The Senate committee also heard concerns that China is leading in both AI development and standards. "We seem to be caught in a trap," said Jacob Siegel, senior editor of news at Tablet Magazine. "There's a vital national interest in promoting the advancement of AI, yet at present the government's primary use of AI appears to be as a political weapon to censor information that it or its third-party partners deem harmful."

Siegel, whose online magazine focuses on Jewish news and culture, served as an intelligence officer and is a veteran of the wars in Iraq and Afghanistan.

American AI governance to date, he argued, is emulating the Chinese model of top-down, party-driven social control. "Continuing in this direction will mean the end of our tradition of self-government and the American way of life."

Siegel said his experiences in the war on terror provided him with a "glimpse of the AI revolution." He said the technology is already "remaking America's political system and culture in ways that have already proved incompatible with our system of democracy and self-government and may soon become irreversible."

He pointed to testimony given earlier this month by Jen Easterly, director of the Cybersecurity and Infrastructure Security Agency (CISA), who said China has already established guardrails to ensure AI represents its values. "And the US should do the same," Siegel said.

The Judiciary Committee held a hearing in March to discuss the transformative potential of AI as well as its risks. Today's hearing focused on how AI can help the government offer services more efficiently while avoiding intrusions on privacy and free speech, and avoiding bias.

Concerns about censorship

Sen. Rand Paul (R-KY) painted a particularly ominous, Orwellian scenario in which AI such as ChatGPT not only acts on the erroneous data it's fed, but can also knowingly produce disinformation and censor free speech based on what the government determines is for the greater good.

For example, Paul described how, during the COVID-19 pandemic, a public-private partnership worked in concert with private companies, such as Twitter, to use AI to automate the discovery and deletion of controversial posts about vaccine origins and unapproved treatments.

"The purpose, so they claimed, was to combat foreign malign influence. But, in reality, the government wasn't suppressing foreign misinformation or disinformation. It was working to censor domestic speech by Americans," Paul said. "George Orwell would be proud."

Since 2020, Paul said, the federal government has awarded more than 500 contracts for proprietary AI systems. The senator claimed the contracts went to companies whose technology is used to "mine the internet, identify conversations indicative of harmful narratives, track those threats, and develop countermeasures before messages go viral."

