
AI tools could leave companies liable for anti-bias missteps

November 13, 2023 Hi-network.com

As lawmakers and others work to address privacy, security, and bias problems with generative artificial intelligence (AI), experts warned companies this week that their tech suppliers won't be left holding the bag when something goes wrong - they will be.

A panel of three AI and legal experts held a press conference Wednesday in the wake of several government and private business initiatives aimed at holding AI creators and users more responsible.

Miriam Vogel, CEO of the nonprofit EqualAI, an organization founded five years ago to reduce unconscious bias and other "harms" in AI systems, joined two other experts to address potential pitfalls.

Vogel, who is chair of the White House National AI Advisory Committee and a former associate deputy attorney general, said that while AI is a powerful tool that can create tremendous business efficiencies, organizations using it must be "hypervigilant" that AI systems don't perpetuate discrimination or create new forms of it.

"When creating EqualAI, the founders realized that bias and related harms are age-old issues in new medium. Obviously here, it can be harder to detect, and the consequences can grief and much graver," Vogel said. (EqualAI trains and advises companies on the responsible use of AI.)

Vogel was joined by Cathy O'Neil, CEO of ORCAA, a consulting firm that audits algorithms - including AI systems - for compliance and safety, and Reggie Townsend, vice president for data ethics at analytics software vendor SAS Institute and an EqualAI board member.

The panel argued that managing the safety and biases of AI is less about being tech experts and more about management frameworks that span technologies.

AI in many forms has been around for decades, but it wasn't until computer processors could support more sophisticated models and generative AI platforms such as ChatGPT that concerns over biases, security, and privacy escalated. Over the past six months, issues around bias in hiring, employee evaluation, and promotion have surfaced, spurring municipalities, states, and the US government to create statutes to address the issue.

Even though companies are typically licensing AI software from third-party vendors, O'Neil said, legal liability will be more problematic for users than for AI tech suppliers.

O'Neil worked in advertising technology a decade ago, when she said it was easier to differentiate people based on wealth, gender, and race. "That was the normalized approach to advertising. It was pretty clear from the get-go that this could go wrong. It's not that hard to find examples. Now, it's 10 years later and we know things have gone wrong."

Looking for points of failure

Facial recognition algorithms, for example, often work far better for white men and much worse for black women. The harms often fall to people who've historically been marginalized.

EqualAI offers a certification program for businesses that drums in one question over and over again: For whom might this fail? The question forces company stakeholders to consider everyone who will face an AI-infused application, O'Neil said.

For example, could an automated applicant tracking system discriminate against someone with a mental health condition during a personality test? Could an algorithm used by an insurance company to set premiums unlawfully discriminate against someone based on ethnicity, sex, or other factors?
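One way auditors make that question concrete is to check outcomes by group. As a purely illustrative sketch - the data, group labels, and threshold handling below are hypothetical, and the panel did not describe any specific code - here is the EEOC's well-known "four-fifths" adverse-impact guideline applied to a screening tool's selection rates:

```python
# A minimal sketch of a disparate-impact check an auditor might run on a
# hiring tool's outcomes. All data and group labels here are hypothetical.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Map each group to its selection rate (selected / applicants)."""
    return {group: selected / applicants
            for group, (selected, applicants) in outcomes.items()}

def four_fifths_check(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Compare each group's selection rate to the highest-rate group.

    Under the EEOC's "four-fifths" guideline, a ratio below 0.8 is a
    common red flag for adverse impact and warrants closer review.
    """
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

# Hypothetical outcomes: (candidates selected, candidates who applied)
screening_outcomes = {
    "group_a": (48, 100),
    "group_b": (30, 100),
}

for group, ratio in four_fifths_check(screening_outcomes).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} [{flag}]")
```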

"This is a hole EqualAI has filled. There is no one else doing this," O'Neil said. "The good news is it's not rocket science. It's not impossible to anticipate and put guard rails up to ensure people are protected from harm.

"How would you feel if you walked onto an airplane and saw no one in the cockpit? Each of the dials in an airplane is monitoring something, whether it's the air speed or amount of fuel in the tanks. They're monitoring the overall functioning of the system.

"We don't have cockpits for AI, but we should because we're basically flying blind often," O'Neil said. "So, you should be asking yourself, if you're a company..., what could go wrong, who could get hurt, how do we measure that, and what are the minimum and maximum we'd want to see in those measurements?

"None of it is really that complicated. We're talking about safety," she added. "The EEOC (Equal Employment Opprtunity Commission) has been very clear that they'll use all the civil rights laws in their power to govern AI systems in the same way they would any action. It doesn't matter whether it's an action recommended to you by an AI system. You're liable either way.

"They've also taken the step of pointing out specific laws of particular concern, in part, because so many AI systems are violating these laws, such as the Americans with Disabilities Act," Vogel said.

For example, voice recognition software is often trained on English speakers, meaning outputs can be worse for people with speech impediments or heavy non-native accents. Facial recognition software can often misread, or be unable to read, the faces of minorities.

"If you're a woman, you're also not going to be heard as well as a man based on the information from which [the recognition software] was trained," Vogel said.

Early regulatory efforts need to be stronger

Townsend said a non-binding agreement struck July 21 between the White House and seven leading AI development companies to work toward making their technology safe and secure didn't go far enough.

"I'd love to see these organizations...ensure there is adequate representation at the table making decisions. I don't think there was one woman who was a part of that display," Townsend said. "I want to make sure there are people at the table who've lived experiences and who look and feel different than those folks who were a part of the conversation. I'm certain all those organizations have those kinds of folks."

On Wednesday - the same day as the panel discussion - ChatGPT creator OpenAI also announced the Frontier Model Forum, an industry body to promote the safe and responsible development of AI systems. Along with advancing AI safety research, the forum's stated mission is "identifying best practices and standards, and facilitating information sharing among policymakers and industry."

The panelists said the Forum is an important development as it's another step in the process of including the entire AI ecosystem in a conversation around safety, privacy, and security. But they also cautioned that "big, well-funded companies" shouldn't be the only ones involved - and scrutiny needs to go beyond just generative AI.

"The AI conversation needs to be one that goes well beyond this one model. There are AI models in finance, AI models in retail, we use AI models on our phones for navigation," Townsend said. "The conversation around AI now is around large language models. We have to be diligent in our conversations around AI and their motivations."

Townsend also compared the building and management of AI systems to an electrical system: Engineers and scientists are responsible for the safe generation of electricity; electricians are responsible for wiring electrical systems; and consumers are responsible for the proper use of the electricity.

"That requires us all in ecosystem or supply chain to think about our responsibility and about outputs and inputs," Townsend said.

A large language model (LLM) is an algorithm, or a collection of code, that accepts inputs and returns outputs. The outputs can be shaped through reinforcement learning and through prompt engineering - teaching the model what the appropriate response to a request should be.
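As a rough illustration of that last point, the sketch below wraps a user's request in an instruction template before sending it to a model. Everything here is hypothetical: `query_llm` is a stand-in for whatever vendor API or local model a company actually licenses, and the guardrail wording is invented for the example.

```python
# A minimal illustration of prompt engineering: the same user request is
# wrapped in instructions that steer the model's response. `query_llm` is a
# hypothetical stand-in for a licensed vendor API or local model.

def query_llm(prompt: str) -> str:
    """Hypothetical call to a licensed LLM; returns the model's text output."""
    raise NotImplementedError("Replace with your vendor's client call.")

SYSTEM_TEMPLATE = (
    "You are a customer-support assistant. Answer in two sentences or fewer. "
    "If the request involves a protected characteristic (race, sex, age, "
    "disability), decline and refer the user to a human reviewer.\n\n"
    "User request: {request}"
)

def guarded_response(request: str) -> str:
    # The guardrail lives in the prompt, not the model weights: the template
    # teaches the model what an appropriate response should look like.
    return query_llm(SYSTEM_TEMPLATE.format(request=request))
```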

Companies that deploy AI, whether in consumer-facing applications or back-end systems, can't just pass it off as a problem for big tech and the AI vendors. Regardless of whether an organization sells products or services, once it deploys AI, it must think of itself as an AI company, Vogel said.

While companies should embrace all of the efficiencies AI technology brings, Vogel said, it's also critical to consider basic liabilities a company may have. Think about contract negotiations with an AI supplier over liability, and consider how AI tools will be deployed and any privacy laws that may apply.

"You have to have your eyes on all the regular liabilities you'd be thinking about with any other innovation," Vogel said. "Because you're using AI, it doesn't put you in a space outside of the realm of normal. That's why we're very mindful about bringing lawyers on board, because while historically lawyers have not been engaged in AI, they need to be.

"We've certainly been involved in aviation and don't have much legal training in aviation in law school. It's a similar situation here and with any other innovation. We understand the risks and help put in frameworks and safeguards."

Companies using AI should be familiar with the NIST AI Risk Management Framework, the panel said. Organizations should also identify an internal point of contact for employees deploying and using the technology - someone with ultimate responsibility who can provide resources to address problems and make quick decisions.

There also needs to be a process in place and clarity on what stages of the AI lifecycle will require which kind of testing - from acquiring an LLM to training it with in-house data. Testing of AI systems should also be documented so any future evaluations of the technology can take into account what's already been checked and what remains to be done.
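A lightweight way to keep that paper trail is a structured test log. The sketch below shows one hypothetical shape for such a record - the fields, lifecycle stages, and file name are illustrative, not part of any standard the panel cited.

```python
# One lightweight way to document AI testing so later audits can see what was
# already checked. The fields and stages here are illustrative, not a standard.

import json
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class AITestRecord:
    system: str           # which model or application was tested
    lifecycle_stage: str  # e.g. "procurement", "fine-tuning", "pre-deployment"
    test: str             # what was checked (bias, privacy, robustness, ...)
    result: str           # outcome and any thresholds applied
    tested_on: str        # ISO date, for routine re-audits
    owner: str            # the internal point of contact responsible

record = AITestRecord(
    system="resume-screening-model-v2",
    lifecycle_stage="pre-deployment",
    test="four-fifths adverse-impact check on screening data",
    result="all impact ratios >= 0.8; no flags",
    tested_on=date.today().isoformat(),
    owner="ai-governance@example.com",
)

# Append to a running audit log that future evaluations can consult.
with open("ai_test_log.jsonl", "a") as log:
    log.write(json.dumps(asdict(record)) + "\n")
```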

"And, finally, you must do routine auditing. AI will continue to iterate. It's not a one-and-done situation," Vogel said.

