
Mass adoption of generative AI tools is derailing one very important factor, says MIT

June 22, 2023 | Hi-network.com

New artificial intelligence (AI) platforms and tools are emerging every day to assist developers, data scientists, and business analysts. However, this rapid growth is also making organizations' AI portfolios more complex, outpacing their capacity to ensure responsibility and accountability in AI systems.

That's the conclusion from a recent survey of 1,240 executives published by MIT Sloan Management Review and Boston Consulting Group (MIT SMR and BCG), which looked at the progress of responsible AI initiatives, and the adoption of both internally built and externally sourced AI tools - what the researchers call "shadow AI". 

Also: Meet the post-AI developer: More creative, more business-focused

The promise of AI comes with consequences, suggest the study's authors, Elizabeth Renieris (Oxford's Institute for Ethics in AI), David Kiron (MIT SMR), and Steven Mills (BCG): "For instance, generative AI has proven unwieldy, posing unpredictable risks to organizations unprepared for its wide range of use cases."

Many companies "were caught off guard by the spread of shadow AI use across the enterprise," Renieris and her co-authors observe. What's more, the rapid pace of AI advancements "is making it harder to use AI responsibly and is putting pressure on responsible AI programs to keep up."

They warn that the risks posed by ever-expanding shadow AI are increasing, too. For example, companies' growing dependence on a burgeoning supply of third-party AI tools, along with the rapid adoption of generative AI - algorithms (such as ChatGPT, Dall-E 2, and Midjourney) that use training data to generate realistic or seemingly factual text, images, or audio - exposes them to new commercial, legal, and reputational risks that are difficult to track.

The researchers refer to the importance of responsible AI, which they define as "a framework with principles, policies, tools, and processes to ensure that AI systems are developed and operated in the service of good for individuals and society while still achieving transformative business impact."

Another difficulty stems from the fact that a number of companies "appear to be scaling back internal resources devoted to responsible AI as part of a broader trend in industry layoffs," the researchers caution. "These reductions in responsible AI investments are happening, arguably, when they are most needed." 

Also: How to use ChatGPT: Everything you need to know

For example, widespread employee use of the ChatGPT chatbot has caught many organizations by surprise, and could have security implications. The researchers say responsible AI frameworks have not been written to "deal with the sudden, unimaginable number of risks that generative AI tools are introducing".

The research suggests 78% of organizations report accessing, buying, licensing, or otherwise using third-party AI tools, including commercial APIs, pretrained models, and data. More than half (53%) rely exclusively on third-party AI tools and have no internally designed or developed AI technologies of their own. 

Responsible AI programs "should cover both internally built and third-party AI tools," Renieris and her co-authors urge. "The same ethical principles must apply, no matter where the AI system comes from. Ultimately, if something were to go wrong, it wouldn't matter to the person being negatively affected if the tool was built or bought."

While the co-authors caution that "there is no silver bullet for mitigating third-party AI risks, or any type of AI risk for that matter," they urge a multi-pronged approach to ensuring responsible AI in today's wide-open environment.

Also: ChatGPT and the new AI are wreaking havoc

Such approaches could include the following (an illustrative sketch of how a team might operationalize them follows the list):

  • Evaluation of a vendor's responsible AI practices
  • Contractual language mandating adherence to responsible AI principles
  • Vendor pre-certification and audits (where available)
  • Internal product-level reviews (where a third-party tool is integrated into a product or service)
  • Adherence to relevant regulatory requirements or industry standards 
  • Inclusion of a comprehensive set of policies and procedures, such as guidelines for ethical AI development, risk assessment frameworks, and monitoring and auditing protocols
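To make the idea concrete, here is a minimal, purely illustrative sketch - not drawn from the MIT SMR and BCG report - of how a team might encode a third-party vendor review checklist as data, so that reviews are consistent and auditable. Every item name, weight, and score below is a hypothetical placeholder.

```python
# Illustrative sketch only: encoding a third-party AI vendor review
# checklist as data. Item names and weights are hypothetical
# placeholders, not from the MIT SMR and BCG report.
from dataclasses import dataclass

@dataclass(frozen=True)
class ChecklistItem:
    name: str
    weight: int  # relative importance (hypothetical)

CHECKLIST = [
    ChecklistItem("Vendor documents its responsible AI practices", 3),
    ChecklistItem("Contract mandates adherence to responsible AI principles", 3),
    ChecklistItem("Vendor holds a pre-certification or recent audit", 2),
    ChecklistItem("Internal product-level review completed", 3),
    ChecklistItem("Applicable regulations and standards identified", 2),
    ChecklistItem("Risk-assessment and monitoring policies in place", 2),
]

def review_vendor(answers: dict) -> tuple:
    """Return (score, max_score) for a yes/no vendor review."""
    score = sum(item.weight for item in CHECKLIST if answers.get(item.name))
    return score, sum(item.weight for item in CHECKLIST)

if __name__ == "__main__":
    # Example: a hypothetical vendor that passes every check except audits.
    answers = {item.name: True for item in CHECKLIST}
    answers["Vendor holds a pre-certification or recent audit"] = False
    score, max_score = review_vendor(answers)
    print(f"Vendor review score: {score}/{max_score}")  # prints 13/15
```

Encoding the checklist as data rather than prose makes it easy to version, audit, and extend as responsible AI requirements evolve; in practice, the items would map to an organization's own policies and to whatever regulatory requirements apply.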

The specter of legislation and government mandates might make such actions a necessity as AI systems are introduced, the co-authors warn. 


Tags: Artificial Intelligence, Innovation
