
Enterprises are preparing to build their own LLMs - why that's a smart move

Jun 12, 2024 | Hi-network.com

Are enterprises ready to build and maintain their own internal large language models (LLMs)? 

Artificial intelligence -- especially the generative variety -- has captured the intense interest of tech professionals and executives alike. Consider this: Despite all the recent talk of budget cuts for cloud services and infrastructure, the money faucet has opened wide for AI funding. But is it flowing outward to outside services, or inward to resident talent and resources?  

Also: Here's how Apple's keeping your cloud-processed AI data safe (and why it matters)

Outside entities such as OpenAI, Microsoft, and Google are seen as the primary providers of LLMs, infrastructure support, and expertise. However, interest in internal LLM projects is also on the rise. A new TCS survey of 1,300 CEOs finds that about half of those surveyed (51%) are planning to build their own generative AI implementations. That means a lot of work ahead -- but fortunately, the groundwork has already been laid by the publicly available LLMs. 

"The foundational LLMs -- such as GPT, Claude, Llama -- can best be described as world-wise; they can be seen as repackaging Internet knowledge," said Dr. Harrick Vin, chief technology officer for TCS and co-author of the study. "They also possess high levels of multi-modal understanding and generation capabilities, along with reasoning abilities."  

Constructing these foundational models "is complex and expensive," said Vin, who pointed out that internal enterprise models would build upon the capabilities of these models. "These models will leverage the basic skills of foundational models -- such as language understanding and generation, reasoning, and general knowledge. But they need to extend and specialize them to the industry, enterprise and activity context."

Fortunately, "construction of such specialized models is far easier and less expensive compared to the development of foundational models," said Vin. "In fact, the relative ease of specializing foundational LLMs, which are broad-AI models, to create purpose-specific AI models and solutions is the primary reason for the democratization of AI."

These enterprise-specific LLMs "refer to industry, enterprise, and activity-wise models constructed using some of the foundational models, either open-source or commercial," he continued. "We believe that an AI-mature enterprise in the future will have hundreds, or thousands, of purposive AI models, all built by compositing capabilities of foundational models with specific enterprise-specific capabilities."
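The compositing Vin describes -- layering enterprise-specific capabilities on top of a foundational model -- is often realized in practice with retrieval-augmented generation: enterprise documents supply the context, and the foundational model supplies the language and reasoning. Below is a minimal, hypothetical sketch; the keyword-overlap retrieval, the `Document` type, and the prompt format are illustrative assumptions, not TCS's method or any particular vendor's API.

```python
from dataclasses import dataclass

@dataclass
class Document:
    source: str  # which enterprise system the record came from
    text: str

def retrieve(query: str, docs: list[Document], k: int = 2) -> list[Document]:
    """Rank enterprise documents by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    ranked = sorted(docs,
                    key=lambda d: len(terms & set(d.text.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[Document]) -> str:
    """Compose a prompt that grounds the foundational model in enterprise context."""
    context = "\n".join(f"[{d.source}] {d.text}" for d in docs)
    return (f"Answer using only the context below.\n\n"
            f"Context:\n{context}\n\nQuestion: {query}")

# Toy "enterprise data islands"
docs = [
    Document("hr-policy", "Employees accrue 20 vacation days per year."),
    Document("it-policy", "Laptops must be encrypted with full-disk encryption."),
]

query = "How many vacation days do employees get?"
prompt = build_prompt(query, retrieve("vacation days employees", docs))
```

In a real deployment the keyword overlap would be replaced by embedding similarity and the prompt would be sent to a hosted or self-hosted foundational model; the composition pattern -- retrieve enterprise context, then delegate generation -- stays the same.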

Also: Businesses' cloud security fails are 'concerning' - as AI threats accelerate

Beyond building and implementing models, the business needs to be prepped for generative AI. More than half of respondents (55%) said they are actively making changes right now to their business or operating models, or to their products and services, to accommodate AI. Four in 10 executives said they have "a lot of changes to make to their business" before they can take full advantage of AI, the TCS survey shows. 

This points to a slow but powerful uptake of both generative and operational AI. Over the past year, "every enterprise has experimented with gen AI use cases -- and 2024 and beyond will be about scaling value," said Vin. "During the experimentation phase, however, every enterprise has realized that scaling value is challenging." 

At this time, only 17% are discussing AI and making enterprise-wide plans for it, the TCS survey shows. In addition, only 28% are ready to establish an enterprise-wide AI strategy to maximize its benefits to the company. Still, Vin sees a rapid upsurge of the technology. "There is a difference between implementing AI solutions on an ad hoc or case-by-case basis and building an enterprise-wide plan to build an AI-mature enterprise," Vin said. "The relatively low numbers in the survey refer to the creation of such enterprise-wide strategies. This is expected."

Also: Make room for RAG: How Gen AI's balance of power is shifting

As far as the adoption of AI solutions goes, Vin continued, "the numbers are quite high: 59% of corporate functions have AI implementations in-process or completed, and another 34% are planning AI implementations. We are in the early phase of both technological maturity and enterprise adoption at scale. Most enterprises are starting to leverage AI and genAI for specific use cases, while embarking upon a longer-term journey to quantify benefits as well as manage corresponding cost and risks."

For starters, "building effective AI solutions requires high-quality data," he said. "Whereas enterprises do have a lot of data, it is often distributed across many, mutually inconsistent islands. Whereas most enterprises have embarked upon consolidation and data estate modernization journeys over the past several years, these journeys are far from complete. Further, the migration of the data estate to cloud environments is work-in-progress for most enterprises. This makes it difficult for enterprises to leverage cloud-hosted foundational LLMs along with their enterprise data."
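The "mutually inconsistent islands" problem Vin raises is concrete: the same customer may appear in several systems under different field names and formats, and records must be canonicalized before any model can use them. A small, hypothetical sketch of that consolidation step (the field names, key map, and email-based matching are illustrative assumptions):

```python
def normalize(record: dict) -> dict:
    """Canonicalize one record from a data island: trim whitespace,
    lowercase emails, and map island-specific field names to shared ones."""
    key_map = {"e-mail": "email", "mail": "email", "customer_name": "name"}
    out = {}
    for key, value in record.items():
        key = key_map.get(key.lower().strip(), key.lower().strip())
        out[key] = value.strip().lower() if key == "email" else value.strip()
    return out

def consolidate(islands: list[list[dict]]) -> dict[str, dict]:
    """Merge records from several systems, keyed by normalized email."""
    merged: dict[str, dict] = {}
    for island in islands:
        for record in island:
            record = normalize(record)
            merged.setdefault(record["email"], {}).update(record)
    return merged

# Two "islands" describing the same customer inconsistently
crm = [{"customer_name": "Ada Lovelace", "E-Mail": " ADA@example.com "}]
billing = [{"name": "Ada Lovelace", "mail": "ada@example.com", "plan": "enterprise"}]

customers = consolidate([crm, billing])
```

Real data-estate modernization involves far messier matching (no shared key, conflicting values, lineage tracking), which is why Vin calls these journeys "far from complete" -- but the shape of the work is the same.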
Also: Generative AI may be creating more work than it saves 

In addition, enterprises "will need to improve their maturity to manage data lineage, usage, security and privacy proactively," said Vin. "They will need to master the art of determining what data can be used for what purpose, even inadvertently, to prevent biases and unfair practices. This is not just a design-time challenge, but also a run-time challenge." Needed are systems "to detect, in real-time, emergent conditions where AI models start deviating from expected behavior."  
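The run-time challenge Vin names -- detecting "in real-time, emergent conditions where AI models start deviating from expected behavior" -- is typically addressed with drift monitoring. A minimal sketch under stated assumptions: `DriftMonitor`, the rolling-window z-score, and the confidence-score metric are all illustrative choices, not a prescribed system.

```python
from collections import deque
from statistics import mean, stdev

class DriftMonitor:
    """Flag when a model metric (here, a confidence score) drifts from
    its baseline, by comparing a rolling window of recent observations
    against the baseline mean using a z-score threshold."""

    def __init__(self, baseline: list[float], window: int = 50,
                 threshold: float = 3.0):
        self.mu = mean(baseline)
        self.sigma = stdev(baseline)
        self.recent = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value: float) -> bool:
        """Record one observation; return True once the window mean
        deviates from the baseline by more than `threshold` sigmas."""
        self.recent.append(value)
        if len(self.recent) < self.recent.maxlen:
            return False  # not enough data to judge yet
        z = abs(mean(self.recent) - self.mu) / (self.sigma or 1e-9)
        return z > self.threshold

# Baseline: stable confidences around 0.90-0.92
baseline = [0.9 + 0.01 * (i % 3) for i in range(100)]
monitor = DriftMonitor(baseline, window=10)

stable = [monitor.observe(0.91) for _ in range(10)]   # in-distribution
drifted = [monitor.observe(0.55) for _ in range(10)]  # sudden degradation
```

Production systems monitor many signals at once (input distributions, output toxicity, latency) and route alerts to humans, but the core design-time question is the same: define expected behavior as a baseline, then test live behavior against it continuously.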
Finally, roles and skills requirements are changing faster than companies can keep up. "With the infusion of AI, the role of enterprise knowledge workers will change from doers of work to trainers and interrogators of machines, reviewers of work done by machines, as well as owners of critical thinking and creativity," Vin said.  


Copyright © 2014-2024 Hi-Network.com | HAILIAN TECHNOLOGY CO., LIMITED | All Rights Reserved.