

Anthropic brings Tool Use for Claude out of beta, promising sophisticated assistants

May 30, 2024 | Hi-network.com

An increasingly popular trend in generative artificial intelligence is to give AI models "agent" capabilities: the power to tap into external programs such as databases or a web browser with live search functionality.

OpenAI popularized the notion of AI agents in November 2023 when it introduced its Assistants API, meant to make it easier for developers to call specific functions from within their applications. On Thursday, OpenAI competitor Anthropic made its bid for developers' attention by making generally available what it calls Tool Use for Claude, which is designed "to automate tasks, personalize recommendations, and streamline data analysis by integrating AI with external tools and services."

Also: Anthropic launches a free Claude iOS app and Team, its first enterprise plan

Anthropic debuted Tool Use, also known as function calling, with the introduction of its Claude 3 family of models in March. There is already a fairly extensive set of posted instructions showing developers how to use the beta version of the API.

Today's announcement takes Tool Use out of beta and makes it available through Anthropic's own Messages API, the Amazon Bedrock service, and Google's Vertex AI.

Here's how Tool Use is supposed to work. You enter a prompt into Claude, such as, "What is the weather in New York?" Claude interprets the prompt to produce an API call to an app that carries out the function, such as a weather app that returns weather data. The output of that app is then sent back to Claude as a message, and the model formulates it into a natural-language response for you.
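To make that flow concrete, here is a minimal sketch in Python, assuming the Anthropic Messages API as exposed by the company's Python SDK; the get_weather() helper, the sample weather string, and the model version string are illustrative stand-ins for whatever the developer actually wires up.

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Tool definition the developer supplies: a name, a description, and a JSON
# Schema describing the parameters Claude should fill in.
weather_tool = {
    "name": "get_weather",
    "description": "Get the current weather for a given city.",
    "input_schema": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name, e.g. New York"},
        },
        "required": ["city"],
    },
}

def get_weather(city: str) -> str:
    # Hypothetical stand-in for the developer's real weather service.
    return f"72 degrees F and sunny in {city}"

messages = [{"role": "user", "content": "What is the weather in New York?"}]

# 1) Claude interprets the prompt and, if appropriate, emits a tool_use block.
first = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=1024,
    tools=[weather_tool],
    messages=messages,
)

if first.stop_reason == "tool_use":
    tool_call = next(b for b in first.content if b.type == "tool_use")
    # 2) The developer's own code, not Claude, actually runs the tool.
    result = get_weather(**tool_call.input)
    # 3) The tool's output goes back to Claude as a tool_result message.
    messages += [
        {"role": "assistant", "content": first.content},
        {"role": "user", "content": [{
            "type": "tool_result",
            "tool_use_id": tool_call.id,
            "content": result,
        }]},
    ]
    # 4) Claude turns the raw data into a natural-language answer.
    final = client.messages.create(
        model="claude-3-opus-20240229",
        max_tokens=1024,
        tools=[weather_tool],
        messages=messages,
    )
    print(final.content[0].text)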

[Figure: Example of a shell script that supplies Claude with a tool definition and a user prompt that Claude interprets to select the tool. Source: Anthropic]

Which app to call, and how to pass parameters such as the city name, is expressed as a JSON or Python call that the LLM formulates.
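In the Messages API, that formulated call arrives as a structured tool_use content block; the shape below is shown as a Python dict mirroring the JSON, and the field values are invented for illustration.

# Illustrative shape of the tool_use block Claude emits (values are made up).
tool_use_block = {
    "type": "tool_use",
    "id": "toolu_example_id",       # identifier echoed back in the tool_result
    "name": "get_weather",          # which tool to call
    "input": {"city": "New York"},  # parameters matching the tool's input_schema
}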

Anthropic emphasizes that the app that does the work, such as a weather app, is not provided by Anthropic -- it's provided by the developer. The LLM does not directly access the app, but rather only passes the request to the app and then receives the resulting data. Developers can either force Claude to use a particular tool, or allow the LLM to select a tool by interpreting the prompt.
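The tool_choice parameter of the Messages API is one lever for that choice; the snippet below continues the earlier sketch (reusing client and weather_tool) and is an illustrative sketch rather than the announcement's own code.

# Continuation of the earlier sketch (reuses `client` and `weather_tool`).
# tool_choice={"type": "tool", ...} forces a specific tool; {"type": "auto"}
# lets Claude decide for itself whether any tool is needed.
forced = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=1024,
    tools=[weather_tool],
    tool_choice={"type": "tool", "name": "get_weather"},
    messages=[{"role": "user", "content": "How does New York look today?"}],
)

auto = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=1024,
    tools=[weather_tool],
    tool_choice={"type": "auto"},
    messages=[{"role": "user", "content": "Write a haiku about rain."}],
)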

Also: How LangChain turns GenAI into a genuinely useful assistant

The three different versions of Claude, called Haiku, Sonnet, and Opus, have different degrees of sophistication in how they form tool requests, Anthropic explains:

Opus is able to handle the most simultaneous tools and is better at catching missing arguments compared to other models. It is more likely to ask for clarification in ambiguous cases where an argument is not explicitly given or when a tool may not be necessary to complete the user request. Haiku defaults to trying to use tools more frequently (even if not relevant to the query) and will infer missing parameters if they are not explicitly given.

That basic construct can be extended to many paradigms, such as database queries for "retrieval-augmented generation," or RAG, a common approach to grounding generative AI in a known-good source of data.
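As one hedged illustration of that extension, a retrieval tool can be declared the same way as the weather tool above; the search_documents name and its parameters are hypothetical, standing in for whatever document store the developer actually queries.

# Hypothetical retrieval tool for a RAG-style setup: Claude formulates the
# query, the developer's code runs it against their own document store, and
# the retrieved passages come back as a tool_result for Claude to ground its
# answer in.
retrieval_tool = {
    "name": "search_documents",
    "description": "Search the company knowledge base and return relevant passages.",
    "input_schema": {
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "Search query"},
            "top_k": {"type": "integer", "description": "Number of passages to return"},
        },
        "required": ["query"],
    },
}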

Anthropic featured several clients who have been using Tool Use. Online learning assistant StudyFetch used Tool Use to offer students features such as navigating course materials via Claude. A startup called Hebbia used the technology to extract metadata from long documents and automate "multi-step workflows" for clients in financial services.

