
Tool error handling

Updated May 24, 2024

Using a model to invoke a tool has some obvious potential failure modes. Firstly, the model needs to return an output that can be parsed at all. Secondly, the model needs to return tool arguments that are valid.

We can build error handling into our chains to mitigate these failure modes.

Setup

We'll need to install the following packages:

%pip install --upgrade --quiet langchain langchain-openai

And set these environment variables:

import getpass
import os

os.environ["OPENAI_API_KEY"] = getpass.getpass()

# If you'd like to use LangSmith, uncomment the below:
# os.environ["LANGCHAIN_TRACING_V2"] = "true"
# os.environ["LANGCHAIN_API_KEY"] = getpass.getpass()

Chain

Suppose we have the following (dummy) tool and tool-calling chain. We'll make our tool intentionally convoluted to try and trip up the model.

# Define tool
from langchain_core.tools import tool


@tool
def complex_tool(int_arg: int, float_arg: float, dict_arg: dict) -> int:
    """Do something complex with a complex tool."""
    return int_arg * float_arg


# Define model and bind tool
from langchain_community.tools.convert_to_openai import format_tool_to_openai_tool
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
model_with_tools = model.bind(
    tools=[format_tool_to_openai_tool(complex_tool)],
    tool_choice={"type": "function", "function": {"name": "complex_tool"}},
)

# Define chain
from operator import itemgetter

from langchain.output_parsers import JsonOutputKeyToolsParser
from langchain_core.runnables import Runnable, RunnableLambda, RunnablePassthrough

chain = (
    model_with_tools
    | JsonOutputKeyToolsParser(key_name="complex_tool", return_single=True)
    | complex_tool
)

We can see that when we try to invoke this chain, even with fairly explicit input, the model fails to call the tool correctly: it omits the required dict_arg argument, so the tool call raises a validation error.

chain.invoke(
    "use complex tool. the args are 5, 2.1, empty dictionary. don't forget dict_arg"
)

Try/except tool call

The simplest way to more gracefully handle errors is to try/except the tool-calling step and return a helpful message on errors:

from typing import Any

from langchain_core.runnables import RunnableConfig


def try_except_tool(tool_args: dict, config: RunnableConfig) -> Runnable:
    try:
        # Return the tool result on success; on failure, return a helpful error message.
        return complex_tool.invoke(tool_args, config=config)
    except Exception as e:
        return f"Calling tool with arguments:\n\n{tool_args}\n\nraised the following error:\n\n{type(e)}: {e}"


chain = (
    model_with_tools
    | JsonOutputKeyToolsParser(key_name="complex_tool", return_single=True)
    | try_except_tool
)
print(
    chain.invoke(
        "use complex tool. the args are 5, 2.1, empty dictionary. don't forget dict_arg"
    )
)
    Calling tool with arguments:

    {'int_arg': 5, 'float_arg': 2.1}

    raised the following error:

    <class 'pydantic.v1.error_wrappers.ValidationError'>: 1 validation error for complex_toolSchemaSchema
    dict_arg
      field required (type=value_error.missing)
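
If the arguments do validate, the try branch simply returns the tool's output. As a quick sketch (not part of the original notebook), you can confirm this by calling the function directly with a complete set of arguments:

# Quick sketch: with all three arguments present, validation passes and the
# tool result is returned instead of an error message.
print(try_except_tool({"int_arg": 5, "float_arg": 2.1, "dict_arg": {}}, config={}))
# -> 10.5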

Fallbacks

We can also try to fall back to a better model in the event of a tool-invocation error. In this case we'll fall back to an identical chain that uses gpt-4-1106-preview instead of gpt-3.5-turbo.

chain = (
    model_with_tools
    | JsonOutputKeyToolsParser(key_name="complex_tool", return_single=True)
    | complex_tool
)
better_model = ChatOpenAI(model="gpt-4-1106-preview", temperature=0).bind(
    tools=[format_tool_to_openai_tool(complex_tool)],
    tool_choice={"type": "function", "function": {"name": "complex_tool"}},
)
better_chain = (
    better_model
    | JsonOutputKeyToolsParser(key_name="complex_tool", return_single=True)
    | complex_tool
)

chain_with_fallback = chain.with_fallbacks([better_chain])
chain_with_fallback.invoke(
    "use complex tool. the args are 5, 2.1, empty dictionary. don't forget dict_arg"
)
    10.5

Looking at the LangSmith trace for this chain run, we can see that the first chain call fails as expected and it's the fallback that succeeds.
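
Since with_fallbacks accepts a sequence of runnables that are tried in order, you could also register more than one fallback chain. Here is a minimal sketch of that idea; best_model, best_chain, and the "gpt-4o" model name are illustrative placeholders, not part of this guide:

# Sketch: with_fallbacks tries each fallback in order until one succeeds.
# best_model / best_chain and the "gpt-4o" model name are hypothetical placeholders.
best_model = ChatOpenAI(model="gpt-4o", temperature=0).bind(
    tools=[format_tool_to_openai_tool(complex_tool)],
    tool_choice={"type": "function", "function": {"name": "complex_tool"}},
)
best_chain = (
    best_model
    | JsonOutputKeyToolsParser(key_name="complex_tool", return_single=True)
    | complex_tool
)

chain_with_two_fallbacks = chain.with_fallbacks([better_chain, best_chain])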

Retry with exception

To take things one step further, we can try to automatically re-run the chain with the exception passed in, so that the model may be able to correct its behavior:

import json
from typing import Any

from langchain_core.messages import AIMessage, HumanMessage, ToolMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables import RunnablePassthrough


class CustomToolException(Exception):
    """Custom LangChain tool exception."""

    def __init__(self, tool_call: dict, exception: Exception) -> None:
        super().__init__()
        self.tool_call = tool_call
        self.exception = exception


def tool_custom_exception(tool_call: dict, config: RunnableConfig) -> Runnable:
    try:
        return complex_tool.invoke(tool_call["args"], config=config)
    except Exception as e:
        raise CustomToolException(tool_call, e)


def exception_to_messages(inputs: dict) -> dict:
    exception = inputs.pop("exception")
    tool_call = {
        "type": "function",
        "function": {
            "name": "complex_tool",
            "arguments": json.dumps(exception.tool_call["args"]),
        },
        "id": exception.tool_call["id"],
    }

    # Add historical messages to the original input, so the model knows that it made
    # a mistake with the last tool call.
    messages = [
        AIMessage(content="", additional_kwargs={"tool_calls": [tool_call]}),
        ToolMessage(tool_call_id=tool_call["id"], content=str(exception.exception)),
        HumanMessage(
            content="The last tool calls raised exceptions. Try calling the tools again with corrected arguments."
        ),
    ]
    inputs["last_output"] = messages
    return inputs


# We add a last_output MessagesPlaceholder to our prompt which, if not passed in, doesn't
# affect the prompt at all, but gives us the option to insert an arbitrary list of Messages
# into the prompt if needed. We'll use this on retries to insert the error message.
prompt = ChatPromptTemplate.from_messages(
    [("human", "{input}"), MessagesPlaceholder("last_output", optional=True)]
)
chain = (
    prompt
    | model_with_tools
    | JsonOutputKeyToolsParser(
        key_name="complex_tool", return_id=True, return_single=True
    )
    | tool_custom_exception
)

# If the initial chain call fails, we rerun it with the exception passed in as a message.
self_correcting_chain = chain.with_fallbacks(
    [exception_to_messages | chain], exception_key="exception"
)
self_correcting_chain.invoke(
    {
        "input": "use complex tool. the args are 5, 2.1, empty dictionary. don't forget dict_arg"
    }
)
    10.5

And our chain succeeds! Looking at the LangSmith trace, we can see that indeed our initial chain still fails, and it's only on retrying that the chain succeeds.
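
A note on the exception_key="exception" argument: when it is set, with_fallbacks passes the exception raised by the failed run into the fallback's input dictionary under that key. That is why exception_to_messages pops "exception" from its inputs, and why both the chain and its fallback take a dict as input.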
