Event Streaming | Developer Documentation

Event Streaming

Updated May 24, 2024

NEW This is a new API that only works with recent versions of langchain-core!

In this notebook, we'll see how to use astream_events to stream token-by-token from LLM calls made within tools invoked by the agent.

We will stream tokens only from the LLMs used within the tools, and from no other LLMs (just to show that we can)!

Feel free to adapt this example to the needs of your application.

Our agent will use the OpenAI tools API for tool invocation, and we'll provide the agent with two tools:

  1. where_cat_is_hiding: A tool that uses an LLM to tell us where the cat is hiding
  2. tell_me_a_joke_about: A tool that can use an LLM to tell a joke about the given topic

⚠️ Beta API ⚠️

Event Streaming is a beta API, and may change a bit based on feedback.

Keep in mind the following constraints (repeated in tools section):

  • streaming only works properly if using async
  • propagate callbacks if defining custom functions / runnables
  • if creating a tool that uses an LLM, make sure to use .astream() on the LLM rather than .ainvoke() to ask the LLM to stream tokens
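To see why .astream() matters, here is a minimal, langchain-free sketch of the difference between consuming a token stream chunk-by-chunk and only receiving the final string. fake_astream and collect are made-up names for illustration, not langchain APIs:

```python
import asyncio


# Toy stand-in for model.astream(): an async generator that yields
# tokens one at a time instead of returning a single final string.
async def fake_astream(prompt: str):
    for token in ["Under", " the", " bed", "."]:
        yield token


async def collect(prompt: str):
    # Consuming the stream chunk-by-chunk lets each token be observed
    # (and surfaced as an on_chat_model_stream-style event) on arrival;
    # an ainvoke-style call would only ever expose the joined result.
    chunks = [chunk async for chunk in fake_astream(prompt)]
    return chunks, "".join(chunks)


chunks, text = asyncio.run(collect("Where is the cat hiding?"))
print(chunks)  # ['Under', ' the', ' bed', '.']
print(text)    # Under the bed.
```

The same list-comprehension pattern appears in the real tools below, with model.astream() in place of the fake generator.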

Event Hooks Reference

Here is a reference table that shows some events that might be emitted by the various Runnable objects. Definitions for some of the Runnables are included after the table.

⚠️ When streaming, the inputs for a runnable will not be available until the input stream has been entirely consumed. This means that the inputs will be available on the corresponding end hook rather than the start event.

| event | name | chunk | input | output |
| --- | --- | --- | --- | --- |
| on_chat_model_start | [model name] |  | {"messages": [[SystemMessage, HumanMessage]]} |  |
| on_chat_model_stream | [model name] | AIMessageChunk(content="hello") |  |  |
| on_chat_model_end | [model name] |  | {"messages": [[SystemMessage, HumanMessage]]} | {"generations": [...], "llm_output": None, ...} |
| on_llm_start | [model name] |  | {'input': 'hello'} |  |
| on_llm_stream | [model name] | 'Hello' |  |  |
| on_llm_end | [model name] |  |  | 'Hello human!' |
| on_chain_start | format_docs |  |  |  |
| on_chain_stream | format_docs | "hello world!, goodbye world!" |  |  |
| on_chain_end | format_docs |  | [Document(...)] | "hello world!, goodbye world!" |
| on_tool_start | some_tool |  | {"x": 1, "y": "2"} |  |
| on_tool_stream | some_tool | {"x": 1, "y": "2"} |  |  |
| on_tool_end | some_tool |  |  | {"x": 1, "y": "2"} |
| on_retriever_start | [retriever name] |  | {"query": "hello"} |  |
| on_retriever_chunk | [retriever name] | {documents: [...]} |  |  |
| on_retriever_end | [retriever name] |  | {"query": "hello"} | {documents: [...]} |
| on_prompt_start | [template_name] |  | {"question": "hello"} |  |
| on_prompt_end | [template_name] |  | {"question": "hello"} | ChatPromptValue(messages: [SystemMessage, ...]) |

Here are declarations associated with the events shown above:

format_docs:

def format_docs(docs: List[Document]) -> str:
    '''Format the docs.'''
    return ", ".join([doc.page_content for doc in docs])

format_docs = RunnableLambda(format_docs)

some_tool:

@tool
def some_tool(x: int, y: str) -> dict:
    '''Some_tool.'''
    return {"x": x, "y": y}

prompt:

template = ChatPromptTemplate.from_messages(
    [("system", "You are Cat Agent 007"), ("human", "{question}")]
).with_config({"run_name": "my_template", "tags": ["my_template"]})

from langchain import hub
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain.tools import tool
from langchain_core.callbacks import Callbacks
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

Create the model

Attention: for older versions of langchain, we must set streaming=True

model = ChatOpenAI(temperature=0, streaming=True)

Tools

We define two tools that rely on a chat model to generate output!

Please note a few different things:

  1. The tools are async
  2. The model is invoked using .astream() to force the output to stream
  3. For older langchain versions you should set streaming=True on the model!
  4. We attach tags to the model so that we can filter on said tags in our callback handler
  5. The tools accept callbacks and propagate them to the model as a runtime argument
@tool
async def where_cat_is_hiding(callbacks: Callbacks) -> str:  # <--- Accept callbacks
    """Where is the cat hiding right now?"""
    chunks = [
        chunk
        async for chunk in model.astream(
            "Give one up to three word answer about where the cat might be hiding in the house right now.",
            {
                "tags": ["tool_llm"],
                "callbacks": callbacks,
            },  # <--- Propagate callbacks and assign a tag to this model
        )
    ]
    return "".join(chunk.content for chunk in chunks)


@tool
async def tell_me_a_joke_about(
    topic: str, callbacks: Callbacks
) -> str:  # <--- Accept callbacks
    """Tell a joke about a given topic."""
    template = ChatPromptTemplate.from_messages(
        [
            ("system", "You are Cat Agent 007. You are funny and know many jokes."),
            ("human", "Tell me a long joke about {topic}"),
        ]
    )
    chain = template | model.with_config({"tags": ["tool_llm"]})
    chunks = [
        chunk
        async for chunk in chain.astream({"topic": topic}, {"callbacks": callbacks})
    ]
    return "".join(chunk.content for chunk in chunks)

Initialize the Agent

# Get the prompt to use - you can modify this!
prompt = hub.pull("hwchase17/openai-tools-agent")
print(prompt)
print(prompt.messages)
    input_variables=['agent_scratchpad', 'input'] input_types={'chat_history': typing.List[typing.Union[langchain_core.messages.ai.AIMessage, langchain_core.messages.human.HumanMessage, langchain_core.messages.chat.ChatMessage, langchain_core.messages.system.SystemMessage, langchain_core.messages.function.FunctionMessage, langchain_core.messages.tool.ToolMessage]], 'agent_scratchpad': typing.List[typing.Union[langchain_core.messages.ai.AIMessage, langchain_core.messages.human.HumanMessage, langchain_core.messages.chat.ChatMessage, langchain_core.messages.system.SystemMessage, langchain_core.messages.function.FunctionMessage, langchain_core.messages.tool.ToolMessage]]} messages=[SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=[], template='You are a helpful assistant')), MessagesPlaceholder(variable_name='chat_history', optional=True), HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['input'], template='{input}')), MessagesPlaceholder(variable_name='agent_scratchpad')]
[SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=[], template='You are a helpful assistant')), MessagesPlaceholder(variable_name='chat_history', optional=True), HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['input'], template='{input}')), MessagesPlaceholder(variable_name='agent_scratchpad')]
tools = [tell_me_a_joke_about, where_cat_is_hiding]
agent = create_openai_tools_agent(model.with_config({"tags": ["agent"]}), tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools)

Stream the output

The streamed output is shown with a | as the delimiter between tokens.

async for event in executor.astream_events(
    {"input": "where is the cat hiding? Tell me a joke about that location?"},
    include_tags=["tool_llm"],
    include_types=["tool"],
):
    hook = event["event"]
    if hook == "on_chat_model_stream":
        print(event["data"]["chunk"].content, end="|")
    elif hook in {"on_chat_model_start", "on_chat_model_end"}:
        print()
        print()
    elif hook == "on_tool_start":
        print("--")
        print(
            f"Starting tool: {event['name']} with inputs: {event['data'].get('input')}"
        )
    elif hook == "on_tool_end":
        print(f"Ended tool: {event['name']}")
    else:
        pass
    /home/eugene/src/langchain/libs/core/langchain_core/_api/beta_decorator.py:86: LangChainBetaWarning: This API is in beta and may change in the future.
warn_beta(
    --
Starting tool: where_cat_is_hiding with inputs: {}


|Under| the| bed|.||

Ended tool: where_cat_is_hiding
--
Starting tool: tell_me_a_joke_about with inputs: {'topic': 'under the bed'}


|Sure|,| here|'s| a| long| joke| about| what|'s| hiding| under| the| bed|:

|Once| upon| a| time|,| there| was| a| mis|chie|vous| little| boy| named| Tim|my|.| Tim|my| had| always| been| afraid| of| what| might| be| lurking| under| his| bed| at| night|.| Every| evening|,| he| would| ti|pt|oe| into| his| room|,| turn| off| the| lights|,| and| then| make| a| daring| leap| onto| his| bed|,| ensuring| that| nothing| could| grab| his| ankles|.

|One| night|,| Tim|my|'s| parents| decided| to| play| a| prank| on| him|.| They| hid| a| remote|-controlled| toy| monster| under| his| bed|,| complete| with| glowing| eyes| and| a| grow|ling| sound| effect|.| As| Tim|my| settled| into| bed|,| his| parents| quietly| sn|uck| into| his| room|,| ready| to| give| him| the| scare| of| a| lifetime|.

|Just| as| Tim|my| was| about| to| drift| off| to| sleep|,| he| heard| a| faint| grow|l| coming| from| under| his| bed|.| His| eyes| widened| with| fear|,| and| his| heart| started| racing|.| He| must|ered| up| the| courage| to| peek| under| the| bed|,| and| to| his| surprise|,| he| saw| a| pair| of| glowing| eyes| staring| back| at| him|.

|Terr|ified|,| Tim|my| jumped| out| of| bed| and| ran| to| his| parents|,| screaming|,| "|There|'s| a| monster| under| my| bed|!| Help|!"

|His| parents|,| trying| to| st|ifle| their| laughter|,| rushed| into| his| room|.| They| pretended| to| be| just| as| scared| as| Tim|my|,| and| together|,| they| brav|ely| approached| the| bed|.| Tim|my|'s| dad| grabbed| a| bro|om|stick|,| ready| to| defend| his| family| against| the| imaginary| monster|.

|As| they| got| closer|,| the| "|monster|"| under| the| bed| started| to| move|.| Tim|my|'s| mom|,| unable| to| contain| her| laughter| any| longer|,| pressed| a| button| on| the| remote| control|,| causing| the| toy| monster| to| sc|urry| out| from| under| the| bed|.| Tim|my|'s| fear| quickly| turned| into| confusion|,| and| then| into| laughter| as| he| realized| it| was| all| just| a| prank|.

|From| that| day| forward|,| Tim|my| learned| that| sometimes| the| things| we| fear| the| most| are| just| fig|ments| of| our| imagination|.| And| as| for| what|'s| hiding| under| his| bed|?| Well|,| it|'s| just| dust| b|unn|ies| and| the| occasional| missing| sock|.| Nothing| to| be| afraid| of|!

|Remember|,| laughter| is| the| best| monster| repell|ent|!||

Ended tool: tell_me_a_joke_about
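The if/elif dispatch in the streaming loop can also be factored into a small table of handlers keyed by the "event" field. Here is a pure-Python sketch of that pattern; route_event and the handler table are hypothetical names for illustration, not part of langchain:

```python
from typing import Any, Callable, Dict, Optional


# Hypothetical helper (not a langchain API): routes an astream_events-style
# event dict to a handler chosen by its "event" field, returning the
# handler's result, or None when no handler is registered for that event.
def route_event(
    event: Dict[str, Any],
    handlers: Dict[str, Callable[[Dict[str, Any]], str]],
) -> Optional[str]:
    handler = handlers.get(event["event"])
    return handler(event) if handler else None


handlers = {
    "on_chat_model_stream": lambda e: e["data"]["chunk"],
    "on_tool_start": lambda e: f"Starting tool: {e['name']}",
    "on_tool_end": lambda e: f"Ended tool: {e['name']}",
}

print(route_event({"event": "on_tool_start", "name": "where_cat_is_hiding", "data": {}}, handlers))
# Starting tool: where_cat_is_hiding
```

A handler table like this keeps the consuming loop to a single call and makes it easy to register new hooks as the set of event types grows.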