The core idea of agents is to use a language model to choose a sequence of actions to take. In chains, the sequence of actions is hardcoded. In agents, a language model is used as a reasoning engine to determine which actions to take and in which order.
Schema
LangChain has several abstractions to make working with agents easy.
AgentAction
This is a dataclass that represents the action an agent should take. It has a tool property (the name of the tool that should be invoked) and a tool_input property (the input to that tool).
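For illustration, here is a minimal sketch of constructing an AgentAction directly, assuming the langchain_core.agents import path from LangChain v0.1 (the tool name and input are made up for the example):

from langchain_core.agents import AgentAction

# A hypothetical action: call a tool named "search" with a query string.
action = AgentAction(
    tool="search",
    tool_input="weather in San Francisco",
    log="I should look up the current weather.\nAction: search",
)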
AgentFinish
This represents the final result from an agent, when it is ready to return to the user. It contains a return_values key-value mapping holding the final agent output. Usually this includes an output key whose value is a string with the agent's response.
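Similarly, a sketch of an AgentFinish, again assuming the langchain_core.agents import path (the values are illustrative):

from langchain_core.agents import AgentFinish

finish = AgentFinish(
    return_values={"output": "It is currently sunny in San Francisco."},
    log="I now know the final answer.",
)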
Intermediate Steps
These represent previous agent actions and corresponding outputs from the current agent run. They are important to pass to future iterations so the agent knows what work it has already done. This is typed as List[Tuple[AgentAction, Any]]. Note that the observation is currently left as type Any to be maximally flexible; in practice, it is often a string.
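As a rough sketch, an intermediate_steps value after one tool call might look like this (the action and observation are made up):

from typing import Any, List, Tuple
from langchain_core.agents import AgentAction

intermediate_steps: List[Tuple[AgentAction, Any]] = [
    (
        AgentAction(tool="search", tool_input="weather in San Francisco", log="..."),
        "Sunny, 68°F",  # the observation returned by the tool; often a string
    ),
]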
Agent
This is the chain responsible for deciding what step to take next. This is usually powered by a language model, a prompt, and an output parser.
Different agents have different prompting styles for reasoning, different ways of encoding inputs, and different ways of parsing the output. For a full list of built-in agents see agent types. You can also easily build custom agents, should you need further control.
Agent Inputs
The inputs to an agent are a key-value mapping. There is only one required key: intermediate_steps, which corresponds to Intermediate Steps as described above.
Generally, the PromptTemplate takes care of transforming these (AgentAction, observation) pairs into a format that can best be passed into the LLM.
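For example, the mapping passed to an agent on its first iteration might look roughly like this sketch (the input key name depends on your prompt; only intermediate_steps is required):

agent_inputs = {
    "input": "What is the weather in San Francisco?",  # key name depends on your prompt
    "intermediate_steps": [],  # empty on the first iteration; grows with each tool call
}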
Agent Outputs
The output is the next action(s) to take or the final response to send to the user (AgentActions or AgentFinish). Concretely, this can be typed as Union[AgentAction, List[AgentAction], AgentFinish].
The output parser is responsible for taking the raw LLM output and transforming it into one of these three types.
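As a rough sketch of this responsibility, the following hypothetical parser handles a ReAct-style text format. It is not the parser used by any particular built-in agent, and it assumes the AgentOutputParser base class exposed by langchain.agents:

from typing import Union
from langchain.agents import AgentOutputParser
from langchain_core.agents import AgentAction, AgentFinish

class SimpleReActParser(AgentOutputParser):
    """Parse 'Action: ... / Action Input: ...' or 'Final Answer: ...' text."""

    def parse(self, text: str) -> Union[AgentAction, AgentFinish]:
        if "Final Answer:" in text:
            answer = text.split("Final Answer:")[-1].strip()
            return AgentFinish(return_values={"output": answer}, log=text)
        # Expect lines like "Action: search" and "Action Input: some query".
        tool = text.split("Action:")[1].split("\n")[0].strip()
        tool_input = text.split("Action Input:")[1].split("\n")[0].strip()
        return AgentAction(tool=tool, tool_input=tool_input, log=text)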
AgentExecutor
The agent executor is the runtime for an agent. This is what actually calls the agent, executes the actions it chooses, passes the action outputs back to the agent, and repeats. In pseudocode, this looks roughly like:
next_action = agent.get_action(...)
while not isinstance(next_action, AgentFinish):
    observation = run(next_action)  # execute the chosen tool
    next_action = agent.get_action(..., next_action, observation)
return next_action
While this may seem simple, there are several complexities this runtime handles for you, including:
Handling cases where the agent selects a non-existent tool
Handling cases where the tool errors
Handling cases where the agent produces output that cannot be parsed into a tool invocation
Logging and observability at all levels (agent decisions, tool calls) to stdout and/or to LangSmith.
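As a rough sketch of how this fits together in LangChain v0.1, an executor is typically built from an agent and its tools and then invoked like any other runnable (the llm, tools, and prompt below are assumed to be defined elsewhere):

from langchain.agents import AgentExecutor, create_react_agent

# llm, tools, and prompt are placeholders defined elsewhere.
agent = create_react_agent(llm, tools, prompt)
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,
    handle_parsing_errors=True,  # covers the "output cannot be parsed" case above
)

result = agent_executor.invoke({"input": "What is the weather in San Francisco?"})
print(result["output"])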
Tools
Tools are functions that an agent can invoke. The Tool abstraction consists of two components:
The input schema for the tool. This tells the LLM what parameters are needed to call the tool. Without this, it will not know what the correct inputs are. These parameters should be sensibly named and described.
The function to run. This is generally just a Python function that is invoked.
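For example, here is a minimal sketch of a custom tool using the tool decorator from langchain_core (the function itself is made up; its docstring and type hints supply the description and input schema the LLM sees):

from langchain_core.tools import tool

@tool
def get_word_length(word: str) -> int:
    """Return the number of characters in a word."""
    return len(word)

# The decorator produces a tool with a name, description, and input schema.
print(get_word_length.name, get_word_length.args)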
References:
https://python.langchain.com/v0.1/docs/modules/agents/concepts/