Thursday, May 1, 2025

What is the Supervisor agent architecture

In this architecture, we define agents as nodes and add a supervisor node (an LLM) that decides which agent node should be called next. We use Command to route execution to the appropriate agent node based on the supervisor's decision. This architecture also lends itself well to running multiple agents in parallel or applying a map-reduce pattern; a sketch of the parallel variant follows the example below.


from typing import Literal

from langchain_openai import ChatOpenAI
from langgraph.types import Command
from langgraph.graph import StateGraph, MessagesState, START, END

model = ChatOpenAI()


def supervisor(state: MessagesState) -> Command[Literal["agent_1", "agent_2", "__end__"]]:
    # you can pass relevant parts of the state to the LLM (e.g., state["messages"])
    # to determine which agent to call next. a common pattern is to call the model
    # with a structured output (e.g. force it to return an output with a "next_agent" field)
    response = model.invoke(...)
    # route to one of the agents or exit based on the supervisor's decision
    # if the supervisor returns "__end__", the graph will finish execution
    return Command(goto=response["next_agent"])
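# a minimal, hypothetical sketch of the structured-output routing mentioned above;
# the Router schema and its "next_agent" field are illustrative, not a fixed API:
#
#     from pydantic import BaseModel
#
#     class Router(BaseModel):
#         next_agent: Literal["agent_1", "agent_2", "__end__"]
#
#     decision = model.with_structured_output(Router).invoke(state["messages"])
#     return Command(goto=decision.next_agent)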


def agent_1(state: MessagesState) -> Command[Literal["supervisor"]]:
    # you can pass relevant parts of the state to the LLM (e.g., state["messages"])
    # and add any additional logic (different models, custom prompts, structured output, etc.)
    response = model.invoke(...)
    return Command(
        goto="supervisor",
        update={"messages": [response]},
    )


def agent_2(state: MessagesState) -> Command[Literal["supervisor"]]:
    response = model.invoke(...)
    return Command(
        goto="supervisor",
        update={"messages": [response]},
    )


builder = StateGraph(MessagesState)
builder.add_node(supervisor)
builder.add_node(agent_1)
builder.add_node(agent_2)

builder.add_edge(START, "supervisor")

supervisor = builder.compile()
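Once compiled, the supervisor graph is invoked like any other LangGraph graph. A minimal, hypothetical usage sketch (it assumes the model.invoke(...) placeholders above are replaced with real calls, and the user message is illustrative):

result = supervisor.invoke({"messages": [("user", "...")]})
print(result["messages"][-1].content)

For the parallel variant mentioned earlier, Command(goto=...) also accepts a list of destinations, and Send lets the supervisor attach a custom payload to each one. A hedged sketch, with the fan-out logic purely illustrative:

from langgraph.types import Send

def fanout_supervisor(state: MessagesState) -> Command[Literal["agent_1", "agent_2"]]:
    # dispatch both agents in parallel; each Send carries its own input payload
    return Command(goto=[
        Send("agent_1", {"messages": state["messages"]}),
        Send("agent_2", {"messages": state["messages"]}),
    ])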



What is the Supervisor (tool-calling) variant


In this variant of the supervisor architecture, we define individual agents as tools and use a tool-calling LLM in the supervisor node. This can be implemented as a ReAct-style agent with two nodes: an LLM node (the supervisor) and a tool-executing node that runs the tools (in this case, the agents).


from typing import Annotated

from langchain_openai import ChatOpenAI
from langgraph.prebuilt import InjectedState, create_react_agent

model = ChatOpenAI()


# this is the agent function that will be called as a tool
# notice that you can pass the state to the tool via the InjectedState annotation
# (the docstring doubles as the tool description the supervisor LLM sees)
def agent_1(state: Annotated[dict, InjectedState]):
    """Handle tasks delegated to agent 1."""
    # you can pass relevant parts of the state to the LLM (e.g., state["messages"])
    # and add any additional logic (different models, custom prompts, structured output, etc.)
    response = model.invoke(...)
    # return the LLM response as a string (the expected tool response format)
    # this will be automatically turned into a ToolMessage
    # by the prebuilt create_react_agent (supervisor)
    return response.content


def agent_2(state: Annotated[dict, InjectedState]):
    """Handle tasks delegated to agent 2."""
    response = model.invoke(...)
    return response.content


tools = [agent_1, agent_2]

# the simplest way to build a supervisor with tool-calling is to use the prebuilt
# ReAct agent graph, which consists of a tool-calling LLM node (i.e. the supervisor)
# and a tool-executing node
supervisor = create_react_agent(model, tools)
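The prebuilt agent exposes the same messages-based interface as the graph in the first example. A minimal, hypothetical usage sketch (again assuming the model.invoke(...) placeholders are filled in):

result = supervisor.invoke({"messages": [("user", "...")]})
# the final AI message holds the supervisor's answer after any tool (agent) calls
print(result["messages"][-1].content)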

