
Agents

langchain.agents

Entrypoint to building Agents with LangChain.

Reference docs

This page contains reference documentation for Agents. See the docs for conceptual guides, tutorials, and examples on using Agents.

FUNCTION        DESCRIPTION
create_agent    Creates an agent graph that calls tools in a loop until a stopping condition is met.

create_agent

create_agent(
    model: str | BaseChatModel,
    tools: Sequence[BaseTool | Callable | dict[str, Any]] | None = None,
    *,
    system_prompt: str | None = None,
    middleware: Sequence[AgentMiddleware[AgentState[ResponseT], ContextT]] = (),
    response_format: ResponseFormat[ResponseT] | type[ResponseT] | None = None,
    state_schema: type[AgentState[ResponseT]] | None = None,
    context_schema: type[ContextT] | None = None,
    checkpointer: Checkpointer | None = None,
    store: BaseStore | None = None,
    interrupt_before: list[str] | None = None,
    interrupt_after: list[str] | None = None,
    debug: bool = False,
    name: str | None = None,
    cache: BaseCache | None = None,
) -> CompiledStateGraph[
    AgentState[ResponseT], ContextT, _InputAgentState, _OutputAgentState[ResponseT]
]

Creates an agent graph that calls tools in a loop until a stopping condition is met.

For more details on using create_agent, visit the Agents docs.

PARAMETERS

model

The language model for the agent. Can be a string identifier (e.g., "openai:gpt-4") or a chat model instance (e.g., ChatOpenAI()). For a full list of supported model strings, see init_chat_model.

TYPE: str | BaseChatModel
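
Both forms can be passed directly. A minimal sketch (the model names are illustrative, and the instance form assumes the langchain-openai package is installed):

from langchain.agents import create_agent
from langchain_openai import ChatOpenAI

# String identifier, "<provider>:<model>", resolved via init_chat_model.
agent = create_agent(model="openai:gpt-4o")

# Explicit model instance, useful when you need provider-specific parameters.
agent = create_agent(model=ChatOpenAI(model="gpt-4o", temperature=0))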

tools

A sequence of tools, dicts, or callables. If None or an empty list, the agent consists of a single model node without a tool-calling loop.

TYPE: Sequence[BaseTool | Callable | dict[str, Any]] | None DEFAULT: None
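
Plain callables with type hints and a docstring can be passed as-is, and BaseTool instances (e.g., created with the @tool decorator) work as well. A small sketch with illustrative tools:

from langchain_core.tools import tool

from langchain.agents import create_agent


def get_time(timezone: str) -> str:
    """Return the current time in the given timezone."""
    return f"It is 12:00 in {timezone}"


@tool
def search(query: str) -> str:
    """Search for information about the query."""
    return f"No results found for {query!r}"


agent = create_agent(model="openai:gpt-4o", tools=[get_time, search])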

system_prompt

An optional system prompt for the LLM. The prompt is converted to a SystemMessage and added to the beginning of the message list.

TYPE: str | None DEFAULT: None

middleware

A sequence of middleware instances to apply to the agent. Middleware can intercept and modify agent behavior at various stages.

TYPE: Sequence[AgentMiddleware[AgentState[ResponseT], ContextT]] DEFAULT: ()
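
As a rough illustration only: the sketch below assumes AgentMiddleware is importable from langchain.agents.middleware and exposes a before_model hook that may return a state update; check the middleware docs for the exact hook names and signatures in your version.

from langchain.agents.middleware import AgentMiddleware


class MessageCapMiddleware(AgentMiddleware):
    """Stop the run once the conversation grows past a fixed size."""

    def before_model(self, state, runtime):
        if len(state["messages"]) > 50:
            return {"jump_to": "end"}  # assumed short-circuit convention
        return None


agent = create_agent(model="openai:gpt-4o", middleware=[MessageCapMiddleware()])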

response_format

An optional configuration for structured responses. Can be a ToolStrategy, ProviderStrategy, or a Pydantic model class. If provided, the agent will handle structured output during the conversation flow. Raw schemas will be wrapped in an appropriate strategy based on model capabilities.

TYPE: ResponseFormat[ResponseT] | type[ResponseT] | None DEFAULT: None
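
For example, passing a raw Pydantic model (a sketch; it assumes the structured result is surfaced under the structured_response state key, per AgentState):

from pydantic import BaseModel

from langchain.agents import create_agent


class Forecast(BaseModel):
    location: str
    summary: str


agent = create_agent(model="openai:gpt-4o", response_format=Forecast)
result = agent.invoke(
    {"messages": [{"role": "user", "content": "forecast for sf?"}]}
)
print(result["structured_response"])  # a Forecast instance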

state_schema

An optional TypedDict schema that extends AgentState. When provided, it is used instead of AgentState as the base schema when merging with middleware state schemas, letting you add custom state fields without writing custom middleware. That said, it's generally recommended to extend state via middleware so that extensions stay scoped to the hooks and tools that use them. The schema must be a subclass of AgentState[ResponseT].

TYPE: type[AgentState[ResponseT]] | None DEFAULT: None
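
A hedged sketch of a custom schema (the AgentState import path is assumed; adjust to wherever your version exports it):

from langchain.agents import AgentState, create_agent


class CustomState(AgentState):
    user_name: str  # extra field merged into the agent's state


agent = create_agent(model="openai:gpt-4o", state_schema=CustomState)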

context_schema

An optional schema for runtime context.

TYPE: type[ContextT] | None DEFAULT: None
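
For example, a dataclass carrying read-only per-invocation data (passing context= at invocation time follows the langgraph Runtime convention and is an assumption here):

from dataclasses import dataclass

from langchain.agents import create_agent


@dataclass
class Context:
    user_id: str


agent = create_agent(model="openai:gpt-4o", context_schema=Context)
agent.invoke(
    {"messages": [{"role": "user", "content": "hi"}]},
    context=Context(user_id="user-123"),
)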

checkpointer

An optional checkpoint saver object. This is used for persisting the state of the graph (e.g., as chat memory) for a single thread (e.g., a single conversation).

TYPE: Checkpointer | None DEFAULT: None
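
A short sketch using langgraph's in-memory saver (suitable for experimentation; production setups would use a durable checkpointer):

from langgraph.checkpoint.memory import InMemorySaver

from langchain.agents import create_agent

agent = create_agent(model="openai:gpt-4o", checkpointer=InMemorySaver())

# State is persisted per thread_id, so the second turn sees the first.
config = {"configurable": {"thread_id": "conversation-1"}}
agent.invoke({"messages": [{"role": "user", "content": "Hi, I'm Ada."}]}, config)
agent.invoke({"messages": [{"role": "user", "content": "What's my name?"}]}, config)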

store

An optional store object. This is used for persisting data across multiple threads (e.g., multiple conversations / users).

TYPE: BaseStore | None DEFAULT: None
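
For example, langgraph's in-memory store (again for experimentation; tools and middleware can then read from and write to it across threads):

from langgraph.store.memory import InMemoryStore

from langchain.agents import create_agent

agent = create_agent(model="openai:gpt-4o", store=InMemoryStore())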

interrupt_before

An optional list of node names to interrupt before. Useful if you want to add a user confirmation or other interrupt before taking an action.

TYPE: list[str] | None DEFAULT: None

interrupt_after

An optional list of node names to interrupt after. Useful if you want to return directly or run additional processing on an output.

TYPE: list[str] | None DEFAULT: None
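
Both interrupt parameters require a checkpointer so that execution can pause and later resume. A sketch pausing for confirmation before tool execution, reusing the check_weather tool from the example below ("tools" as the node name is an assumption; confirm against your compiled graph):

from langgraph.checkpoint.memory import InMemorySaver

from langchain.agents import create_agent

agent = create_agent(
    model="openai:gpt-4o",
    tools=[check_weather],
    checkpointer=InMemorySaver(),
    interrupt_before=["tools"],
)
config = {"configurable": {"thread_id": "1"}}
# Runs until it is about to execute a tool, then pauses.
agent.invoke({"messages": [{"role": "user", "content": "weather in sf?"}]}, config)
# After reviewing the pending tool call, resume from the checkpoint:
agent.invoke(None, config)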

debug

Whether to enable verbose logging for graph execution. When enabled, prints detailed information about each node execution, state updates, and transitions during agent runtime. Useful for debugging middleware behavior and understanding agent execution flow.

TYPE: bool DEFAULT: False

name

An optional name for the CompiledStateGraph. The name is used automatically when the agent graph is added to another graph as a subgraph node, which is particularly useful for building multi-agent systems.

TYPE: str | None DEFAULT: None

cache

An optional BaseCache instance to enable caching of graph execution.

TYPE: BaseCache | None DEFAULT: None

RETURNS

TYPE: CompiledStateGraph[AgentState[ResponseT], ContextT, _InputAgentState, _OutputAgentState[ResponseT]]

A compiled StateGraph that can be used for chat interactions.

The agent node calls the language model with the messages list (after applying the system prompt). If the resulting AIMessage contains tool_calls, the graph will then call the tools. The tools node executes the tools and adds the responses to the messages list as ToolMessage objects. The agent node then calls the language model again. The process repeats until no more tool_calls are present in the response. The agent then returns the full list of messages.

Example
from langchain.agents import create_agent


def check_weather(location: str) -> str:
    """Return the weather forecast for the specified location."""
    return f"It's always sunny in {location}"


graph = create_agent(
    model="anthropic:claude-sonnet-4-5",
    tools=[check_weather],
    system_prompt="You are a helpful assistant",
)
inputs = {"messages": [{"role": "user", "content": "what is the weather in sf"}]}
# Stream state updates node by node as the agent loop runs.
for chunk in graph.stream(inputs, stream_mode="updates"):
    print(chunk)

AgentState

Bases: TypedDict, Generic[ResponseT]

State schema for the agent.