Agents
langchain.agents
Entrypoint to building Agents with LangChain.
Reference docs
This page contains reference documentation for Agents. See the docs for conceptual guides, tutorials, and examples on using Agents.
| FUNCTION | DESCRIPTION |
|---|---|
| `create_agent` | Creates an agent graph that calls tools in a loop until a stopping condition is met. |
create_agent
```python
create_agent(
    model: str | BaseChatModel,
    tools: Sequence[BaseTool | Callable | dict[str, Any]] | None = None,
    *,
    system_prompt: str | None = None,
    middleware: Sequence[AgentMiddleware[AgentState[ResponseT], ContextT]] = (),
    response_format: ResponseFormat[ResponseT] | type[ResponseT] | None = None,
    state_schema: type[AgentState[ResponseT]] | None = None,
    context_schema: type[ContextT] | None = None,
    checkpointer: Checkpointer | None = None,
    store: BaseStore | None = None,
    interrupt_before: list[str] | None = None,
    interrupt_after: list[str] | None = None,
    debug: bool = False,
    name: str | None = None,
    cache: BaseCache | None = None,
) -> CompiledStateGraph[
    AgentState[ResponseT], ContextT, _InputAgentState, _OutputAgentState[ResponseT]
]
```
Creates an agent graph that calls tools in a loop until a stopping condition is met.
For more details on using `create_agent`, visit the Agents docs.
| PARAMETER | DESCRIPTION |
|---|---|
| `model` | The language model for the agent. Can be a string identifier (e.g., `"anthropic:claude-sonnet-4-5"`) or a chat model instance. TYPE: `str \| BaseChatModel` |
| `tools` | A list of tools the agent may call. TYPE: `Sequence[BaseTool \| Callable \| dict[str, Any]] \| None` DEFAULT: `None` |
| `system_prompt` | An optional system prompt for the LLM. Prompts are converted to a system message and applied before each model call. TYPE: `str \| None` DEFAULT: `None` |
| `middleware` | A sequence of middleware instances to apply to the agent. Middleware can intercept and modify agent behavior at various stages. TYPE: `Sequence[AgentMiddleware[AgentState[ResponseT], ContextT]]` DEFAULT: `()` |
| `response_format` | An optional configuration for structured responses. Can be a `ResponseFormat` instance or a schema type for the structured output. TYPE: `ResponseFormat[ResponseT] \| type[ResponseT] \| None` DEFAULT: `None` |
| `state_schema` | An optional `AgentState` subclass to use as the agent's state schema. TYPE: `type[AgentState[ResponseT]] \| None` DEFAULT: `None` |
| `context_schema` | An optional schema for runtime context. TYPE: `type[ContextT] \| None` DEFAULT: `None` |
| `checkpointer` | An optional checkpoint saver object. This is used for persisting the state of the graph (e.g., as chat memory) for a single thread (e.g., a single conversation). TYPE: `Checkpointer \| None` DEFAULT: `None` |
| `store` | An optional store object. This is used for persisting data across multiple threads (e.g., multiple conversations / users). TYPE: `BaseStore \| None` DEFAULT: `None` |
| `interrupt_before` | An optional list of node names to interrupt before. Useful if you want to add a user confirmation or other interrupt before taking an action. TYPE: `list[str] \| None` DEFAULT: `None` |
| `interrupt_after` | An optional list of node names to interrupt after. Useful if you want to return directly or run additional processing on an output. TYPE: `list[str] \| None` DEFAULT: `None` |
| `debug` | Whether to enable verbose logging for graph execution. When enabled, prints detailed information about each node execution, state updates, and transitions during agent runtime. Useful for debugging middleware behavior and understanding agent execution flow. TYPE: `bool` DEFAULT: `False` |
| `name` | An optional name for the compiled graph. TYPE: `str \| None` DEFAULT: `None` |
| `cache` | An optional `BaseCache` object for caching graph execution. TYPE: `BaseCache \| None` DEFAULT: `None` |
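The scoping difference between `checkpointer` (state saved per thread, e.g. one conversation) and `store` (data shared across threads) can be illustrated with a toy sketch. The dict-backed classes below are illustrative stand-ins, not LangGraph's actual `Checkpointer` or `BaseStore` implementations:

```python
# Toy illustration of persistence scope. A checkpointer keys saved state
# by thread_id, so each conversation has its own history; a store holds
# data visible from every thread.

class ToyCheckpointer:
    def __init__(self) -> None:
        self._by_thread: dict[str, dict] = {}

    def save(self, thread_id: str, state: dict) -> None:
        self._by_thread[thread_id] = state

    def load(self, thread_id: str) -> dict:
        # Unknown threads start with empty state.
        return self._by_thread.get(thread_id, {"messages": []})


class ToyStore:
    def __init__(self) -> None:
        self._data: dict[str, object] = {}

    def put(self, key: str, value: object) -> None:
        self._data[key] = value

    def get(self, key: str) -> object:
        return self._data.get(key)


checkpointer = ToyCheckpointer()
store = ToyStore()

# Thread "a" accumulates messages; thread "b" is isolated from it.
checkpointer.save("a", {"messages": ["hi"]})
# Store entries are visible regardless of which thread reads them.
store.put("user:alice:timezone", "US/Pacific")

print(checkpointer.load("a")["messages"])  # ['hi']
print(checkpointer.load("b")["messages"])  # [] -- per-thread isolation
print(store.get("user:alice:timezone"))    # shared across threads
```

In the real API, the thread is selected at invocation time via the run configuration rather than passed to the saver directly.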
| RETURNS | DESCRIPTION |
|---|---|
| `CompiledStateGraph[AgentState[ResponseT], ContextT, _InputAgentState, _OutputAgentState[ResponseT]]` | A compiled state graph that can be invoked or streamed like any other LangGraph graph. |
The agent node calls the language model with the messages list (after applying
the system prompt). If the resulting `AIMessage` contains `tool_calls`, the graph
will then call the tools. The tools node executes the tools and adds the responses
to the messages list as `ToolMessage` objects. The agent node then calls the
language model again. The process repeats until no more `tool_calls` are
present in the response. The agent then returns the full list of messages.
Example

```python
from langchain.agents import create_agent


def check_weather(location: str) -> str:
    """Return the weather forecast for the specified location."""
    return f"It's always sunny in {location}"


graph = create_agent(
    model="anthropic:claude-sonnet-4-5",
    tools=[check_weather],
    system_prompt="You are a helpful assistant",
)
inputs = {"messages": [{"role": "user", "content": "what is the weather in sf"}]}
for chunk in graph.stream(inputs, stream_mode="updates"):
    print(chunk)
```
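The tool-calling loop described above can be sketched in plain Python, with no LangChain dependency. The stub model and message shapes below are simplified stand-ins for illustration only, not LangChain's actual message types:

```python
# Toy sketch of the agent loop: call the model, execute any requested
# tools, append the results, and repeat until the model's response
# contains no tool calls.

def stub_model(messages: list[dict]) -> dict:
    # Stand-in for an LLM: first request a tool call, then answer.
    if not any(m["role"] == "tool" for m in messages):
        return {
            "role": "ai",
            "content": "",
            "tool_calls": [{"name": "check_weather", "args": {"location": "sf"}}],
        }
    return {"role": "ai", "content": "It's always sunny in sf", "tool_calls": []}


def check_weather(location: str) -> str:
    return f"It's always sunny in {location}"


TOOLS = {"check_weather": check_weather}


def run_agent(user_input: str) -> list[dict]:
    messages = [{"role": "user", "content": user_input}]
    while True:
        ai = stub_model(messages)          # agent node
        messages.append(ai)
        if not ai["tool_calls"]:           # stopping condition
            return messages
        for call in ai["tool_calls"]:      # tools node
            result = TOOLS[call["name"]](**call["args"])
            messages.append({"role": "tool", "content": result})


history = run_agent("what is the weather in sf")
print(history[-1]["content"])  # It's always sunny in sf
```

The real graph adds state management, middleware hooks, and persistence around this same agent-node/tools-node cycle.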