pydantic_ai.Agent

Bases: Generic[AgentDeps, ResultData]

Class for defining "agents" - a way to have a specific type of "conversation" with an LLM.

__init__

__init__(
    model: Model | KnownModelName | None = None,
    result_type: type[ResultData] = str,
    *,
    system_prompt: str | Sequence[str] = (),
    deps_type: type[AgentDeps] = NoneType,
    retries: int = 1,
    result_tool_name: str = "final_result",
    result_tool_description: str | None = None,
    result_retries: int | None = None
)

Create an agent.

Parameters:

Name Type Description Default
model Model | KnownModelName | None

The default model to use for this agent; if not provided here, you must provide a model when calling the agent.

None
result_type type[ResultData]

The type of the result data, used to validate the data returned by the model; defaults to str.

str
system_prompt str | Sequence[str]

Static system prompts to use for this agent. You can also register system prompts via a function with the system_prompt decorator.

()
deps_type type[AgentDeps]

The type used for dependency injection; this parameter exists solely to allow you to fully parameterize the agent, and therefore get the best out of static type checking. If you're not using deps but want type checking to pass, you can set deps=None to satisfy Pyright, or add a type hint: Agent[None, <return type>].

NoneType
retries int

The default number of retries to allow before raising an error.

1
result_tool_name str

The name of the tool to use for the final result.

'final_result'
result_tool_description str | None

The description of the final result tool.

None
result_retries int | None

The maximum number of retries to allow for result validation, defaults to retries.

None
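As a sketch of the constructor above (assuming the pydantic-ai package is installed; an API key is only needed once the agent is actually run), an agent can be created with a static system prompt and a structured result type:

```python
from pydantic import BaseModel

from pydantic_ai import Agent


class CityInfo(BaseModel):
    city: str
    country: str


# A fully parameterized agent: results are validated against CityInfo,
# and deps_type lets static type checkers infer Agent[int, CityInfo].
agent = Agent(
    'openai:gpt-4o',
    result_type=CityInfo,
    deps_type=int,  # illustrative; omit if you have no dependencies
    system_prompt='Extract the city and country mentioned by the user.',
    retries=2,
)
```

Since model is optional, Agent(result_type=CityInfo) is also valid; the model must then be supplied when the agent is called.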

run async

run(
    user_prompt: str,
    *,
    message_history: list[Message] | None = None,
    model: Model | KnownModelName | None = None,
    deps: AgentDeps = None
) -> RunResult[ResultData]

Run the agent with a user prompt in async mode.

Parameters:

Name Type Description Default
user_prompt str

User input to start/continue the conversation.

required
message_history list[Message] | None

History of the conversation so far.

None
model Model | KnownModelName | None

Optional model to use for this run; required if model was not set when creating the agent.

None
deps AgentDeps

Optional dependencies to use for this run.

None

Returns:

Type Description
RunResult[ResultData]

The result of the run.
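A minimal async usage sketch (the result.data attribute and all_messages() method are assumed from this version of the RunResult API, and an OpenAI API key is assumed to be configured):

```python
import asyncio

from pydantic_ai import Agent

agent = Agent('openai:gpt-4o')


async def main():
    result = await agent.run('What is the capital of France?')
    print(result.data)

    # Continue the same conversation by passing the history back in.
    followup = await agent.run(
        'What is its population?',
        message_history=result.all_messages(),
    )
    print(followup.data)


asyncio.run(main())
```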

run_sync

run_sync(
    user_prompt: str,
    *,
    message_history: list[Message] | None = None,
    model: Model | KnownModelName | None = None,
    deps: AgentDeps = None
) -> RunResult[ResultData]

Run the agent with a user prompt synchronously.

This is a convenience method that wraps self.run with asyncio.run().

Parameters:

Name Type Description Default
user_prompt str

User input to start/continue the conversation.

required
message_history list[Message] | None

History of the conversation so far.

None
model Model | KnownModelName | None

Optional model to use for this run; required if model was not set when creating the agent.

None
deps AgentDeps

Optional dependencies to use for this run.

None

Returns:

Type Description
RunResult[ResultData]

The result of the run.
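run_sync is the same call without managing an event loop yourself; for example (a sketch assuming a configured OpenAI API key):

```python
from pydantic_ai import Agent

agent = Agent('openai:gpt-4o', system_prompt='Reply with one short sentence.')

result = agent.run_sync('What is the capital of France?')
print(result.data)  # result.data is assumed from this version of RunResult
```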

run_stream async

run_stream(
    user_prompt: str,
    *,
    message_history: list[Message] | None = None,
    model: Model | KnownModelName | None = None,
    deps: AgentDeps = None
) -> AsyncIterator[
    StreamedRunResult[AgentDeps, ResultData]
]

Run the agent with a user prompt in async mode, returning a streamed response.

Parameters:

Name Type Description Default
user_prompt str

User input to start/continue the conversation.

required
message_history list[Message] | None

History of the conversation so far.

None
model Model | KnownModelName | None

Optional model to use for this run; required if model was not set when creating the agent.

None
deps AgentDeps

Optional dependencies to use for this run.

None

Returns:

Type Description
AsyncIterator[StreamedRunResult[AgentDeps, ResultData]]

The streamed result of the run.
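Because run_stream yields a StreamedRunResult, it is used as an async context manager; a sketch (the stream() method on StreamedRunResult is assumed from this version of the API):

```python
import asyncio

from pydantic_ai import Agent

agent = Agent('openai:gpt-4o')


async def main():
    async with agent.run_stream('Tell me a short story.') as result:
        # Consume the response as it is produced, rather than
        # waiting for the complete result.
        async for chunk in result.stream():
            print(chunk)


asyncio.run(main())
```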

model instance-attribute

model: Model | None = (
    infer_model(model) if model is not None else None
)

The default model configured for this agent.

override_deps

override_deps(overriding_deps: AgentDeps) -> Iterator[None]

Context manager to temporarily override agent dependencies; this is particularly useful when testing.

Parameters:

Name Type Description Default
overriding_deps AgentDeps

The dependencies to use instead of the dependencies passed to the agent run.

required
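A testing sketch: the FakeConn class is illustrative, not part of the library.

```python
from pydantic_ai import Agent


class FakeConn:
    """Stand-in database connection used only in tests (illustrative)."""

    def fetch_balance(self) -> int:
        return 42


agent = Agent('openai:gpt-4o', deps_type=FakeConn)

# Inside a test, force every run to see the fake dependency,
# regardless of what `deps` the application code passes:
with agent.override_deps(FakeConn()):
    result = agent.run_sync('What is my balance?')
```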

last_run_messages class-attribute instance-attribute

last_run_messages: list[Message] | None = None

The messages from the last run; useful when a run raised an exception.

Note: these are not used by the agent (e.g. in future runs); they are just stored for developers' convenience.

system_prompt

Decorator to register a system prompt function that takes CallContext as its only argument.
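For example (a sketch; the deps here are just a user-name string, chosen for illustration):

```python
from pydantic_ai import Agent, CallContext

agent = Agent('openai:gpt-4o', deps_type=str)


@agent.system_prompt
def add_user_name(ctx: CallContext[str]) -> str:
    # ctx.deps is whatever was passed as `deps` when the agent was run.
    return f"The user's name is {ctx.deps!r}."
```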

retriever_plain

retriever_plain(
    func: RetrieverPlainFunc[RetrieverParams],
) -> Retriever[AgentDeps, RetrieverParams]
retriever_plain(
    *, retries: int | None = None
) -> Callable[
    [RetrieverPlainFunc[RetrieverParams]],
    Retriever[AgentDeps, RetrieverParams],
]
retriever_plain(
    func: RetrieverPlainFunc[RetrieverParams] | None = None,
    /,
    *,
    retries: int | None = None,
) -> Any

Decorator to register a retriever function.
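A plain retriever takes no CallContext, so it suits tools that need no dependencies; a sketch covering both the bare and keyword forms of the decorator shown above:

```python
import random
from datetime import datetime, timezone

from pydantic_ai import Agent

agent = Agent('openai:gpt-4o')


@agent.retriever_plain
def roll_die() -> str:
    """Roll a six-sided die and return the result."""
    return str(random.randint(1, 6))


# The keyword form overrides the retry count for this tool only.
@agent.retriever_plain(retries=3)
def get_time() -> str:
    """Return the current UTC time in ISO format."""
    return datetime.now(timezone.utc).isoformat()
```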

retriever_context

retriever_context(
    func: RetrieverContextFunc[AgentDeps, RetrieverParams]
) -> Retriever[AgentDeps, RetrieverParams]
retriever_context(
    *, retries: int | None = None
) -> Callable[
    [RetrieverContextFunc[AgentDeps, RetrieverParams]],
    Retriever[AgentDeps, RetrieverParams],
]
retriever_context(
    func: (
        RetrieverContextFunc[AgentDeps, RetrieverParams]
        | None
    ) = None,
    /,
    *,
    retries: int | None = None,
) -> Any

Decorator to register a retriever function.
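Unlike retriever_plain, the function's first argument is a CallContext carrying the run's deps; a sketch with an illustrative deps dataclass:

```python
from dataclasses import dataclass

from pydantic_ai import Agent, CallContext


@dataclass
class Deps:
    user_name: str  # illustrative dependency


agent = Agent('openai:gpt-4o', deps_type=Deps)


@agent.retriever_context
async def get_user_name(ctx: CallContext[Deps]) -> str:
    """Return the name of the current user."""
    return ctx.deps.user_name
```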

result_validator

Decorator to register a result validator function.
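A sketch of a result validator; raising ModelRetry (exported by the library for this purpose) asks the model to try again, subject to the result_retries limit:

```python
from pydantic_ai import Agent, CallContext, ModelRetry

agent = Agent('openai:gpt-4o', result_type=str)


@agent.result_validator
def check_result(ctx: CallContext[None], result: str) -> str:
    if len(result) > 280:
        raise ModelRetry('Response too long; answer in under 280 characters.')
    return result
```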

KnownModelName module-attribute

KnownModelName = Literal[
    "openai:gpt-4o",
    "openai:gpt-4o-mini",
    "openai:gpt-4-turbo",
    "openai:gpt-4",
    "openai:gpt-3.5-turbo",
    "gemini-1.5-flash",
    "gemini-1.5-pro",
]

Known model names that can be used with the model parameter of Agent.

KnownModelName is provided as a concise way to specify a model.
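So the two spellings below should be equivalent ways to pick a model (the concrete model class and its module path are assumptions about this version of the package):

```python
from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel  # module path assumed

# A KnownModelName string is inferred into a Model instance,
# so these two agents are configured identically.
agent_a = Agent('openai:gpt-4o')
agent_b = Agent(OpenAIModel('gpt-4o'))
```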