Agent Design

How the PydanticAI agent is structured, and why the tools, not the LLM, build the UI.

The key insight: tools build UI, not the LLM

In Chat-in-Bio, the LLM's role is intent routing — understanding what the visitor wants and calling the right tools. The tools themselves contain the logic for querying data and building A2UI components.

Visitor: "Do you have any upcoming events?"


┌─────────────┐
│     LLM     │ ← Understands intent
│  (Claude,   │ ← Decides: call list_upcoming_events
│  GPT, etc)  │ ← Composes text response from tool result
└──────┬──────┘
       │ tool call
       ▼
┌─────────────────────┐
│ list_upcoming_events│ ← Queries Event table
│ (Python)            │ ← Builds A2UI cards
│                     │ ← Returns text summary to LLM
└─────────────────────┘

Why this matters

If the LLM generated UI directly (as JSON), you'd get:

  • Malformed JSON from token-by-token generation
  • Inconsistent component structures
  • Hallucinated component types
  • No way to guarantee the UI matches your data

With tools building UI:

  • Components are always valid (built by deterministic Python code)
  • Data is always fresh (queried from the database)
  • The LLM can focus on natural conversation

Agent factory

The agent is built dynamically from the database configuration:

agent = Agent(
    model="anthropic:claude-sonnet-4-20250514",  # from bot_config
    system_prompt=system_prompt,  # built from owner info + tools
    deps_type=ChatDeps,
    output_type=str,
)
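Because the model string and prompt come from the database, the construction above can be wrapped in a small per-request factory. A minimal sketch, assuming bot_config is available as a dict (the real code likely reads an ORM row); the helper name and the fallback model are assumptions:

```python
def agent_kwargs(bot_config: dict, system_prompt: str) -> dict:
    """Assemble the keyword arguments for Agent(...) from a bot_config row.
    Falls back to a default model string when the config has none."""
    return {
        "model": bot_config.get("model", "anthropic:claude-sonnet-4-20250514"),
        "system_prompt": system_prompt,
        "output_type": str,
    }
```

Keeping the assembly in one place means a config change in the database takes effect on the next request, with no redeploy.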

Dependencies (ChatDeps)

Every tool receives a RunContext[ChatDeps] with:

Field              Purpose
db_session         SQLAlchemy async session for database queries
bot_config         Bot configuration (model, tools, prompts)
owner              Owner profile data
surface_manager    A2UI component accumulator
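As a rough sketch, ChatDeps could be a plain dataclass mirroring the table above; the Any annotations are placeholders, not the actual Chat-in-Bio field types:

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class ChatDeps:
    db_session: Any       # SQLAlchemy AsyncSession used by tools for queries
    bot_config: Any       # bot configuration row: model, enabled_tools, prompts
    owner: Any            # owner profile data for system-prompt context
    surface_manager: Any  # accumulates A2UI components built by tools
```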

Dynamic tool registration

Only tools listed in bot_config.enabled_tools are registered:

# If enabled_tools = ["links", "events", "faq"]
# Then only list_links, list_upcoming_events, create_rsvp,
# and search_faq are available to the agent

Unknown tool names are silently ignored, making configuration forgiving.
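The registration step can be sketched as a lookup against a registry of tool groups. The group names below follow the comment above, but the exact grouping and registry shape are assumptions:

```python
# Hypothetical registry mapping enabled_tools entries to tool functions.
TOOL_GROUPS = {
    "links": ["list_links"],
    "events": ["list_upcoming_events", "create_rsvp"],
    "faq": ["search_faq"],
}

def resolve_tools(enabled_tools: list[str]) -> list[str]:
    """Return the tool names for every recognised group, in config order.
    Unknown group names are silently skipped, so a typo in the config
    degrades gracefully instead of crashing the agent."""
    tools = []
    for name in enabled_tools:
        tools.extend(TOOL_GROUPS.get(name, []))  # unknown name → no-op
    return tools
```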

System prompt

The system prompt is auto-generated from:

  1. Owner context — name, bio, tagline (so the agent "knows" who it represents)
  2. Custom prompt — the owner's personality instructions from bot_config.system_prompt
  3. Tool descriptions — what each enabled tool can do

This means the owner only needs to define their personality — the technical context is handled automatically.
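The three sources above can simply be concatenated in order. A sketch, where the section wording and the owner field names are assumptions:

```python
def build_system_prompt(owner: dict, custom_prompt: str, tool_descriptions: dict) -> str:
    """Combine owner context, the owner's personality prompt, and
    descriptions of the enabled tools into one system prompt."""
    sections = [
        f"You represent {owner['name']}. Tagline: {owner['tagline']}. Bio: {owner['bio']}",
        custom_prompt,  # the owner's personality instructions
        "You can use these tools:\n"
        + "\n".join(f"- {name}: {desc}" for name, desc in tool_descriptions.items()),
    ]
    return "\n\n".join(s for s in sections if s)  # skip empty sections
```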

Conversation history

Message history is loaded from the database on each request:

  1. All messages for the session are fetched, ordered by created_at
  2. User messages become ModelRequest with UserPromptPart
  3. Assistant messages become ModelResponse with TextPart
  4. The current user message is the new prompt; prior messages are message_history

This gives the LLM context for multi-turn conversations without storing PydanticAI-specific state.
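The mapping in steps 2 and 3 can be sketched with plain dicts standing in for PydanticAI's message types; in real code these would be ModelRequest with a UserPromptPart and ModelResponse with a TextPart:

```python
def to_message_history(rows: list[tuple[str, str]]) -> list[dict]:
    """Map stored (role, content) rows, already ordered by created_at,
    to request/response shapes. Dicts stand in for ModelRequest and
    ModelResponse here so the sketch stays self-contained."""
    history = []
    for role, content in rows:
        if role == "user":
            history.append({"kind": "request", "parts": [{"user_prompt": content}]})
        else:
            history.append({"kind": "response", "parts": [{"text": content}]})
    return history
```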

Tool pattern

Every tool follows the same structure:

async def my_tool(ctx: RunContext[ChatDeps], param: str) -> str:
    # 1. Query the database
    result = await ctx.deps.db_session.execute(select(Model).where(...))
    items = result.scalars().all()

    # 2. Build A2UI components
    builder = ctx.deps.surface_manager.create_builder()
    for item in items:
        builder.text(f"t-{item.id}", item.title)
    builder.column("root", children=[f"t-{item.id}" for item in items])
    ctx.deps.surface_manager.set_root(builder.surface_id, "root")

    # 3. Return text summary for the LLM
    return f"Displayed {len(items)} items"

The text summary helps the LLM compose a natural response like "Here are your 3 upcoming events!" without needing to parse the A2UI output.