Pydantic AI - Typed AI Agent Responses
Author: Venkata Sudhakar
Pydantic AI is a framework that wraps an LLM call and guarantees its output is a valid, fully typed Python object. You define a Pydantic model for what you want - say, a product review analysis with a sentiment score, a list of tags, and a recommended action - and Pydantic AI makes the LLM return exactly that, validated and type-checked. If the LLM response cannot be coerced into your model, Pydantic AI retries automatically, feeding the validation error back to the LLM. This eliminates the brittle string-parsing code that normally sits between an LLM call and your application logic.

The core concept is the Agent. You create an Agent with your chosen LLM and the result type (your Pydantic model), write a system prompt explaining the task, and call agent.run_sync(user_input). The return value is an AgentRunResult whose .data attribute is a fully instantiated, validated instance of your Pydantic model, complete with proper Python types and any custom validators you defined. Your downstream code works with a clean Python object, not a string to parse.

The example below builds a product review analysis agent for an e-commerce platform that reads customer reviews and returns structured insights ready to feed into a dashboard.
Analysing real customer reviews with fully typed output:
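A minimal sketch of such an agent. The model name "openai:gpt-4o", the system prompt, and the field names are illustrative assumptions, not the author's exact code; note that recent pydantic-ai releases use `output_type` and `result.output`, while older ones used `result_type` and `result.data`. The pydantic-ai import is kept inside the helper so the ReviewAnalysis model itself stays framework-independent.

```python
from typing import Literal

from pydantic import BaseModel, Field


class ReviewAnalysis(BaseModel):
    """Structured insights extracted from a single customer review."""
    sentiment: Literal["POSITIVE", "NEGATIVE", "NEUTRAL"]
    score: float = Field(ge=0, le=5)       # rating must stay in range
    summary: str = Field(max_length=200)   # fits the database column
    key_tags: list[str]
    needs_response: bool
    suggested_action: str


def analyse_review(review: str) -> ReviewAnalysis:
    """Run one review through the agent; needs an API key in the environment."""
    from pydantic_ai import Agent  # local import: the model above needs only pydantic

    agent = Agent(
        "openai:gpt-4o",  # assumed model name; swap in your provider
        output_type=ReviewAnalysis,
        system_prompt=(
            "You analyse e-commerce product reviews. Return the sentiment, "
            "a 0-5 score, a short summary, key tags, whether the seller "
            "must respond, and a suggested action."
        ),
    )
    return agent.run_sync(review).output  # validated ReviewAnalysis instance
```

Because `ReviewAnalysis` is plain Pydantic, pydantic-ai can hand the same schema to the LLM as a tool definition and validate the reply against it, so the helper's return type is a real contract rather than a hope.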
Running the agent over two sample reviews gives the following output:
=== Review 1 ===
Sentiment: POSITIVE (score: 4.8/5)
Summary: Customer is highly satisfied with blender performance, noise level and delivery.
Tags: ["blender", "smoothies", "delivery", "noise", "value"]
Needs response: False
Suggested action: Feature this review on the product page as it highlights key selling points.
Type check: <class 'float'> | <class 'bool'>
=== Review 2 ===
Sentiment: NEGATIVE (score: 1.5/5)
Summary: Customer received damaged packaging and has not received customer service reply.
Tags: ["damaged packaging", "customer service", "delivery", "price", "scratch"]
Needs response: True
Suggested action: Reply within 24 hours with apology and offer replacement or full refund.
Type check: <class 'float'> | <class 'bool'>
# analysis.score is a float - not a string like "4.8"
# analysis.needs_response is a bool - not the string "True"
# analysis.key_tags is a list[str] - immediately iterable
# No parsing, no type casting, no try/except around json.loads()
The business value of typed responses is enormous at scale. When you process ten thousand reviews per day, a single malformed LLM response breaking your pipeline costs real money. Pydantic AI retries on validation failure, enforces max_length on summary so it fits your database column, ensures score stays between 0 and 5, and guarantees key_tags is always a list. You write your downstream code against clean Python types and never handle string-to-object conversion manually. This is the difference between a fragile prototype and a production-grade AI feature.
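The guardrails described above come straight from Pydantic's validation layer and can be demonstrated without an LLM at all. A short sketch, using a hypothetical ReviewAnalysis-style model with the same field constraints:

```python
from pydantic import BaseModel, Field, ValidationError


class ReviewAnalysis(BaseModel):
    score: float = Field(ge=0, le=5)      # score must stay between 0 and 5
    summary: str = Field(max_length=200)  # summary fits the database column
    key_tags: list[str]                   # always a list, never a string
    needs_response: bool


# An out-of-range score fails validation. In a pydantic-ai agent, this
# ValidationError is sent back to the LLM as feedback and the call is
# retried, instead of a malformed response breaking the pipeline.
try:
    ReviewAnalysis(score=7.2, summary="ok", key_tags=[], needs_response=True)
except ValidationError as exc:
    print(exc.errors()[0]["type"])  # -> less_than_equal

# Stringly-typed LLM output is coerced into real Python types.
ok = ReviewAnalysis(
    score="4.8",            # string in, float out
    summary="Great blender",
    key_tags=["blender"],
    needs_response="True",  # string in, bool out
)
print(type(ok.score), type(ok.needs_response))
# -> <class 'float'> <class 'bool'>
```

Downstream code can therefore do arithmetic on `score` and branch on `needs_response` directly, which is exactly the prototype-to-production difference the paragraph above describes.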