
Structured Output Prompting with JSON Schema

Author: Venkata Sudhakar

Structured output prompting instructs the LLM to respond in a specific machine-readable format such as JSON, XML, or CSV. Instead of parsing free-text responses with fragile string matching, you define the exact schema you need and the model fills it. This makes LLM output directly usable in your application code without additional parsing logic.

Modern APIs like Gemini and OpenAI support a native structured output mode in which the model is constrained to produce valid JSON matching a provided schema. This eliminates the malformed JSON, missing fields, and unexpected response formats that plague prompt-based approaches at scale.

The following example uses the Gemini API's response_schema option to extract structured product information from an unstructured ShopMax India customer review.
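A minimal sketch using the google-generativeai SDK is shown below. The review text, model name, and the GEMINI_API_KEY environment variable are illustrative assumptions; the schema declares the required fields and an enum constraint on sentiment, and the SDK import is deferred so the schema itself can be inspected without the package installed.

```python
import json
import os

# JSON Schema-style declaration of the fields we want back.
# "required" guards against missing data; "enum" standardises sentiment labels.
REVIEW_SCHEMA = {
    "type": "object",
    "properties": {
        "product_name": {"type": "string"},
        "sentiment": {"type": "string", "enum": ["positive", "negative", "neutral"]},
        "rating": {"type": "integer"},
        "key_issues": {"type": "array", "items": {"type": "string"}},
        "recommend": {"type": "boolean"},
    },
    "required": ["product_name", "sentiment", "rating", "key_issues", "recommend"],
}


def extract_review(review_text: str) -> dict:
    """Ask Gemini to fill REVIEW_SCHEMA from a raw review.

    Assumes `pip install google-generativeai` and a GEMINI_API_KEY
    environment variable; the model name is an assumption.
    """
    import google.generativeai as genai  # deferred: optional dependency

    genai.configure(api_key=os.environ["GEMINI_API_KEY"])
    model = genai.GenerativeModel("gemini-1.5-flash")
    response = model.generate_content(
        f"Extract the product details from this customer review:\n{review_text}",
        generation_config={
            "response_mime_type": "application/json",
            "response_schema": REVIEW_SCHEMA,
        },
    )
    # The API constrains decoding to the schema, so this parse cannot fail
    # on malformed JSON.
    return json.loads(response.text)


# Example call (requires a live API key):
# data = extract_review("Great phone, but the charger stopped working after 3 days. 4/5.")
print(sorted(REVIEW_SCHEMA["required"]))
```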


Running this gives the following output:

{
  "product_name": "Samsung Galaxy S24",
  "sentiment": "positive",
  "rating": 4,
  "key_issues": [
    "charger stopped working after 3 days"
  ],
  "recommend": true
}

The response is valid JSON that matches the schema exactly, with no brittle string parsing required. Use structured output prompting for any ShopMax workflow that passes LLM output to downstream code: review analysis pipelines, product tagging, order intent classification, or customer data extraction. Always mark fields as required to prevent missing data, and use enum constraints to standardise categorical values such as sentiment labels.
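Even with schema-constrained decoding, a cheap defensive check before handing the result to downstream code catches integration mistakes early (a wrong schema version, a provider without native schema support). A stdlib-only sketch, where the field names mirror the example output above and `validate_review` is a hypothetical helper:

```python
import json

ALLOWED_SENTIMENTS = {"positive", "negative", "neutral"}
REQUIRED_FIELDS = {"product_name", "sentiment", "rating", "key_issues", "recommend"}


def validate_review(payload: str) -> dict:
    """Parse model output and fail fast on missing fields or bad enum values."""
    data = json.loads(payload)
    missing = REQUIRED_FIELDS - data.keys()
    if missing:
        raise ValueError(f"missing required fields: {sorted(missing)}")
    if data["sentiment"] not in ALLOWED_SENTIMENTS:
        raise ValueError(f"unexpected sentiment: {data['sentiment']!r}")
    if not isinstance(data["rating"], int):
        raise ValueError("rating must be an integer")
    return data


raw = ('{"product_name": "Samsung Galaxy S24", "sentiment": "positive", '
       '"rating": 4, "key_issues": ["charger stopped working after 3 days"], '
       '"recommend": true}')
review = validate_review(raw)
print(review["sentiment"], review["rating"])
```

For larger schemas, a full validator such as the jsonschema package does the same job declaratively; the hand-rolled check above just keeps the dependency footprint at zero.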
