
ReAct Prompting Pattern

Author: Venkata Sudhakar

ReAct (Reasoning + Acting) is a prompting pattern where the LLM alternates between Thought (reasoning what to do next), Action (calling a tool), and Observation (reading the result). This loop continues until the model has enough information for a final Answer. ReAct dramatically improves accuracy on multi-step questions because the model externalises its reasoning and grounds each step in real data instead of hallucinating an answer upfront.

Without ReAct, an LLM asked "Are we ready to cut over - are row counts matching and is CDC caught up?" might guess. With ReAct, it reasons: I need to check the source row count, then the target row count, then the CDC lag - each step is explicit and uses a real tool result. ReAct is the foundation of most modern AI agent frameworks: LangChain agents, LangGraph, and CrewAI all implement variations of this Thought/Action/Observation loop under the hood.

The example below implements the ReAct loop manually with the OpenAI function calling API to answer a multi-step migration readiness question.
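A minimal sketch of such a loop is shown here. The tool names (count_rows, check_cdc_lag) and their results are taken from the output trace below; the tool bodies are mocked stand-ins for real database and CDC checks, and the model name gpt-4o-mini is an illustrative assumption.

```python
import json

# --- Mock tools (stand-ins for real database / Kafka CDC checks) ---
def count_rows(table: str, database: str) -> int:
    # Hard-coded counts matching the example trace; a real tool would
    # run SELECT COUNT(*) against the source or target database.
    counts = {("customers", "source"): 125000, ("customers", "target"): 124998}
    return counts.get((table, database), 0)

def check_cdc_lag(consumer_group: str) -> dict:
    # A real tool would query the CDC consumer group's offsets.
    return {"lag_seconds": 2, "pending_messages": 150}

# Tool schemas the model sees, in OpenAI function-calling format
TOOLS = [
    {"type": "function", "function": {
        "name": "count_rows",
        "description": "Count rows in a table in the source or target database",
        "parameters": {"type": "object", "properties": {
            "table": {"type": "string"},
            "database": {"type": "string", "enum": ["source", "target"]}},
            "required": ["table", "database"]}}},
    {"type": "function", "function": {
        "name": "check_cdc_lag",
        "description": "Return CDC replication lag for a consumer group",
        "parameters": {"type": "object", "properties": {
            "consumer_group": {"type": "string"}},
            "required": ["consumer_group"]}}},
]
DISPATCH = {"count_rows": count_rows, "check_cdc_lag": check_cdc_lag}

def react_loop(question: str, max_steps: int = 6) -> str:
    from openai import OpenAI           # needs OPENAI_API_KEY set
    client = OpenAI()
    messages = [{"role": "user", "content": question}]
    for _ in range(max_steps):
        resp = client.chat.completions.create(
            model="gpt-4o-mini", messages=messages, tools=TOOLS)
        msg = resp.choices[0].message
        if not msg.tool_calls:          # no more Actions -> final Answer
            return msg.content
        messages.append(msg)            # record the Thought/Action turn
        for call in msg.tool_calls:     # execute each Action
            args = json.loads(call.function.arguments)
            result = DISPATCH[call.function.name](**args)
            print(f"Action: {call.function.name}({call.function.arguments})")
            print(f"Observation: {result}\n")
            messages.append({            # feed the Observation back in
                "role": "tool", "tool_call_id": call.id,
                "content": json.dumps(result)})
    return "Step limit reached without a final answer."

if __name__ == "__main__":
    print("Answer:", react_loop(
        "Are we ready to cut over - are row counts matching "
        "and is CDC caught up?"))
```

Each pass through the loop is one Thought/Action/Observation cycle: the model either requests a tool call (Action), gets the result appended as a tool message (Observation), or stops calling tools and emits the final Answer.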


It produces the following output, showing the full Thought/Action/Observation loop:

Action: count_rows({"table": "customers", "database": "source"})
Observation: 125000

Action: count_rows({"table": "customers", "database": "target"})
Observation: 124998

Action: check_cdc_lag({"consumer_group": "migration-cdc"})
Observation: {"lag_seconds": 2, "pending_messages": 150}

Answer: The customers table is NOT fully migrated - source has 125,000 rows
but target only has 124,998 (2 rows missing). CDC lag is 2 seconds with
150 pending messages. Wait for CDC to catch up before cutover.

# 3 tool calls -> grounded answer, not a hallucination

ReAct gives you traceability (every reasoning step is visible), correctability (if a tool errors, the model can adapt), and accuracy (answers grounded in real data). In production you rarely implement this loop manually - LangChain agents and LangGraph do it for you. But understanding the Thought/Action/Observation cycle helps you design better agent prompts and debug why an agent took a wrong turn.


 
  


  