
LangGraph Streaming Agent Outputs

Author: Venkata Sudhakar

LangGraph's stream() method lets you observe a graph's execution in real time instead of waiting for the final result. Each node emits its state update as it completes, so your application can display progress, intermediate results, or token-by-token LLM output as the agent runs. For ShopMax India, streaming means a customer sees each step of their order diagnosis as it happens rather than staring at a loading spinner.

Call app.stream(input, config) instead of app.invoke(). It returns an iterator of events. Each event is a dictionary mapping node names to their output state. Use stream_mode="values" to receive the full state after each node, or stream_mode="updates" to receive only what changed. For LLM token streaming, use stream_mode="messages" to get individual tokens as they arrive from the model.

The following example streams a ShopMax India support agent, printing each node's output as it runs and giving the user live visibility into the workflow.


Running it produces the following output:

Streaming agent execution:
[classify] => {'intent': 'delivery'}
[respond] => {'response': 'Your ShopMax India order is on its way. Expected delivery in 1-2 business days.'}

For LLM-powered nodes, combine stream_mode="messages" with an LLM that supports streaming to receive tokens as they are generated. This is especially valuable in chat interfaces where ShopMax India customers expect to see the assistant typing in real time. The astream() async variant works the same way in async frameworks like FastAPI.


 
  


  