LangChain Callbacks and Tracing with LangSmith
Author: Venkata Sudhakar
LangChain callbacks intercept events at every stage of a chain or agent run - LLM start, LLM end, tool call, retrieval, error - allowing you to log, monitor, and debug your pipelines. LangSmith is the official observability platform built by the LangChain team; it automatically captures full traces once you set two environment variables, giving you a visual timeline of every step in your pipeline.

Without callbacks and tracing, debugging a multi-step LangChain pipeline means scattering print statements and hoping the right data is visible. With LangSmith, every run is captured with its full input, output, latency, token count, and errors - organised in a searchable dashboard. You can compare runs, annotate examples, and build evaluation datasets directly from production traces.

The example below uses LangChain callbacks to build a custom logging handler that tracks token usage and latency for every ShopMax India LLM call, and shows how to enable LangSmith tracing with environment variables.
Running the example produces output like the following:
[LLM START] Prompt length: 54 chars
[LLM END] Latency: 387.4ms | Output: 142 chars
Tokens - Input: 14, Output: 38
Response: ShopMax India accepts returns on all electronics
within 10 days of purchase in original packaging. Refunds
are credited within 5-7 business days to the original
payment method.
Custom callbacks give you complete control over what is logged and where. For LangSmith, set the tracing environment variables and every run appears in your LangSmith dashboard automatically, with no other code changes. In production, use callbacks to track token costs per customer session, alert on high-latency LLM calls, and build audit logs for ShopMax AI interactions that require compliance tracking.
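The environment variables can be set in code before any chain is created; a minimal sketch (the API key placeholder and project name are assumptions you should replace with your own):

```python
import os

# Enable LangSmith tracing for every subsequent LangChain run.
os.environ["LANGCHAIN_TRACING_V2"] = "true"
# Your LangSmith API key (placeholder - substitute a real key).
os.environ["LANGCHAIN_API_KEY"] = "<your-langsmith-api-key>"
# Optional: group traces under a named project (name is an assumption).
os.environ["LANGCHAIN_PROJECT"] = "shopmax-india"
```

Setting these in your shell profile or deployment config works equally well; the key point is that they must be in the environment before the traced code runs.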