MCP Server Observability - Tracing and Metrics


Author: Venkata Sudhakar

When an MCP server runs in production with multiple ADK agents calling it, you need visibility into which tools are called most, how long each call takes, and whether errors are increasing. Without structured logging and metrics, diagnosing slow tool calls or unexpected failures requires guesswork. Adding observability to an MCP server gives you a clear operational picture using Cloud Logging and Cloud Monitoring.

In this tutorial, you will instrument an MCP server with structured JSON logs and custom metrics. Every tool call logs the tool name, arguments, duration, and success status as a JSON record. The server also writes call counts and latency metrics to Cloud Monitoring using the google-cloud-monitoring library.

The server below wraps each tool call in a timing block and emits a structured log entry to stdout. When deployed on Cloud Run or GKE, Cloud Logging picks up stdout automatically. The metrics client writes custom counters per tool name to Cloud Monitoring.
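The timing-and-logging wrapper can be sketched with the standard library alone. This is a minimal illustration, not the full server: the `traced` decorator, `TOOL_STATS` store, and `check_inventory` tool are hypothetical names standing in for whatever registration mechanism your MCP framework provides (e.g. a FastMCP `@mcp.tool()` handler would be wrapped the same way).

```python
import functools
import json
import sys
import time

# In-memory per-tool stats: tool name -> (call_count, error_count, total_seconds).
# A stand-in for the metrics client described above.
TOOL_STATS = {}

def traced(tool_name):
    """Wrap a tool handler: time the call, emit one JSON log line to
    stdout, and update in-memory counters. In a real server this sits
    between the MCP framework and each registered tool."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(**kwargs):
            start = time.perf_counter()
            ok = True
            try:
                return fn(**kwargs)
            except Exception:
                ok = False
                raise
            finally:
                duration = time.perf_counter() - start
                calls, errors, total = TOOL_STATS.get(tool_name, (0, 0, 0.0))
                TOOL_STATS[tool_name] = (
                    calls + 1,
                    errors + (0 if ok else 1),
                    total + duration,
                )
                # One JSON object per line; Cloud Logging parses this as a
                # structured payload when read from stdout on Cloud Run/GKE.
                print(json.dumps({
                    "tool": tool_name,
                    "args": kwargs,
                    "duration_ms": round(duration * 1000, 2),
                    "success": ok,
                }), file=sys.stdout)
        return wrapper
    return decorator

@traced("check_inventory")
def check_inventory(sku: str) -> int:
    # Hypothetical tool body; a real server would query a datastore.
    return {"SKU-1": 42}.get(sku, 0)
```

Because the log record is written in the `finally` block, both successful calls and raised exceptions produce a structured entry with the measured duration.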

The test below calls the inventory tool several times and then calls the built-in metrics summary tool to see aggregated call counts and average durations without leaving the MCP session.
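A metrics-summary tool can aggregate those in-memory counters on demand. The sketch below assumes the `(calls, errors, total_seconds)` tuple layout from the wrapper above; the seeded `TOOL_STATS` values and the `metrics_summary` name are illustrative.

```python
# Seeded as if the inventory tool had been called five times in a test
# session, with one failure and 45 ms of cumulative handler time.
TOOL_STATS = {
    "check_inventory": (5, 1, 0.045),
}

def metrics_summary() -> dict:
    """Aggregate per-tool call counts and average durations so a client
    can inspect them without leaving the MCP session."""
    return {
        tool: {
            "calls": calls,
            "errors": errors,
            "avg_ms": round(total / calls * 1000, 2) if calls else 0.0,
        }
        for tool, (calls, errors, total) in TOOL_STATS.items()
    }
```

Exposing this as an ordinary MCP tool means any connected agent (or a human driving an inspector) can pull live aggregates over the same transport it already uses.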

Structured JSON logs integrate directly with Cloud Logging when the server runs on Cloud Run or GKE - no additional configuration is needed. The custom time series written through the google-cloud-monitoring client enable dashboards and alerting policies based on tool call error rates or p99 latency.
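The alerting inputs mentioned above - error rate and p99 latency - can be computed server-side from recorded call outcomes before any time series is written. A minimal stdlib sketch using the nearest-rank percentile method (both helper names are illustrative):

```python
import math

def percentile(durations_ms, pct):
    """Nearest-rank percentile over a list of latencies in milliseconds:
    the value at rank ceil(pct/100 * n) in the sorted sample."""
    ordered = sorted(durations_ms)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

def error_rate(outcomes):
    """Fraction of failed calls; outcomes is a list of success booleans."""
    return 1 - sum(outcomes) / len(outcomes)
```

Feeding `percentile(durations, 99)` and `error_rate(successes)` into gauge time series gives Cloud Monitoring exactly the signals an alerting policy needs, without shipping every raw sample.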
