Building an MCP Server
Author: Venkata Sudhakar
The Model Context Protocol (MCP) is an open standard for connecting AI models to external data sources and tools. Instead of writing custom tool-integration code for every AI application, you build one MCP server that exposes your data, and any MCP-compatible AI client can connect to it immediately. An MCP server for your inventory database means Claude Desktop, Claude Code, and your own LangChain agents can all query your stock levels through the same server, with no duplicated integration code. MCP is to AI tools what REST APIs were to web services - a universal interface standard.

An MCP server exposes three types of capabilities: tools (functions the AI can call, like check_stock or place_order), resources (data the AI can read, like a product catalogue), and prompts (pre-built prompt templates).

You build an MCP server using the official mcp Python SDK. The server runs as a local process and communicates over stdio or HTTP. Tools are defined as Python functions registered with the server.call_tool decorator, and their schemas are advertised through the server.list_tools decorator. Claude and other MCP clients discover the available tools automatically when they connect.

The example below builds a complete MCP server for a retail company's inventory system - exposing three tools that any AI agent can call: check product stock levels, search for products by category, and get low-stock alerts.
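Before any MCP wiring, the three tool behaviours can be sketched as plain Python functions over an in-memory catalogue. The SKUs, prices, and stock figures below are illustrative placeholders (chosen to match the sample outputs shown later); a real server would query the inventory database instead.

```python
# Illustrative in-memory inventory; a production server would query a database.
INVENTORY = {
    "SKU-001": {"name": "Dell XPS 15 Laptop", "category": "Laptops",
                "stock_units": 8, "price_inr": 124999, "reorder_point": 5},
    "SKU-003": {"name": "Samsung 65-inch QLED TV", "category": "Televisions",
                "stock_units": 4, "price_inr": 164999, "reorder_point": 5},
    "SKU-005": {"name": "iPad Air M2", "category": "Tablets",
                "stock_units": 3, "price_inr": 59999, "reorder_point": 5},
}

def check_stock(sku: str) -> dict:
    """Return stock level and price (INR, integer) for one SKU."""
    item = INVENTORY.get(sku)
    if item is None:
        return {"sku": sku, "status": "NOT_FOUND"}
    status = "LOW" if item["stock_units"] < item["reorder_point"] else "OK"
    return {"sku": sku, "name": item["name"], "category": item["category"],
            "stock_units": item["stock_units"], "price_inr": item["price_inr"],
            "status": status}

def search_products(query: str) -> list[dict]:
    """Case-insensitive match on category or product name."""
    q = query.lower()
    return [{"sku": sku, **item} for sku, item in INVENTORY.items()
            if q in item["category"].lower() or q in item["name"].lower()]

def get_low_stock_alerts() -> dict:
    """All SKUs whose stock has fallen below the reorder point."""
    alerts = [{"sku": sku, "name": item["name"],
               "current_stock": item["stock_units"],
               "reorder_point": item["reorder_point"], "urgency": "LOW"}
              for sku, item in INVENTORY.items()
              if item["stock_units"] < item["reorder_point"]]
    return {"alerts": alerts, "total_low_stock_skus": len(alerts)}
```

Keeping the business logic in plain functions like this also makes it testable without an MCP client attached.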
Implementing the tool handlers and starting the server:
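A minimal sketch of the MCP wiring, using the official mcp SDK's low-level Server API over stdio. The check_stock, search_products, and get_low_stock_alerts handlers are assumed to be plain Python functions (implementing the three tools described above) that return JSON-serialisable data; the schemas shown are illustrative.

```python
# inventory_mcp_server.py - illustrative sketch (official mcp SDK, low-level API).
# Assumes check_stock, search_products and get_low_stock_alerts are defined
# as plain Python functions returning JSON-serialisable data.
import asyncio
import json

import mcp.types as types
from mcp.server import Server
from mcp.server.stdio import stdio_server

server = Server("inventory")

@server.list_tools()
async def list_tools() -> list[types.Tool]:
    """Advertise tool schemas so MCP clients can discover them on connect."""
    return [
        types.Tool(
            name="check_stock",
            description="Check current stock level and price (INR, integer) for a specific SKU",
            inputSchema={"type": "object",
                         "properties": {"sku": {"type": "string"}},
                         "required": ["sku"]}),
        types.Tool(
            name="search_products",
            description="Search for products by category or name keyword",
            inputSchema={"type": "object",
                         "properties": {"query": {"type": "string"}},
                         "required": ["query"]}),
        types.Tool(
            name="get_low_stock_alerts",
            description="Get all products currently below their reorder point",
            inputSchema={"type": "object", "properties": {}}),
    ]

@server.call_tool()
async def call_tool(name: str, arguments: dict) -> list[types.TextContent]:
    """Dispatch a tool call by name and return the result as JSON text."""
    handlers = {"check_stock": lambda: check_stock(arguments["sku"]),
                "search_products": lambda: search_products(arguments["query"]),
                "get_low_stock_alerts": get_low_stock_alerts}
    if name not in handlers:
        raise ValueError(f"Unknown tool: {name}")
    return [types.TextContent(type="text", text=json.dumps(handlers[name]()))]

async def main() -> None:
    # stdio transport: the client launches this script and talks over stdin/stdout.
    async with stdio_server() as (read_stream, write_stream):
        await server.run(read_stream, write_stream,
                         server.create_initialization_options())

if __name__ == "__main__":
    asyncio.run(main())
```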
Configuring an MCP client (Claude Desktop) to connect to the server:
# claude_desktop_config.json - add this to connect Claude Desktop to your MCP server:
{
  "mcpServers": {
    "inventory": {
      "command": "python",
      "args": ["/path/to/inventory_mcp_server.py"]
    }
  }
}
# Once connected, Claude Desktop can call your tools naturally:
# User: "Which products are low on stock right now?"
# Claude calls: get_low_stock_alerts()
# Result: {
# "alerts": [
# {"sku": "SKU-003", "name": "Samsung 65-inch QLED TV",
# "current_stock": 4, "reorder_point": 5, "urgency": "LOW"},
# {"sku": "SKU-005", "name": "iPad Air M2",
# "current_stock": 3, "reorder_point": 5, "urgency": "LOW"}
# ],
# "total_low_stock_skus": 2
# }
# Claude: "Two products need reordering: Samsung QLED TV (4 units, below
# reorder point of 5) and iPad Air M2 (3 units, below reorder point of 5).
# I recommend raising purchase orders for both today."
# Same server works with Claude Code, LangChain MCP client, and any other
# MCP-compatible AI tool - one server, universal access
The MCP Inspector lets you test all tools interactively before connecting to Claude:
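The Inspector is a Node tool run via npx; assuming the server script lives at the same path used in the Claude Desktop config, it can be launched with:

```shell
# Launch the MCP Inspector against the server (requires Node.js;
# npx downloads the inspector package on first run)
npx @modelcontextprotocol/inspector python /path/to/inventory_mcp_server.py
```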
MCP Inspector running at http://localhost:5173
Available tools:
check_stock - Check current stock level and price for a specific SKU
search_products - Search for products by category or name keyword
get_low_stock_alerts - Get all products currently below reorder point
Test: check_stock({"sku": "SKU-001"})
Result:
{
  "sku": "SKU-001",
  "name": "Dell XPS 15 Laptop",
  "category": "Laptops",
  "stock_units": 8,
  "price_inr": 124999,
  "status": "OK"
}
MCP server design principles:
Keep each tool focused on one operation - a tool named "manage_inventory" that does everything is harder for the AI to use correctly than three focused tools.
Write tool descriptions as if explaining to a smart human colleague who has never seen your system - the AI reads these descriptions to decide which tool to call.
Include the units and format of each parameter in the description (e.g. "price in INR as integer", not just "price").
For production MCP servers, add authentication by validating a token in the tool handler before executing database queries, and log every tool call with its timestamp and arguments for audit purposes.
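One way to apply the last point is a small decorator that rejects calls lacking a valid token and logs every invocation before the handler runs. The token scheme, environment variable, and names below are illustrative, not part of the MCP specification:

```python
import functools
import logging
import os
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("inventory-mcp")

# Illustrative shared secret; real deployments would use a proper secret store.
EXPECTED_TOKEN = os.environ.get("INVENTORY_MCP_TOKEN", "dev-token")

def audited(func):
    """Reject calls without a valid token, then log the call for audit."""
    @functools.wraps(func)
    def wrapper(*args, token: str = "", **kwargs):
        if token != EXPECTED_TOKEN:
            raise PermissionError(f"Invalid token for tool {func.__name__}")
        logger.info("tool=%s time=%s args=%r kwargs=%r",
                    func.__name__,
                    datetime.now(timezone.utc).isoformat(),
                    args, kwargs)
        return func(*args, **kwargs)
    return wrapper

@audited
def check_stock(sku: str) -> dict:
    # Placeholder body; the real handler would query the inventory database.
    return {"sku": sku, "status": "OK"}
```

With this in place, every tool handler is wrapped the same way, so authentication and audit logging stay in one spot instead of being repeated in each tool.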