Key Findings
  • MCP (Model Context Protocol) is the "USB-C" of AI tool integration — a single open protocol that unifies all tool connections, simplifying the N×M problem of connecting N AI applications with M tools down to N+M
  • Three-layer architecture design: Host (AI application, e.g. Claude Desktop) → Client (protocol layer, managing connections and security) → Server (tool and data provider), with clear separation of responsibilities
  • Already adopted by mainstream development tools including Claude Desktop, Cursor, Windsurf, Zed, and Sourcegraph Cody, with the community having built over a thousand open-source MCP Servers
  • This article includes two Google Colab hands-on labs: building a weather query MCP Server from scratch, and building a multi-tool MCP Server with Client SDK testing of the complete call flow

1. Why Do We Need MCP? The Fragmentation Problem of AI Tool Integration

The capability boundary of large language models (LLMs) depends on how many external tools and data sources they can access. Whether querying databases, operating APIs, reading file systems, or interacting with third-party services, LLMs need a reliable "bridge" to connect with the external world. Yet this bridge has not been standardized[6].

1.1 The N×M Problem: Every Integration Is an Island

Imagine you are a technical lead at a company. Your team uses 3 AI applications (Claude, ChatGPT, Gemini) and 5 internal tools (CRM, ERP, knowledge base, Slack, Jira). Without a unified protocol, you need to build 3 × 5 = 15 separate integrations. Every time a new AI application or tool is added, the number of integrations grows multiplicatively. This is the so-called N×M problem.
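
The arithmetic is easy to sanity-check in a couple of lines:

```python
# Point-to-point integrations grow multiplicatively;
# a shared protocol layer grows additively.
n_apps, m_tools = 3, 5

point_to_point = n_apps * m_tools  # each app wired to each tool separately
via_protocol = n_apps + m_tools    # each side implements the protocol once

print(point_to_point)  # 15
print(via_protocol)    # 8
```

Adding a fourth AI application costs 5 more integrations in the first model, but only 1 in the second.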

Schick et al. demonstrated in the Toolformer research[3] that LLMs can autonomously learn to use tools, but tool access interfaces remain fragmented. Each platform has its own API format, authentication mechanism, and error handling logic, requiring developers to write custom glue code for every combination.

1.2 The Limitations of Function Calling

OpenAI's Function Calling, introduced in 2023[11], was an important attempt. It allows developers to define functions using JSON Schema and lets the model decide when to call them and what parameters to pass. Google's Gemini API offers a similar mechanism[12].
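
For reference, a Function Calling tool definition is a JSON Schema fragment that the developer must pass manually with every request; the tool name here mirrors the weather example used later in this article:

```python
# An OpenAI-style Function Calling tool definition. This list is re-sent
# on every API call; there is no discovery step the client can perform.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Query real-time weather for a specified city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {
                        "type": "string",
                        "description": "City name in English, e.g. Taipei",
                    }
                },
                "required": ["city"],
            },
        },
    }
]
```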

However, Function Calling has three structural limitations. First, it is vendor-specific: tool definitions are tied to one provider's API format and are not portable across models. Second, it is stateless: the full tool list must be passed manually with every request, with no discovery mechanism or persistent connection. Third, it covers only function invocation: there is no native notion of resources or prompt templates, which must be implemented separately at the application level.

1.3 MCP's Solution: A Unified Protocol Layer

In November 2024, Anthropic open-sourced the Model Context Protocol (MCP)[2], aiming to become the "USB-C" of AI tool integration — a unified, open, vendor-neutral protocol standard.

MCP's core insight is: rather than having each AI application integrate with each tool separately (N×M), establish a middle protocol layer. An AI application only needs to implement the MCP Client once to connect to all MCP Servers; a tool provider only needs to implement an MCP Server once to be accessible by all MCP-enabled AI applications. The number of integrations drops from N×M to N+M.

Traditional Model (N×M integrations):

  Claude ──┬── Slack integration
           ├── GitHub integration
           └── PostgreSQL integration
  ChatGPT ─┬── Slack integration (rewritten)
           ├── GitHub integration (rewritten)
           └── PostgreSQL integration (rewritten)

MCP Model (N+M integrations):

  Claude ──── MCP Client ──┐
  ChatGPT ── MCP Client ──┤  MCP Protocol
  Cursor ─── MCP Client ──┤
                           ├── Slack MCP Server
                           ├── GitHub MCP Server
                           └── PostgreSQL MCP Server

2. MCP Protocol Architecture Deep Dive

MCP's protocol design follows the JSON-RPC 2.0 standard[1], defining three core roles and three capability primitives. Understanding these six concepts gives you a complete picture of MCP.
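
Concretely, every MCP message is a JSON-RPC 2.0 envelope. A tools/call request and its response look roughly like this (tools/call is a real MCP method; the payload values are illustrative):

```python
import json

# JSON-RPC 2.0 request: the Client asks the Server to invoke a tool.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "get_weather", "arguments": {"city": "Taipei"}},
}

# JSON-RPC 2.0 response: the Server returns the tool result,
# correlated to the request by the same "id".
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "25°C, partly cloudy"}]},
}

wire = json.dumps(request)  # what actually crosses the transport
assert json.loads(wire)["id"] == response["id"]
```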

2.1 Three-Role Architecture: Host / Client / Server

Host is the AI application that the user directly interacts with, such as Claude Desktop, Cursor IDE, or a custom chatbot. The Host manages the user interface, handles conversation flow, and decides when to invoke external tools.

Client is the protocol-layer component inside the Host. Each Client maintains a one-to-one stateful connection with a Server. The Client handles the initialization handshake, capability negotiation, message routing, and most importantly — security gatekeeping. A Host can run multiple Clients simultaneously, each connected to a different Server.

Server is the provider of tools and data. An MCP Server can expose any number of Tools, Resources, and Prompts. A Server is a lightweight process that typically communicates with the Client via stdio or HTTP+SSE.

MCP Architecture Diagram:

┌─────────────────────────────────────────────┐
│  Host (e.g. Claude Desktop)                 │
│  ┌─────────┐  ┌─────────┐  ┌─────────┐     │
│  │ Client A│  │ Client B│  │ Client C│     │
│  └────┬────┘  └────┬────┘  └────┬────┘     │
└───────┼────────────┼────────────┼───────────┘
        │            │            │
   MCP Protocol  MCP Protocol  MCP Protocol
   (JSON-RPC)   (JSON-RPC)   (JSON-RPC)
        │            │            │
   ┌────┴────┐  ┌────┴────┐  ┌────┴────┐
   │Server A │  │Server B │  │Server C │
   │(GitHub) │  │(Slack)  │  │(DB)     │
   └─────────┘  └─────────┘  └─────────┘

2.2 Three Capability Primitives

An MCP Server exposes capabilities to the Client through three primitives:

Tools are functions the model can call. Each Tool has a name, description, and input parameters defined by JSON Schema. Tool calls are initiated by the model but must pass through the Client's security review (human-in-the-loop). Typical uses include: executing database queries, calling external APIs, and operating on file systems.

Resources are structured data the model can read. Each Resource is identified by a URI (e.g., file:///path/to/doc or db://table/row) and comes with a MIME type. Resources are application-controlled — the user or Host decides when and which resources to inject into the model's context window.

Prompts are predefined prompt instruction templates provided by the Server and triggered by the user. Prompts can include parameterized placeholders that expand into complete prompt text once the user fills them in. Typical uses: code review templates, data analysis report templates.

| Capability Primitive | Controlled By | Description | Analogy |
| --- | --- | --- | --- |
| Tools | Model-controlled | Model decides when to call and what parameters to pass | POST API endpoint |
| Resources | Application-controlled | Host/user decides when to load which resources | GET API endpoint |
| Prompts | User-controlled | User selects which prompt template to use | Predefined slash command |

2.3 Transport Layer: stdio vs SSE

The MCP specification defines two transport mechanisms:

stdio (standard input/output) is suited for local scenarios. The Host launches the Server as a subprocess and exchanges JSON-RPC messages via stdin/stdout. Advantages include zero network configuration, low latency, and high security (process isolation). Claude Desktop and Cursor primarily use this mode.

HTTP + Server-Sent Events (SSE) is suited for remote scenarios. The Client sends requests via HTTP POST, and the Server streams results back via SSE. Advantages include cross-network deployment and support for multiple Client connections. Ideal for enterprise-grade shared MCP Server deployments.
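
A minimal sketch of the stdio framing, assuming newline-delimited JSON-RPC messages as the stdio transport uses (helper names are illustrative, not SDK API):

```python
import json

def frame(message: dict) -> str:
    """Serialize one JSON-RPC message for stdio transport:
    a single line of JSON terminated by a newline."""
    return json.dumps(message) + "\n"

def unframe(stream_text: str) -> list[dict]:
    """Parse newline-delimited JSON-RPC messages read from a stream."""
    return [json.loads(line) for line in stream_text.splitlines() if line.strip()]

msg = {"jsonrpc": "2.0", "id": 7, "method": "tools/list"}
roundtrip = unframe(frame(msg))
assert roundtrip[0]["method"] == "tools/list"
```

The Host writes framed requests to the Server subprocess's stdin and reads framed responses from its stdout; SSE mode replaces this pipe with an HTTP POST channel and an event stream.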

2.4 Comparison: MCP vs Function Calling vs LangChain Tools

| Feature | MCP | Function Calling (OpenAI) | LangChain Tools |
| --- | --- | --- | --- |
| Protocol Standard | Open specification (JSON-RPC 2.0) | OpenAI proprietary API | Python framework API |
| Connection State | Stateful (persistent connection) | Stateless (per request) | Depends on implementation |
| Tool Discovery | Automatic (tools/list) | Manual tool list passing | Manual registration |
| Resource Management | Native support (Resources) | Not supported | Requires separate implementation |
| Prompt Templates | Native support (Prompts) | Not supported | PromptTemplate |
| Multi-Model Support | Vendor-neutral | OpenAI only | Multi-model (via framework abstraction) |
| Transport | stdio / HTTP+SSE | HTTPS API | In-process calls |
| Language SDKs | TypeScript, Python (official) | Multi-language (OpenAI SDK) | Primarily Python |
| Security Model | Client-side guard + human-in-the-loop | Application-level implementation | Application-level implementation |

3. Core Concepts of MCP Servers

Understanding MCP Server design patterns is key to mastering MCP in practice. Below we take a deep dive into four core concepts of Servers[1].

3.1 Tool Definition: Name, Description, and inputSchema

Each Tool consists of three elements. The name is the tool's unique identifier, following the snake_case convention. The description is natural language text that tells the model the tool's purpose and applicable scenarios — the quality of the description directly affects whether the model can correctly select the tool. The inputSchema is a parameter definition in JSON Schema format, specifying the input structure the tool accepts.

# Tool Definition Example (Python SDK)
@server.tool()
async def get_weather(city: str, units: str = "celsius") -> str:
    """Query real-time weather information for a specified city.

    Args:
        city: City name (English), e.g. "Taipei" or "Tokyo"
        units: Temperature units, "celsius" or "fahrenheit", defaults to celsius

    Returns:
        Weather summary text including temperature, humidity, and wind speed
    """
    # Actual implementation...

The Python MCP SDK automatically generates the corresponding JSON Schema from the function's type annotations and docstring, so developers don't need to manually write schema definitions.
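
The schema derived from get_weather's annotations and docstring would look roughly like this (a hand-written approximation for illustration, not actual SDK output):

```python
# Approximate JSON Schema the SDK would generate from get_weather's
# type annotations and docstring (illustrative).
get_weather_schema = {
    "type": "object",
    "properties": {
        "city": {
            "type": "string",
            "description": 'City name (English), e.g. "Taipei" or "Tokyo"',
        },
        "units": {"type": "string", "default": "celsius"},
    },
    "required": ["city"],  # units has a default value, so it is optional
}
assert get_weather_schema["required"] == ["city"]
```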

3.2 Resource Exposure: URI Patterns and MIME Types

Resources use URI patterns to identify data sources. The MCP specification supports custom URI schemes, for example:

  • file:///path/to/doc — a local file
  • db://table/row — a database record
  • weather://current/{city} — an application-defined data source

Each Resource comes with a MIME type (e.g., text/plain, application/json, image/png), letting the Client know how to handle the returned content. Resources also support templated URIs (Resource Templates), e.g., weather://current/{city}, where the Client can dynamically fill in parameters.
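
Filling a Resource Template parameter is essentially string substitution on the URI; a minimal sketch (the helper name is illustrative):

```python
def expand_template(template: str, **params: str) -> str:
    """Fill {placeholders} in a Resource Template URI,
    e.g. weather://current/{city} -> weather://current/Tokyo."""
    return template.format(**params)

uri = expand_template("weather://current/{city}", city="Tokyo")
print(uri)  # weather://current/Tokyo
```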

3.3 Prompt Templates: Reusable Prompt Instructions

Prompts allow Servers to define reusable prompt templates. The user selects a Prompt in the Host interface, fills in the parameters, and it expands into a complete prompt including Resources and instructions. This is particularly useful for standardizing workflows:

# Prompt Definition Example
@server.prompt()
async def code_review(language: str, code: str) -> str:
    """Code review template: reviews the quality and security of code in the specified language."""
    return f"""Please conduct a comprehensive review of the following {language} code, checking:
1. Code quality: naming conventions, readability, DRY principle
2. Security: injection attacks, sensitive data leakage
3. Performance: time/space complexity, potential bottlenecks
4. Best practices: error handling, logging

Code:
{code}"""

3.4 Security Model: Client as Guard

MCP's security model positions the Client as a security gateway. The Server exposes capabilities, but all actual operations must pass through the Client's review. This is reflected in several layers:

  • Connection: the Client controls the initialization handshake and capability negotiation, so a Server can only offer what the Client agrees to accept
  • Tool calls: calls are initiated by the model but require human-in-the-loop confirmation before the Client forwards them to the Server
  • Resources: the application or user, not the Server, decides when and which resources are injected into the model's context window

This design ensures that even if an MCP Server comes from an untrusted third party, the Client can still control the risk boundary[8].
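
The gatekeeping idea can be sketched as a thin wrapper around tool dispatch: the Client refuses to forward a call unless an approval callback (the human in the loop) says yes. All names here are illustrative, not SDK API:

```python
import asyncio

async def guarded_call(dispatch, approve, tool: str, arguments: dict) -> str:
    """Client-side guard: a tool call initiated by the model is forwarded
    to the Server only if the approval callback confirms it."""
    if not approve(tool, arguments):
        return f"Call to '{tool}' denied by user"
    return await dispatch(tool, arguments)

# Stubs for demonstration: the "user" approves only read-only tools.
async def fake_server(tool: str, args: dict) -> str:
    return f"{tool} executed with {args}"

def approve_reads_only(tool: str, args: dict) -> bool:
    return tool.startswith("get_")

result = asyncio.run(guarded_call(fake_server, approve_reads_only,
                                  "get_weather", {"city": "Taipei"}))
denied = asyncio.run(guarded_call(fake_server, approve_reads_only,
                                  "delete_repo", {}))
print(result)
print(denied)
```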

4. Hands-on Lab 1: Build Your First MCP Server with Python

The following implementation builds a weather query MCP Server from scratch, using the Open-Meteo free API (no API key required), defining both Tool and Resource capabilities, and running with stdio transport. All code can be executed directly in Google Colab.

# ============================================================
# Lab 1: Build a Weather Query MCP Server
# Environment: Google Colab / Python 3.10+
# ============================================================
# --- 0. Install Dependencies ---
!pip install -q mcp httpx pydantic

import asyncio
import json
import httpx
from mcp.server import Server
from mcp.server.stdio import stdio_server
from mcp.types import (
    Tool,
    TextContent,
    Resource,
    ResourceTemplate,
)

# --- 1. Create MCP Server Instance ---
server = Server("weather-server")

# --- 2. Open-Meteo API Query Functions ---
GEOCODING_URL = "https://geocoding-api.open-meteo.com/v1/search"
WEATHER_URL = "https://api.open-meteo.com/v1/forecast"

async def fetch_coordinates(city: str) -> dict:
    """Get city coordinates via the Open-Meteo Geocoding API."""
    async with httpx.AsyncClient() as client:
        resp = await client.get(
            GEOCODING_URL,
            params={"name": city, "count": 1, "language": "en"}
        )
        resp.raise_for_status()
        data = resp.json()
        if not data.get("results"):
            raise ValueError(f"City not found: {city}")
        result = data["results"][0]
        return {
            "name": result["name"],
            "latitude": result["latitude"],
            "longitude": result["longitude"],
            "country": result.get("country", ""),
        }

async def fetch_weather(latitude: float, longitude: float) -> dict:
    """Get weather data via the Open-Meteo Forecast API."""
    async with httpx.AsyncClient() as client:
        resp = await client.get(
            WEATHER_URL,
            params={
                "latitude": latitude,
                "longitude": longitude,
                "current": "temperature_2m,relative_humidity_2m,"
                           "wind_speed_10m,weather_code",
                "timezone": "auto",
            }
        )
        resp.raise_for_status()
        return resp.json()

# WMO Weather Code Descriptions
WMO_CODES = {
    0: "Clear sky", 1: "Mainly clear", 2: "Partly cloudy", 3: "Overcast",
    45: "Fog", 48: "Depositing rime fog", 51: "Light drizzle", 53: "Moderate drizzle",
    55: "Dense drizzle", 61: "Slight rain", 63: "Moderate rain", 65: "Heavy rain",
    71: "Slight snow", 73: "Moderate snow", 75: "Heavy snow", 80: "Slight showers",
    81: "Moderate showers", 82: "Violent showers", 95: "Thunderstorm",
}

# --- 3. Define Tool: get_weather ---
@server.list_tools()
async def list_tools() -> list[Tool]:
    return [
        Tool(
            name="get_weather",
            description=(
                "Query real-time weather information for a specified city, "
                "including temperature, humidity, wind speed, and conditions. "
                "Simply input the English city name. Supports any city worldwide."
            ),
            inputSchema={
                "type": "object",
                "properties": {
                    "city": {
                        "type": "string",
                        "description": "City name (English), e.g. Taipei, Tokyo, London"
                    }
                },
                "required": ["city"]
            }
        )
    ]

@server.call_tool()
async def call_tool(name: str, arguments: dict) -> list[TextContent]:
    if name == "get_weather":
        city = arguments["city"]
        coords = await fetch_coordinates(city)
        weather = await fetch_weather(coords["latitude"], coords["longitude"])
        current = weather["current"]
        code = current.get("weather_code", 0)
        description = WMO_CODES.get(code, "Unknown")

        report = (
            f"📍 {coords['name']}, {coords['country']}\n"
            f"🌡️ Temperature: {current['temperature_2m']}°C\n"
            f"💧 Relative Humidity: {current['relative_humidity_2m']}%\n"
            f"💨 Wind Speed: {current['wind_speed_10m']} km/h\n"
            f"🌤️ Conditions: {description}"
        )
        return [TextContent(type="text", text=report)]
    raise ValueError(f"Unknown tool: {name}")

# --- 4. Define Resource: weather://current/{city} ---
@server.list_resource_templates()
async def list_resource_templates() -> list[ResourceTemplate]:
    return [
        ResourceTemplate(
            uriTemplate="weather://current/{city}",
            name="Current Weather",
            description="Get the current weather data for a specified city (JSON format)",
            mimeType="application/json",
        )
    ]

@server.read_resource()
async def read_resource(uri: str) -> str:
    if uri.startswith("weather://current/"):
        city = uri.split("/")[-1]
        coords = await fetch_coordinates(city)
        weather = await fetch_weather(coords["latitude"], coords["longitude"])
        result = {
            "city": coords["name"],
            "country": coords["country"],
            "current": weather["current"],
        }
        return json.dumps(result, ensure_ascii=False, indent=2)
    raise ValueError(f"Unknown resource URI: {uri}")

# --- 5. Start Server (stdio transport) ---
async def main():
    async with stdio_server() as (read_stream, write_stream):
        await server.run(
            read_stream,
            write_stream,
            server.create_initialization_options()
        )

# Test in Colab: directly call tool functions to verify logic
async def test_locally():
    """Local test: call internal functions directly without the MCP protocol."""
    print("=== Test get_weather ===")
    coords = await fetch_coordinates("Taipei")
    print(f"Coordinates: {coords}")

    weather = await fetch_weather(coords["latitude"], coords["longitude"])
    current = weather["current"]
    code = current.get("weather_code", 0)
    print(f"Temperature: {current['temperature_2m']}°C")
    print(f"Humidity: {current['relative_humidity_2m']}%")
    print(f"Wind Speed: {current['wind_speed_10m']} km/h")
    print(f"Weather: {WMO_CODES.get(code, 'Unknown')}")

    print("\n=== Test Resource ===")
    resource_data = await read_resource("weather://current/Tokyo")
    print(resource_data)

# Run local test in Colab
await test_locally()

# To start as an MCP Server (run in terminal):
# asyncio.run(main())

Key design points of the above code:

  • Decorator-based registration: @server.list_tools(), @server.call_tool(), @server.list_resource_templates(), and @server.read_resource() bind handlers to the corresponding MCP protocol methods
  • The Tool's description and inputSchema are written for the model: a clear natural-language description is what lets the model select and call the tool correctly
  • The Resource Template weather://current/{city} exposes the same data as structured JSON, under application control rather than model control
  • Open-Meteo requires no API key, so the whole lab runs in Colab without credentials; raw WMO weather codes are mapped to human-readable descriptions
  • test_locally() verifies the business logic by calling handlers directly, while main() wires the same Server to stdio transport for real Client connections

5. Hands-on Lab 2: Build a Multi-Tool MCP Server with Client Testing

This implementation builds an MCP Server with three tools (calculator, text analysis, and translation simulator), and uses the MCP Python SDK's Client to perform complete connection testing, demonstrating the full flow of tool discovery, invocation, and result processing.

# ============================================================
# Lab 2: Multi-Tool MCP Server + Complete Client Testing
# Environment: Google Colab / Python 3.10+
# ============================================================
# --- 0. Install Dependencies ---
!pip install -q mcp pydantic

import asyncio
import json
import math
import re
from mcp.server import Server
from mcp.server.stdio import stdio_server
from mcp.types import (
    Tool,
    TextContent,
    Prompt,
    PromptMessage,
    PromptArgument,
)

# ==========================================
# Part A: Build Multi-Tool MCP Server
# ==========================================
server = Server("multi-tool-server")

# --- Tool 1: Scientific Calculator ---
def safe_eval_math(expression: str) -> float:
    """Safely evaluate a math expression (without using eval)."""
    allowed_names = {
        "abs": abs, "round": round,
        "sin": math.sin, "cos": math.cos, "tan": math.tan,
        "sqrt": math.sqrt, "log": math.log, "log10": math.log10,
        "pi": math.pi, "e": math.e,
        "pow": pow, "ceil": math.ceil, "floor": math.floor,
    }
    sanitized = expression.replace("^", "**")
    code = compile(sanitized, "", "eval")
    for name in code.co_names:
        if name not in allowed_names:
            raise ValueError(f"Disallowed function or variable: {name}")
    return eval(code, {"__builtins__": {}}, allowed_names)

# --- Tool 2: Text Analysis ---
def analyze_text(text: str) -> dict:
    """Analyze text statistics including word count, character count, sentence count, etc."""
    chars = len(text)
    chars_no_spaces = len(text.replace(" ", ""))
    words = len(text.split())
    sentences = len(re.split(r'[.!?。!?]', text))
    sentences = max(1, sentences - 1)
    paragraphs = len([p for p in text.split("\n") if p.strip()])
    cjk_chars = len(re.findall(r'[\u4e00-\u9fff]', text))
    return {
        "characters": chars,
        "characters_no_spaces": chars_no_spaces,
        "words": words,
        "sentences": sentences,
        "paragraphs": paragraphs,
        "cjk_characters": cjk_chars,
        "avg_word_length": round(chars_no_spaces / max(1, words), 1),
    }

# --- Tool 3: Translation Simulator (Demo Purpose) ---
TRANSLATION_DICT = {
    "hello": "你好", "world": "世界", "ai": "人工智慧",
    "model": "模型", "protocol": "協議", "server": "伺服器",
    "client": "客戶端", "tool": "工具", "data": "資料",
    "context": "上下文", "language": "語言", "learning": "學習",
    "machine": "機器", "network": "網路", "computer": "電腦",
}

def simple_translate(text: str, target_lang: str = "zh-TW") -> str:
    """Simple English-to-Chinese translation (demo purpose; for production use a translation API)."""
    if target_lang != "zh-TW":
        return f"[Only zh-TW translation supported, received: {target_lang}]"
    words = text.lower().split()
    translated = []
    for w in words:
        clean = re.sub(r'[^\w]', '', w)
        if clean in TRANSLATION_DICT:
            translated.append(TRANSLATION_DICT[clean])
        else:
            translated.append(w)
    return " ".join(translated)

# --- Register Three Tools ---
@server.list_tools()
async def list_tools() -> list[Tool]:
    return [
        Tool(
            name="calculator",
            description=(
                "A safe scientific calculator. Supports basic arithmetic (+, -, *, /, **) and "
                "math functions (sin, cos, tan, sqrt, log, log10, abs, "
                "round, ceil, floor). Constants: pi, e."
            ),
            inputSchema={
                "type": "object",
                "properties": {
                    "expression": {
                        "type": "string",
                        "description": "Math expression, e.g. 'sqrt(144) + pi * 2'"
                    }
                },
                "required": ["expression"]
            }
        ),
        Tool(
            name="text_analyzer",
            description=(
                "Analyze text statistics: character count, word count, sentence count, "
                "paragraph count, CJK character count, average word length. "
                "Supports mixed Chinese and English text."
            ),
            inputSchema={
                "type": "object",
                "properties": {
                    "text": {
                        "type": "string",
                        "description": "The text content to analyze"
                    }
                },
                "required": ["text"]
            }
        ),
        Tool(
            name="translate",
            description=(
                "Translate English text to Traditional Chinese. "
                "Note: This is a simplified demo version that only supports common AI terminology."
            ),
            inputSchema={
                "type": "object",
                "properties": {
                    "text": {
                        "type": "string",
                        "description": "The English text to translate"
                    },
                    "target_language": {
                        "type": "string",
                        "description": "Target language code, currently only supports zh-TW",
                        "default": "zh-TW"
                    }
                },
                "required": ["text"]
            }
        ),
    ]

@server.call_tool()
async def call_tool(name: str, arguments: dict) -> list[TextContent]:
    if name == "calculator":
        expr = arguments["expression"]
        try:
            result = safe_eval_math(expr)
            return [TextContent(
                type="text",
                text=f"Calculation result: {expr} = {result}"
            )]
        except Exception as e:
            return [TextContent(
                type="text",
                text=f"Calculation error: {e}"
            )]

    elif name == "text_analyzer":
        text = arguments["text"]
        stats = analyze_text(text)
        lines = [
            "📊 Text Analysis Results:",
            f"  Characters: {stats['characters']}",
            f"  Characters (no spaces): {stats['characters_no_spaces']}",
            f"  Words: {stats['words']}",
            f"  Sentences: {stats['sentences']}",
            f"  Paragraphs: {stats['paragraphs']}",
            f"  CJK Characters: {stats['cjk_characters']}",
            f"  Avg Word Length: {stats['avg_word_length']}",
        ]
        return [TextContent(type="text", text="\n".join(lines))]

    elif name == "translate":
        text = arguments["text"]
        target = arguments.get("target_language", "zh-TW")
        result = simple_translate(text, target)
        return [TextContent(
            type="text",
            text=f"Translation result (en → {target}):\nOriginal: {text}\nTranslated: {result}"
        )]

    raise ValueError(f"Unknown tool: {name}")

# --- Register Prompt Template ---
@server.list_prompts()
async def list_prompts() -> list[Prompt]:
    return [
        Prompt(
            name="summarize_stats",
            description="Organize text analysis results into a concise summary",
            arguments=[
                PromptArgument(
                    name="text",
                    description="The text to analyze and summarize",
                    required=True,
                )
            ]
        )
    ]

@server.get_prompt()
async def get_prompt(name: str, arguments: dict) -> list[PromptMessage]:
    if name == "summarize_stats":
        text = arguments["text"]
        stats = analyze_text(text)
        return [PromptMessage(
            role="user",
            content=TextContent(
                type="text",
                text=(
                    f"The following text has {stats['characters']} characters, "
                    f"{stats['words']} words, and {stats['sentences']} sentences. "
                    f"It contains {stats['cjk_characters']} CJK characters."
                    f"\n\nBased on the above statistics, describe the characteristics of this text in one paragraph."
                    f"\n\nOriginal text:\n{text[:500]}"
                )
            )
        )]
    raise ValueError(f"Unknown Prompt: {name}")

# ==========================================
# Part B: Local Testing (Simulating Client Behavior)
# ==========================================
async def test_multi_tool_server():
    """Test in Colab: directly call Server handler functions."""

    print("=" * 60)
    print("MCP Multi-Tool Server Local Test")
    print("=" * 60)

    # Test 1: List all tools
    print("\n--- 1. List All Available Tools ---")
    tools = await list_tools()
    for t in tools:
        print(f"  Tool: {t.name}")
        print(f"  Description: {t.description[:60]}...")
        print()

    # Test 2: Calculator
    print("--- 2. Test Calculator ---")
    result = await call_tool("calculator", {"expression": "sqrt(144) + pi * 2"})
    print(f"  {result[0].text}")

    result = await call_tool("calculator", {"expression": "log10(1000) + 2**10"})
    print(f"  {result[0].text}")

    result = await call_tool("calculator", {"expression": "sin(pi/6)"})
    print(f"  {result[0].text}")

    # Test 3: Text Analysis
    print("\n--- 3. Test Text Analyzer ---")
    sample_text = (
        "Model Context Protocol is an open-source protocol standard by Anthropic. "
        "It enables AI applications to connect with external tools and data sources through a unified interface. "
        "MCP's design is inspired by USB-C — one interface to handle all connections."
    )
    result = await call_tool("text_analyzer", {"text": sample_text})
    print(f"  {result[0].text}")

    # Test 4: Translation
    print("\n--- 4. Test Translate ---")
    result = await call_tool("translate", {
        "text": "machine learning model context protocol",
        "target_language": "zh-TW"
    })
    print(f"  {result[0].text}")

    # Test 5: Prompt Template
    print("\n--- 5. Test Prompt Template ---")
    prompts = await list_prompts()
    print(f"  Available Prompts: {[p.name for p in prompts]}")
    prompt_result = await get_prompt("summarize_stats", {"text": sample_text})
    print(f"  Expanded Prompt:\n  {prompt_result[0].content.text[:200]}...")

    print("\n" + "=" * 60)
    print("All tests passed! Server logic verification complete.")
    print("=" * 60)

# Run tests
await test_multi_tool_server()

This Lab demonstrates several important design patterns:

  • One Server, many Tools: three unrelated tools share a single list_tools()/call_tool() dispatch, the structure most community Servers follow
  • Safe evaluation: the calculator compiles the expression and whitelists its co_names before evaluating with an empty __builtins__, rather than passing raw input to eval
  • Errors as results: the calculator returns failures as TextContent instead of raising, so the model receives a readable error message it can act on
  • Prompt templates with pre-computed context: summarize_stats runs the analysis first and embeds the statistics into the expanded prompt
  • Protocol-free testing: calling the handler functions directly verifies Server logic without standing up a transport

6. The MCP Ecosystem: Integration Practices from Claude to Cursor

Since its release in late 2024, MCP has been rapidly adopted by multiple mainstream development tools[2]. Below are several key integration scenarios.

6.1 Claude Desktop Configuration

Claude Desktop was the first AI application with native MCP support. Users simply declare MCP Servers in the configuration file to use tools within conversations:

// macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
// Windows: %APPDATA%/Claude/claude_desktop_config.json
{
  "mcpServers": {
    "weather": {
      "command": "python",
      "args": ["/path/to/weather_server.py"],
      "env": {}
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_xxxxxxxxxxxx"
      }
    },
    "postgres": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://user:pass@localhost:5432/mydb"
      ]
    }
  }
}

The configuration file declares each Server's startup command (command), arguments (args), and environment variables (env) in JSON format. Claude Desktop automatically launches these Server processes via stdio at startup.
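
The same structure is easy to inspect programmatically; a small sketch that lists the declared Servers (the embedded JSON mirrors the example above):

```python
import json

# A trimmed copy of the claude_desktop_config.json structure shown above.
config_text = """
{
  "mcpServers": {
    "weather": {"command": "python", "args": ["/path/to/weather_server.py"], "env": {}},
    "github": {"command": "npx", "args": ["-y", "@modelcontextprotocol/server-github"]}
  }
}
"""

config = json.loads(config_text)
for name, spec in config["mcpServers"].items():
    # Each entry tells the Host how to spawn the Server subprocess (stdio mode).
    print(f"{name}: {spec['command']} {' '.join(spec['args'])}")
```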

6.2 Cursor MCP Integration

Cursor IDE has supported MCP since version 0.45[15], allowing developers to invoke external tools during AI-assisted programming. The configuration method is similar to Claude Desktop, placed in .cursor/mcp.json. Cursor currently only supports MCP's Tools capability; Resources and Prompts have not yet been integrated.

Windsurf, Zed editor, Sourcegraph Cody, and other tools have also announced MCP support, indicating that this protocol is becoming the de facto standard for AI development tools.

6.3 Community MCP Server Ecosystem

MCP's openness has fostered a thriving community ecosystem. Here are some popular open-source MCP Servers:

| MCP Server | Functionality | Maintainer |
| --- | --- | --- |
| server-github | GitHub API integration: issues, PRs, repo search | Anthropic (official) |
| server-postgres | PostgreSQL database queries (read-only) | Anthropic (official) |
| server-slack | Slack channel message reading and sending | Anthropic (official) |
| server-puppeteer | Headless browser automation and web screenshots | Anthropic (official) |
| server-filesystem | Local file system read/write (with sandbox restrictions) | Anthropic (official) |
| server-brave-search | Brave search engine API integration | Anthropic (official) |
| server-notion | Notion document and database operations | Community |
| server-linear | Linear project management tool integration | Community |

6.4 Architecture Considerations for Enterprise MCP Servers

For enterprise scenarios, building custom MCP Servers requires consideration of the following architectural dimensions:

  • Transport: stdio for single-user local tools versus HTTP+SSE for shared, centrally deployed Servers
  • Authentication and authorization: who may connect, and which Tools and Resources each identity may use
  • Auditing: logging every tool call and its arguments for compliance review
  • Rate limiting and quotas: protecting backend systems from runaway model-initiated calls
  • Permission scoping: exposing read-only or narrowly scoped capabilities by default

7. Decision Framework: When Should Enterprises Adopt MCP

MCP is not a silver bullet. Enterprises should make rational decisions about MCP adoption based on their technical architecture and use case requirements.

7.1 Suitable Scenarios

  • Multiple AI applications need to share the same set of internal tools, so the N+M savings compound as either side grows
  • Vendor neutrality matters: tools should remain usable if the organization switches or mixes model providers
  • The team already uses MCP-enabled hosts such as Claude Desktop or Cursor and wants internal data available inside them
  • Tooling needs go beyond function calls to include resources and reusable prompt templates

7.2 Unsuitable Scenarios

  • A single AI application calling one or two tools: direct API integration ships faster, and the N+M benefit never materializes
  • Latency-critical in-process pipelines where a separate Server process or network hop is unacceptable
  • Stacks deeply invested in one vendor's Function Calling where portability is explicitly not a goal

7.3 Comparison: Direct API Integration vs MCP vs LangChain

| Evaluation Dimension | Direct API Integration | MCP | LangChain |
|---|---|---|---|
| Initial development cost | Low (fastest to ship) | Medium (protocol learning curve) | Medium (framework learning curve) |
| Long-term maintenance cost | High (N×M growth) | Low (N+M growth) | Medium (framework upgrade risk) |
| Model portability | None (locked to specific platform) | High (vendor-neutral) | Medium (framework abstraction layer) |
| Ecosystem | Self-built | Growing (1,000+ community Servers) | Mature (extensive integrations) |
| Security model | Self-implemented | Built-in (Client guard) | Self-implemented |
| Best-fit scale | Small projects | Medium to large enterprises | Medium-sized projects |
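
The maintenance-cost row can be made concrete. With N AI applications and M tools, direct integration requires one custom bridge per pairing, while MCP requires one Client per application plus one Server per tool:

```python
def direct_integrations(n_apps: int, m_tools: int) -> int:
    # Each AI application needs custom glue code for each tool: N x M pairings.
    return n_apps * m_tools

def mcp_integrations(n_apps: int, m_tools: int) -> int:
    # Each application ships one MCP Client, each tool one MCP Server: N + M.
    return n_apps + m_tools

# The 3-application, 5-tool scenario from Section 1.1:
print(direct_integrations(3, 5))  # 15 separate integrations to maintain
print(mcp_integrations(3, 5))     # 8 protocol endpoints to maintain
```

The gap widens as either side grows: adding a sixth tool costs three new integrations in the direct model, but only one new Server under MCP.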

7.4 Security Considerations

While MCP's security model is superior to direct API integration, enterprises should still be aware of the following risks[8]:

Recommended countermeasures include: only using trusted MCP Server sources, enabling the human-in-the-loop confirmation mechanism, limiting Server permission scopes, and deploying input/output content filtering.
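
Two of these countermeasures — limiting Server permission scopes and human-in-the-loop confirmation — can be sketched as a Client-side guard. This is a hypothetical illustration, not the SDK's API; the class and server/tool names are invented for the example:

```python
from typing import Callable

class ToolCallGuard:
    """Hypothetical Client-side guard: allowlist check + user confirmation."""

    def __init__(self, allowlist: dict[str, set[str]],
                 confirm: Callable[[str, str, dict], bool]):
        self.allowlist = allowlist  # server name -> permitted tool names
        self.confirm = confirm      # asks the user before each call proceeds

    def check(self, server: str, tool: str, arguments: dict) -> bool:
        if tool not in self.allowlist.get(server, set()):
            return False            # tool is outside the permitted scope
        return self.confirm(server, tool, arguments)

guard = ToolCallGuard(
    allowlist={"crm-server": {"lookup_customer"}},
    confirm=lambda server, tool, args: True,  # auto-approve for the demo
)
print(guard.check("crm-server", "lookup_customer", {"id": 42}))  # allowed
print(guard.check("crm-server", "delete_customer", {"id": 42}))  # blocked
```

In a real deployment, the confirm callback would surface a dialog to the user, as Claude Desktop does before executing a Tool call.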

7.5 Migration Path Recommendations

For enterprises that already have Function Calling or LangChain Tools integrations, we recommend the following migration path:

  1. Phase 1 (2-4 weeks): Select a low-risk internal tool (e.g., weather queries, document search), build your first MCP Server, and validate the team's understanding of the protocol
  2. Phase 2 (1-2 months): Wrap core business tools (CRM queries, knowledge base search) as MCP Servers, and test them in Claude Desktop or Cursor
  3. Phase 3 (2-3 months): Deploy shared MCP Servers in SSE mode, integrating enterprise-grade features like authentication, auditing, and rate limiting
  4. Phase 4 (ongoing): Gradually migrate existing Function Calling definitions to MCP Tools and build the enterprise's MCP Server catalog
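
The Phase 1 weather-query Server reduces to answering a handful of JSON-RPC methods. The sketch below is transport-agnostic and covers the two core ones for a Tools-capable Server, with message shapes following the MCP specification; a production Server would use the official SDK over stdio or SSE, and get_weather here is a stub returning canned data:

```python
import json

# Tool metadata advertised via tools/list (inputSchema is JSON Schema).
TOOLS = [{
    "name": "get_weather",
    "description": "Return current weather for a city",
    "inputSchema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

def get_weather(city: str) -> str:
    return f"Weather in {city}: 22°C, clear"  # stub; call a real API here

def handle(request: dict) -> dict:
    """Dispatch a single JSON-RPC request to the matching MCP method."""
    result = None
    if request["method"] == "tools/list":
        result = {"tools": TOOLS}
    elif request["method"] == "tools/call":
        args = request["params"]["arguments"]
        result = {"content": [{"type": "text", "text": get_weather(args["city"])}]}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

resp = handle({"jsonrpc": "2.0", "id": 1, "method": "tools/call",
               "params": {"name": "get_weather",
                          "arguments": {"city": "Tokyo"}}})
print(json.dumps(resp, ensure_ascii=False))
```

Validating this request/response flow end to end is exactly the protocol understanding Phase 1 is meant to build.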

8. Future Outlook and Strategic Value of MCP

8.1 Anthropic's Open Strategy

MCP is an important strategic move by Anthropic in the AI ecosystem. By open-sourcing a protocol standard, Anthropic has taken a different path from OpenAI (closed Plugins ecosystem). The strategic logic behind this is: whoever defines the protocol holds the architectural power over the ecosystem.

This is similar to Google's strategy of open-sourcing Android — not controlling every application, but building ecosystem influence by defining platform standards. Anthropic doesn't need to own every MCP Server; as long as MCP becomes the de facto standard, Claude naturally becomes the preferred AI application in this ecosystem[2].

8.2 Competition and Cooperation with OpenAI Plugins and Google Extensions

The AI tool integration landscape currently has three competing forces:

Patil et al.'s Gorilla research[5] has demonstrated that when LLMs can connect to a large number of APIs, tool-use capabilities improve significantly. The question is who can build the largest and most open tool ecosystem. MCP's open design gives it a first-mover advantage.

8.3 Impact on the AI Agent Ecosystem

MCP has far-reaching implications for the AI Agent ecosystem. Wang et al.'s LLM Agent survey[8] pointed out that tool-use capability is the key to upgrading Agents from "conversational systems" to "action systems." MCP lowers the barrier for Agents to interface with the external world by standardizing tool interfaces.

As AutoGPT[13] and other AI Agent frameworks[14] evolve, MCP may become the default protocol for Agents to access external tools. This means future AI Agents will no longer need to write custom integration code for each tool, but instead dynamically discover and invoke tools through MCP.
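
Dynamic discovery can be sketched as a thin translation layer: an Agent takes a tools/list response and converts each entry into the function-calling schema its LLM already understands. The MCP side follows the specification's tool schema; the target format shown is one common OpenAI-style function definition, and the search_issues tool is invented for the example:

```python
def mcp_tools_to_functions(tools_list_result: dict) -> list[dict]:
    """Convert MCP tool descriptors into function-calling definitions."""
    return [{
        "name": tool["name"],
        "description": tool.get("description", ""),
        "parameters": tool["inputSchema"],  # already JSON Schema in MCP
    } for tool in tools_list_result["tools"]]

# A (hypothetical) tools/list result discovered from a connected Server:
discovered = {"tools": [{
    "name": "search_issues",
    "description": "Search GitHub issues",
    "inputSchema": {"type": "object",
                    "properties": {"query": {"type": "string"}},
                    "required": ["query"]},
}]}

functions = mcp_tools_to_functions(discovered)
print(functions[0]["name"])  # search_issues
```

Because MCP standardizes the descriptor format, the same translation works for every Server the Agent connects to, with no per-tool integration code.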

HuggingGPT[7] demonstrated how LLMs can act as controllers to orchestrate multiple AI models for complex tasks. MCP can further extend this pattern to arbitrary external tools, not just AI models. WebGPT[9] proved that LLMs can effectively use browsers as tools; MCP's server-puppeteer is the standardized implementation of this capability.

8.4 Action Recommendations for Enterprise CTOs

Based on the analysis above, we offer the following recommendations for enterprise technology decision-makers:

  1. Act now: Assign 1-2 engineers on your team to learn the MCP protocol and SDK, and build the first internal MCP Server as a POC
  2. Short-term (3-6 months): Evaluate your existing AI tool integration architecture, identify which integrations can be unified as MCP Servers, and develop a migration roadmap
  3. Medium-term (6-12 months): Build an enterprise MCP Server catalog and governance framework, including version control, security review, and performance monitoring
  4. Long-term observation: Track the evolution of the MCP specification (particularly authentication standardization and multi-Agent collaboration protocols), as well as whether competitors (OpenAI, Google) release corresponding open standards

9. Conclusion

The emergence of MCP marks the transition of AI tool integration from the "artisanal era" to the "industrial standardization era." Just as USB-C unified hardware connection interfaces, MCP is unifying the software connection interface between AI and the external world.

However, the protocol is merely infrastructure. The real value lies in how enterprises build tool ecosystems deeply integrated with their own business on top of this infrastructure. We have already seen in Tool Learning[6] and LangChain[10] practices that tool quality — the precision of descriptions, the completeness of error handling, and the rigor of security boundaries — directly determines the reliability of AI applications.

MCP will not replace Function Calling or LangChain Tools; rather, it provides a unified protocol framework at a higher level of abstraction. For enterprises that need to connect multiple AI applications with multiple tools, MCP offers a clear path to reducing integration complexity and improving maintainability.

The Meta Intelligence research team continuously tracks the latest developments in the MCP ecosystem and assists enterprise clients in designing and implementing MCP Server architectures that meet their business needs. From protocol understanding to hands-on deployment, from security assessment to performance optimization, we are committed to bringing the most cutting-edge AI engineering practices into enterprise scenarios.