Transports & Lifecycle

Stdio vs SSE vs Streamable HTTP — transport selection, initialization handshake, and graceful shutdown.

Key Concepts

  • STDIO: stdin/stdout, newline-delimited JSON
  • Streamable HTTP: POST + SSE, session management
  • SSE (legacy): deprecated, one-way streaming
  • Initialize handshake and capability exchange
  • The 'initialized' notification requirement
  • Graceful shutdown per transport type
  • JSON-RPC 2.0 message format

What Transports Do

Every MCP message — tool calls, resource reads, initialization handshakes — is a JSON-RPC 2.0 message. But a JSON message needs a way to travel between client and server. That is what transports do: they define how bytes move between the two sides.

Think of it like mail delivery. The message (a letter) is the same regardless of whether it goes by courier, postal service, or email. The delivery mechanism is the transport. MCP defines three transports, each suited for different deployment scenarios.

Transport Overview:

  ┌──────────────────┬────────────────┬──────────────────┐
  │     STDIO        │ Streamable HTTP│   SSE (Legacy)   │
  ├──────────────────┼────────────────┼──────────────────┤
  │ Local processes  │ Remote/web     │ Remote (old)     │
  │ stdin/stdout     │ POST + SSE     │ GET stream + POST│
  │ Always connected │ Stateless OK   │ Always connected │
  │ Claude Desktop   │ Web apps       │ Being deprecated │
  │ CLI tools        │ Cloud deploy   │ Avoid for new    │
  └──────────────────┴────────────────┴──────────────────┘

STDIO Transport

The STDIO transport is the simplest and most common. The client launches your server as a child process and communicates through the process's standard input and standard output streams.

How STDIO Works:

  Client (e.g., Claude Desktop)
    │
    │  Launches server as child process:
    │  $ node build/index.js
    │
    ├──── stdin (client writes) ──────────────────────┐
    │                                                  ▼
    │                                          ┌──────────────┐
    │                                          │  Your Server │
    │                                          │  (process)   │
    │                                          └──────┬───────┘
    │                                                  │
    ◄──── stdout (server writes) ─────────────────────┘
    │
    │  stderr goes to logs (NOT part of the transport)
    │

The protocol on stdio is newline-delimited JSON. Each JSON-RPC message is a single line of JSON followed by a newline character. The client writes a line to stdin, your server reads it, processes it, and writes a response line to stdout.

What flows through stdin (client → server):

{"jsonrpc":"2.0","id":1,"method":"initialize","params":{...}}\n
{"jsonrpc":"2.0","method":"notifications/initialized"}\n
{"jsonrpc":"2.0","id":2,"method":"tools/list"}\n
{"jsonrpc":"2.0","id":3,"method":"tools/call","params":{"name":"greet","arguments":{"name":"Alice"}}}\n

What flows through stdout (server → client):

{"jsonrpc":"2.0","id":1,"result":{"serverInfo":{"name":"my-server"},...}}\n
{"jsonrpc":"2.0","id":2,"result":{"tools":[{"name":"greet",...}]}}\n
{"jsonrpc":"2.0","id":3,"result":{"content":[{"type":"text","text":"Hello, Alice!"}]}}\n

Key rules:
  1. One JSON object per line
  2. Each line ends with \n (newline)
  3. No pretty-printing — compact JSON only
  4. NEVER write non-JSON to stdout (no console.log!)
  5. stderr is safe for debug output
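
The framing rules above can be sketched as a tiny line-delimited parser. This is purely illustrative — the SDK handles framing for you — and the `feed` helper is a hypothetical name, not SDK code.

```typescript
// Buffer incoming bytes and emit one parsed message per complete line.
// Illustrative sketch of newline-delimited JSON framing (not the SDK's code).
let buffer = "";

function feed(chunk: string): Array<Record<string, unknown>> {
  buffer += chunk;
  const messages: Array<Record<string, unknown>> = [];
  let newline: number;
  while ((newline = buffer.indexOf("\n")) !== -1) {
    const line = buffer.slice(0, newline).trim();
    buffer = buffer.slice(newline + 1);
    if (line.length > 0) messages.push(JSON.parse(line));
  }
  return messages;
}

// A chunk may carry zero, one, or several complete messages; a partial
// trailing line stays buffered until its newline arrives.
const msgs = feed('{"jsonrpc":"2.0","id":2,"method":"tools/list"}\n{"jsonrpc":"2.0","me');
```

This is why rule 4 matters: a stray `console.log` injects a line that is not valid JSON, and the reader on the other side fails at `JSON.parse`.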

The process lifecycle is straightforward: client starts the process, communication happens over stdin/stdout, and when the client closes stdin (or the process exits), the connection ends.

// Server-side: using STDIO transport
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";

const server = new McpServer({ name: "my-server", version: "1.0.0" });

// Register tools, resources, prompts...

// Create and connect the transport
const transport = new StdioServerTransport();
await server.connect(transport);
// Server is now reading from stdin and writing to stdout

// Client-side config (e.g., Claude Desktop's claude_desktop_config.json):
// {
//   "mcpServers": {
//     "my-server": {
//       "command": "node",
//       "args": ["/path/to/build/index.js"]
//     }
//   }
// }

Streamable HTTP Transport

The Streamable HTTP transport is designed for remote servers — servers running on a different machine, in the cloud, or behind an API gateway. It uses standard HTTP, making it firewall-friendly and deployable anywhere you can host a web server.

How Streamable HTTP Works:

  Client                                      Server
    │                                           │
    │  POST /mcp (JSON-RPC request)             │
    ├──────────────────────────────────────────→ │
    │                                           │
    │  Response: SSE stream or JSON             │
    │  (Server-Sent Events for streaming,       │
    │   plain JSON for simple responses)        │
    ◄────────────────────────────────────────── │
    │                                           │
    │  GET /mcp (optional: open SSE stream)     │
    ├──────────────────────────────────────────→ │
    │                                           │
    │  SSE stream for server → client messages  │
    │  (notifications, progress updates)        │
    ◄────────────────────────────────────────── │

The key design of Streamable HTTP:

  • POST requests carry client-to-server messages. The client sends a JSON-RPC message in the POST body. The server can respond with either a direct JSON response or open an SSE stream for the response (useful for progress notifications during tool calls).
  • GET requests open an SSE stream. This is optional and used for server-initiated messages like resource change notifications. The client opens a GET request, and the server keeps the connection open, sending events as they happen.
  • Session management via headers. The server can return a Mcp-Session-Id header to maintain state across requests. Clients include this header in subsequent requests. This enables stateful servers without requiring persistent connections.

Session Management Flow:

  1. Client sends POST /mcp with initialize request
     → No session header (first request)

  2. Server responds with initialize result
     → Includes header: Mcp-Session-Id: abc123

  3. Client sends all subsequent requests with:
     → Header: Mcp-Session-Id: abc123

  4. Server uses the session ID to look up state
     → Capability cache, subscriptions, etc.

  5. To end the session:
     → Client sends DELETE /mcp with the session header
     → Server cleans up session state
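
The client side of this flow can be sketched as a small bookkeeping helper. The `Mcp-Session-Id` header name comes from the flow above; the `McpSession` class itself is a hypothetical illustration, not SDK API.

```typescript
// Hypothetical client-side helper for the session flow above.
class McpSession {
  private sessionId: string | null = null;

  // Call with each response's headers; the initialize response carries the ID.
  recordResponse(headers: Record<string, string>): void {
    const id = headers["mcp-session-id"]; // HTTP header names are case-insensitive
    if (id !== undefined) this.sessionId = id;
  }

  // Headers to attach to every subsequent POST, GET, or DELETE.
  requestHeaders(): Record<string, string> {
    const headers: Record<string, string> = { "Content-Type": "application/json" };
    if (this.sessionId !== null) headers["Mcp-Session-Id"] = this.sessionId;
    return headers;
  }
}

// Step 2 of the flow: capture the ID from the initialize response...
const session = new McpSession();
session.recordResponse({ "mcp-session-id": "abc123" });
// Step 3: ...and echo it on every later request, including the final DELETE.
const headers = session.requestHeaders();
```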

SSE Transport (Legacy)

The SSE (Server-Sent Events) transport is the original remote transport in MCP. It is being superseded by Streamable HTTP and should not be used for new servers. However, you will encounter it in older servers and documentation, so understanding it is useful.

How SSE (Legacy) Works:

  Client                                      Server
    │                                           │
    │  GET /sse                                 │
    ├──────────────────────────────────────────→ │
    │                                           │
    │  SSE stream opens (stays open)            │
    ◄ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─  │
    │                                           │
    │  Server sends "endpoint" event            │
    │  with URL for client to POST to           │
    ◄────────────────────────────────────────── │
    │                                           │
    │  POST /messages (JSON-RPC requests)       │
    ├──────────────────────────────────────────→ │
    │                                           │
    │  Server sends responses via SSE stream    │
    ◄────────────────────────────────────────── │

Why this is being deprecated:
  1. Requires two connections (GET for SSE + POST for messages)
  2. SSE is unidirectional — server to client only
  3. No built-in session management
  4. Harder to deploy behind load balancers
  5. Streamable HTTP does everything SSE does, better

If you are working with an existing SSE server, the protocol is fundamentally the same — JSON-RPC messages, same initialization handshake, same capabilities. Only the transport mechanism differs. Migrating from SSE to Streamable HTTP changes the plumbing, not the logic.

When to Use Which Transport

Decision Matrix:

  Question                              → Transport
  ────────────────────────────────────────────────────
  Server runs on the same machine       → STDIO
  as the client?

  Server needs to be accessed           → Streamable HTTP
  over the network?

  Server will be deployed to            → Streamable HTTP
  a cloud service?

  Building a CLI tool or                → STDIO
  desktop integration?

  Need to support web browser           → Streamable HTTP
  clients?

  Working with an existing SSE          → SSE (but plan
  server?                                 to migrate)


  ┌──────────────┐
  │ Is it local? │
  └──────┬───────┘
         │
    yes  │  no
    ▼    │  ▼
  STDIO  │  ┌──────────────────┐
         │  │ Is it new code?  │
         │  └──────┬───────────┘
         │    yes  │  no
         │    ▼    │  ▼
         │ Streamable  SSE
         │    HTTP   (existing)
         │

Most developers start with STDIO because it is simpler to debug. You can test with the Inspector locally, get everything working, and then add Streamable HTTP when you need remote access. The server logic stays exactly the same — only the transport line changes.

The Connection Lifecycle

Regardless of transport, every MCP connection follows the same lifecycle: initialize, operate, shutdown. Understanding this lifecycle is essential because if you get the initialization wrong, nothing else works.

Connection Lifecycle — Three Phases:

  ┌─────────────────────────────────────────────────────┐
  │  Phase 1: INITIALIZATION                            │
  │                                                     │
  │  Client sends "initialize" with its capabilities    │
  │  Server responds with its capabilities              │
  │  Client sends "initialized" notification            │
  │  ─────────── handshake complete ──────────────────  │
  │                                                     │
  │  Phase 2: OPERATION                                 │
  │                                                     │
  │  Normal request/response flow                       │
  │  Client calls tools, reads resources, gets prompts  │
  │  Server sends notifications (resource changes, etc) │
  │  This phase lasts the entire session                │
  │                                                     │
  │  Phase 3: SHUTDOWN                                  │
  │                                                     │
  │  STDIO: client closes stdin → server exits          │
  │  HTTP:  client sends DELETE → server cleans up      │
  │  SSE:   client closes connection → server cleans up │
  └─────────────────────────────────────────────────────┘

  IMPORTANT: No tool calls, resource reads, or prompt gets
  are allowed during Phase 1. The server MUST reject them.
  Operations begin only after the "initialized" notification.

The Initialize Handshake

The initialization handshake is the first thing that happens on every MCP connection. It is a three-message exchange:

Message 1: Client → Server (initialize request)

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "initialize",
  "params": {
    "protocolVersion": "2025-03-26",
    "capabilities": {
      "roots": {                    // Client can provide filesystem roots
        "listChanged": true         // Client will notify if roots change
      },
      "sampling": {}                // Client supports AI sampling requests
    },
    "clientInfo": {
      "name": "claude-desktop",     // Client identifies itself
      "version": "1.2.0"
    }
  }
}

This tells the server:
  - What protocol version the client speaks
  - What capabilities the CLIENT supports
  - Who the client is

Message 2: Server → Client (initialize response)

{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "protocolVersion": "2025-03-26",
    "capabilities": {
      "tools": {                    // Server offers tools
        "listChanged": true         // Server will notify if tools change
      },
      "resources": {                // Server offers resources
        "subscribe": true,          // Server supports subscriptions
        "listChanged": true
      },
      "prompts": {                  // Server offers prompts
        "listChanged": true
      }
    },
    "serverInfo": {
      "name": "my-server",         // Server identifies itself
      "version": "1.0.0"
    }
  }
}

This tells the client:
  - What protocol version the server agreed to
  - What capabilities the SERVER supports
  - Who the server is

Message 3: Client → Server (initialized notification)

{
  "jsonrpc": "2.0",
  "method": "notifications/initialized"
}

This is a notification (no "id" field, no response expected).
It tells the server: "Handshake complete. You can start
accepting normal operations now."

WHY IS THIS REQUIRED?
Because the server might need to do setup work after learning
the client's capabilities but before accepting requests.
The "initialized" notification is the signal that setup time
is over and the operation phase begins.
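
The gate a server enforces during Phase 1 can be sketched as a small state machine. This is a hypothetical illustration — the SDK enforces this for you — and the return strings just stand in for real JSON-RPC responses.

```typescript
// Hypothetical sketch of the lifecycle gate: requests are rejected until
// the "initialized" notification arrives (the SDK does this for you).
type Phase = "uninitialized" | "initializing" | "ready";
let phase: Phase = "uninitialized";

function handle(message: { method: string; id?: number }): string {
  if (message.method === "initialize") {
    phase = "initializing";
    return "initialize result";             // respond with server capabilities
  }
  if (message.method === "notifications/initialized") {
    phase = "ready";                        // handshake complete
    return "entered operation phase";
  }
  if (phase !== "ready") {
    return "error: server not initialized"; // reject early operations
  }
  return `dispatch ${message.method}`;      // normal operation
}
```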

Capability Negotiation

The handshake is not just a greeting — it is a negotiation. Both sides declare what they support, and each side adapts to what the other offers.

Capability Negotiation Example:

  Client says:  "I support roots and sampling"
  Server says:  "I support tools and resources"

  Result:
  ─────────────────────────────────────────────────────
  The server knows:
    ✓ It CAN ask the client for filesystem roots
    ✓ It CAN request AI sampling through the client
    ✗ It should NOT send prompt-related notifications
      (client did not declare prompt support)

  The client knows:
    ✓ It CAN call tools/list and tools/call
    ✓ It CAN read resources and subscribe to changes
    ✗ It should NOT call prompts/list
      (server did not declare prompt support)

  Both sides only use capabilities the other declared.

This is why capability negotiation matters: it prevents errors from calling unsupported methods and allows both sides to optimize. A server that knows the client does not support subscriptions can skip setting up change detection for resources.
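
In code, "only use capabilities the other declared" comes down to guard checks like these. The capability shapes mirror the initialize messages above; the guard functions themselves are hypothetical illustrations, not SDK API.

```typescript
// Hypothetical guards mirroring the negotiation example above.
interface ClientCapabilities {
  roots?: { listChanged?: boolean };
  sampling?: Record<string, never>;
}

interface ServerCapabilities {
  tools?: { listChanged?: boolean };
  resources?: { subscribe?: boolean; listChanged?: boolean };
  prompts?: { listChanged?: boolean };
}

// Server side: only request sampling if the client declared it.
function canRequestSampling(client: ClientCapabilities): boolean {
  return client.sampling !== undefined;
}

// Client side: only subscribe if the server declared subscription support.
function canSubscribe(server: ServerCapabilities): boolean {
  return server.resources?.subscribe === true;
}

const clientCaps: ClientCapabilities = { roots: { listChanged: true }, sampling: {} };
const serverCaps: ServerCapabilities = {
  tools: { listChanged: true },
  resources: { subscribe: true, listChanged: true },
};
```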

The Operation Phase

After the handshake, normal operations begin. This is where all the interesting work happens — tool calls, resource reads, prompt invocations.

Operation Phase — Request/Response Patterns:

  Client → Server (requests):
    tools/list          → Get available tools
    tools/call          → Execute a tool
    resources/list      → Get available resources
    resources/read      → Read a resource
    resources/subscribe → Subscribe to resource changes
    prompts/list        → Get available prompts
    prompts/get         → Get a prompt template with arguments

  Server → Client (requests — less common):
    sampling/createMessage  → Ask AI model to generate text
    roots/list              → Get filesystem roots

  Server → Client (notifications):
    notifications/resources/updated      → A resource changed
    notifications/resources/list_changed → Resource list changed
    notifications/tools/list_changed     → Tool list changed
    notifications/prompts/list_changed   → Prompt list changed
    notifications/progress               → Tool progress update

  Client → Server (notifications):
    notifications/roots/list_changed     → Roots changed
    notifications/cancelled              → Cancel a pending request

The bidirectional nature of MCP is visible here. Both sides can send requests (expect a response) and notifications (fire-and-forget). This is fundamentally different from REST, where only the client sends requests.
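
The request/notification distinction shows up concretely in how each side correlates responses: requests are held in a pending map keyed by id, and an incoming message with no id is a notification that never gets a reply. A minimal sketch, assuming a hypothetical `sendRequest`/`onMessage` pair (the SDK does this internally):

```typescript
// Illustrative sketch of correlating responses with requests by id.
const pending = new Map<number, (result: unknown) => void>();
let nextId = 0;

function sendRequest(method: string): Promise<unknown> {
  const id = ++nextId;
  return new Promise((resolve) => {
    pending.set(id, resolve);
    // in a real client: transport.send({ jsonrpc: "2.0", id, method })
  });
}

function onMessage(msg: { id?: number; result?: unknown; method?: string }): void {
  if (msg.id !== undefined && pending.has(msg.id)) {
    pending.get(msg.id)!(msg.result); // a response: resolve the matching request
    pending.delete(msg.id);
  } else if (msg.method !== undefined) {
    // no id: a notification — handle it, never send a response back
  }
}
```

Because correlation is by id, responses can arrive out of order and several requests can be in flight at once — something plain request/response protocols like REST cannot express on a single connection.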

Graceful Shutdown

How shutdown works depends on the transport:

Shutdown by Transport:

STDIO:
  1. Client closes stdin
  2. Transport detects EOF on the read stream
  3. Server performs cleanup (close DB connections, etc.)
  4. Process exits
  → Simplest shutdown — process lifecycle = connection lifecycle

Streamable HTTP:
  1. Client sends DELETE to the MCP endpoint
     with the Mcp-Session-Id header
  2. Server cleans up session state
  3. Server responds with 200 OK (or 405 if sessions not used)
  → Session-based cleanup — server may serve other clients

SSE (Legacy):
  1. Client closes the SSE connection
  2. Server detects the closed stream
  3. Server cleans up any state for that client
  → Connection-based cleanup

In all cases, servers should handle unexpected disconnects
gracefully. Clients crash, networks fail, processes get killed.
Use try/finally or process signal handlers to clean up.

// Handling graceful shutdown in your server
process.on("SIGINT", async () => {
  // Clean up resources
  console.error("Shutting down gracefully...");
  await database.close();
  await cache.flush();
  process.exit(0);
});

process.on("SIGTERM", async () => {
  console.error("Received SIGTERM, shutting down...");
  await database.close();
  process.exit(0);
});

Full Lifecycle Diagram

Complete MCP Connection Lifecycle (STDIO):

  Client (Claude Desktop)              Server (your code)
  ─────────────────────                ────────────────────
  Start process                   ──→  Process starts
         │                                    │
         │  {"method":"initialize",...}        │
         ├───────────────────────────────────→ │
         │                                    │  Parse capabilities
         │  {"result":{"capabilities":...}}   │  Build response
         │ ◄─────────────────────────────────┤
         │                                    │
  Store  │  {"method":"notifications/         │
  server │   initialized"}                    │
  caps   ├───────────────────────────────────→ │  Enter operation
         │                                    │  phase
         │  ═══════════════════════════════   │
         │  ║    OPERATION PHASE          ║   │
         │  ═══════════════════════════════   │
         │                                    │
         │  {"method":"tools/list"}           │
         ├───────────────────────────────────→ │
         │  {"result":{"tools":[...]}}        │
         │ ◄─────────────────────────────────┤
         │                                    │
         │  {"method":"tools/call",...}        │
         ├───────────────────────────────────→ │  Execute handler
         │  {"result":{"content":[...]}}      │
         │ ◄─────────────────────────────────┤
         │                                    │
         │  ... more operations ...           │
         │                                    │
  Close  │                                    │
  stdin  ├──── EOF ──────────────────────────→│  Detect EOF
         │                                    │  Cleanup
         │                                    │  Exit process

Raw JSON-RPC: Tracing a Message

To truly understand the protocol, let us manually trace what happens when you send a JSON-RPC message over STDIO. You do not need to do this in production — the SDK handles it. But understanding it makes debugging much easier.

# You can manually test a server by piping JSON to it.
# This is educational — use the Inspector for real testing.

# Start the server and send an initialize request:
echo '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2025-03-26","capabilities":{},"clientInfo":{"name":"manual-test","version":"0.1"}}}' | node build/index.js

# What you will see on stdout (the server's response):
# {"jsonrpc":"2.0","id":1,"result":{"protocolVersion":"2025-03-26",
#  "capabilities":{"tools":{"listChanged":true}},"serverInfo":
#  {"name":"my-server","version":"1.0.0"}}}

# The message format breakdown:
#
# "jsonrpc": "2.0"    — Always "2.0". Required by JSON-RPC spec.
# "id": 1             — Request ID. Response will have the same ID.
#                        Notifications have no ID.
# "method": "..."     — The RPC method being called.
# "params": {...}     — Method-specific parameters.
# "result": {...}     — Only in responses. The return value.
# "error": {...}      — Only in error responses. Has code + message.

The JSON-RPC 2.0 format is the foundation of all MCP communication. Every message, on every transport, follows this exact format. The transport only determines how the bytes travel — the message structure is always JSON-RPC.

Try It Yourself

  1. Trace the full initialization. Take the server you built in Module 1.1 and write out (on paper or in a text file) the complete initialization sequence: the three messages with their full JSON bodies. Include the protocol version, capabilities, and server info your server would send.
  2. Use the MCP Inspector's raw view. Connect your server to the Inspector and find the view that shows raw JSON-RPC messages. Watch the initialization handshake happen in real time. Identify each of the three messages.
  3. Draw a lifecycle diagram for a Streamable HTTP server that supports tool calls and resource subscriptions. Include the session ID management. How does it differ from the STDIO diagram above?
  4. Answer without looking: What happens if a client tries to call tools/call before sending the initialized notification? Why does the protocol require this notification?

Key Takeaway

Transports define how messages travel (STDIO for local, Streamable HTTP for remote); the lifecycle defines when messages are allowed (initialization before operations, operations before shutdown). Use STDIO for local development and Claude Desktop integrations. Use Streamable HTTP for cloud deployments and web clients. The initialization handshake negotiates capabilities and must complete before any operations. The SDK handles all of this automatically, but understanding it makes debugging protocol issues straightforward.

Common Mistakes

  • Choosing Streamable HTTP for a local-only server. If the client and server are on the same machine, STDIO is simpler, faster, and has no network overhead. Do not add HTTP complexity when you do not need it.
  • Using console.log with STDIO. Yes, this is repeated from Module 1.1. It is that common. Stdout is the transport. Use stderr for logging.
  • Starting a new project with SSE transport. SSE is legacy and being deprecated. If you need remote access, use Streamable HTTP.
  • Ignoring the initialized notification. Some developers skip this when manually testing. But the server should not accept operations until it receives this notification. The SDK enforces this, but understanding why prevents confusion when requests get rejected.
  • Not handling unexpected disconnects. Clients crash. Networks fail. SIGKILL happens. Your server should handle cleanup through signal handlers or try/finally blocks, not just the happy path shutdown.
  • Forgetting session headers with Streamable HTTP. If the server returns an Mcp-Session-Id, the client must include it in all subsequent requests. Missing the header means the server cannot find the session state, and requests fail.
  • Confusing notifications with requests. Notifications have no id field and expect no response. Requests have an id and expect a response with the same id. Sending a response to a notification or ignoring a request response will corrupt the protocol state.