Prompts & Templates

Define reusable prompt templates with arguments, multi-turn flows, and embedded resources.

Key Concepts

  • Prompts are user-controlled templates
  • Arguments with description and required flag
  • Multi-message prompt templates
  • UserMessage and AssistantMessage roles
  • Dynamic prompts that fetch context at runtime
  • Slash commands in Claude Desktop
  • Prompts vs tools vs resources control model

What Prompts Are

Prompts are the third primitive in MCP, and the simplest to understand: they are reusable message templates that a user can invoke. While tools are for the AI model to call and resources are for the application to read, prompts are for the human to select.

Think of prompts like saved email templates. You create a template once (“Code Review for {language}”), and every time you want to use it, you fill in the blanks and send it. The template defines the structure; you provide the specifics.

The Three Primitives — Control Model:

  ┌─────────┐     ┌─────────────┐     ┌──────────┐
  │  Tools  │     │  Resources  │     │ Prompts  │
  │         │     │             │     │          │
  │ AI model│     │ Application │     │  User    │
  │ decides │     │   decides   │     │ decides  │
  │ to call │     │  to fetch   │     │ to use   │
  └─────────┘     └─────────────┘     └──────────┘

  "get_weather"   "config://app"      "/code-review"
  Model invokes   App includes as     User picks from
  when relevant   context             a menu

The Control Model

Why do prompts need to be a separate primitive? Why not just tell the user to type their instructions directly?

Because prompts encode expert knowledge about how to interact with tools and resources. A user might not know the best way to ask for a code review. But a server developer — who built the tools and knows the domain — can craft a prompt template that structures the request perfectly.

Without a prompt template:
  User types: "Review my code"
  → Vague, no structure, AI might miss important aspects

With a prompt template (/code-review):
  Template injects:
  "You are a senior code reviewer. Review the following code for:
   1. Correctness and logic errors
   2. Security vulnerabilities
   3. Performance issues
   4. Code style and readability

   Focus on {language} best practices. The code is:
   {code}

   Provide feedback in this format:
   - CRITICAL: issues that must be fixed
   - WARNING: potential problems
   - SUGGESTION: improvements"

  → Structured, consistent, thorough reviews every time

Prompts are a way for server developers to share their domain expertise with users who may not have it.
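
The placeholder-filling idea behind a template like the one above can be sketched in a few lines of TypeScript (fillTemplate is a hypothetical helper for illustration, not part of the MCP SDK):

```typescript
// Hypothetical helper: fill {name} placeholders with argument values.
function fillTemplate(template: string, args: Record<string, string>): string {
  return template.replace(/\{(\w+)\}/g, (match: string, key: string) =>
    key in args ? args[key] : match // leave unknown placeholders untouched
  );
}

const template = "Focus on {language} best practices. The code is:\n{code}";
const filled = fillTemplate(template, {
  language: "TypeScript",
  code: "const x = 1;",
});
// filled === "Focus on TypeScript best practices. The code is:\nconst x = 1;"
```

In practice the server builds the final text in its handler, as the examples below show, but the idea is the same: the expert writes the structure once, and the user supplies only the blanks.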

Defining Prompts

Registering a prompt is similar to registering a tool. You give it a name, a description, and a handler that returns the messages to inject into the conversation.

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

const server = new McpServer({
  name: "dev-tools-server",
  version: "1.0.0",
});

// Register a prompt with no arguments
server.prompt(
  // Name — used as an identifier (and slash command in some clients)
  "explain-code",

  // Description — shown to the user when browsing prompts
  "Explain a piece of code in plain English, line by line",

  // Handler — returns an array of messages
  () => ({
    messages: [
      {
        role: "user",
        content: {
          type: "text",
          text: "Please explain the following code in plain English. " +
               "Go through it line by line, explaining what each part does " +
               "and why. Assume I am a junior developer. " +
               "Highlight any non-obvious behavior or potential pitfalls.",
        },
      },
    ],
  })
);

The handler returns an object with a messages array. Each message has a role (“user” or “assistant”) and content. These messages are injected into the conversation when the user selects this prompt.

Prompt Arguments

Most useful prompts need input from the user. Arguments are how you define what the user provides:

// A prompt with arguments
server.prompt(
  "code-review",
  "Review code for bugs, security issues, and best practices",

  // Argument schema: defines what the user must provide
  {
    language: z
      .string()
      .describe("Programming language of the code (e.g., TypeScript, Python)"),
    code: z.string().describe("The code to review"),
    focus: z
      .string()
      .optional() // Optional argument
      .describe("Optional focus area: 'security', 'performance', or 'readability'"),
  },

  // Handler receives the argument values
  ({ language, code, focus }) => {
    // Build the prompt text based on arguments
    let instruction = `You are a senior ${language} developer performing a code review.`;

    if (focus) {
      instruction += ` Focus specifically on ${focus}.`;
    }

    instruction += `

Review the following ${language} code for:
1. Correctness and logic errors
2. Security vulnerabilities
3. Performance issues
4. Code style and ${language} best practices

Code to review:
\`\`\`${language}
${code}
\`\`\`

Format your review as:
- CRITICAL: Issues that must be fixed before merging
- WARNING: Potential problems that should be investigated
- SUGGESTION: Improvements for code quality`;

    return {
      messages: [
        {
          role: "user",
          content: { type: "text", text: instruction },
        },
      ],
    };
  }
);

Arguments are advertised to clients (via prompts/list) with three fields:

  • name — the identifier. The handler receives values keyed by this name.
  • description — shown to the user so they know what to enter. Make it specific: “Programming language” is better than “lang.”
  • required — if true, the client must provide a value before invoking the prompt. If false, the argument can be omitted and will be undefined in the handler.

In the TypeScript SDK you declare these with a Zod schema: the property key becomes the name, .describe() supplies the description, and an argument is required unless marked .optional().
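
However the schema is declared on the server, the client sees each argument as a { name, description, required } entry and checks required arguments before invoking the prompt. A minimal sketch of that client-side check (missingRequired is a hypothetical helper):

```typescript
// Shape of an argument as it appears in a prompts/list response.
type PromptArgument = { name: string; description?: string; required?: boolean };

// Hypothetical check a client performs before calling prompts/get.
function missingRequired(
  defs: PromptArgument[],
  provided: Record<string, string>
): string[] {
  return defs
    .filter((d) => d.required && !(d.name in provided))
    .map((d) => d.name);
}

const defs: PromptArgument[] = [
  { name: "language", required: true },
  { name: "code", required: true },
  { name: "focus", required: false },
];

missingRequired(defs, { language: "TypeScript" }); // → ["code"]
```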

Multi-Turn Prompt Templates

Prompts are not limited to a single message. You can define multi-turn templates that include both user and assistant messages. This lets you set up a conversation structure:

server.prompt(
  "debug-error",
  "Systematic debugging approach for an error message",
  {
    error: z.string().describe("The error message or stack trace"),
    context: z
      .string()
      .optional()
      .describe("What you were doing when the error occurred"),
  },
  ({ error, context }) => ({
    messages: [
      // First user message — present the problem
      {
        role: "user" as const,
        content: {
          type: "text" as const,
          text: `I encountered this error:

\`\`\`
${error}
\`\`\`${
            context ? `

Context: ${context}` : ""
          }`,
        },
      },
      // Pre-filled assistant message — establish the approach
      {
        role: "assistant" as const,
        content: {
          type: "text" as const,
          text: "I will analyze this error systematically. Let me break it down:\n\n" +
               "1. **Error Type**: Let me identify what category of error this is\n" +
               "2. **Root Cause**: What is actually going wrong\n" +
               "3. **Fix**: How to resolve it\n" +
               "4. **Prevention**: How to avoid it in the future\n\n" +
               "Let me start with the analysis:",
        },
      },
    ],
  })
);

Multi-turn prompts are powerful because they prime the AI model with a structured approach. The assistant message in the template establishes the format before the model generates any new content. The model continues from where the template left off.

How multi-turn prompts work in the conversation:

  Template injects:    [ UserMessage, AssistantMessage ]
  AI continues from:   The end of the AssistantMessage

  The conversation looks like:

  User:      "I encountered this error: TypeError: Cannot read..."
  Assistant: "I will analyze this error systematically. Let me break it down:
              1. Error Type: ...
              2. Root Cause: ...     ← model generates the actual analysis
              3. Fix: ...               following the structure you set up
              4. Prevention: ..."

The template gives the model a recipe to follow.
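
A sketch of what the client does with a multi-turn template: the returned messages are spliced into the conversation verbatim, and because the last one is an assistant turn, the model's next completion continues it rather than starting a fresh reply (message shapes mirror the example above):

```typescript
type Message = {
  role: "user" | "assistant";
  content: { type: "text"; text: string };
};

// Messages returned by prompts/get for the "debug-error" template.
const templateMessages: Message[] = [
  { role: "user", content: { type: "text", text: "I encountered this error: ..." } },
  {
    role: "assistant",
    content: { type: "text", text: "I will analyze this error systematically..." },
  },
];

// The client appends the template to the conversation as-is.
const conversation: Message[] = [...templateMessages];

// The trailing message is an assistant turn, so generation picks up from here.
const continuesFrom = conversation[conversation.length - 1].role; // "assistant"
```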

Dynamic Prompts

Prompts can include dynamic content that is fetched at the time the user invokes the prompt. This is where prompts become really powerful — they can pull in context from resources, APIs, or the filesystem.

server.prompt(
  "review-recent-changes",
  "Review the most recent code changes in the project",
  {},  // No arguments needed — context is fetched dynamically
  async () => {
    // Fetch dynamic data when the prompt is invoked
    const recentCommits = await getRecentCommits(5);
    const diff = await getUnstagedDiff();

    return {
      messages: [
        {
          role: "user" as const,
          content: {
            type: "text" as const,
            text: `Review the recent changes in this project.

Recent commits:
${recentCommits.map(c => `- ${c.hash.slice(0, 7)} ${c.message}`).join("\n")}

Current unstaged changes:
\`\`\`diff
${diff}
\`\`\`

Please analyze:
1. Do the unstaged changes align with the recent commit history?
2. Are there any potential issues in the current changes?
3. Is there anything that should be split into separate commits?`,
          },
        },
      ],
    };
  }
);

The handler is async, so it can fetch data from any source: databases, APIs, the filesystem, other MCP resources. The prompt is constructed fresh each time the user invokes it, with the latest data.
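
The getRecentCommits and getUnstagedDiff helpers above are assumed rather than shown (they might shell out to git, for example). The commit-list formatting they feed into can be isolated as a pure function:

```typescript
type Commit = { hash: string; message: string };

// Format commits the way the prompt text above does: short hash + message.
function formatCommitList(commits: Commit[]): string {
  return commits.map((c) => `- ${c.hash.slice(0, 7)} ${c.message}`).join("\n");
}

const sample: Commit[] = [
  { hash: "a1b2c3d4e5f6", message: "Fix login redirect" },
  { hash: "0f9e8d7c6b5a", message: "Add retry to API client" },
];

formatCommitList(sample);
// "- a1b2c3d Fix login redirect\n- 0f9e8d7 Add retry to API client"
```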

You can also embed resources directly in prompt messages:

// Embed a resource reference in a prompt message
{
  role: "user",
  content: {
    type: "resource",
    resource: {
      uri: "postgres://myapp/users/schema",
      mimeType: "text/plain",
      text: schemaText,  // The actual schema content
    },
  },
}
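
Message content is thus a discriminated union on its type field. A sketch of handling both variants (the type names here are illustrative, mirroring the shapes above, not SDK imports):

```typescript
type TextContent = { type: "text"; text: string };
type ResourceContent = {
  type: "resource";
  resource: { uri: string; mimeType?: string; text?: string };
};
type PromptContent = TextContent | ResourceContent;

// Reduce either variant to plain text, e.g. for logging or display.
function contentToText(content: PromptContent): string {
  switch (content.type) {
    case "text":
      return content.text;
    case "resource":
      // Fall back to the URI when the resource carries no inline text.
      return content.resource.text ?? `[resource: ${content.resource.uri}]`;
  }
}
```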

How Prompts Appear in Clients

Different MCP clients present prompts in different ways. The most common is as slash commands.

In Claude Desktop:

  User types "/" in the chat input
  A menu appears with available prompts:

  ┌────────────────────────────────────────┐
  │  /code-review                          │
  │  Review code for bugs and best...      │
  │                                        │
  │  /explain-code                         │
  │  Explain a piece of code in plain...   │
  │                                        │
  │  /debug-error                          │
  │  Systematic debugging approach...      │
  └────────────────────────────────────────┘

  User selects /code-review
  Client shows input fields for required arguments:

  ┌────────────────────────────────────────┐
  │  Language: [TypeScript         ]       │
  │  Code:    [paste code here...  ]       │
  │  Focus:   [security           ]  (opt) │
  │                                        │
  │            [Submit]                    │
  └────────────────────────────────────────┘

  Client calls prompts/get with the argument values
  Server returns the populated messages
  Client injects those messages into the conversation

The protocol flow behind this:

1. Client calls prompts/list to get available prompts
   Response: [{ name: "code-review", description: "...", arguments: [...] }]

2. User selects a prompt and fills in arguments

3. Client calls prompts/get with name and arguments
   Request:  { name: "code-review", arguments: { language: "TypeScript", code: "..." } }
   Response: { messages: [{ role: "user", content: { type: "text", text: "..." } }] }

4. Client adds the returned messages to the conversation
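
On the wire, steps 1 and 3 are ordinary JSON-RPC calls. A sketch of the prompts/get exchange from step 3, with values abbreviated:

```typescript
// JSON-RPC request the client sends in step 3.
const request = {
  jsonrpc: "2.0",
  id: 42,
  method: "prompts/get",
  params: {
    name: "code-review",
    arguments: { language: "TypeScript", code: "const x = 1;" },
  },
};

// Matching response from the server: the populated messages.
const response = {
  jsonrpc: "2.0",
  id: 42, // same id ties the response to the request
  result: {
    messages: [
      {
        role: "user",
        content: {
          type: "text",
          text: "You are a senior TypeScript developer performing a code review...",
        },
      },
    ],
  },
};
```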

Complete Example: Code Review Prompt

Here is a complete, well-designed prompt server with multiple prompts that work together. Study how each prompt serves a different use case:

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({
  name: "dev-prompts",
  version: "1.0.0",
});

// --- Prompt 1: Quick code explanation ---
server.prompt(
  "explain",
  "Get a plain-English explanation of code",
  {
    code: z.string().describe("The code snippet to explain"),
  },
  ({ code }) => ({
    messages: [{
      role: "user" as const,
      content: {
        type: "text" as const,
        text: `Explain this code in plain English. Be concise but thorough.
Assume I understand basic programming but may not know this specific
language or library well.

\`\`\`
${code}
\`\`\``,
      },
    }],
  })
);

// --- Prompt 2: Thorough code review ---
server.prompt(
  "review",
  "Structured code review with severity levels",
  {
    code: z.string().describe("The code to review"),
    language: z
      .string()
      .describe("Programming language (e.g., TypeScript, Python, Go)"),
  },
  ({ code, language }) => ({
    messages: [
      {
        role: "user" as const,
        content: {
          type: "text" as const,
          text: `Perform a thorough code review of this ${language} code.

\`\`\`${language}
${code}
\`\`\``,
        },
      },
      {
        role: "assistant" as const,
        content: {
          type: "text" as const,
          text: `I will review this ${language} code across four dimensions.
For each finding, I will assign a severity level.

## Review Format
- **CRITICAL** (must fix): Bugs, security holes, data loss risks
- **WARNING** (should fix): Performance issues, edge cases, fragile logic
- **STYLE** (consider): Naming, structure, idiomatic ${language} patterns
- **POSITIVE** (keep): Things done well worth noting

Starting the review:`,
        },
      },
    ],
  })
);

// --- Prompt 3: Write tests for code ---
server.prompt(
  "write-tests",
  "Generate test cases for a function or module",
  {
    code: z.string().describe("The code to write tests for"),
    framework: z
      .string()
      .optional()
      .describe("Test framework to use (e.g., jest, vitest, pytest)"),
  },
  ({ code, framework }) => ({
    messages: [{
      role: "user" as const,
      content: {
        type: "text" as const,
        text: `Write comprehensive tests for the following code${
          framework ? ` using ${framework}` : ""
        }.

Include:
1. Happy path tests (normal inputs, expected behavior)
2. Edge cases (empty input, boundary values, null/undefined)
3. Error cases (invalid input, failures, exceptions)
4. If applicable: async behavior, race conditions, timeouts

Code to test:
\`\`\`
${code}
\`\`\`

Write the tests with clear describe/it blocks and descriptive names.`,
      },
    }],
  })
);

// Start the server
const transport = new StdioServerTransport();
await server.connect(transport);

Try It Yourself

Design three prompts for a domain you work in. For each prompt, write out: the name, description, arguments (with required/optional), and the full message template. Here are some starting ideas:

  1. A SQL query helper that takes a table schema and a natural language question, then structures a prompt that asks the AI to write the query step by step. What arguments would you need?
  2. A commit message writer that takes a git diff and generates a conventional commit message. Should this be a prompt or a tool? Think about who controls the invocation.
  3. A documentation generator that takes a function signature and generates JSDoc/docstring documentation. Would you use a multi-turn template to structure the output?

The exercise is in the design, not the code. Write the prompt templates on paper (or in a text file) before implementing them.

Key Takeaway

Prompts are user-controlled message templates that encode expert knowledge about how to interact with AI for specific tasks. They accept arguments, can span multiple turns, and can fetch dynamic context at invocation time. In clients like Claude Desktop, they appear as slash commands. Use prompts to package complex instructions into simple, repeatable interactions.

Common Mistakes

  • Using a prompt when you should use a tool. If the AI model needs to autonomously decide to take an action (like searching files), that is a tool. If a human deliberately chooses a workflow (like starting a code review), that is a prompt. The distinction is who initiates.
  • Prompts that are too short. A prompt with just “Review this code” adds no value — the user could type that themselves. The value of a prompt is in the structured, expert-crafted instructions that a user would not write from scratch.
  • Making every argument required. If a prompt has five required arguments, users will avoid it. Make only essential arguments required and provide sensible defaults for the rest.
  • Ignoring the assistant role in multi-turn prompts. Pre-filling an assistant message primes the model to follow a specific structure. It is one of the most effective ways to get consistent output format. Use it.
  • Static prompts that could be dynamic. If your prompt includes information that changes (like project structure or recent commits), fetch it dynamically in the handler instead of hardcoding it.
  • Vague argument descriptions. “Enter code” is not helpful. “Paste the function or module you want reviewed. Include imports if they are relevant to the review.” tells the user exactly what to provide.