Tools let the AI do things. Resources let the AI know things. A well-designed resource layer gives the model the context it needs to make good decisions — without requiring a tool call for every piece of information.
This module covers five patterns for designing resources that are useful, efficient, and maintainable. The goal: the AI should be able to ground itself in your data without you having to think about it.
The Context Provisioning Pattern
The most important use of resources is context provisioning — giving the AI model background information it needs before calling any tools. Think of it as “pre-loading” the model with domain knowledge.
// Example: A database MCP server that exposes schema as a resource
server.resource(
  "database-schema",
  "database://schema/main",
  {
    description: "Database schema — tables, columns, types, and relationships",
    mimeType: "application/json",
  },
  async () => {
    const schema = await getFullSchema();
    return {
      contents: [{
        uri: "database://schema/main",
        mimeType: "application/json",
        text: JSON.stringify(schema, null, 2),
      }],
    };
  }
);
When the AI connects to your server, it can read this resource to understand what tables exist, their column types, and their relationships — before it writes a single query. Without this, the model either guesses at column names (and gets them wrong) or needs a “list tables” tool call first, wasting a round trip.
Context provisioning resources should be:
- Available immediately on connection (no parameters needed)
- Comprehensive but concise (schema overview, not every row of data)
- Stable (don't change on every request)
- Self-describing (include descriptions for columns, enums, etc.)
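As a concrete illustration of “self-describing”, the payload returned by getFullSchema might look like the following. The shape is hypothetical (not part of any spec); the point is that column descriptions, enum values, and foreign-key references let the model write correct queries without an exploratory tool call.

```typescript
// Hypothetical shape for a self-describing schema payload.
// Descriptions, enums, and references give the model everything
// it needs to write a correct query on the first try.
const schema = {
  tables: [
    {
      name: "orders",
      description: "One row per customer order",
      columns: [
        { name: "id", type: "uuid", description: "Primary key" },
        {
          name: "status",
          type: "text",
          description: "Order lifecycle state",
          enum: ["pending", "shipped", "delivered", "cancelled"],
        },
        { name: "customer_id", type: "uuid", references: "customers.id" },
      ],
    },
  ],
};
```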
Static vs Dynamic Resources
Resources fall into two categories, and the design approach differs significantly:
// STATIC: Content known at server startup, doesn't change
// Use: direct resource registration
server.resource(
  "docs://api-reference",
  "docs://api-reference",
  { description: "API reference documentation", mimeType: "text/markdown" },
  async () => ({
    contents: [{
      uri: "docs://api-reference",
      mimeType: "text/markdown",
      text: readFileSync("./docs/api-reference.md", "utf-8"),
    }],
  })
);
// DYNAMIC: Content depends on parameters or changes frequently
// Use: resource templates with URI parameters
server.resourceTemplate(
  "logs://{service}/{date}",
  "Application logs by service and date",
  { mimeType: "text/plain" },
  async ({ service, date }) => {
    const logs = await fetchLogs(service, date);
    return {
      contents: [{
        uri: `logs://${service}/${date}`,
        mimeType: "text/plain",
        text: logs.join("\n"),
      }],
    };
  }
);
Decision rule: if the content exists without any input from the user, make it a static resource. If it requires parameters (a user ID, a date range, a file path), make it a resource template.
Static resources are listed in the resource directory automatically. Dynamic resources need URI templates so the AI knows how to construct valid URIs. A common mistake is making everything dynamic when most context is static.
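Resource templates use RFC 6570-style URI templates, so a client (or the model) fills in the parameters to produce a concrete URI before reading. A minimal sketch of that expansion, covering only simple `{name}` placeholders rather than the full RFC 6570 operator set:

```typescript
// Minimal sketch: expand simple {name} placeholders in a URI template.
// Real RFC 6570 expansion supports many more operators; this handles
// only the plain-variable case used in templates like "logs://{service}/{date}".
function expandTemplate(
  template: string,
  params: Record<string, string>,
): string {
  return template.replace(/\{(\w+)\}/g, (_, name) => {
    const value = params[name];
    if (value === undefined) {
      throw new Error(`Missing template parameter: ${name}`);
    }
    return encodeURIComponent(value);
  });
}

// "logs://{service}/{date}" with { service: "api", date: "2024-06-01" }
// → "logs://api/2024-06-01"
const uri = expandTemplate("logs://{service}/{date}", {
  service: "api",
  date: "2024-06-01",
});
```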
Pagination & Cursors
Some resources represent large datasets. Loading 10,000 log entries into the AI's context is wasteful and often hits token limits. Use cursor-based pagination to let the model request data in chunks.
server.resourceTemplate(
  "logs://{service}{?cursor,limit}",
  "Paginated application logs",
  { mimeType: "application/json" },
  async ({ service, cursor, limit }) => {
    const pageSize = Math.min(parseInt(limit || "50", 10), 100);
    const { entries, nextCursor } = await fetchLogPage(
      service,
      cursor || null,
      pageSize
    );
    return {
      contents: [{
        uri: `logs://${service}?cursor=${cursor || "start"}&limit=${pageSize}`,
        mimeType: "application/json",
        text: JSON.stringify({
          entries,
          pagination: {
            next_cursor: nextCursor,
            has_more: !!nextCursor,
            page_size: pageSize,
          },
        }, null, 2),
      }],
    };
  }
);
Key pagination rules:
- Always enforce a maximum page size (protect against the AI requesting everything)
- Return a next_cursor so the AI knows how to get the next page
- Include has_more so the model knows when to stop
- Default to a reasonable page size if none is specified
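From the consumer side, the pattern above leads to a simple read loop: follow next_cursor until has_more is false. A sketch with an in-memory stand-in for the data source so it runs on its own (fetchLogPage here is a mock, not the server helper above):

```typescript
interface LogPage {
  entries: string[];
  nextCursor: string | null;
}

// In-memory stand-in for a real log store, so the loop below is runnable.
const allEntries = Array.from({ length: 230 }, (_, i) => `log line ${i}`);

async function fetchLogPage(
  cursor: string | null,
  pageSize: number,
): Promise<LogPage> {
  const start = cursor ? parseInt(cursor, 10) : 0;
  const entries = allEntries.slice(start, start + pageSize);
  const next = start + pageSize;
  return {
    entries,
    nextCursor: next < allEntries.length ? String(next) : null,
  };
}

// Read loop: keep following next_cursor until the source reports no more.
async function readAllLogs(pageSize = 100): Promise<string[]> {
  const collected: string[] = [];
  let cursor: string | null = null;
  do {
    const page = await fetchLogPage(cursor, pageSize);
    collected.push(...page.entries);
    cursor = page.nextCursor;
  } while (cursor !== null);
  return collected;
}
```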
Caching Strategies
Resources that don't change frequently should communicate their freshness. This helps MCP clients avoid redundant fetches and keeps token usage low.
// Pattern: Include cache metadata in the resource response
server.resource(
  "config://app-settings",
  "config://app-settings",
  { description: "Application configuration", mimeType: "application/json" },
  async () => {
    const config = await loadConfig();
    const hash = createHash("sha256").update(JSON.stringify(config)).digest("hex");
    return {
      contents: [{
        uri: "config://app-settings",
        mimeType: "application/json",
        text: JSON.stringify({
          data: config,
          _meta: {
            etag: hash,
            cached_at: new Date().toISOString(),
            ttl_seconds: 300, // Suggest re-fetch after 5 minutes
          },
        }, null, 2),
      }],
    };
  }
);
For resources that change unpredictably, use the MCP subscription mechanism to notify clients of updates rather than relying on polling.
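On the client side, the _meta block is enough to drive a simple read-through cache: skip the fetch entirely while an entry is within its TTL. A sketch under the assumption that the fetcher returns the etag and data from the response shape above:

```typescript
interface CachedResource {
  etag: string;
  cachedAt: number; // epoch ms
  ttlSeconds: number;
  data: unknown;
}

const cache = new Map<string, CachedResource>();

function isFresh(entry: CachedResource, now = Date.now()): boolean {
  return now - entry.cachedAt < entry.ttlSeconds * 1000;
}

// Read through the cache: return the cached value while it is fresh,
// otherwise fetch and store the new value with its etag.
async function readCached(
  uri: string,
  fetchResource: (uri: string) => Promise<{ etag: string; data: unknown }>,
): Promise<unknown> {
  const entry = cache.get(uri);
  if (entry && isFresh(entry)) return entry.data;
  const fresh = await fetchResource(uri);
  cache.set(uri, {
    etag: fresh.etag,
    cachedAt: Date.now(),
    ttlSeconds: 300, // Matches the ttl_seconds suggested by the server above
    data: fresh.data,
  });
  return fresh.data;
}
```

Comparing the stored etag against a freshly fetched one also tells the client whether the content actually changed, which is useful for deciding whether to re-inject the resource into the model's context.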
Resource Composition
Sometimes the most useful context is a combination of data from multiple sources. Rather than making the AI read five separate resources and stitch them together, provide composed resources.
// Instead of separate resources for project, team, and deadlines...
// Compose them into a single "project context" resource
server.resourceTemplate(
  "project://{id}/context",
  "Complete project context — details, team, timeline, recent activity",
  { mimeType: "application/json" },
  async ({ id }) => {
    const [project, team, timeline, activity] = await Promise.all([
      getProject(id),
      getProjectTeam(id),
      getProjectTimeline(id),
      getRecentActivity(id, { limit: 10 }),
    ]);
    return {
      contents: [{
        uri: `project://${id}/context`,
        mimeType: "application/json",
        text: JSON.stringify({
          project,
          team: team.map(m => ({ name: m.name, role: m.role })),
          timeline: { start: timeline.start, end: timeline.end, milestones: timeline.milestones },
          recent_activity: activity,
        }, null, 2),
      }],
    };
  }
);
Smell test: if the AI always reads resources A, B, and C together, create a composed resource D that includes all three.
Exercise: Design a Resource Layer
You are building an MCP server for a documentation site. The site has:
- 200 markdown articles organized in 15 categories
- A search index
- User-submitted comments on each article
- Analytics data (page views, popular articles)
Design the resource layer:
- Which resources should be static? Which need templates?
- Design a context provisioning resource that gives the AI an overview of all content.
- How would you paginate the articles list?
- Which resources would benefit from composition?
- What caching strategy would you use for comments vs analytics?
Check Your Understanding
- What is the context provisioning pattern, and why does it reduce tool calls?
- When should you use a static resource vs a resource template?
- Name three rules for implementing cursor-based pagination in resources.
- Why is resource composition valuable? Give an example.
- Your resource returns 5MB of data. What two strategies would you use to make this manageable for the AI?
Key Takeaway
Resources are how you give the AI model context — the background knowledge it needs to use your tools effectively. Design them as context provisioning layers: pre-load the model with schema, configuration, and domain knowledge so it can act with confidence. Use pagination for large datasets, caching for stable data, and composition to reduce round trips. The best MCP servers feel like the AI already “knows” your system before it takes any action.