16 April 2026
The Model Context Protocol is a standard for connecting AI models to external tools and data sources. That’s the whole thing. Everything else is implementation detail.
But since you’re here, let’s go deeper.
The problem MCP solves
Claude is good at reasoning about things. It’s less good at knowing what’s in your todo list right now, because it doesn’t have access to your todo list. It doesn’t have access to anything outside the conversation unless you put it there.
Before MCP, connecting Claude to external systems meant writing custom integrations for every application, every AI provider, every use case. Each connection was one-off. Nothing was reusable. Anthropic looked at this situation and said, essentially: there should be a standard for this.
So they built one.
What MCP actually is
MCP is a client-server protocol. Here’s how the pieces fit together:
MCP hosts are the AI applications — Claude Desktop, claude.ai, your own app built on the Claude API. The host is what the user talks to.
MCP clients live inside the host. They manage connections to MCP servers. When Claude decides it needs to use a tool, the client handles the communication.
MCP servers are the external services that expose capabilities. A server might give Claude access to a database, a file system, a web browser, a CRM — or a $4 todo list.
The protocol between them is JSON-RPC 2.0 over either stdio (for local servers) or HTTP (for remote ones). JSON-RPC is a simple remote procedure call format — you send a JSON object describing what you want to do, you get a JSON object back describing what happened.
That’s the technical foundation. It’s deliberately not complicated.
The three things MCP servers can expose
MCP defines three primitives:
Tools — functions Claude can call. add_todo, search_database, send_email. Tools take parameters and return results. Claude decides when to call them based on the conversation.
Resources — data sources Claude can read. A file, a database record, a document. Resources are for context — Claude reads them to inform its responses.
Prompts — reusable prompt templates. Less commonly used, but they let servers define structured workflows that Claude can invoke.
Most MCP servers you’ll encounter are primarily tool-based. You give Claude a set of functions, Claude figures out when to use them.
How Claude decides to use a tool
When you connect an MCP server to Claude, the server describes its tools — names, descriptions, and parameter schemas. Claude receives these descriptions and treats them as capabilities it can invoke.
You say: “Add milk to my todo list.”
Claude’s reasoning, roughly: the user wants to add something to their todo list. I have access to an add_todo tool. The tool takes a text parameter. I should call add_todo with text: "milk".
Claude makes that call. The MCP server receives it. The todo gets added. Claude tells you it’s done.
Claude never sees your raw API key or your database credentials. It sees tool descriptions and tool results. The MCP server handles everything behind the authentication boundary.
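To make the "tool descriptions" concrete, here is a sketch of what a server advertises in response to a tools/list request. The shape (name, description, and an inputSchema expressed as JSON Schema) follows the MCP specification; the exact description text here is a hypothetical stand-in, not AnotherTodo's actual wording.

```python
import json

# Hypothetical tool description for an add_todo tool. Claude sees this,
# not the server's code or credentials.
add_todo_tool = {
    "name": "add_todo",
    "description": "Add a new item to the user's todo list.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "text": {"type": "string", "description": "The todo text"}
        },
        "required": ["text"],
    },
}

# What a tools/list response carrying that description looks like.
tools_list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"tools": [add_todo_tool]},
}

print(json.dumps(tools_list_response, indent=2))
```

The inputSchema is what lets Claude know that add_todo takes a single required text parameter, which is exactly the reasoning step described above.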
The JSON-RPC layer, briefly
If you want to understand what’s actually happening on the wire, here’s the shape of a tool call:
```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "add_todo",
    "arguments": {
      "text": "buy milk"
    }
  }
}
```

And the response:
```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "content": [
      {
        "type": "text",
        "text": "Todo added: buy milk"
      }
    ]
  }
}
```

That’s it. Request and response. The complexity of what happens inside the server — authentication, database writes, whatever — is invisible to Claude. From Claude’s perspective, it called a function and got a result.
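The server side of that exchange can be sketched in a few lines. This is a minimal illustration, not the AnotherTodo implementation: the in-memory todo list and the handler table are hypothetical, and a real server would validate arguments against each tool's inputSchema.

```python
import json

# Hypothetical in-memory store standing in for a real database.
todos = []

def add_todo(text):
    todos.append({"text": text, "done": False})
    return f"Todo added: {text}"

# Tool name -> handler function.
handlers = {"add_todo": lambda args: add_todo(args["text"])}

def handle(request):
    """Route a tools/call request and wrap the result as JSON-RPC."""
    assert request["method"] == "tools/call"
    params = request["params"]
    text = handlers[params["name"]](params["arguments"])
    return {
        "jsonrpc": "2.0",
        "id": request["id"],
        "result": {"content": [{"type": "text", "text": text}]},
    }

request = json.loads(
    '{"jsonrpc": "2.0", "id": 1, "method": "tools/call", '
    '"params": {"name": "add_todo", "arguments": {"text": "buy milk"}}}'
)
response = handle(request)
print(response["result"]["content"][0]["text"])  # prints "Todo added: buy milk"
```

Everything between receiving the request and emitting the response is the server's business; Claude only ever sees the two JSON objects.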
Why this matters
Before MCP, AI assistants were stuck in the conversation window. They could reason, explain, and generate — but they couldn’t act. Getting them to do useful things with external systems required bespoke engineering for every integration.
MCP changes the economics. Build an MCP server once, and any MCP-compatible host can use it. Claude Desktop, claude.ai, any application built on the Claude API. The integration is portable.
The analogy people reach for is USB — a standard connector that works regardless of what’s on either end. It’s not a perfect analogy, but it captures the point. Standardization makes the whole ecosystem more useful.
AnotherTodo as a working example
AnotherTodo ships with an MCP server. Here’s how the five tools map to the concepts above:
| Tool | What it does | MCP primitive |
|---|---|---|
| list_todos | Returns all your todos | Tool |
| add_todo | Adds a new todo | Tool |
| update_todo | Changes the text of an existing todo | Tool |
| delete_todo | Removes a todo | Tool |
| toggle_todo | Marks a todo complete or incomplete | Tool |
All five are tools in the MCP sense — callable functions that Claude invokes when it determines they’re relevant to what you asked.
The server runs at https://anothertodo.app/mcp/your_key_here. It uses Streamable HTTP transport, which means it works over a regular HTTPS connection. No local process to run, no port forwarding, no setup beyond pasting a URL.
When Claude calls list_todos, the server authenticates the request using the API key in the URL, fetches your todos from the database, and returns them as structured JSON. Claude reads that JSON and formats a response. The round trip takes under a second.
This is, admittedly, not a complex demonstration of what MCP can do. Five CRUD operations on a todo list is close to the simplest possible MCP server. But that’s also why it’s a good example — the protocol is clear, the tools are obvious, and there’s nothing to distract from the mechanics.
Why a $4 todo app has an MCP server
Because we thought it would be interesting, and because it turned out to be useful in an absurd way.
There is something genuinely convenient about telling Claude “add a task for 2pm tomorrow to call the dentist” and having it appear in your list without opening a separate app. Not life-changing. Not enterprise-grade workflow automation. But convenient in the way small tools are — quietly, incrementally.
More to the point: MCP is the current bet for how AI assistants gain real-world capabilities. Getting familiar with how it works — even through a toy example — is worth some time. AnotherTodo is a toy example that actually works.
How to build your own MCP server
If you want to build one, the steps are:
- Choose a transport — stdio for local tools, Streamable HTTP for remote services
- Implement the MCP handshake — capability negotiation, tool listing
- Handle tools/call requests for each tool you expose
- Return properly formatted results
The official MCP documentation has SDKs for TypeScript and Python. The TypeScript SDK in particular handles most of the protocol boilerplate so you can focus on your tools.
For a remote server with authentication, the basic pattern is: API key in the URL or headers, validate on every request, route the call to your application logic, return a result. Thirty lines of code for a simple server. Less if you use the SDK.
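That pattern — key in the URL, validate on every request, route, return — can be sketched without any framework. This is a toy illustration under stated assumptions: the API_KEYS table and the path layout mirror the your_key_here placeholder above, the application logic is stubbed out, and a real deployment would use the official SDK behind HTTPS.

```python
import json

# Hypothetical key -> user mapping; a real server would hit a database.
API_KEYS = {"your_key_here": "user_123"}

def handle_request(path, body):
    """Validate the API key embedded in the URL path, then dispatch."""
    key = path.rsplit("/", 1)[-1]  # e.g. /mcp/your_key_here
    if key not in API_KEYS:
        return {
            "jsonrpc": "2.0",
            "id": None,
            "error": {"code": -32000, "message": "invalid API key"},
        }
    request = json.loads(body)
    # Route to application logic (stubbed for illustration).
    text = f"handled {request['method']} for {API_KEYS[key]}"
    return {
        "jsonrpc": "2.0",
        "id": request["id"],
        "result": {"content": [{"type": "text", "text": text}]},
    }

ok = handle_request(
    "/mcp/your_key_here",
    '{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}',
)
bad = handle_request("/mcp/wrong_key", "{}")
print(ok["result"]["content"][0]["text"])
print(bad["error"]["message"])  # prints "invalid API key"
```

The validation happens before any JSON parsing of consequence, which is the point: an unauthenticated request never reaches your application logic.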
The setup tutorial covers connecting AnotherTodo specifically. If you’re building your own, the MCP docs are the right place to start.
The short version
MCP is a standard protocol for giving AI models access to external tools and data. It uses JSON-RPC. Servers expose tools. Claude calls those tools when they’re relevant. The connection is portable — build once, works anywhere MCP is supported.
AnotherTodo implements this with five tools for managing a todo list. It is not impressive. But it works, and working is sufficient.