Building Your First MCP Server in 2026 — The Part Nobody Tells You
Everyone's talking about MCP, but most tutorials skip the real friction. Here's how I actually built and deployed a custom MCP server, what broke, and what I'd do differently.
I spent a weekend building a custom MCP server that gives Claude access to my internal project tracker. It wasn't hard exactly — but it wasn't the clean 20-minute process the tutorials make it look like either.
This is the guide I wish existed before I started: the one that covers the actual friction, the gotchas, and the parts that make a real MCP server useful versus a toy demo.
What MCP Actually Is (and Isn't)
Model Context Protocol is Anthropic's open standard for connecting AI models to external tools and data sources. Think of it as a universal API adapter: instead of every AI tool reinventing how it connects to GitHub, Jira, or your database, MCP defines a common interface.
The analogy that clicked for me: MCP is to AI agents what USB is to peripherals. One standard, many compatible devices.
What it's not is a magic wand. MCP servers are just Node.js (or Python) processes that expose tools via a JSON-RPC-like protocol over stdio or HTTP/SSE. The AI calls a tool, your server runs some code, returns a result. That's it. The sophistication comes from what your server actually does.
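Concretely, a tool invocation is one request/response pair on that stream. The shapes below are simplified from the spec, but this is roughly what crosses stdin and stdout:

```typescript
// Simplified request/response shapes for a tools/call exchange.
// Field names follow the MCP spec; payloads are abridged here.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: { name: "get_current_project", arguments: {} },
};

const response = {
  jsonrpc: "2.0",
  id: 1, // echoes the request id
  result: { content: [{ type: "text", text: '{ "name": "bytecore blog" }' }] },
};
```

Everything the rest of this post builds is just handlers that produce that result object.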
The Project: A Task Context Server
My use case: I'm always switching between projects. Whenever I open a new Claude conversation, I spend the first 3 minutes copy-pasting context — what project I'm on, what the current sprint goal is, what I was working on yesterday.
I wanted Claude to just know this. So I built an MCP server that reads from a local JSON file I update daily, plus pulls open GitHub issues from my active repos.
Here's what the final tool surface looks like:
- get_current_project — returns the active project's name, goal, and deadline
- get_open_issues — pulls open GitHub issues for the active repo
- get_yesterday_notes — returns my daily standup notes from a local file
- set_active_project — switches the active project context

(In the code these get a per-module prefix to avoid name collisions: project_get_current, github_get_open_issues, and so on. More on that below.)
Project Structure
mcp-task-context/
├── src/
│   ├── index.ts          # Entry point, server setup
│   ├── tools/
│   │   ├── project.ts    # Project context tools
│   │   ├── github.ts     # GitHub issues tools
│   │   └── notes.ts      # Daily notes tools
│   └── types.ts          # Shared types
├── data/
│   └── projects.json     # Your project data
├── package.json
└── tsconfig.json
I used TypeScript. You can use JavaScript if you prefer, but TypeScript catches the shape mismatches in MCP responses that will otherwise confuse you for hours.
Setting Up the MCP SDK
mkdir mcp-task-context && cd mcp-task-context
npm init -y
npm install @modelcontextprotocol/sdk zod
npm install -D typescript @types/node tsx
The MCP SDK handles the protocol layer. Zod handles runtime type validation; you'll want this. One easy-to-miss step: add "type": "module" to package.json, since the tsconfig below targets ES modules and things like import.meta won't compile without it.
// tsconfig.json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "Node16",
    "moduleResolution": "Node16",
    "outDir": "./dist",
    "strict": true,
    "esModuleInterop": true
  },
  "include": ["src/**/*"]
}
The Server Entry Point
// src/index.ts
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  CallToolRequestSchema,
  ListToolsRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";
import { getProjectTools, handleProjectTool } from "./tools/project.js";
import { getGithubTools, handleGithubTool } from "./tools/github.js";
import { getNotesTools, handleNotesTool } from "./tools/notes.js";

const server = new Server(
  { name: "task-context", version: "1.0.0" },
  { capabilities: { tools: {} } }
);

server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    ...getProjectTools(),
    ...getGithubTools(),
    ...getNotesTools(),
  ],
}));

server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { name, arguments: args } = request.params;
  if (name.startsWith("project_")) return handleProjectTool(name, args);
  if (name.startsWith("github_")) return handleGithubTool(name, args);
  if (name.startsWith("notes_")) return handleNotesTool(name, args);
  throw new Error(`Unknown tool: ${name}`);
});

async function main() {
  const transport = new StdioServerTransport();
  await server.connect(transport);
  console.error("MCP task-context server running");
}

main().catch(console.error);
One thing that bit me: console.error for debug logging, not console.log. Stdout is used for the MCP protocol. Anything you log to stdout will break the JSON-RPC parsing. I lost 45 minutes to this.
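The safest fix is to route all logging through one helper so a stray console.log can't sneak back in later. A minimal sketch (debugLog is my own name, not an SDK API):

```typescript
// All debug output goes to stderr; stdout is reserved for JSON-RPC.
function debugLog(...parts: unknown[]): string {
  const line = `[task-context] ${parts.map(String).join(" ")}`;
  process.stderr.write(line + "\n");
  return line; // returned to make the helper easy to test
}

debugLog("server starting", 123);
```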
Implementing a Tool
Here's the project context tool — it's the most straightforward one:
// src/tools/project.ts
import { readFileSync, writeFileSync } from "fs";
import { z } from "zod";

const ProjectSchema = z.object({
  name: z.string(),
  goal: z.string(),
  deadline: z.string().optional(),
  repo: z.string().optional(),
  active: z.boolean(),
});

const ProjectsSchema = z.object({
  active: z.string(),
  projects: z.record(ProjectSchema),
});

// Resolve the data file relative to this module, not process.cwd() —
// Claude Desktop spawns the server from its own working directory,
// not from your project folder. fs accepts file URLs directly.
const dataUrl = new URL("../../data/projects.json", import.meta.url);

function loadProjects() {
  const raw = readFileSync(dataUrl, "utf-8");
  return ProjectsSchema.parse(JSON.parse(raw));
}

export function getProjectTools() {
  return [
    {
      name: "project_get_current",
      description:
        "Get the currently active project context including name, goal, and deadline",
      inputSchema: {
        type: "object" as const,
        properties: {},
        required: [],
      },
    },
    {
      name: "project_set_active",
      description: "Switch the active project",
      inputSchema: {
        type: "object" as const,
        properties: {
          project_id: {
            type: "string",
            description: "The project ID to set as active",
          },
        },
        required: ["project_id"],
      },
    },
  ];
}

export async function handleProjectTool(name: string, args: unknown) {
  if (name === "project_get_current") {
    const data = loadProjects();
    const project = data.projects[data.active];
    if (!project) {
      return {
        content: [{ type: "text", text: "No active project found." }],
      };
    }
    return {
      content: [
        {
          type: "text",
          text: JSON.stringify({ id: data.active, ...project }, null, 2),
        },
      ],
    };
  }
  if (name === "project_set_active") {
    const { project_id } = z.object({ project_id: z.string() }).parse(args);
    const data = loadProjects();
    if (!data.projects[project_id]) {
      return {
        content: [{ type: "text", text: `Unknown project: ${project_id}` }],
        isError: true,
      };
    }
    data.active = project_id;
    writeFileSync(dataUrl, JSON.stringify(data, null, 2));
    return {
      content: [{ type: "text", text: `Active project is now ${project_id}.` }],
    };
  }
  throw new Error(`Unknown project tool: ${name}`);
}
projects.json is a plain file I keep updated manually (or with a quick alias in my shell):
{
  "active": "bytecore",
  "projects": {
    "bytecore": {
      "name": "bytecore blog",
      "goal": "Publish 3 articles and improve load time to <1s",
      "deadline": "2026-03-31",
      "repo": "PiotrekFilipecki/bytecore",
      "active": true
    }
  }
}
The GitHub Tool (Where It Gets Interesting)
// src/tools/github.ts
// Note: getActiveRepo() is a small helper (not shown) that reads
// data/projects.json and returns the active project's "repo" field.
const GITHUB_TOKEN = process.env.GITHUB_TOKEN;

export async function handleGithubTool(name: string, args: unknown) {
  if (name === "github_get_open_issues") {
    const { repo } = args as { repo?: string };
    // Fall back to the active project's repo if none was specified
    const targetRepo = repo ?? getActiveRepo();
    if (!targetRepo) {
      return {
        content: [
          { type: "text", text: "No repo specified and no active project repo found." },
        ],
      };
    }
    if (!GITHUB_TOKEN) {
      return {
        content: [{ type: "text", text: "GITHUB_TOKEN is not set." }],
        isError: true,
      };
    }
    const response = await fetch(
      `https://api.github.com/repos/${targetRepo}/issues?state=open&per_page=10`,
      {
        headers: {
          Authorization: `Bearer ${GITHUB_TOKEN}`,
          Accept: "application/vnd.github.v3+json",
        },
      }
    );
    if (!response.ok) {
      throw new Error(`GitHub API error: ${response.status}`);
    }
    const issues = (await response.json()) as Array<{
      number: number;
      title: string;
      body: string | null;
      labels: Array<{ name: string }>;
    }>;
    const formatted = issues.map((i) => ({
      number: i.number,
      title: i.title,
      labels: i.labels.map((l) => l.name),
      body: i.body?.slice(0, 200),
    }));
    return {
      content: [{ type: "text", text: JSON.stringify(formatted, null, 2) }],
    };
  }
  throw new Error(`Unknown github tool: ${name}`);
}
The GitHub API fetch works out of the box because MCP servers are just regular Node processes — they have full network access. No sandboxing. Keep that in mind if you're exposing your server to anything external.
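Since there's no sandbox and no built-in rate limiting either, one cheap guard I'd consider is a small in-memory TTL cache around the GitHub fetch, so repeated tool calls in one conversation don't burn API quota. A sketch (cached is my own helper, not part of the SDK, and fetchIssues is a hypothetical wrapper around the fetch above):

```typescript
// Tiny TTL cache: recompute only when the cached entry is older than ttlMs.
const cache = new Map<string, { at: number; value: unknown }>();

function cached<T>(key: string, ttlMs: number, compute: () => T): T {
  const hit = cache.get(key);
  if (hit && Date.now() - hit.at < ttlMs) return hit.value as T;
  const value = compute();
  cache.set(key, { at: Date.now(), value });
  return value;
}

// Usage idea (fetchIssues is hypothetical):
// cached(`issues:${targetRepo}`, 60_000, () => fetchIssues(targetRepo));
```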
Wiring It Up to Claude
The server runs via stdio, so Claude Desktop needs to know how to spawn it. Add this to your Claude config (~/Library/Application Support/Claude/claude_desktop_config.json on Mac):
{
  "mcpServers": {
    "task-context": {
      "command": "node",
      "args": ["/absolute/path/to/mcp-task-context/dist/index.js"],
      "env": {
        "GITHUB_TOKEN": "ghp_yourtoken"
      }
    }
  }
}
Build first with npx tsc, then restart Claude Desktop. You should see your tools listed under the hammer icon in a new conversation.
During development, npx tsx src/index.ts is faster than the full TypeScript compile. Just swap tsx for node dist/index.js in the config while iterating.
What Actually Broke
The stdio logging thing. Already mentioned, but it's worth repeating because it wastes the most time. Always use process.stderr.write() or console.error() for debug output.
Absolute paths everywhere. MCP servers are spawned by Claude Desktop, not from your terminal session, so relative paths resolve from whatever Claude's working directory happens to be, which is not your project. Build absolute paths from import.meta.url (in ES modules there is no __dirname) when locating data files.
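The pattern I ended up with: build a file URL from import.meta.url, which Node's fs functions accept directly. A minimal sketch (the relative path here is illustrative):

```typescript
// Resolves relative to this source file, not the process working directory.
// fs.readFileSync / fs.writeFileSync accept this URL object directly.
const dataUrl = new URL("../data/projects.json", import.meta.url);
```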
The tool name namespace. If you register two tools with the same name across different servers, Claude picks one silently. I prefix all my tools with the server name (project_, github_, notes_) to avoid collisions.
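A helper can enforce the prefix so you never register an unprefixed name by accident. A sketch (withPrefix is my own helper):

```typescript
// Prefix every tool name with its module, e.g. "get_open_issues" becomes
// "github_get_open_issues", so two servers can't collide silently.
function withPrefix<T extends { name: string }>(prefix: string, tools: T[]): T[] {
  return tools.map((t) => ({ ...t, name: `${prefix}_${t.name}` }));
}

const tools = withPrefix("github", [
  { name: "get_open_issues", description: "List open issues" },
]);
```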
Error handling. If your tool throws an unhandled error, Claude just sees "the tool failed" with no useful message. Wrap your tool handlers in try-catch and return proper error content:
try {
  // ... your code
} catch (err) {
  return {
    content: [{
      type: "text",
      text: `Error: ${err instanceof Error ? err.message : String(err)}`
    }],
    isError: true,
  };
}
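Rather than repeating that try/catch in every handler, you could wrap handlers once. A sketch using the same result shape (safeHandler is my own helper, not an SDK API):

```typescript
type ToolResult = {
  content: { type: "text"; text: string }[];
  isError?: boolean;
};

// Wraps a tool handler so thrown errors become readable error content
// instead of an opaque "the tool failed" on the client side.
function safeHandler(fn: (args: unknown) => Promise<ToolResult>) {
  return async (args: unknown): Promise<ToolResult> => {
    try {
      return await fn(args);
    } catch (err) {
      return {
        content: [{
          type: "text",
          text: `Error: ${err instanceof Error ? err.message : String(err)}`,
        }],
        isError: true,
      };
    }
  };
}
```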
Deploying It as a Remote Server
Running it locally over stdio works fine for personal use, but if you want to share a server with a team or connect it to Claude API integrations, you need HTTP/SSE transport.
I deployed mine to Railway in about 10 minutes. The MCP SDK ships a built-in SSE server transport (newer SDK releases also offer a Streamable HTTP transport that is set to replace SSE, but SSE is what I used):
import { SSEServerTransport } from "@modelcontextprotocol/sdk/server/sse.js";
import express from "express";

const app = express();
// One transport per SSE connection, keyed by session ID
const transports = new Map<string, SSEServerTransport>();

app.get("/sse", async (req, res) => {
  const transport = new SSEServerTransport("/messages", res);
  transports.set(transport.sessionId, transport);
  res.on("close", () => transports.delete(transport.sessionId));
  await server.connect(transport);
});

app.post("/messages", express.json(), async (req, res) => {
  const transport = transports.get(req.query.sessionId as string);
  if (!transport) return void res.status(400).send("Unknown session");
  await transport.handlePostMessage(req, res, req.body);
});

app.listen(process.env.PORT ?? 3000);
Railway handles the Node.js environment automatically — push to GitHub, it deploys. I pay about $5/month for a hobby plan. For something this lightweight (the server barely uses 50MB RAM), it's the right call. No managing Docker, no configuring nginx, just railway up and done.
Is It Worth It?
Honestly, yes. The actual time savings are smaller than I expected — Claude was already pretty good at staying on context if I gave it a good system prompt. But the quality of the context is better now. Instead of me remembering to copy-paste the right thing, the server fetches live GitHub issues. The information is always current.
The bigger benefit was less expected: it changed how I think about AI assistance. Once you have MCP tools available, you start noticing all the places where a 20-line tool handler could eliminate 5 minutes of manual context setup. The tooling mindset compounds.
I've since added a recent_commits tool (reads from git log via a shell exec) and an env_check tool that verifies my local .env has all required keys, and I'm planning a tool that surfaces relevant Notion notes based on the active project.
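For reference, the recent_commits tool is barely more than a wrapper around git log. A sketch (the function names are mine, and the repo path is assumed to come from the active project's config):

```typescript
import { execFileSync } from "child_process";

// Split `git log --pretty=format:%h %s` output into one line per commit.
function parseGitLog(out: string): string[] {
  return out.split("\n").map((l) => l.trim()).filter(Boolean);
}

function recentCommits(repoPath: string, count = 5): string[] {
  const out = execFileSync(
    "git",
    ["log", `-${count}`, "--pretty=format:%h %s"],
    { cwd: repoPath, encoding: "utf-8" }
  );
  return parseGitLog(out);
}
```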
The SDK is solid, the docs are decent, and the main friction is the initial debugging session where you figure out that stdout is sacred. After that, it's just writing TypeScript that does useful things.
Start with one tool. Make it read something real from your actual workflow. That's it.
This post contains affiliate links. If you sign up for Railway or Cursor through my links, I may earn a small commission at no extra cost to you. I only recommend tools I actually use.