Lesson 7 of 7 · 60 min

Capstone: Simple Loom

Build a Task Tracker MCP server. Apply everything you've learned.

Apply Everything You've Learned

You've learned:

  • The Automation Layer — what sits between intention and execution
  • The Subtractive Triad — how to evaluate what belongs
  • External Memory — how systems persist state
  • Agent-Native Tools — how to design for AI agents

Now you build.

What You're Building

Simple Loom — a Task Tracker MCP server.

Your Intention                The Automation Layer              Execution
─────────────────            ─────────────────────             ──────────
"Add a task"         →       Your MCP Server          →        Task saved
"What's on my list?" →       (Simple Loom)            →        Tasks returned
"Mark it done"       →                                →        Status updated

This is the automation layer pattern from Lesson 3, made real.

Why This Matters

What You Build            Lesson                Production Version
──────────────            ──────                ──────────────────
Task lifecycle            Automation Layer      Loom's task coordination
tasks.json persistence    External Memory       Loom's SQLite + checkpoints
Four MCP tools            Agent-Native Tools    Ground's verification system
Tool boundaries           Subtractive Triad     Every tool earns its place

Step 1: Get the Scaffold

Create a new project directory and set up the scaffold:

mkdir ~/my-task-tracker
cd ~/my-task-tracker
npm init -y
npm install @modelcontextprotocol/sdk
npm install -D typescript @types/node
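
The MCP SDK is published as an ES module, and with "module": "NodeNext" TypeScript decides between CommonJS and ESM based on your package.json. Add "type": "module" to the package.json that npm init -y generated so the compiled server runs as ESM (the script names below are only suggestions):

{
  "name": "my-task-tracker",
  "version": "1.0.0",
  "type": "module",
  "scripts": {
    "build": "tsc",
    "start": "node dist/index.js"
  }
}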

Create tsconfig.json:

{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "outDir": "dist",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true
  },
  "include": ["src"]
}

Step 2: Create the Storage Layer

Create src/tasks.ts — this handles persistence:

import * as fs from 'fs';
import * as path from 'path';
import * as os from 'os';

export interface Task {
  id: string;
  title: string;
  status: 'todo' | 'doing' | 'done';
  createdAt: string;
}

// All state lives in a single JSON file under the user's home directory.
const TASKS_DIR = path.join(os.homedir(), '.tasks');
const TASKS_FILE = path.join(TASKS_DIR, 'tasks.json');

function ensureDir() {
  if (!fs.existsSync(TASKS_DIR)) {
    fs.mkdirSync(TASKS_DIR, { recursive: true });
  }
}

export function loadTasks(): Task[] {
  ensureDir();
  if (!fs.existsSync(TASKS_FILE)) return [];
  return JSON.parse(fs.readFileSync(TASKS_FILE, 'utf-8'));
}

export function saveTasks(tasks: Task[]) {
  ensureDir();
  fs.writeFileSync(TASKS_FILE, JSON.stringify(tasks, null, 2));
}

export function addTask(title: string): Task {
  const tasks = loadTasks();
  const task: Task = {
    // Base-36 timestamp: compact and unique enough for a single-user tracker.
    id: Date.now().toString(36),
    title,
    status: 'todo',
    createdAt: new Date().toISOString(),
  };
  tasks.push(task);
  saveTasks(tasks);
  return task;
}

export function getTasks(status?: Task['status']): Task[] {
  const tasks = loadTasks();
  return status ? tasks.filter(t => t.status === status) : tasks;
}

export function updateTaskStatus(id: string, status: Task['status']): Task | null {
  const tasks = loadTasks();
  const task = tasks.find(t => t.id === id);
  if (!task) return null;
  task.status = status;
  saveTasks(tasks);
  return task;
}

export function removeTask(id: string): boolean {
  const tasks = loadTasks();
  const index = tasks.findIndex(t => t.id === id);
  if (index === -1) return false;
  tasks.splice(index, 1);
  saveTasks(tasks);
  return true;
}

This demonstrates Loom's external memory pattern — tasks persist across sessions.
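
Before wiring this into MCP, you can sanity-check the storage layer on its own. A minimal sketch (the file name src/smoke-test.ts is a suggestion, not part of the project):

import { addTask, getTasks, updateTaskStatus, removeTask } from './tasks.js';

// Create a task, confirm it round-trips through tasks.json, then clean up.
const created = addTask('test the storage layer');
console.log('created:', created);

console.log('todo:', getTasks('todo'));

updateTaskStatus(created.id, 'done');
console.log('done:', getTasks('done'));

removeTask(created.id);
console.log('remaining:', getTasks().length);

Compile with npx tsc and run node dist/smoke-test.js. The file ~/.tasks/tasks.json should exist after the first run.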


Step 3: Define Your Tools

Create src/index.ts. You need four tools:

task_add

  • Purpose: Add a new task
  • Input: { title: string }
  • Returns: The created task

task_list

  • Purpose: List tasks
  • Input: { status?: 'todo' | 'doing' | 'done' } (optional filter)
  • Returns: Array of tasks

task_complete

  • Purpose: Mark a task as done
  • Input: { id: string }
  • Returns: The updated task (or error if not found)

task_remove

  • Purpose: Delete a task
  • Input: { id: string }
  • Returns: Success/failure

Apply Rams: four tools are enough. Resist the urge to add task_archive, task_priority, and the rest. Would those earn their existence right now?
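
If it helps to see the contract in one place before writing the server, the four input shapes map onto TypeScript types like these (the type names are illustrative; Step 4 uses inline casts instead):

import type { Task } from './tasks.js';

// Input shapes for the four tools, mirroring the JSON schemas in Step 4.
type TaskAddArgs      = { title: string };
type TaskListArgs     = { status?: Task['status'] };
type TaskCompleteArgs = { id: string };
type TaskRemoveArgs   = { id: string };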


Step 4: Implement the Server

Here's the complete src/index.ts:

#!/usr/bin/env node
import { Server } from '@modelcontextprotocol/sdk/server/index.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import {
  CallToolRequestSchema,
  ListToolsRequestSchema,
} from '@modelcontextprotocol/sdk/types.js';
import { addTask, getTasks, updateTaskStatus, removeTask, Task } from './tasks.js';

const server = new Server(
  { name: 'task-tracker', version: '1.0.0' },
  { capabilities: { tools: {} } }
);

// Define available tools
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: 'task_add',
      description: 'Add a new task',
      inputSchema: {
        type: 'object',
        properties: { title: { type: 'string', description: 'Task title' } },
        required: ['title'],
      },
    },
    {
      name: 'task_list',
      description: 'List tasks, optionally filtered by status',
      inputSchema: {
        type: 'object',
        properties: {
          status: { type: 'string', enum: ['todo', 'doing', 'done'] },
        },
      },
    },
    {
      name: 'task_complete',
      description: 'Mark a task as done',
      inputSchema: {
        type: 'object',
        properties: { id: { type: 'string', description: 'Task ID' } },
        required: ['id'],
      },
    },
    {
      name: 'task_remove',
      description: 'Remove a task permanently',
      inputSchema: {
        type: 'object',
        properties: { id: { type: 'string', description: 'Task ID' } },
        required: ['id'],
      },
    },
  ],
}));

// Handle tool calls
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { name, arguments: args } = request.params;

  switch (name) {
    case 'task_add': {
      const { title } = args as { title: string };
      const task = addTask(title);
      return { content: [{ type: 'text', text: JSON.stringify({ task }) }] };
    }

    case 'task_list': {
      const { status } = args as { status?: Task['status'] };
      const tasks = getTasks(status);
      return { content: [{ type: 'text', text: JSON.stringify({ tasks }) }] };
    }

    case 'task_complete': {
      const { id } = args as { id: string };
      const task = updateTaskStatus(id, 'done');
      if (!task) {
        return { content: [{ type: 'text', text: JSON.stringify({ error: 'Task not found' }) }] };
      }
      return { content: [{ type: 'text', text: JSON.stringify({ task }) }] };
    }

    case 'task_remove': {
      const { id } = args as { id: string };
      const success = removeTask(id);
      return { content: [{ type: 'text', text: JSON.stringify({ success }) }] };
    }

    default:
      return { content: [{ type: 'text', text: JSON.stringify({ error: 'Unknown tool' }) }] };
  }
});

// Start server
const transport = new StdioServerTransport();
await server.connect(transport);

Apply Heidegger: The return format serves the AI agent. It needs structured data it can parse and act on.
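
Concretely, the value your handler returns for a successful task_add is a single text block whose text is itself JSON; the agent parses that inner payload. An illustrative result (the id and timestamp are made up):

{
  "content": [
    {
      "type": "text",
      "text": "{\"task\":{\"id\":\"m3k9xq\",\"title\":\"review PR #42\",\"status\":\"todo\",\"createdAt\":\"2025-06-01T12:00:00.000Z\"}}"
    }
  ]
}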


Step 5: Build and Test

npx tsc

Fix any TypeScript errors. Then test manually:

echo '{"jsonrpc":"2.0","id":1,"method":"tools/list"}' | node dist/index.js

You should see your four tools listed.
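
Depending on your SDK version, the server may require the MCP initialize handshake before it answers tools/list. If the one-liner above returns an error instead of your tools, send the full sequence (the protocol version shown is an assumption; use one your SDK supports):

printf '%s\n' \
  '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"smoke-test","version":"0.0.1"}}}' \
  '{"jsonrpc":"2.0","method":"notifications/initialized"}' \
  '{"jsonrpc":"2.0","id":2,"method":"tools/list"}' \
  | node dist/index.js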


Step 6: Connect to Your AI Agent

Add your server to your AI agent's MCP configuration.

For Gemini CLI

Add to ~/.gemini/settings.json:

{
  "mcpServers": {
    "task-tracker": {
      "command": "node",
      "args": ["/Users/you/my-task-tracker/dist/index.js"]
    }
  }
}

For Claude Code

Add to .mcp.json in your project:

{
  "mcpServers": {
    "task-tracker": {
      "command": "node",
      "args": ["/Users/you/my-task-tracker/dist/index.js"]
    }
  }
}

Now you can say:

  • "Add a task: review PR #42"
  • "What's on my task list?"
  • "Mark the PR review task as done"

Your automation layer is working.


Step 7: Reflect

Before marking the capstone complete, answer these:

1. The Automation Layer — Where does your MCP server sit in the flow from intention to execution? What does it connect?

2. The Subtractive Triad — Where did DRY guide you? What didn't earn its existence? Does your server serve the workflow?

3. External Memory — What would break if tasks.json didn't exist? What does persistence enable?

4. Agent-Native Tools — How did you design for the agent, not for humans? What makes your tools easy to use?

These aren't rhetorical questions. Write your answers. The capstone isn't complete until you've reflected.


What You Built

A local automation layer. AI agents can now manage your tasks without you opening a todo app.

This is Simple Loom — the same patterns that power production task coordination.

What Comes Next

You've learned to see through the Subtractive Triad. You've built automation infrastructure. When the questions become automatic, you're ready for tools that execute what you now perceive.


Going Deeper

WORKWAY's Focus Workflow does this at team scale — syncing Slack messages to Notion tasks. Same philosophy, production infrastructure.

learn.createsomething.io covers building production automation like Focus Workflow.

You're not done learning. But you've started building.

That's the difference between Seeing and Dwelling.


Resources

Model Context Protocol (MCP)

Gemini CLI

CREATE SOMETHING

The Subtractive Triad