Agent-Native Tools
Designing tools for AI agents, not humans. Different inputs, different outputs.
Designing for Agents, Not Humans
When you build a CLI tool for humans, you optimize for:
- Helpful error messages
- Interactive prompts
- Flexible input formats
- Colorful output
When you build a tool for AI agents, you optimize for:
- Structured input (JSON schemas)
- Structured output (parseable responses)
- Clear affordances (what can this tool do?)
- Predictable errors (machine-readable failure modes)
The MCP Pattern
MCP (Model Context Protocol) is how AI agents discover and use tools.
// 1. Define what the tool does (schema)
{
  name: 'task_add',
  description: 'Add a new task',
  inputSchema: {
    type: 'object',
    properties: {
      title: { type: 'string', description: 'Task title' }
    },
    required: ['title']
  }
}
// 2. Handle the tool call
case 'task_add': {
  const { title } = args as { title: string };
  const task = addTask(title);
  return { content: [{ type: 'text', text: JSON.stringify({ task }) }] };
}
The agent reads the schema to understand what it can do. Good schemas = good tool use.
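Putting the two halves together, the server side is just a dispatch on the tool name. A minimal, framework-free sketch (the `ToolResult` type, the in-memory task array, and `handleToolCall` are illustrative names, not part of the MCP SDK):

```typescript
// Shape of an MCP-style tool result: text content the agent can parse.
type ToolResult = { content: { type: 'text'; text: string }[] };

// Hypothetical in-memory task store for illustration.
const tasks: { id: string; title: string; status: string }[] = [];

function addTask(title: string) {
  const task = { id: Math.random().toString(36).slice(2, 8), title, status: 'todo' };
  tasks.push(task);
  return task;
}

// Dispatch on the tool name, as the `case 'task_add'` block above does.
function handleToolCall(name: string, args: Record<string, unknown>): ToolResult {
  switch (name) {
    case 'task_add': {
      const { title } = args as { title: string };
      const task = addTask(title);
      return { content: [{ type: 'text', text: JSON.stringify({ task }) }] };
    }
    default:
      // Even "unknown tool" is a structured, machine-readable response.
      return { content: [{ type: 'text', text: JSON.stringify({ error: `Unknown tool: ${name}` }) }] };
  }
}
```

The default branch matters: an agent that mistypes a tool name gets a parseable error, not a crash.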
Input: JSON Schemas
Agents need to know exactly what input is valid.
Bad Schema
{
  name: 'task_add',
  description: 'Add a task', // Vague
  inputSchema: { type: 'object' } // No properties defined
}
Agent doesn't know what to pass.
Good Schema
{
  name: 'task_add',
  description: 'Add a new task to the task list',
  inputSchema: {
    type: 'object',
    properties: {
      title: {
        type: 'string',
        description: 'The task title (required)'
      }
    },
    required: ['title']
  }
}
Agent knows exactly what to pass.
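A precise schema has a second payoff: the server can reject malformed calls mechanically before dispatch. A hand-rolled sketch of that check (a real server would use a full JSON Schema validator such as Ajv; `MiniSchema` and `validateArgs` are illustrative names):

```typescript
// Minimal slice of a JSON schema: just enough to check typed, required fields.
type MiniSchema = {
  type: 'object';
  properties: Record<string, { type: string; description: string }>;
  required?: string[];
};

const taskAddSchema: MiniSchema = {
  type: 'object',
  properties: {
    title: { type: 'string', description: 'The task title (required)' }
  },
  required: ['title']
};

// Returns a list of problems; an empty list means the args are valid.
function validateArgs(schema: MiniSchema, args: Record<string, unknown>): string[] {
  const problems: string[] = [];
  for (const key of schema.required ?? []) {
    if (!(key in args)) problems.push(`Missing required field: ${key}`);
  }
  for (const [key, value] of Object.entries(args)) {
    const prop = schema.properties[key];
    if (!prop) problems.push(`Unknown field: ${key}`);
    else if (typeof value !== prop.type) problems.push(`Field ${key} must be a ${prop.type}`);
  }
  return problems;
}
```

With the bad schema above, this check is impossible: no `properties`, nothing to validate against.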
Output: Structured Responses
Agents need to parse your responses. Always return structured data.
Bad Output
return { content: [{ type: 'text', text: 'Task added!' }] };
Agent can't extract the task ID for follow-up operations.
Good Output
return {
  content: [{
    type: 'text',
    text: JSON.stringify({
      task: { id: 'abc123', title: 'Review PR', status: 'todo' }
    })
  }]
};
Agent can parse the response and use the ID.
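From the agent's side, that structured payload chains directly into the next call. A sketch (the `task_complete` follow-up shape is illustrative):

```typescript
// The structured payload returned by task_add, as shown above.
const response = {
  content: [{
    type: 'text',
    text: JSON.stringify({ task: { id: 'abc123', title: 'Review PR', status: 'todo' } })
  }]
};

// The agent parses the text content and reuses the id in a follow-up call.
const { task } = JSON.parse(response.content[0].text);
const followUp = { name: 'task_complete', args: { id: task.id } };
```

With the bad output ('Task added!'), `JSON.parse` fails and the chain breaks.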
Error Handling for Agents
Agents need to know when things fail — and why.
Bad Error
throw new Error('Something went wrong');
Agent can't recover or explain the failure.
Good Error
return {
  content: [{
    type: 'text',
    text: JSON.stringify({
      error: 'Task not found',
      id: requestedId,
      suggestion: 'Use task_list to see available tasks'
    })
  }]
};
Agent can explain the failure and suggest recovery.
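In context, the structured-error pattern sits inside a handler. A sketch of a hypothetical `task_complete` implementation (the `Map`-based store and seed data are illustrative):

```typescript
type Task = { id: string; title: string; status: 'todo' | 'done' };
type ToolResult = { content: { type: 'text'; text: string }[] };

// Hypothetical in-memory store for illustration.
const store = new Map<string, Task>([
  ['abc123', { id: 'abc123', title: 'Review PR', status: 'todo' }]
]);

function completeTask(requestedId: string): ToolResult {
  const task = store.get(requestedId);
  if (!task) {
    // Machine-readable failure: what went wrong, with what input, and what to try next.
    return {
      content: [{
        type: 'text',
        text: JSON.stringify({
          error: 'Task not found',
          id: requestedId,
          suggestion: 'Use task_list to see available tasks'
        })
      }]
    };
  }
  task.status = 'done';
  return { content: [{ type: 'text', text: JSON.stringify({ task }) }] };
}
```

Note that the failure is a normal return, not a thrown exception: the agent always gets a response it can parse.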
Tool Boundaries
Each tool should do one thing well.
Bad Boundaries
// One tool that does too much
task_manage({ action: 'add' | 'remove' | 'update' | 'list', ... })
Confusing for agents. Complex schema. Hard to describe.
Good Boundaries
// Separate tools with clear purposes
task_add({ title })
task_list({ status? })
task_complete({ id })
task_remove({ id })
Each tool has one job. Clear schemas. Easy to choose.
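The four narrow tools can share one schema-definition pattern, which keeps each description short and the choice between tools obvious. A sketch (the `ToolDef` type and exact descriptions are illustrative):

```typescript
// One reusable shape for a tool definition.
type ToolDef = {
  name: string;
  description: string;
  inputSchema: { type: 'object'; properties: Record<string, object>; required?: string[] };
};

// Four tools, one job each, one consistent schema pattern.
const tools: ToolDef[] = [
  {
    name: 'task_add',
    description: 'Add a new task to the task list',
    inputSchema: {
      type: 'object',
      properties: { title: { type: 'string', description: 'The task title' } },
      required: ['title']
    }
  },
  {
    name: 'task_list',
    description: 'List tasks, optionally filtered by status',
    inputSchema: {
      type: 'object',
      properties: { status: { type: 'string', description: 'Filter: todo or done' } }
      // No required fields: status is optional.
    }
  },
  {
    name: 'task_complete',
    description: 'Mark a task as done',
    inputSchema: {
      type: 'object',
      properties: { id: { type: 'string', description: 'The task id' } },
      required: ['id']
    }
  },
  {
    name: 'task_remove',
    description: 'Delete a task',
    inputSchema: {
      type: 'object',
      properties: { id: { type: 'string', description: 'The task id' } },
      required: ['id']
    }
  }
];
```

Compare this list to the `task_manage` mega-tool above: four one-line descriptions versus one schema that has to explain every action.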
The Four Task Tracker Tools
| Tool | Input | Output | Purpose |
|---|---|---|---|
| task_add | { title: string } | { task: Task } | Create a task |
| task_list | { status?: string } | { tasks: Task[] } | List tasks |
| task_complete | { id: string } | { task: Task } or { error } | Mark done |
| task_remove | { id: string } | { success: boolean } | Delete |
Notice: Consistent patterns. Structured I/O. Clear boundaries.
The Triad Applied
| Question | Application |
|---|---|
| DRY | One schema pattern used across all tools |
| Rams | Four tools — no more. Each earns its existence. |
| Heidegger | Tools serve the agent's workflow |
Reflection
The best agent tools are invisible. The agent doesn't struggle with input formats or parse cryptic outputs. It just uses the tool and gets structured results.
What makes a tool easy for an agent to use? The answer is always: clarity. Clear inputs, clear outputs, clear boundaries.