External Memory
How automation systems remember. Persistence patterns for agents.
Why Memory Matters
AI agents are stateless. Every conversation starts fresh. Without external memory, your agent forgets everything the moment the session ends.
External memory is what lets automation persist.
Session 1: "Add a task: review PR #42"
→ Task saved to ~/.tasks/tasks.json
Session 2: "What's on my task list?"
→ Agent reads from ~/.tasks/tasks.json
→ "You have one task: review PR #42"
The agent didn't remember. The automation layer remembered.
The Pattern
External memory follows a simple pattern:
// 1. Load state from persistent storage
const tasks = loadTasks(); // Read from disk/database
// 2. Modify state
tasks.push(newTask);
// 3. Save state back to persistent storage
saveTasks(tasks); // Write to disk/database
That's it. Load → Modify → Save.
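Concretely, adding a task is one trip through that loop. A minimal sketch (addTask and the task fields here are illustrative, not the tracker's final API):
function addTask(title: string) {
  const tasks = loadTasks();                           // 1. Load
  tasks.push({ id: Date.now(), title, done: false });  // 2. Modify
  saveTasks(tasks);                                    // 3. Save
}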
Storage Options
| Storage | Complexity | When to Use |
|---|---|---|
| JSON file | Low | Learning, simple tools, single-user |
| SQLite | Medium | Production, queries, multiple tools |
| Client-server database (Postgres, etc.) | High | Multi-user, cloud, scale |
For your first automation layer, JSON files are enough. Don't over-engineer.
The Task Tracker Pattern
Here's the complete external memory implementation you'll use:
import * as fs from 'fs';
import * as path from 'path';
import * as os from 'os';

// Where state lives
const TASKS_DIR = path.join(os.homedir(), '.tasks');
const TASKS_FILE = path.join(TASKS_DIR, 'tasks.json');

// Minimal Task shape (illustrative; your tracker may add more fields)
export interface Task {
  id: number;
  title: string;
  done: boolean;
}

// Ensure directory exists
function ensureDir() {
  if (!fs.existsSync(TASKS_DIR)) {
    fs.mkdirSync(TASKS_DIR, { recursive: true });
  }
}

// Load state
export function loadTasks(): Task[] {
  ensureDir();
  if (!fs.existsSync(TASKS_FILE)) return [];
  return JSON.parse(fs.readFileSync(TASKS_FILE, 'utf-8'));
}

// Save state
export function saveTasks(tasks: Task[]) {
  ensureDir();
  fs.writeFileSync(TASKS_FILE, JSON.stringify(tasks, null, 2));
}
Key decisions:
- Location: ~/.tasks/ — the user's home directory, not the project
- Format: JSON — human-readable, easy to debug
- Atomic operations: load all, modify, save all — simple and safe (see the sketch below)
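One optional hardening of that last decision (not part of the implementation above): write to a temporary file, then rename it into place, so a crash mid-write can't leave tasks.json half-written. A sketch:
function saveTasksAtomically(tasks: Task[]) {
  ensureDir();
  const tmp = TASKS_FILE + '.tmp';
  // Write the full payload to a scratch file first...
  fs.writeFileSync(tmp, JSON.stringify(tasks, null, 2));
  // ...then swap it into place; rename is atomic on the same filesystem
  fs.renameSync(tmp, TASKS_FILE);
}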
What Production Systems Use
Your Task Tracker uses JSON files. Here's how production systems scale up:
| System | Memory Pattern |
|---|---|
| Loom (task coordination) | SQLite + checkpoints |
| Ground (verification) | Evidence stored as JSON per run |
| WORKWAY (workflows) | Cloudflare D1 (SQLite at edge) |
The pattern is the same. Only the storage backend changes.
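As an illustration, here is the JSON-file module re-backed by SQLite with the same contract. This is a sketch assuming the better-sqlite3 package; the table schema and field names are illustrative:
import Database from 'better-sqlite3';
import * as fs from 'fs';
import * as os from 'os';
import * as path from 'path';

const DIR = path.join(os.homedir(), '.tasks');
fs.mkdirSync(DIR, { recursive: true }); // better-sqlite3 creates the file, not the directory
const db = new Database(path.join(DIR, 'tasks.db'));
db.exec('CREATE TABLE IF NOT EXISTS tasks (id INTEGER PRIMARY KEY, title TEXT NOT NULL, done INTEGER NOT NULL DEFAULT 0)');

// Same contract as loadTasks(): return every task
export function loadTasks() {
  return db.prepare('SELECT id, title, done FROM tasks').all();
}

// With SQL, a mutation inserts one row instead of rewriting the whole file
export function addTask(title: string) {
  db.prepare('INSERT INTO tasks (title) VALUES (?)').run(title);
}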
Common Mistakes
Mistake 1: Storing state in the agent's context
❌ "Remember that I have a task called 'review PR'"
→ Agent forgets next session
✓ Use external memory
→ Task persists forever
Mistake 2: Over-engineering storage
❌ "I need PostgreSQL with proper migrations"
→ You're building a learning project
✓ Start with JSON files
→ Migrate when you hit real limits
Mistake 3: Not handling missing files
// ❌ Crashes if file doesn't exist
const tasks = JSON.parse(fs.readFileSync(TASKS_FILE, 'utf-8'));
// ✓ Returns empty array if file doesn't exist
if (!fs.existsSync(TASKS_FILE)) return [];
return JSON.parse(fs.readFileSync(TASKS_FILE, 'utf-8'));
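A related guard, optional for a learning project and not shown above: if the file exists but holds invalid JSON (say a write was interrupted), JSON.parse throws. A try/catch covers both the missing-file and corrupt-file cases:
// ✓ Returns an empty array if the file is missing or corrupt
try {
  return JSON.parse(fs.readFileSync(TASKS_FILE, 'utf-8'));
} catch {
  return [];
}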
The Triad Applied
| Question | Application |
|---|---|
| DRY | One loadTasks() function, used everywhere |
| Rams | JSON file is the simplest storage that works |
| Heidegger | Storage serves the workflow (task management) |
Reflection
External memory is what makes automation useful. Without it, every session starts from zero.
What would break in your daily workflow if your tools forgot everything between sessions?
Everything you thought of — that's what external memory prevents.