How to Build a Simple AI Agent in 30 Minutes
April 16, 2026 · 7 min read
What You'll Build
In this tutorial, you'll build a minimal coding agent that can read files, write files, and run terminal commands — the three core capabilities of any coding agent. It'll use the Anthropic API with tool use, and the whole thing is under 100 lines of code.
This isn't a toy. The same architecture powers production tools like Claude Code. The difference is just scale — our agent will be simpler, but the core loop is identical.
Prerequisites
- Node.js 18+ installed
- An Anthropic API key — get one at console.anthropic.com
- $5 in API credits — this tutorial will cost less than $0.50
Step 1: Set Up Your Project
Create a new directory and install the Anthropic SDK:
mkdir my-agent && cd my-agent
npm init -y
npm install @anthropic-ai/sdk
Set your API key as an environment variable:
export ANTHROPIC_API_KEY="your-key-here"
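The SDK reads ANTHROPIC_API_KEY from the environment automatically. If you'd like the script to fail fast with a readable message when the key is missing, a small guard like this works (requireEnv is a hypothetical helper, not part of the SDK):

```typescript
// Fail fast with a clear error instead of an opaque 401 from the API later.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Call once at startup, e.g.: requireEnv("ANTHROPIC_API_KEY");
```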
Step 2: Define Your Tools
Tools are how the agent interacts with the world. We'll define three: read a file, write a file, and run a shell command. Each tool has a name, description, and input schema that tells the model how to use it.
const tools = [
  {
    name: "read_file",
    description: "Read the contents of a file at the given path",
    input_schema: {
      type: "object",
      properties: {
        path: { type: "string", description: "File path to read" }
      },
      required: ["path"]
    }
  },
  {
    name: "write_file",
    description: "Write content to a file at the given path",
    input_schema: {
      type: "object",
      properties: {
        path: { type: "string", description: "File path to write" },
        content: { type: "string", description: "Content to write" }
      },
      required: ["path", "content"]
    }
  },
  {
    name: "run_command",
    description: "Run a shell command and return the output",
    input_schema: {
      type: "object",
      properties: {
        command: { type: "string", description: "Shell command to run" }
      },
      required: ["command"]
    }
  }
];
Step 3: Implement Tool Execution
When the model decides to use a tool, we need to actually execute it and return the result. This is where the agent takes action in the real world:
import { execSync } from "child_process";
import { readFileSync, writeFileSync } from "fs";

function executeTool(name: string, input: Record<string, string>) {
  switch (name) {
    case "read_file":
      return readFileSync(input.path, "utf-8");
    case "write_file":
      writeFileSync(input.path, input.content, "utf-8");
      return "File written successfully";
    case "run_command":
      return execSync(input.command, { encoding: "utf-8" });
    default:
      return "Unknown tool";
  }
}
A note on safety: this code runs commands without any sandboxing. In a production agent, you'd want to restrict which commands can run, validate file paths, and run everything in a sandbox. For learning purposes, this is fine — just don't point it at anything important.
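One lightweight mitigation is an allowlist check before execSync runs anything. This is a sketch, not a real sandbox, and the set of allowed commands below is an illustrative assumption:

```typescript
// Illustrative allowlist: only the command's first word (the binary) is checked.
// NOT a sandbox -- a real agent would also need path validation and isolation.
const ALLOWED_COMMANDS = new Set(["ls", "cat", "echo", "node", "npm"]);

function isCommandAllowed(command: string): boolean {
  // Reject chaining, piping, and substitution outright.
  if (/[;&|`$<>]/.test(command)) return false;
  const binary = command.trim().split(/\s+/)[0];
  return ALLOWED_COMMANDS.has(binary);
}
```

In executeTool, you'd call isCommandAllowed(input.command) before execSync and return an error string to the model when the check fails, so the agent can recover instead of crashing.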
Step 4: The Agent Loop
This is the core of every AI agent. The loop works like this: send the conversation to the model, check if it wants to use a tool, execute the tool, add the result to the conversation, and repeat. The agent stops when the model responds with text instead of a tool call.
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic();

async function agent(userMessage: string) {
  const messages: Anthropic.MessageParam[] = [{ role: "user", content: userMessage }];

  while (true) {
    const response = await client.messages.create({
      model: "claude-sonnet-4-6-20250514",
      max_tokens: 4096,
      tools: tools.map(t => ({
        name: t.name,
        description: t.description,
        input_schema: t.input_schema
      })),
      messages
    });

    const stopReason = response.stop_reason;

    // Add assistant response to conversation
    messages.push({ role: "assistant", content: response.content });

    if (stopReason === "end_turn") {
      // Model finished — print the text response
      const textBlock = response.content.find(
        (b): b is Anthropic.TextBlock => b.type === "text"
      );
      console.log(textBlock?.text);
      break;
    }

    if (stopReason === "tool_use") {
      // Collect all tool results, then send as a single user message
      const toolResults: Array<{ type: "tool_result"; tool_use_id: string; content: string }> = [];
      for (const block of response.content) {
        if (block.type === "tool_use") {
          console.log(`Calling tool: ${block.name}`);
          const result = executeTool(block.name, block.input as Record<string, string>);
          toolResults.push({
            type: "tool_result",
            tool_use_id: block.id,
            content: result
          });
        }
      }
      messages.push({ role: "user", content: toolResults });
    }
  }
}
// Run it!
agent("Create a file called hello.txt with 'Hello, agent!' in it, then read it back.").catch(console.error);
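One robustness tweak worth making before you extend this: the while (true) loop assumes stop_reason is always end_turn or tool_use, so an unexpected value (such as max_tokens) could spin forever. A capped continuation check is a simple fix; MAX_TURNS here is an arbitrary choice, not a recommended value:

```typescript
// Sketch: bound the agent loop and only continue on explicit tool calls.
const MAX_TURNS = 20; // arbitrary safety cap

function shouldContinue(stopReason: string | null, turn: number): boolean {
  if (turn >= MAX_TURNS) return false; // hard cap on iterations
  return stopReason === "tool_use";    // anything else ends the loop
}
```

In the agent loop you'd track a turn counter and break when shouldContinue returns false, logging the stop reason so unexpected exits are visible.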
What Happens When You Run It
1. The model receives your instruction and decides to call write_file with path "hello.txt" and content "Hello, agent!"
2. The tool executes, the file is created, and "File written successfully" is returned to the model
3. The model decides to call read_file with path "hello.txt"
4. The tool executes, reads the file, and returns "Hello, agent!"
5. The model sees the content matches what it wrote and responds with a text confirmation
6. The loop exits
That's it. You've built an AI agent. The entire thing is about 80 lines of code, and the architecture scales — add more tools, add better error handling, add a system prompt with project context, and you've got something close to a real coding assistant.
Key Takeaways
- An agent is just a loop — Send messages, check for tool calls, execute tools, repeat. The complexity comes from the tools and context management, not the core loop.
- Tools are the agent's hands — Without tools, it's just a chatbot. The tools you define determine what the agent can actually do.
- Every tool call costs tokens — The tool result gets added to the conversation and re-processed on every subsequent turn. This is why agent costs grow over time.
- Model choice matters — Claude Sonnet is a good balance of capability and cost for coding tasks. Haiku is cheaper but less reliable with complex tool sequences.
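The cost growth in the third takeaway is easy to see with back-of-the-envelope arithmetic. The numbers below are made-up illustrative values, not measurements:

```typescript
// Each turn re-sends the whole conversation, so per-turn input grows linearly
// and cumulative input tokens across a session grow quadratically.
function cumulativeInputTokens(
  turns: number,
  tokensAddedPerTurn: number, // tool result + assistant output appended each turn
  initialTokens: number        // system prompt + first user message
): number {
  let total = 0;
  let contextSize = initialTokens;
  for (let i = 0; i < turns; i++) {
    total += contextSize;              // this turn's input: everything so far
    contextSize += tokensAddedPerTurn; // conversation grows before the next turn
  }
  return total;
}

// 10 turns, 500 tokens added per turn, 1,000-token starting prompt:
// turn inputs are 1000, 1500, 2000, ... 5500 -> 32,500 total input tokens
```

Doubling the number of turns roughly quadruples the cumulative input, which is why trimming tool results and summarizing old context pays off in long agent sessions.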
Before you start building, estimate your API costs with our AI Cost Estimator — it'll help you pick the right model and understand what your agent will cost to run at scale.
Want to calculate exact costs for your project?
Estimate Your AI Coding Costs →