Best Practices for Designing APIs for AI Agents (2026)
Design principles, common mistakes to avoid, and how to think in actions instead of endpoints when building APIs for AI agents.
The mindset shift: actions, not endpoints
The most important change when designing APIs for AI agents is shifting from endpoint-centric to action-centric thinking. Traditional API design organizes operations around resources and HTTP methods: GET /users, POST /users, PUT /users/:id. Agents do not think this way.
Agents think in actions and outcomes. They want to “create a user,” “send an email,” or “process a payment.” The underlying HTTP method and resource path are implementation details. When you design for agents, you design around what the API does, not how it is structured internally.
This shift is explored in depth in Structured Actions vs REST Endpoints. The key takeaway: agents need a clear mapping from intent to action, with all the context required to execute correctly.
Five design principles for agent-friendly APIs
These principles guide every design decision when building APIs that agents can use reliably.
1. Use clear, descriptive action names
Action names should describe what the operation does, not the HTTP method or resource path. An agent should be able to understand the purpose of an action from its name alone.
Good action names:
- sendEmail: clear verb + object
- createUser: obvious what it does
- getInvoiceById: specific retrieval
- cancelSubscription: unambiguous intent
Bad action names:
- POST /v2/messages: method + path, not an action
- processData: vague; what data? what process?
- handleRequest: meaningless to an agent
- doAction: completely opaque
2. Keep inputs simple and well-typed
Every input parameter should have a clear type, a description, and validation constraints. Avoid deeply nested objects, optional parameters with complex interdependencies, and polymorphic inputs. The simpler the input schema, the less likely an agent is to hallucinate incorrect values.
{
"action": "sendEmail",
"inputs": {
"to": {
"type": "string",
"format": "email",
"required": true,
"description": "Recipient email address"
},
"subject": {
"type": "string",
"required": true,
"maxLength": 200,
"description": "Email subject line"
},
"body": {
"type": "string",
"required": true,
"description": "Plain text email body"
},
"send_at": {
"type": "string",
"format": "date-time",
"required": false,
"description": "Schedule send time (ISO 8601). Omit for immediate send."
}
}
}
3. Return predictable, structured outputs
Output schemas should be consistent and typed. Every action should return the same structure regardless of the input, with clear status indicators and error messages. Agents use output schemas to plan subsequent steps. If the output structure varies unpredictably, downstream actions will fail.
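One way to enforce this is a uniform response envelope that every action returns. The sketch below is illustrative: the field names (status, data, error) are assumptions, not part of any formal standard, but the principle is that agents can always branch on the same top-level shape.

```python
# Sketch of a uniform response envelope: every action returns the same
# top-level structure, so agents can branch on "status" without learning
# per-action special cases. Field names here are illustrative.

def make_response(status, data=None, error=None):
    """Wrap any action result in a predictable envelope."""
    if status not in ("success", "error", "pending"):
        raise ValueError(f"unknown status: {status}")
    return {
        "status": status,  # always present: success | error | pending
        "data": data,      # action-specific payload, None on error
        "error": error,    # structured error object, None on success
    }

ok = make_response("success", data={"transaction_id": "tx_123"})
fail = make_response("error", error={"code": "invalid_email",
                                     "message": "Recipient address is malformed"})
```

Because the envelope never changes shape, an agent planning a multi-step workflow can rely on `data` being present exactly when `status` is "success".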
4. Group related operations logically
Organize actions into logical groups that correspond to capabilities. A payments API might group actions as: charge, refund, getTransaction, listTransactions. An email API might group: sendEmail, getEmailStatus, listTemplates. This grouping helps agents understand the scope of your API and select the right action for their task.
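A simple way to make this grouping machine-checkable is an action registry keyed by capability. The group and action names below mirror the examples above; the registry structure itself is a hypothetical sketch, not a required format.

```python
# Hypothetical action registry grouped by capability. Agents (or tooling)
# can use it to discover which capability an action belongs to.

ACTION_GROUPS = {
    "payments": ["charge", "refund", "getTransaction", "listTransactions"],
    "email": ["sendEmail", "getEmailStatus", "listTemplates"],
}

def find_group(action_name):
    """Return the capability group an action belongs to, or None."""
    for group, actions in ACTION_GROUPS.items():
        if action_name in actions:
            return group
    return None
```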
5. Include reasoning documentation
Every action should include reasoning docs: when to use it, when not to use it, common mistakes, and expected outcomes. This is the single most effective way to prevent agents from misusing your API. Reasoning docs are the difference between an agent that guesses and an agent that knows.
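One way to make reasoning docs enforceable is to attach them as structured fields on the action definition and check for their presence. The four fields below correspond to the categories covered later in this article; the exact key names are an illustrative assumption.

```python
# Sketch of reasoning docs attached to an action definition, plus a
# completeness check. Key names are illustrative, not a formal schema.

SEND_EMAIL_ACTION = {
    "name": "sendEmail",
    "reasoning": {
        "when_to_use": "Transactional messages: receipts, notifications, password resets.",
        "when_not_to_use": "Marketing or bulk sends; use the newsletter API instead.",
        "common_mistakes": "Sending HTML markup in the plain-text body field.",
        "expected_outputs": "Returns a message_id and a status of 'queued' or 'sent'.",
    },
}

def has_complete_reasoning(action):
    """Check that all four reasoning sections are present and non-empty."""
    required = {"when_to_use", "when_not_to_use", "common_mistakes", "expected_outputs"}
    reasoning = action.get("reasoning", {})
    return required <= {key for key, value in reasoning.items() if value}
```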
Common mistakes to avoid
These are the most frequent design mistakes we see in APIs that agents struggle to use correctly. Each one increases hallucination rates and reduces agent reliability.
Overly complex endpoints with too many parameters
Endpoints that accept 20+ parameters with complex interdependencies are difficult for agents to use correctly. Break complex operations into smaller, focused actions. Instead of a single createOrder with 50 fields, provide createOrder, addLineItem, setShippingAddress, and confirmOrder.
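The split-up order flow can be sketched as a sequence of small, independently validated steps. Function and field names below are hypothetical; the point is that each step accepts a handful of parameters and the final confirmation verifies the earlier steps actually ran.

```python
# Illustrative sketch: a monolithic createOrder with dozens of fields,
# split into focused steps an agent can call one at a time.

def create_order(customer_id):
    return {"customer_id": customer_id, "items": [], "shipping": None,
            "confirmed": False}

def add_line_item(order, sku, quantity):
    order["items"].append({"sku": sku, "quantity": quantity})
    return order

def set_shipping_address(order, address):
    order["shipping"] = address
    return order

def confirm_order(order):
    # Each step validates only its own small input; confirmation checks
    # that the order is actually complete before committing it.
    if not order["items"] or order["shipping"] is None:
        raise ValueError("order is incomplete")
    order["confirmed"] = True
    return order
```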
Inconsistent naming and schemas
Using user_id in one endpoint and userId in another, or returning dates as ISO 8601 in one response and Unix timestamps in another, forces agents to learn special cases, which increases errors. Pick one convention and enforce it everywhere.
Missing descriptions and examples
Parameters without descriptions force agents to guess from the name alone. A field called amount could be in dollars, cents, or a custom unit. Always include a description with format, unit, and constraints. Always include at least one example request/response pair per action.
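A parameter spec that removes the dollars-versus-cents ambiguity might look like the sketch below. The key names and the example pair are illustrative assumptions; what matters is that the description names the unit and a concrete request/response pair shows a valid call.

```python
# Sketch of a field spec whose description states the unit explicitly,
# plus one example request/response pair for the action.

AMOUNT_FIELD = {
    "type": "integer",
    "required": True,
    "min": 1,
    "description": "Charge amount in cents (e.g. 1999 = $19.99)",
}

EXAMPLE_PAIR = {
    "request": {"action": "charge",
                "inputs": {"amount": 1999, "currency": "usd"}},
    "response": {"status": "success",
                 "data": {"transaction_id": "tx_123"}},
}
```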
No error guidance for agents
When an API returns an error, agents need to know what went wrong and how to fix it. Returning 500 Internal Server Error with no body is useless to an agent. Return structured errors with error codes, descriptions, and suggested remediation steps.
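A structured error body an agent can act on might look like this sketch. The field names (code, message, remediation) are illustrative, not a formal standard; the pattern is one stable machine-matchable code plus human-readable guidance.

```python
# Sketch of a structured, agent-actionable error body.

def structured_error(code, message, remediation):
    return {
        "status": "error",
        "error": {
            "code": code,                # stable, machine-matchable identifier
            "message": message,          # readable explanation of what failed
            "remediation": remediation,  # what the caller should do next
        },
    }

err = structured_error(
    "invalid_email",
    "'to' is not a valid email address",
    "Check the recipient address format and retry with a valid address.",
)
```

An agent receiving this can match on `code` to decide whether to retry, repair its input, or escalate, instead of guessing from an opaque 500.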
Input and output design: good vs bad
The quality of your input/output design directly impacts how reliably agents can use your API. Here are concrete examples of good and bad design patterns.
Bad: ambiguous, untyped input
{
"action": "process",
"inputs": {
"data": "any",
"options": "object",
"mode": "string"
}
}
// Agent has no idea what "data" should contain,
// what "options" are available, or what "mode" values
// are valid. This will hallucinate.
Good: explicit, typed, described
{
"action": "resizeImage",
"description": "Resize an image to specified dimensions",
"inputs": {
"image_url": {
"type": "string",
"format": "url",
"required": true,
"description": "Public URL of the source image"
},
"width": {
"type": "integer",
"required": true,
"min": 1,
"max": 4096,
"description": "Target width in pixels"
},
"height": {
"type": "integer",
"required": true,
"min": 1,
"max": 4096,
"description": "Target height in pixels"
},
"format": {
"type": "string",
"enum": ["png", "jpg", "webp"],
"default": "png",
"description": "Output image format"
}
},
"outputs": {
"url": { "type": "string", "description": "URL of the resized image" },
"size_bytes": { "type": "integer", "description": "File size in bytes" }
}
}
Writing effective reasoning documentation
Reasoning documentation gives agents the context they need to make correct decisions about when and how to use your API. It bridges the gap between knowing what an action does and knowing when to use it.
Every action should include four types of reasoning documentation:
When to use
Describe the specific scenarios where this action is the right choice. Be concrete: “Use sendEmail for transactional messages like receipts, notifications, and password resets” is better than “Use this to send emails.”
When not to use
Explicitly state what this action should not be used for. “Do not use for marketing emails or bulk sends. Use the newsletter API instead.” Negative guidance is as important as positive guidance for preventing misuse.
Common mistakes
List the specific mistakes agents commonly make with this action. “Common mistake: sending amount as a string instead of integer in cents.” These warnings help agents avoid known pitfalls. See our post on why agents hallucinate API calls for more on preventing these errors.
Expected outputs
Describe what the agent should expect in the response and how to interpret it. “Returns a transaction_id and status. If status is ‘pending,’ poll getTransactionStatus every 5 seconds until it resolves.”
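The polling instruction above can be sketched as a loop. getTransactionStatus is a hypothetical API call, stubbed here with canned responses; a real client would make an HTTP request instead.

```python
import time

# Stubbed responses standing in for a hypothetical getTransactionStatus
# API call; a real client would issue an HTTP request per poll.
_STUB_RESPONSES = ["pending", "pending", "succeeded"]

def get_transaction_status(tx_id):
    return _STUB_RESPONSES.pop(0)  # stub: resolves on the third poll

def wait_for_transaction(tx_id, interval=5, max_polls=60):
    """Poll until the transaction leaves 'pending', as the docs instruct."""
    for _ in range(max_polls):
        status = get_transaction_status(tx_id)
        if status != "pending":
            return status
        time.sleep(interval)
    raise TimeoutError(f"transaction {tx_id} still pending after {max_polls} polls")
```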
Testing your API design with agents
Designing for agents is not complete until you validate that agents can actually use your API correctly. Testing with real agents reveals ambiguities, missing context, and design flaws that are invisible from the specification alone.
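Some of that validation can start before any agent is involved, with a simple lint pass over the spec. The checks below are an illustrative sketch against the JSON-style action format used in this article; they catch two of the issues discussed earlier, missing descriptions and untyped parameters.

```python
# Minimal sketch of a spec lint that flags problems agents will hit.
# The spec format mirrors this article's JSON examples; the checks
# themselves are illustrative, not exhaustive.

def lint_action(action):
    problems = []
    if not action.get("description"):
        problems.append("action has no description")
    for name, spec in action.get("inputs", {}).items():
        if "type" not in spec:
            problems.append(f"input '{name}' is untyped")
        if not spec.get("description"):
            problems.append(f"input '{name}' has no description")
    return problems
```

Running this over the ambiguous "process" example from earlier flags every field; running it over the "resizeImage" example comes back clean.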
Further reading
These design best practices are part of a broader shift in how APIs are built and documented. For the full picture, explore these related resources:
Best API Documentation for AI Agents — a comprehensive guide to platforms, formats, and strategies for agent-native documentation.
Why AI Agents Hallucinate API Calls — understand the root causes of hallucination and how good design prevents it.
How Elba supports agent-friendly API design
Elba is built around the action-centric design philosophy. When you define your API in Elba, you create structured actions with typed inputs, typed outputs, reasoning docs, and example prompts. Elba then generates all the machine-readable formats agents need: agent.json, MCP config, llms.txt, and JSON-LD metadata.
The platform validates your design by checking for missing descriptions, untyped parameters, and absent reasoning docs. It surfaces the issues agents will encounter before you publish, so you can fix them proactively instead of debugging hallucinations in production.