What is llms.txt and How It Helps AI Understand Your API (2026)

llms.txt is an emerging standard that gives LLMs context about your product. Learn what to include, best practices, and how it fits into the broader discovery stack.

What is llms.txt?

llms.txt is a plain-text file hosted at the root of your domain (e.g., yourdomain.com/llms.txt) that provides a concise, structured overview of your product or API specifically for large language models. It is designed to be the first thing an LLM reads when it encounters your service, giving it enough context to understand what you do, what capabilities you offer, and how to interact with your API.

The concept is inspired by robots.txt, which tells search engine crawlers how to interact with a website. Where robots.txt communicates rules for web crawlers, llms.txt communicates context for AI models. It is a lightweight, human-readable, and machine-parseable way to introduce your product to AI systems.

Unlike structured formats such as agent.json or OpenAPI specifications, llms.txt is intentionally simple. It uses plain text with light formatting — headings, bullet points, and short paragraphs — because LLMs process natural language natively. The file is meant to be concise (typically under 2,000 words) and focused on the information that matters most for AI comprehension.

Why llms.txt matters

Large language models face a fundamental challenge when interacting with APIs and products they have not been specifically trained on: they lack context. Without a clear summary of what your product does, an LLM may hallucinate capabilities, misunderstand your API's purpose, or fail to use it correctly.

LLMs need concise summaries

LLMs work best with focused, well-organized context. A full API reference with hundreds of endpoints can overwhelm the model's context window or cause it to miss key information. llms.txt provides a curated summary that fits within any context window and highlights what matters most.

Reduces hallucination

When an LLM does not have accurate context about your product, it fills in the gaps with its best guesses. These guesses often produce incorrect information about your API's capabilities, endpoints, or behavior. A well-written llms.txt grounds the model in reality, significantly reducing hallucination.

Improves context quality

When users ask an AI assistant about your product, the quality of the response depends on the context available. llms.txt ensures that any LLM that reads it gets an accurate, up-to-date picture of your product — leading to better answers for end users.

What to include in your llms.txt

An effective llms.txt file covers five key areas. Each should be concise but informative enough for an LLM to build an accurate mental model of your product.

1. Product description. A one- to two-sentence description of what your product is and what problem it solves. Be specific and avoid marketing language. “Elba is an API documentation platform built for AI agents” is better than “Elba revolutionizes the API ecosystem.”
2. Core capabilities. A bulleted list of what your API can do. Each capability should be a single line with a clear action verb: “Create and manage project tasks”, “Send transactional emails”, “Process credit card payments.”
3. Key endpoints or actions. A summary of the most important API endpoints or actions. You do not need to list every endpoint — focus on the 5-10 most commonly used operations.
4. Authentication. How to authenticate with your API. Include the auth type (API key, OAuth, bearer token), where to get credentials, and any special headers required.
5. Key terms and concepts. Any domain-specific terminology the LLM needs to understand. If your API uses terms like “workspace”, “board”, or “pipeline” in specific ways, define them here.

Example llms.txt file

Here is a realistic example of an llms.txt file for a project management API:

llms.txt
# ProjectBoard API

> ProjectBoard is a project management API that enables task tracking,
> team collaboration, and workflow automation.

## Capabilities

- Create, update, and delete tasks
- Organize tasks into boards and columns
- Assign tasks to team members
- Set priorities and due dates
- Search and filter tasks by status, assignee, or label
- Manage team members and permissions
- Create automated workflows (triggers and actions)
- Generate project reports and analytics

## Key Actions

- createTask: Create a new task (requires boardId, title)
- listTasks: List tasks in a board (supports filtering)
- updateTask: Update task properties (status, assignee, priority)
- deleteTask: Remove a task permanently
- searchTasks: Full-text search across all tasks
- createBoard: Create a new project board
- addTeamMember: Invite a user to a board

## Authentication

- Type: Bearer token
- Header: Authorization: Bearer {api_key}
- Get your API key at: https://projectboard.io/settings/api

## Key Concepts

- Board: A project container that holds tasks and columns
- Column: A workflow stage (e.g., To Do, In Progress, Done)
- Task: A work item with title, description, assignee, priority
- Workspace: An organization-level container for boards and members

## Links

- API Reference: https://docs.projectboard.io/api
- Agent Documentation: https://docs.projectboard.io/agent
- MCP Server: npx @projectboard/mcp-server
- agent.json: https://projectboard.io/.well-known/agent.json

Notice how the file is concise but comprehensive. It gives an LLM everything it needs to understand the product, its capabilities, how to authenticate, and where to find more detailed documentation. An LLM reading this file can immediately answer questions about ProjectBoard and guide users toward the right API calls.
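Because the file uses light, predictable Markdown, tooling can extract its sections with very little code. Here is a minimal sketch of a parser that splits an llms.txt file like the one above into sections; it assumes the common layout shown in the example (a `#` title, a `>` summary blockquote, `##` section headings, and `-` bullets) and is illustrative rather than a formal implementation of any spec.

```python
def parse_llms_txt(text: str) -> dict[str, list[str]]:
    """Split an llms.txt file into {section heading: list of entries}.

    Lines before the first '## ' heading (the title and summary
    blockquote) are collected under the key '_preamble'.
    """
    sections: dict[str, list[str]] = {"_preamble": []}
    current = "_preamble"
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith("## "):
            # Start a new section named after the heading text.
            current = line[3:].strip()
            sections[current] = []
        else:
            # Drop bullet and blockquote markers, keep the content.
            sections[current].append(line.lstrip("-").lstrip("> ").strip())
    return sections
```

An agent-side tool could feed the "Capabilities" entries into a task-matching step, or surface the "Authentication" section when constructing requests.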

Best practices for writing llms.txt

A well-written llms.txt file follows these principles:

Keep it simple

Use plain language. Avoid jargon unless you define it. LLMs parse natural language well, so write as if you are explaining your product to a knowledgeable developer who has never heard of you. Aim for under 2,000 words total.
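The 2,000-word budget is easy to enforce automatically, for example as a pre-publish check in CI. A minimal sketch (the function name and threshold are our own, not part of any standard):

```python
import sys

def check_llms_txt_length(text: str, max_words: int = 2000) -> bool:
    """Return True if an llms.txt draft fits the suggested word budget.

    Prints a warning to stderr when the draft runs long.
    """
    count = len(text.split())
    if count > max_words:
        print(
            f"llms.txt is {count} words; consider trimming below {max_words}.",
            file=sys.stderr,
        )
        return False
    return True
```

Wiring this into a docs pipeline means a feature launch that balloons the file gets flagged before it ships.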

Use structured formatting

Use Markdown-style headings, bullet lists, and short paragraphs. This makes the file easy for both humans and LLMs to scan. Group related information under clear headings. Avoid dense paragraphs of text.

Focus on capabilities

Lead with what your API can do, not how it is built. LLMs need to match user requests to capabilities. “Send transactional emails” is more useful than “Uses SMTP with TLS 1.3 encryption.” Technical details belong in your API reference, not in llms.txt.

Update regularly

Your llms.txt should reflect the current state of your product. When you add new features, deprecate endpoints, or change authentication methods, update llms.txt. Stale information is worse than no information because it causes LLMs to confidently state incorrect facts.

Be honest about limitations

If your API has rate limits, does not support certain operations, or has known constraints, mention them. LLMs that know your limitations can give better guidance to users and avoid suggesting actions your API cannot perform.

llms.txt alongside agent.json and MCP

llms.txt is one part of a three-format discovery stack that is becoming the standard for agent-ready APIs in 2026. Each format serves a distinct purpose:

llms.txt

Natural language context. Gives LLMs a high-level understanding of your product and capabilities. Best for general Q&A and initial discovery.

agent.json

Structured action definitions. Provides typed schemas, reasoning docs, and machine-readable metadata. Best for agents that need to execute API calls.

MCP

Protocol-based access. Provides a runtime connection layer for agents to call your API as a tool. Best for direct execution in agent frameworks.

These formats are complementary, not competing. An agent might first read your llms.txt to understand what your product does, then fetch agent.json for detailed action schemas, and finally connect via MCP to execute calls. Publishing all three gives agents the best possible experience and maximizes your API's discoverability.

How llms.txt fits into the discovery stack

API discovery for AI agents is becoming as important as SEO is for websites. Just as businesses optimized for Google to be found by humans, APIs now need to optimize for agent discovery to be found by AI systems. llms.txt plays a specific role in this stack.

When an agent or AI system first encounters your domain, llms.txt serves as the entry point. It provides enough context for the AI to determine if your API is relevant to the current task. If it is, the agent can proceed to fetch agent.json for detailed action schemas or connect via MCP for direct execution. For more on how the full discovery process works, see our guide on API discovery for AI agents.
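The entry points themselves live at conventional paths, so an agent can compute them directly from a domain before fetching anything. A small sketch, assuming the locations used in the example above (llms.txt at the root, agent.json under `/.well-known/`; individual hosts may publish agent.json elsewhere):

```python
from urllib.parse import urljoin

def discovery_urls(base_url: str) -> dict[str, str]:
    """Compute the conventional discovery endpoints for a product domain."""
    if not base_url.endswith("/"):
        base_url += "/"
    return {
        # Natural-language context, read first for relevance checks.
        "llms.txt": urljoin(base_url, "llms.txt"),
        # Structured action schemas, fetched if the product is relevant.
        "agent.json": urljoin(base_url, ".well-known/agent.json"),
    }
```

An agent would fetch llms.txt first, decide relevance from the summary and capabilities, and only then pull agent.json or open an MCP connection.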

Elba generates all three discovery formats automatically from your API definition. Import your OpenAPI spec or define your actions, and Elba produces an llms.txt, agent.json, and MCP configuration that work together. For a comprehensive look at how this compares to other approaches, see our guide to the best API documentation for AI agents.

Generate your llms.txt with Elba
Import your API and get llms.txt, agent.json, and MCP config generated automatically.
