What is llms.txt? — Making Your Site Readable by LLMs (2026)
llms.txt is an emerging standard that helps LLMs understand your product. Learn the format, best practices, and how to write an effective llms.txt file.
Definition
llms.txt is a plain-text file served at /llms.txt on your domain that provides a concise, structured overview of your product or API specifically for consumption by large language models (LLMs). It answers the question every LLM faces when it encounters your product: “What is this, and what can it do?”
The file uses a simple markdown-style plain text format with clear headings, short descriptions, and structured sections. It is designed to be loaded into an LLM's context window as a single, efficient block of information — giving the model everything it needs to understand your product without parsing an entire website.
Think of llms.txt as a product brief written specifically for AI. Where your marketing site is optimized for humans and your API docs are optimized for developers, llms.txt is optimized for large language models that need to quickly understand what your product does, how it works, and what capabilities it offers.
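Because llms.txt is plain text at a well-known path, a client can fetch it with a single GET and pull out the essentials. Here is a minimal sketch (the `parse_llms_txt` helper and the `example.com` URL are illustrative, not part of any convention):

```python
from urllib.request import urlopen

def parse_llms_txt(text: str) -> dict:
    """Extract the title (# heading) and tagline (> blockquote) from llms.txt."""
    info = {"title": None, "tagline": None}
    for raw in text.splitlines():
        line = raw.strip()
        # "# " matches the H1 title only; "## " section headings do not match.
        if line.startswith("# ") and info["title"] is None:
            info["title"] = line[2:].strip()
        elif line.startswith("> ") and info["tagline"] is None:
            info["tagline"] = line[2:].strip()
    return info

# Fetching is a plain, unauthenticated GET (example.com is a placeholder):
# text = urlopen("https://example.com/llms.txt").read().decode("utf-8")

sample = "# ProjectBoard\n> Project management API for teams.\n"
print(parse_llms_txt(sample))
```

The title and tagline alone are often enough for a model to decide whether the rest of the file is worth loading into context.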
Why llms.txt exists
LLMs face a fundamental challenge when trying to understand products and APIs: websites are noisy. A typical product website contains marketing copy, navigation menus, cookie banners, testimonials, pricing tables, blog posts, and dozens of other elements that are irrelevant (or even misleading) for an LLM trying to understand core functionality.
Even API documentation, which is more structured than marketing sites, often spans dozens or hundreds of pages. Loading all of it into an LLM context window is impractical and wasteful. The model needs a concise summary, not a 50-page reference manual.
Existing web file conventions do not solve this problem:
robots.txt tells bots what NOT to do
robots.txt is an access control file — it tells crawlers which pages to avoid. It says nothing about what your product actually is or does.
sitemap.xml lists pages, not meaning
A sitemap tells search engines which URLs exist on your site, but provides no semantic information about your product's capabilities.
llms.txt tells LLMs what your product IS
llms.txt fills the gap: a single file that provides a clear, structured description of your product's identity, capabilities, and key endpoints — designed to be consumed by AI models efficiently.
As LLM-powered search, AI assistants, and autonomous agents become mainstream in 2026, an llms.txt file helps AI systems represent your product accurately. Without one, LLMs must piece together information from noisy web pages — leading to inaccurate or incomplete representations.
The llms.txt format
llms.txt uses a simple, markdown-style plain text format. There is no strict schema — the convention favors clarity and readability over rigid structure. Here is a complete example:
# ProjectBoard
> Project management API for teams. Create boards, manage tasks, track progress, and collaborate in real time.
ProjectBoard is a project management platform with a REST API that enables programmatic access to all core features. It is used by engineering teams, product managers, and agencies to manage work across projects.
## Capabilities
- Create and manage project boards
- Create, update, and assign tasks with priorities
- Track task status (open, in progress, done)
- Team collaboration with comments and mentions
- Webhooks for real-time event notifications
- Role-based access control
## API Base URL
https://api.projectboard.io/v2
## Authentication
Bearer token via Authorization header. API keys are generated in the ProjectBoard dashboard under Settings > API Keys.
## Key Endpoints
- POST /tasks — Create a new task
- GET /boards/{boardId}/tasks — List tasks in a board
- PATCH /tasks/{taskId} — Update a task
- DELETE /tasks/{taskId} — Delete a task
- GET /boards — List all boards
- POST /boards — Create a new board
- GET /users — List team members
## Rate Limits
100 requests per minute per API key. Burst limit of 20 requests per second.
## SDKs
- JavaScript: npm install @projectboard/sdk
- Python: pip install projectboard
## Links
- Documentation: https://docs.projectboard.io
- API Reference: https://docs.projectboard.io/api
- Agent spec: https://projectboard.io/.well-known/agent.json
- Status: https://status.projectboard.io
## Glossary
- Board: A container for tasks, usually representing a project or team
- Task: A unit of work with a title, status, assignee, and priority
- Sprint: A time-boxed iteration containing a set of tasks
Key structural elements of a well-written llms.txt:
- Title (# heading) — your product name, immediately identifiable.
- Tagline (> blockquote) — a one-line description that captures the essence of what your product does.
- Description paragraph — 2-3 sentences expanding on the tagline with context about who uses the product and why.
- Capabilities section — a bulleted list of what the product can do, written as actions (not features).
- API details — base URL, authentication method, key endpoints, rate limits.
- Links — pointers to documentation, API reference, agent.json, and other resources.
- Glossary — definitions of domain-specific terms that an LLM might not know.
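The structural elements above can be checked mechanically. This is a sketch of a simple linter; since llms.txt has no formal schema, the section names it looks for mirror this guide's recommendations and should be treated as warnings, not errors:

```python
def check_llms_txt(text: str) -> list:
    """Report which recommended structural elements are missing from an llms.txt.

    The checklist mirrors this guide's recommendations (title, tagline,
    Capabilities/Links/Glossary sections); it is not a formal schema.
    """
    lines = text.splitlines()
    problems = []
    if not any(l.startswith("# ") for l in lines):
        problems.append("missing title (# heading)")
    if not any(l.startswith("> ") for l in lines):
        problems.append("missing tagline (> blockquote)")
    headings = {l[3:].strip() for l in lines if l.startswith("## ")}
    for wanted in ("Capabilities", "Links", "Glossary"):
        if wanted not in headings:
            problems.append(f"missing ## {wanted} section")
    return problems

good = "# X\n> Y\n## Capabilities\n## Links\n## Glossary\n"
print(check_llms_txt(good))  # → []
```

Running such a check in CI is one way to keep the file from silently drifting out of shape as the product evolves.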
llms.txt vs robots.txt
robots.txt and llms.txt both live at the root of your domain, but they serve entirely different purposes.
| Aspect | robots.txt | llms.txt |
|---|---|---|
| Purpose | Access control for crawlers | Product description for LLMs |
| Content | Allow/disallow rules | Capabilities, endpoints, context |
| Audience | Web crawlers/bots | Large language models |
| Answers | “What should I not crawl?” | “What is this product?” |
| Format | Structured directives | Markdown-style plain text |
Both files are important. robots.txt controls what AI crawlers can access on your site. llms.txt tells AI models what your product actually is and does. They are complementary — use robots.txt to control access, and llms.txt to control understanding.
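The two files also interact: your robots.txt should not block crawlers from reaching /llms.txt. A hedged illustration (GPTBot is OpenAI's crawler; the specific rules here are examples, not recommendations):

```text
# robots.txt — access control (illustrative)
User-agent: GPTBot
Allow: /llms.txt
Allow: /docs/

User-agent: *
Disallow: /admin/
```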
llms.txt vs agent.json
agent.json and llms.txt both describe your product for AI consumption, but at different levels of detail and for different use cases.
llms.txt is a high-level overview. It gives an LLM enough context to understand what your product is, what it can do, and where to find more information. It is plain text, human-readable, and focused on comprehension. An LLM reading your llms.txt should be able to accurately describe your product to a user.
agent.json is detailed structured data. It provides the exact specifications an AI agent needs to make API calls — action names, HTTP methods, paths, typed input/output schemas, authentication details, and reasoning documentation. An agent reading your agent.json should be able to use your API, not just describe it.
In the discovery flow, llms.txt often comes first: an LLM reads it to decide whether your API is relevant, then fetches agent.json for the detailed action definitions. Publishing both files ensures your product is understandable (llms.txt) and usable (agent.json) by AI.
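The first step of that discovery flow — finding the agent.json pointer inside an llms.txt — can be sketched in a few lines (the `find_agent_json_url` helper is illustrative; in practice an agent would fetch the URL it returns):

```python
import re

def find_agent_json_url(llms_text: str):
    """Return the first agent.json URL linked from an llms.txt, if any.

    Sketch of the discovery flow: read llms.txt for orientation, then
    fetch agent.json for the detailed action definitions.
    """
    match = re.search(r"https://\S*agent\.json", llms_text)
    return match.group(0) if match else None

llms = "## Links\n- Agent spec: https://projectboard.io/.well-known/agent.json\n"
print(find_agent_json_url(llms))  # → https://projectboard.io/.well-known/agent.json
```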
llms.txt and MCP
MCP (Model Context Protocol) is a runtime protocol for connecting AI agents to tools. llms.txt is a static description file. They operate at different layers but complement each other in the agent ecosystem.
An MCP server can reference your llms.txt as a resource, giving connected agents access to your product overview as part of the MCP session. This is useful for providing initial context before the agent starts using specific tools.
Together, llms.txt (understanding), agent.json (specification), and MCP (connection) form the complete agent discovery stack in 2026. Each serves a distinct purpose, and publishing all three ensures maximum visibility for your API across the AI ecosystem. For more on this, see our guide on API SEO for AI Agents.
Best practices for writing llms.txt
A well-written llms.txt file maximizes how accurately and completely LLMs understand your product. Follow these guidelines:
Keep it under 2,000 words
The purpose of llms.txt is conciseness. LLM context windows are valuable, and a shorter file is more likely to be loaded in full. If an LLM only reads the first 500 tokens, those tokens should capture the essence of your product. Front-load the most important information.
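A quick size check can be part of your publishing step. The sketch below uses ~4 characters per token as a rough rule of thumb — actual token counts vary by model and tokenizer:

```python
def llms_txt_budget(text: str, word_limit: int = 2000) -> dict:
    """Rough size check for an llms.txt file.

    The ~4 characters-per-token figure is a common heuristic, used here
    only as an estimate; real counts depend on the model's tokenizer.
    """
    words = len(text.split())
    return {
        "words": words,
        "estimated_tokens": len(text) // 4,
        "within_limit": words <= word_limit,
    }

print(llms_txt_budget("word " * 2500))  # over the 2,000-word guideline
```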
Use clear headings
Markdown-style headings (## Capabilities, ## API, ## Glossary) help LLMs parse the structure of your file. They also make the file scannable for both humans and AI. Use consistent heading levels and descriptive names.
Focus on capabilities, not features
Write from the user's perspective: “Create and manage project boards” is better than “Board management module.” Describe what someone can do with your product, not how it is architected internally.
Include key endpoints
List your most important API endpoints with brief descriptions. This gives LLMs enough technical context to guide users toward the right integration approach, even without the full API reference.
Add a glossary
Define domain-specific terms that an LLM might not know or might confuse with similar concepts in other products. This reduces the chance of AI misrepresenting your product.
Update regularly
Keep your llms.txt in sync with your product. If you add new features, endpoints, or change authentication methods, update the file. Outdated information is worse than no information — it causes LLMs to give users incorrect guidance.
Link to deeper resources
Include links to your full documentation, API reference, agent.json, and MCP server. llms.txt is the entry point — it should point LLMs to more detailed resources when they need them.
Who uses llms.txt
llms.txt adoption is growing across the technology ecosystem. The convention was popularized by the llmstxt.org community and has been adopted by a diverse range of organizations:
API providers
Companies with public APIs publish llms.txt to ensure AI assistants can accurately describe their platform and guide users toward the right integration approach.
Developer tool companies
Documentation platforms, CI/CD tools, and infrastructure providers use llms.txt to make their products visible and understandable to the AI coding assistants their users rely on.
SaaS companies
Any SaaS product with an API benefits from llms.txt. As users increasingly ask AI assistants questions like “How do I integrate with [product]?”, having a clear llms.txt ensures accurate answers.
Frequently asked questions
Is llms.txt an official standard?
llms.txt is an emerging community-driven convention, not a formal standard governed by a standards body like the IETF or W3C. It was proposed by the llmstxt.org community and is gaining organic adoption among API providers, SaaS companies, and developer tool makers. The format is simple and intentionally informal, which has helped it spread quickly without requiring formal standardization.
Where do I put my llms.txt file?
Serve your llms.txt file at the root of your domain: https://yourdomain.com/llms.txt. This follows the same convention as robots.txt. The file should be publicly accessible without authentication, served with Content-Type: text/plain or text/markdown, and reachable via a simple GET request. Some implementations also support llms-full.txt for a more comprehensive version.
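In practice most sites serve llms.txt as a static file through their existing web server or CDN, but the required behavior — a 200 response with a plain-text content type at /llms.txt — fits in a minimal stdlib sketch (the ProjectBoard content echoes the example earlier in this article):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

LLMS_TXT = "# ProjectBoard\n> Project management API for teams.\n"

def llms_txt_response(path: str):
    """Build (status, content_type, body) for a request path."""
    if path == "/llms.txt":
        return 200, "text/plain; charset=utf-8", LLMS_TXT.encode("utf-8")
    return 404, "text/plain; charset=utf-8", b"Not Found"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        status, ctype, body = llms_txt_response(self.path)
        self.send_response(status)
        self.send_header("Content-Type", ctype)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# HTTPServer(("", 8000), Handler).serve_forever()  # uncomment to run locally
```

The key points the sketch demonstrates: no authentication, a plain-text content type, and a simple GET — nothing more is needed.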
How long should llms.txt be?
Keep your llms.txt file under 2,000 words. The entire point of llms.txt is to provide a concise summary that fits within an LLM context window without consuming too many tokens. Focus on what your product does, its key capabilities, and the most important endpoints or features. If you need to provide more detail, link to your full documentation, agent.json, or API reference from within the file.
Do search engines use llms.txt?
Traditional search engines like Google do not use llms.txt for ranking purposes. llms.txt is designed specifically for large language models and AI agents — not web crawlers. However, as AI-powered search (like Perplexity, ChatGPT search, and Google AI Overviews) grows, having a clear llms.txt can help these systems understand and accurately represent your product. Think of it as SEO for AI, not traditional SEO.
Further reading
For a more detailed walkthrough of writing and deploying llms.txt, read our blog post What is llms.txt? Help LLMs Understand Your API Faster. For the complete picture of agent-native documentation, see our comprehensive guide: Best API Documentation for AI Agents.
Related topics: What is agent.json? · What is MCP? · API SEO for AI Agents