
llms.txt for travel and hospitality: the small file that decides what AI engines extract first.

What llms.txt is, why travel and hospitality sites should publish one, and a working template for lodges, tour operators, restaurants, and experience providers. Plain-English explanation, ready-to-deploy file, and the realistic limits of what it does.

Quotable answer · 95 words

llms.txt is a plain-text file at the root of your domain that tells AI engines which pages on your site are the canonical, machine-readable sources for facts about your business. For travel and hospitality operators, it's a 10-minute deploy that signals — to ChatGPT, Claude, Perplexity, and Google AI Overviews — which URLs hold your rates, your menu, your itineraries, your room descriptions, your booking flow. AI engines don't have to guess; you've handed them the map. It doesn't replace structured data; it complements it by reducing extraction friction on the pages that matter most.

What llms.txt is

llms.txt is a plain-text file you publish at the root of your domain — bookedwild.com/llms.txt, your-lodge.co.uk/llms.txt. It lists the canonical, machine-readable URLs on your site that AI engines should treat as authoritative sources for facts about your business.

It looks like this, in its simplest form:

# Booked Wild
> Marketing agency for European independent travel and hospitality operators.

## Services
- [AI Visibility Fix](https://bookedwild.com/services/ai-visibility-fix): £495 schema and listings deploy in two weeks.
- [Direct Booking OS](https://bookedwild.com/services/direct-booking-os): Booking infrastructure with iCal sync.

## Pricing
- [Pricing page](https://bookedwild.com/pricing): All productised offers with fixed prices.

That’s the entire format. Markdown headings, bulleted links, one-line descriptions. The spec was proposed by Jeremy Howard in 2024 and has been adopted by Anthropic, Perplexity, and a growing list of crawler-based AI engines.

The intent is simple: instead of forcing AI engines to crawl every page on your site and guess which ones hold the canonical facts about your business, you publish a curated index that says these URLs are the source of truth.

Why it matters for travel and hospitality

Independent travel sites have a structural disadvantage in AI extraction: the brand-led, photo-heavy, atmospherically designed pages that convert well for human visitors are often opaque to machine reading. The page a traveller falls in love with — full-bleed photography, evocative copy, parallax scroll — is frequently the same page from which AI engines can’t reliably extract the rate, the policy, the menu, or the location.

llms.txt is one mitigation. It lets you point AI engines past the brand layer to the structured layer underneath: the rates page, the room descriptions, the menu, the itinerary, the FAQ. If you’ve done the structural work — schema markup, plain-text pricing, FAQPage on the right pages — llms.txt is the navigation file that gets engines to those pages first.

For a typical independent lodge, restaurant, or tour operator, the file is short — twenty to fifty links — and takes about ten minutes to write once the underlying pages exist.

A working template for travel operators

Adapt the structure below for your business. The headings are guidance; the links are what matters.

# {Your Business Name}
> {One-sentence description: what you do, where you are, who you serve.}

## About
- [About us](https://example.com/about): {One line.}
- [Our story](https://example.com/our-story): {One line.}

## Stays / Tours / Experiences
- [Room category 1](https://example.com/rooms/category-1): {One line.}
- [Room category 2](https://example.com/rooms/category-2): {One line.}

## Rates
- [Current rates](https://example.com/rates): {Period covered, currency, what's included.}

## Booking
- [Book direct](https://example.com/book): {Booking flow URL.}

## FAQ
- [FAQ](https://example.com/faq): {Topics covered.}

## Location
- [Getting here](https://example.com/getting-here): {Region, nearest station / airport.}

## Contact
- [Contact](https://example.com/contact): {Hours, response time.}

A lodge with five room categories, a restaurant, an events offering, and a spa might have thirty links across six headings. A single-property B&B might have twelve links across four headings. A tour operator running ten itineraries might list each itinerary URL plus the booking and contact pages. Don’t pad the file with everything; pick the URLs that hold the canonical facts.
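If you already keep your page inventory in code or a spreadsheet, the template above can be generated rather than hand-written. A minimal sketch in Python — the business name, section headings, and URLs below are illustrative placeholders, not prescriptions:

```python
# Build an llms.txt body from a dict of section -> (title, url, description) entries.
# All names and URLs here are placeholders for illustration.

def build_llms_txt(name: str, tagline: str, sections: dict) -> str:
    lines = [f"# {name}", f"> {tagline}", ""]
    for heading, links in sections.items():
        lines.append(f"## {heading}")
        for title, url, desc in links:
            lines.append(f"- [{title}]({url}): {desc}")
        lines.append("")  # blank line between sections
    return "\n".join(lines).rstrip() + "\n"

sections = {
    "Rates": [("Current rates", "https://example.com/rates",
               "2026 season, GBP, breakfast included.")],
    "Booking": [("Book direct", "https://example.com/book",
                 "Direct booking flow.")],
}

print(build_llms_txt("Example Lodge",
                     "Independent lodge in the Scottish Highlands.",
                     sections))
```

The output is the same heading-plus-bulleted-links shape shown earlier; regenerate and redeploy the file whenever a canonical URL changes.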

What llms.txt does not do

llms.txt is a navigation file, not a citation engine. Publishing it does not, on its own, make AI engines cite you more often. It makes the citation work — the schema, the plain-text pricing, the FAQPage, the named-author content — easier for engines to find and extract.

Three honest caveats:

Adoption is partial. Anthropic and Perplexity respect it; OpenAI and Google have not formally committed. The cost of publishing is ten minutes, so the math still favours doing it, but treat the upside as compounding rather than immediate.

It points to pages; it does not improve them. If the URL you list resolves to an opaque, image-heavy, JavaScript-rendered page with no structured data and no plain-text body, llms.txt has handed an engine a poor source. Fix the underlying page first.

It does not replace robots.txt or sitemap.xml. The three files coexist: robots.txt for what to exclude, sitemap.xml for what to crawl, llms.txt for what to treat as canonical AI-source material. Deploy all three.
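To make the division of labour concrete, here is how the three files might sit together on one domain — a hypothetical sketch with placeholder paths, not a recommended configuration:

```text
# robots.txt — what crawlers may NOT fetch
User-agent: *
Disallow: /admin/

Sitemap: https://example.com/sitemap.xml   # what to crawl

# llms.txt — served separately at https://example.com/llms.txt,
# listing the canonical AI-source pages (rates, menu, FAQ, booking).
```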

Where to deploy it

/llms.txt at the root of your domain. Plain text, UTF-8, no authentication. Test that curl https://your-domain.com/llms.txt returns the file with HTTP 200. Reference it from your robots.txt with a # llms.txt: https://your-domain.com/llms.txt comment line if you want to be belt-and-braces about discoverability.
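Beyond the HTTP 200 check, it's worth a quick format pass before deploying. A minimal sketch that flags bullet lines which don't parse as markdown links — the regex and the rules it enforces are assumptions for illustration, not part of the llms.txt spec:

```python
import re

# Match bullet links of the form "- [Title](https://...): optional description".
LINK_RE = re.compile(r"^- \[(?P<title>[^\]]+)\]\((?P<url>https?://[^\s)]+)\)(?:: .+)?$")

def check_llms_txt(text: str) -> list:
    """Return (line_number, line) pairs for bullet lines that don't parse as links."""
    problems = []
    for n, line in enumerate(text.splitlines(), start=1):
        if line.startswith("- ") and not LINK_RE.match(line):
            problems.append((n, line))
    return problems

sample = """# Example Lodge
> Independent lodge.

## Rates
- [Current rates](https://example.com/rates): 2026 season, GBP.
- broken line without a link
"""

print(check_llms_txt(sample))  # flags line 6 only
```

Run it against the file you're about to upload; an empty list means every bullet line resolves to a well-formed link.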

That’s the whole deployment. The harder work is the pages it points to.

1,140 words · last reviewed 03 May 2026
Questions on this article

What people ask after they read this.

Does Google or ChatGPT actually read llms.txt yet?

Adoption is uneven and the spec is young. Anthropic, Perplexity, and a handful of crawler-based AI engines respect it; OpenAI and Google's Gemini have not formally committed, but they crawl it because it's a plain-text file at a discoverable URL. The cost of publishing one is ten minutes; the upside is being early on a signal that's standardising fast. We deploy llms.txt on every site we build because the asymmetry favours operators who do.

Is llms.txt the same as robots.txt?

No. robots.txt tells crawlers what not to index. llms.txt tells AI engines where the canonical, machine-readable source pages on your site live — your menu, your rates, your room descriptions, your itineraries. Different file, different purpose, lives at a different URL (/llms.txt rather than /robots.txt), and the two should be deployed alongside each other.

What should an independent lodge actually put in llms.txt?

For a typical UK lodge or B&B: the homepage, the rooms or cabins page (with a child link per room category), the rates page, the about-the-property page, the booking flow URL, the location and getting-here page, and any FAQ page. Optionally: a chef bio, a sustainability page, and seasonal opening notes. Keep the file under 100 lines; make every URL one that resolves to a structured, schema-marked, plain-text page.

Will llms.txt get my site cited more?

It will not, on its own. llms.txt is a routing signal — it tells AI engines where to look, not whether what's there is worth citing. The citation work is the structured data, plain-text pricing, FAQ markup, and editorial third-party sources. llms.txt makes that work easier for engines to find. Treat it as a ten-minute multiplier on the harder structural work, not a replacement for it.

Run the same audit on yourself

Want to know where you sit in the data?

Two weeks. £97. We run the same five-surface AI audit on your business, against your competitors, and write up exactly what to fix and in what order.