Why Your Brand Is Invisible to AI Search (And What to Do About It)

AI search bots skip JavaScript-heavy sites, leaving brands invisible in ChatGPT, Perplexity, and Gemini. Learn how pre-rendering at the infrastructure layer fixes the problem.

By Aleks Haugom
March 17, 2026
Aleks Haugom
Senior Manager of GTM & Marketing
Your content ranks on Google. It converts. It's authoritative. But when a customer asks ChatGPT, Perplexity, or Gemini about your category, your brand doesn't exist. Here's why, and how to fix it at the infrastructure layer.

The Question Every Enterprise Is Asking

We recently fielded a question from a partner that cuts straight to the point:

We are having lots of conversations around how brands are showing up in AI searches, like ChatGPT and Gemini. Or, worst-case scenario, not showing up at all. In some scenarios, the results are pulling from old Reddit articles and aren't brand-friendly. Does Harper have a solution to help with this? Like the new wave of SEO, but for AI-generated consumer searches?

The short answer: yes. But the real answer requires understanding why brands are invisible in the first place. It's not a content problem. It's an infrastructure problem.

AI Crawlers Are Not Google Crawlers

There's a common misconception that AI-powered search works the same way as traditional search. It doesn't. Google's crawler indexes your page, follows your links, and ranks you based on relevance signals. LLM crawlers (GPTBot, ClaudeBot, PerplexityBot, and others) are doing something fundamentally different: they're trying to read your content, extract meaning, and feed it into a model or a real-time retrieval pipeline.

"The bots work the same way as the ones that come to index your site for Google search results. The difference is what they're doing with the data: now they're grabbing it for model training," explains Dawson Toth, Staff AI Engineer at Harper. "Sometimes, it's grabbing data in real time for RAG (retrieval augmented generation), and that will come from the customer's computer directly. Those bots have various mixtures of capability to act like a real browser, evaluating JavaScript. They also have real time constraints to control costs. If the page doesn't render fast enough, or has to download too much junk to render, it gives up and moves on."

That last part is critical. Traditional search bots are patient. LLM crawlers are not. They operate under strict latency and compute budgets. If your site can't serve clean, readable HTML quickly, the crawler moves on. It finds what it can: old Reddit threads, outdated reviews, third-party content. Whatever is easy to consume becomes the AI's version of the truth about your brand.

Why JavaScript-Heavy Sites Get Skipped

Most enterprise brand sites are built on JavaScript frameworks like React, Next.js, Angular, or Vue. These frameworks are great for user experience, but they create a fundamental problem for AI crawlers: the content isn't in the HTML. It's generated client-side, after JavaScript executes.

Google's crawler has a rendering pipeline that can execute JavaScript (albeit with delays). LLM crawlers generally do not. They want static HTML. If your page requires JavaScript execution to show its content, the crawler either sees a blank shell or gives up entirely.
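You can sanity-check this yourself: fetch a page's raw HTML (what a non-rendering crawler receives) and measure how much readable text it contains before any JavaScript runs. A minimal sketch, with an illustrative helper and sample pages (not any crawler's actual heuristic):

```javascript
// Does the raw HTML contain readable content, or just an app shell?
// Strips <script>/<style> blocks and all tags, then measures the remaining text.
function visibleTextLength(html) {
  const withoutScripts = html.replace(/<(script|style)[\s\S]*?<\/\1>/gi, "");
  const text = withoutScripts
    .replace(/<[^>]+>/g, " ")
    .replace(/\s+/g, " ")
    .trim();
  return text.length;
}

// A typical SPA shell: essentially no text before JavaScript executes.
const spaShell =
  '<html><body><div id="root"></div><script src="/bundle.js"></script></body></html>';

// A pre-rendered page: the content is already in the HTML.
const preRendered =
  '<html><body><div id="root"><h1>Acme Widgets</h1>' +
  "<p>Industrial widgets since 1987.</p></div></body></html>";

console.log(visibleTextLength(spaShell)); // near zero: the crawler sees nothing
console.log(visibleTextLength(preRendered)); // the actual content length
```

If the first number is near zero for your pages, a non-rendering crawler sees an empty shell.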

This is why well-funded, well-optimized enterprise sites still show up as ghosts in AI search results. The content exists. The authority is there. But the delivery mechanism is incompatible with how AI systems consume the web.

The Infrastructure Fix: Pre-Rendering for LLM Crawlers

The fix isn't more content. It's better delivery. Specifically, it's about detecting bot traffic and serving fully hydrated HTML snapshots instead of raw JavaScript bundles.

Harper solves this at the infrastructure layer:

Pre-render for LLM crawlers. Harper detects bot traffic, renders fully-hydrated HTML snapshots, and serves them to any crawler, making it bot-agnostic across all major AI search engines. What the crawler sees is the brand's complete, structured, authoritative content. Not a JavaScript shell. Not a loading spinner. The actual page.
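As a rough illustration of the detect-and-serve pattern (the user-agent list, cache, and function names below are simplified assumptions, not Harper's actual implementation; production systems also verify bots by published IP ranges):

```javascript
// Known AI crawler user-agent substrings (a sample, not an exhaustive list).
const AI_BOT_PATTERN = /GPTBot|ClaudeBot|PerplexityBot|Google-Extended|CCBot/i;

function isAiCrawler(userAgent) {
  return AI_BOT_PATTERN.test(userAgent || "");
}

// Hypothetical snapshot cache: path -> pre-rendered, fully hydrated HTML.
const snapshotCache = new Map([
  ["/pricing", "<html><body><h1>Pricing</h1><p>Full plan details here.</p></body></html>"],
]);

function handleRequest(path, userAgent) {
  if (isAiCrawler(userAgent) && snapshotCache.has(path)) {
    // Crawlers get the complete rendered page: no JavaScript execution required.
    return { body: snapshotCache.get(path), prerendered: true };
  }
  // Humans (and unrecognized agents) get the normal client-side app.
  return {
    body: '<div id="root"></div><script src="/bundle.js"></script>',
    prerendered: false,
  };
}
```

The same idea scales out when the snapshot cache lives at the edge and rendering happens ahead of, or on, first demand.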

Enable more frequent crawling. By dramatically reducing time to first byte (TTFB) and serving static HTML, Harper helps bots crawl more pages within their crawl budget, resulting in fresher indexes and better brand representation in AI-generated results.

Control what bots can access. The other side of the equation is governance. "The other component of this is a robots.txt and sitemap.xml," Dawson notes. "Those can restrict what bots can access, which can be wise or unwise depending on what you're doing." Getting this right means deciding which AI systems you want to power and which you'd rather block.
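What that governance looks like in practice is a per-bot policy in robots.txt. The allow/block choices below are purely illustrative, not a recommendation; the right mix depends on which AI systems you want to power:

```text
# Illustrative robots.txt: allow retrieval-time bots you want citations from,
# opt out of crawlers you'd rather not feed.
User-agent: PerplexityBot
Allow: /

User-agent: GPTBot
Allow: /

User-agent: CCBot
Disallow: /

Sitemap: https://www.example.com/sitemap.xml
```

Pairing this with an accurate sitemap.xml tells permitted bots exactly which pages to fetch, which matters when their crawl budgets are tight.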

Harper's pre-rendering system can handle roughly 2,000 pages per minute for a typical React application. Even for pages not yet cached, on-demand rendering typically completes in seconds, well within the tolerance of most LLM crawlers. The architecture distributes rendering jobs across nodes, caches results at the edge, and serves subsequent requests in milliseconds.

For a deeper technical walkthrough, check out our video on Improving SEO via Pre-Rendering.

Think of It as GEO at the Infrastructure Layer

The industry is converging on terms like Answer Engine Optimization (AEO) and Generative Engine Optimization (GEO) to describe this shift. We wrote a comprehensive guide on the content strategy side of this in our article on Answer Engine Optimization: How to Get Cited by AI Answers.

But here's the thing most AEO guides miss: none of the content-level optimizations matter if the crawler can't read your page in the first place. Structured data, FAQ schema, clean headings, concise definitions... all of it is irrelevant if it's locked behind client-side JavaScript that the bot never executes.

Harper operates at the layer below all of that. Think of it as GEO at the infrastructure layer. Not just advising on what content to write, but ensuring AI crawlers can actually read the right content from the source.

What This Means in Practice

For brands navigating this shift, the playbook is twofold:

First, fix the delivery problem. Make sure your site serves clean, pre-rendered HTML to bot traffic. If you're running a JavaScript-heavy framework (and most enterprises are), this means deploying a pre-rendering solution that detects crawlers and serves static snapshots from the edge. Harper's pre-rendering solution handles this out of the box.

Second, optimize the content. Once bots can actually read your pages, then the AEO fundamentals apply: structured data, clear headings, concise answers to common questions, and a governance strategy for which AI systems you permit to crawl your content.
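For example, FAQ structured data only pays off once it ships in the HTML the bot actually receives. A small sketch of generating schema.org FAQPage JSON-LD (the helper name and sample question are illustrative):

```javascript
// Build schema.org FAQPage structured data as a JSON-LD string.
// This only helps AI crawlers if it lands in the served HTML
// (e.g. inside a <script type="application/ld+json"> tag in a pre-rendered page),
// not if it is injected client-side after JavaScript runs.
function faqJsonLd(faqs) {
  return JSON.stringify({
    "@context": "https://schema.org",
    "@type": "FAQPage",
    mainEntity: faqs.map(({ question, answer }) => ({
      "@type": "Question",
      name: question,
      acceptedAnswer: { "@type": "Answer", text: answer },
    })),
  });
}

const jsonLd = faqJsonLd([
  {
    question: "What is pre-rendering?",
    answer: "Serving fully rendered HTML snapshots to crawlers instead of JavaScript bundles.",
  },
]);
console.log(jsonLd);
```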

The brands that figure this out early will own the narrative in AI search. The ones that don't will keep watching their brand story get told by Reddit threads and outdated review sites.



Explore Recent Resources

Case Study

How a $1B+ Retailer Unlocked $92M in Annual Revenue, Without Touching the Origin.

When experimentation logic, redirect limits, and origin failures were quietly costing a $1B+ retailer tens of millions, Harper delivered edge-deployed acceleration without re-platforming. 47x ROI. Six weeks to prove it.
Aleks Haugom
Blog

The Security Problem in Agentic Engineering has an Architectural Solution

Agentic AI promises autonomous software development, but enterprise security concerns block adoption. This article explains how credential sprawl creates risk, and how a unified runtime architecture like Harper eliminates infrastructure access requirements, enabling secure agentic engineering in production environments.
Kris Zyp
SVP of Engineering