Why Your Brand Is Invisible to AI Search (And What to Do About It)

AI search bots skip JavaScript-heavy sites, leaving brands invisible in ChatGPT, Perplexity, and Gemini. Learn how pre-rendering at the infrastructure layer fixes the problem.
Aleks Haugom, Senior Manager of GTM at Harper
March 17, 2026
Your content ranks on Google. It converts. It's authoritative. But when a customer asks ChatGPT, Perplexity, or Gemini about your category, your brand doesn't exist. Here's why, and how to fix it at the infrastructure layer.

The Question Every Enterprise Is Asking

We recently fielded a question from a partner that cuts straight to the point:

We are having lots of conversations around how brands are showing up in AI searches, like ChatGPT and Gemini. Or, worst-case scenario, not showing up at all. In some scenarios, the results are pulling from old Reddit articles and aren't brand friendly. Does Harper have a solution to help with this? Like the new wave of SEO, but for AI-generated consumer searches?

The short answer: yes. But the real answer requires understanding why brands are invisible in the first place. It's not a content problem. It's an infrastructure problem.

AI Crawlers Are Not Google Crawlers

There's a common misconception that AI-powered search works the same way as traditional search. It doesn't. Google's crawler indexes your page, follows your links, and ranks you based on relevance signals. LLM crawlers (GPTBot, ClaudeBot, PerplexityBot, and others) are doing something fundamentally different: they're trying to read your content, extract meaning, and feed it into a model or a real-time retrieval pipeline.

"The bots work the same way as the ones that come to index your site for Google search results. The difference is what they're doing with the data: now they're grabbing it for model training," explains Dawson Toth, Staff AI Engineer at Harper. "Sometimes, it's grabbing data in real time for RAG (retrieval augmented generation), and that will come from the customer's computer directly. Those bots have various mixtures of capability to act like a real browser, evaluating JavaScript. They also have real time constraints to control costs. If the page doesn't render fast enough, or has to download too much junk to render, it gives up and moves on."

That last part is critical. Traditional search bots are patient. LLM crawlers are not. They operate under strict latency and compute budgets. If your site can't serve clean, readable HTML quickly, the crawler moves on. It finds what it can: old Reddit threads, outdated reviews, third-party content. Whatever is easy to consume becomes the AI's version of the truth about your brand.

Why JavaScript-Heavy Sites Get Skipped

Most enterprise brand sites are built on JavaScript frameworks like React, Next.js, Angular, or Vue. These frameworks are great for user experience, but they create a fundamental problem for AI crawlers: the content isn't in the HTML. It's generated client-side, after JavaScript executes.

Google's crawler has a rendering pipeline that can execute JavaScript (albeit with delays). LLM crawlers generally do not. They want static HTML. If your page requires JavaScript execution to show its content, the crawler either sees a blank shell or gives up entirely.
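To make the difference concrete, here's a minimal sketch (hypothetical page markup, not any real site) of what a crawler without a JavaScript runtime can extract from a client-rendered shell versus a pre-rendered page:

```javascript
// Sketch: approximate what a non-JS-executing crawler extracts from a page.
// A client-rendered SPA shell yields almost no readable text.
const clientRenderedShell = `
<!doctype html>
<html><head><title>Acme</title></head>
<body><div id="root"></div><script src="/bundle.js"></script></body>
</html>`;

const preRenderedPage = `
<!doctype html>
<html><head><title>Acme Widgets</title></head>
<body><main><h1>Acme Widgets</h1><p>Industrial widgets since 1998.</p></main></body>
</html>`;

// Naive text extraction, as a crawler that never executes JS might do:
function extractText(html) {
  return html
    .replace(/<script[\s\S]*?<\/script>/gi, "") // scripts are downloaded, never run
    .replace(/<[^>]+>/g, " ")                   // drop remaining tags
    .replace(/\s+/g, " ")
    .trim();
}

console.log(extractText(clientRenderedShell)); // → "Acme"
console.log(extractText(preRenderedPage));     // → "Acme Widgets Acme Widgets Industrial widgets since 1998."
```

The shell gives the crawler a page title and nothing else; all the content it would need lives behind `/bundle.js`, which it will never execute.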

This is why well-funded, well-optimized enterprise sites still show up as ghosts in AI search results. The content exists. The authority is there. But the delivery mechanism is incompatible with how AI systems consume the web.

The Infrastructure Fix: Pre-Rendering for LLM Crawlers

The fix isn't more content. It's better delivery. Specifically, it's about detecting bot traffic and serving fully-hydrated HTML snapshots instead of raw JavaScript bundles.

Harper solves this at the infrastructure layer:

Pre-render for LLM crawlers. Harper detects bot traffic, renders fully-hydrated HTML snapshots, and serves them to any crawler, making it bot-agnostic across all major AI search engines. What the crawler sees is the brand's complete, structured, authoritative content. Not a JavaScript shell. Not a loading spinner. The actual page.

Enable more frequent crawling. By dramatically reducing TTFB and serving static HTML, Harper helps bots crawl more pages within their crawl budget, resulting in fresher indexes and better brand representation in AI-generated results.

Control what bots can access. The other side of the equation is governance. "The other component of this is a robots.txt and sitemap.xml," Dawson notes. "Those can restrict what bots can access, which can be wise or unwise depending on what you're doing." Getting this right means deciding which AI systems you want to power and which you'd rather block.
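As a concrete illustration of that governance decision, a robots.txt can treat retrieval-time crawlers and training crawlers differently. The user-agent names below are the ones these vendors publish; which bots to allow or block is a business decision, not a recommendation:

```text
# Allow retrieval/search crawlers to read the site
User-agent: OAI-SearchBot
Allow: /

User-agent: PerplexityBot
Allow: /

# Opt out of model-training crawls
User-agent: GPTBot
Disallow: /

# Opt out of Google's AI training without affecting normal Google Search
User-agent: Google-Extended
Disallow: /

Sitemap: https://www.example.com/sitemap.xml
```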

Harper's pre-rendering system can handle roughly 2,000 pages per minute for a typical React application. Even for pages not yet cached, on-demand rendering typically completes in seconds, well within the tolerance of most LLM crawlers. The architecture distributes rendering jobs across nodes, caches results at the edge, and serves subsequent requests in milliseconds.
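The routing logic behind this kind of system can be sketched in a few lines. This is a minimal illustration of the bot-detection-plus-snapshot pattern, not Harper's actual implementation; the user-agent list and cache are simplified assumptions:

```javascript
// Sketch: route known LLM crawlers to a pre-rendered HTML snapshot,
// while normal visitors get the regular JS application shell.
const LLM_CRAWLERS = [/GPTBot/i, /ClaudeBot/i, /PerplexityBot/i, /Google-Extended/i];

function isLLMCrawler(userAgent = "") {
  return LLM_CRAWLERS.some((pattern) => pattern.test(userAgent));
}

// Cache of pre-rendered snapshots, keyed by path. In a real deployment this
// would be populated by a headless-browser render pipeline and held at the edge.
const snapshotCache = new Map([
  ["/products", "<html><body><h1>Products</h1><p>Full product copy.</p></body></html>"],
]);

function selectResponse(path, userAgent) {
  if (isLLMCrawler(userAgent) && snapshotCache.has(path)) {
    return { body: snapshotCache.get(path), source: "prerendered-snapshot" };
  }
  // Fall through to the normal SPA shell (or trigger an on-demand render).
  return { body: '<div id="root"></div><script src="/bundle.js"></script>', source: "spa-shell" };
}

console.log(selectResponse("/products", "GPTBot/1.0").source);             // → "prerendered-snapshot"
console.log(selectResponse("/products", "Mozilla/5.0 Chrome/120").source); // → "spa-shell"
```

In practice the cache-miss branch would enqueue an on-demand render rather than permanently serving the shell, which is how uncached pages still get covered within the crawler's latency budget.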

For a deeper technical walkthrough, check out our video on Improving SEO via Pre-Rendering.

Think of It as GEO at the Infrastructure Layer

The industry is converging on terms like Answer Engine Optimization (AEO) and Generative Engine Optimization (GEO) to describe this shift. We wrote a comprehensive guide on the content strategy side of this in our article on Answer Engine Optimization: How to Get Cited by AI Answers.

But here's the thing most AEO guides miss: none of the content-level optimizations matter if the crawler can't read your page in the first place. Structured data, FAQ schema, clean headings, concise definitions... all of it is irrelevant if it's locked behind client-side JavaScript that the bot never executes.

Harper operates at the layer below all of that. Think of it as GEO at the infrastructure layer. Not just advising on what content to write, but ensuring AI crawlers can actually read the right content from the source.

What This Means in Practice

For brands navigating this shift, the playbook is twofold:

First, fix the delivery problem. Make sure your site serves clean, pre-rendered HTML to bot traffic. If you're running a JavaScript-heavy framework (and most enterprises are), this means deploying a pre-rendering solution that detects crawlers and serves static snapshots from the edge. Harper's pre-rendering solution handles this out of the box.

Second, optimize the content. Once bots can actually read your pages, then the AEO fundamentals apply: structured data, clear headings, concise answers to common questions, and a governance strategy for which AI systems you permit to crawl your content.
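For example, the FAQ markup that AEO guides recommend is typically expressed as schema.org JSON-LD embedded in the page head. The question and answer below are placeholders:

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "Does the product support single sign-on?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Yes. SAML and OIDC are supported on all enterprise plans."
    }
  }]
}
```

Markup like this only helps if it appears in the HTML the crawler actually receives, which is exactly what pre-rendering guarantees.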

The brands that figure this out early will own the narrative in AI search. The ones that don't will keep watching their brand story get told by Reddit threads and outdated review sites.