The Nearstore Agent: a reference pattern for low-latency, geofenced, promotional decisions

Build a real-time, geofenced promo engine on Harper's agentic runtime. The Nearstore Agent collapses geofence lookup, customer data, campaigns, and AI decisions into a single process. Clone the reference repo and deploy in minutes.
Aleks Haugom
Senior Manager of GTM
at Harper
April 20, 2026
A customer walks within 500 feet of your store. In the next minute or two, they decide whether to come in or keep walking. That window is the most valuable surface in brick‑and‑mortar retail. It's also a window most brands haven't been able to act on in a personalized way, because the plumbing is brutal: a conventional build needs a geofence provider, a CRM, a POS extract, a campaign tool, a rules engine, and a push service. That's six systems, six network hops, six integrations before anything gets sent. Every hop pulls the data the agent needs further from the decision it has to make.

This post shows the entire decision collapsed into a single Harper agentic runtime, with a working reference repo you can clone and run in 10 minutes. The pattern targets Ralph Lauren, Starbucks, McDonald's, Sephora, Sweetgreen, Home Depot: essentially any brand with stores, a mobile app, and a CTO tired of being told integrations are the hard part.

The Demo 

Open the Nearstore Agent app and simulator and you'll see a map of the Denver metro with 25 real McDonald's locations pinned, each ringed with a 500‑foot geofence. Pick one of five hand‑crafted personas, then click anywhere on the map and watch the decision render.

One caveat up front. The map click is a visualization only. In production, the GPS ping comes from the customer's phone, their mobile app detects a geofence entry, and POSTs the coordinates to Harper. The simulator gives you a human‑friendly way to poke at the same endpoint that the phone would hit. The decision path that fires afterward is identical in both cases.
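To make that concrete, here is a sketch of the app-side request; the endpoint path, field names, and customer ID are illustrative assumptions, not taken from the repo.

```javascript
// Hypothetical shape of the geofence-entry ping. Field names and the
// endpoint path are assumptions for illustration, not the repo's API.
function buildPing(customerId, coords) {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      customerId,
      lat: coords.latitude,
      lng: coords.longitude,
      ts: Date.now(), // client timestamp, for ordering consecutive pings
    }),
  };
}

// On geofence entry, the mobile app would send something like:
// fetch('https://<your-cluster>/LocationPing', buildPing('cust-42', position.coords));
```

The simulator's map click builds the same payload from the clicked coordinates, which is why the decision path downstream is identical.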

Click the same pin with different personas, and the decisions diverge. Marcus, the coffee regular, gets a free coffee push at 7 AM. Elena's weekend family account gets put on hold since weekday mornings are nowhere near her pattern. Sam, the lapsed heavy user, gets a 20% win‑back. Alex, the newcomer, gets a welcome offer. Same pin, same time, different people, different decisions, with reasoning attached. End-to-end, the round trip is two to four seconds.

Three Swappable Parts, One Agentic Runtime

The architecture is three components, all co-located in a single Harper process. We leaned on context proximity as a key design decision: the agent's data, logic, and decision path all live inside the same process boundary, to reduce latency and keep token-heavy LLM reasoning out of the common case.

1. Location Lookup

Geofencing runs in Harper itself. No third‑party geolocation API, no external geofence service, no separate spatial database. Every store in the Store table has a precomputed H3 cell — Uber's hexagonal spatial index — at resolution 9, about 174m across, a natural fit for a 500‑foot radius.

At request time, Harper computes the customer's cell, expands to the seven‑cell neighborhood, queries the indexed store table, and filters the 0–2 candidates by exact distance. At 25 stores this is overkill; the same pattern stays constant‑time at 250,000. The approach is lifted from Kyle Bernhardy's geolookup repo, which pushes the spatial pattern further with full reverse‑geocoding.
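A sketch of that lookup, under stated assumptions: the H3 calls in the comments use the h3-js library's v4 API (`latLngToCell`, `gridDisk`), and the table-query shape is hypothetical; the exact-distance filter below is plain code and runs as written.

```javascript
// The H3 steps would look roughly like this with h3-js (v4):
//   const cell = h3.latLngToCell(lat, lng, 9);  // customer's res-9 cell
//   const ring = h3.gridDisk(cell, 1);          // seven-cell neighborhood
//   const candidates = /* indexed query: stores whose h3Cell is in `ring` */;

const EARTH_RADIUS_M = 6371000;

// Great-circle distance between two lat/lng points, in meters.
function haversineMeters(lat1, lng1, lat2, lng2) {
  const toRad = (d) => (d * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLng = toRad(lng2 - lng1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLng / 2) ** 2;
  return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a));
}

const GEOFENCE_M = 152.4; // 500 feet

// Final step: drop neighborhood candidates outside the exact radius.
function withinGeofence(candidates, lat, lng) {
  return candidates.filter(
    (s) => haversineMeters(lat, lng, s.lat, s.lng) <= GEOFENCE_M
  );
}
```

The two-phase shape is the point: the coarse index query is O(1) regardless of store count, and the exact math only ever runs on a handful of candidates.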

2. Data Assembly 

Five tables hold everything: Store, Customer, Order, Campaign, BusinessRule. When a ping comes in, the handler reads from all five in‑process using indexed lookups, no network hops, no service calls, no cache tier. Each table also gets automatic REST and WebSocket APIs just by adding @export to the schema, so writes from outside the runtime are free (more on that below).
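As a sketch, one of those tables in a Harper schema might look like the following; the field names are illustrative assumptions rather than the repo's actual schema, while `@table`, `@export`, `@primaryKey`, and `@indexed` are Harper schema directives.

```graphql
# Illustrative Store table; field names are assumptions, not the repo's schema.
type Store @table @export {
  id: ID @primaryKey
  name: String
  lat: Float
  lng: Float
  h3Cell: String @indexed   # precomputed res-9 H3 cell for the neighborhood query
}
```

The `@export` directive is what makes the REST and WebSocket endpoints appear without any handler code.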

3. The Decision (pluggable)

The reference implementation uses Claude Haiku 4.5 via the Anthropic API, with a forced tool call to ensure structured output. That's a demo convenience, not a constraint. The decision layer is pluggable:

  • Deterministic Rules. For many promo decisions, a handful of if/else branches over the same context works fine. The LLM isn't doing magic; it's pattern‑matching over persona history and campaign fit.
  • Self‑hosted Open‑source Model. The reasoning task here is not sophisticated. A Llama or Mistral instance served by vLLM or Ollama would handle it cleanly, and at McDonald's scale, that's probably what you'd want. Self‑hosting also gives you data‑residency and unit‑cost controls.
  • Hybrid. Hard rules (cooldown windows, quiet hours) run in code and short‑circuit before any model call. Soft rules (loyalty thresholds, lapsed thresholds) go to whatever decision layer you've chosen.
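The hybrid short-circuit can be sketched in a few lines. Everything here is hypothetical (field names like `lastPromoAt`, the 7 AM–9 PM quiet-hours window, and the 24-hour cooldown are invented for illustration); the structure is what matters: hard rules return a decision before any model call, and only the fall-through reaches the decision layer.

```javascript
const COOLDOWN_MS = 24 * 60 * 60 * 1000; // assumed 24h between promos

// Hard rules run first and short-circuit; returns a decision object if one
// fired, or null to fall through to the pluggable decision layer.
function applyHardRules(customer, now) {
  const hour = now.getHours();
  if (hour < 7 || hour >= 21) {
    return { send: false, reason: 'quiet hours' };
  }
  if (customer.lastPromoAt && now - customer.lastPromoAt < COOLDOWN_MS) {
    return { send: false, reason: 'cooldown' };
  }
  return null; // no hard rule fired: invoke LLM, rules engine, or plain code
}
```

Because these checks are plain code over local data, the common case (nothing to send) costs microseconds and zero tokens.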

One optimization worth calling out: require two consecutive pings inside the same geofence before firing a decision. That single rule eliminates the highway‑speeder false positive — someone doing 65 mph past the store on I‑25 isn't a customer — and cuts your decision‑layer load substantially. In this pattern, that's a row in the BusinessRule table, not a code deploy.
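The two-ping rule reduces to a tiny state check. This in-memory sketch is illustrative only; in the pattern described above, the state would live in a Harper table and the threshold in a BusinessRule row rather than a constant.

```javascript
// customerId -> storeId of that customer's previous ping (demo-only state;
// production would persist this per customer + store in a table).
const lastPing = new Map();

// Fire the decision only on the second consecutive ping in the same geofence.
function shouldDecide(customerId, storeId) {
  const prev = lastPing.get(customerId);
  lastPing.set(customerId, storeId);
  return prev === storeId;
}
```

A driver passing at 65 mph never produces two consecutive pings inside the same 500-foot ring, so they never reach the decision layer.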

Where the Data Comes From

In the demo, the five tables are seeded from JSON files, so you can clone and run. In production, those tables are hydrated from the systems that already own the data:

  • Orders stream from your POS — Toast, Square, Olo, Oracle MICROS
  • Customers come from your CRM or CDP — Salesforce, Segment
  • Stores sync from your master data system on a nightly cadence
  • Campaigns come from your ESP (Braze, Iterable) or live natively in Harper with a small admin UI
  • Business rules live natively in Harper and are edited through that same admin UI

Here's what's easy to miss: these integrations rarely require new infrastructure. The systems you already run ship with APIs and webhooks out of the box, whether that's a proprietary stack (McDonald's, Starbucks, and Chick‑fil‑A famously build and operate most of their own tech) or a SaaS platform (Toast, Square, Oracle Simphony / MICROS, or Olo for POS and digital ordering; Punchh, Paytronix, or Thanx for loyalty; Braze or Iterable for marketing). Those APIs already exist because your brand's own mobile app, loyalty program, and analytics pipeline depend on them. Harper's auto‑generated REST endpoints are on the receive side, so in most cases, pointing an existing webhook to a new URL completes the integration. No new middleware, no Kafka cluster, no reverse‑ETL license.

But here's the real payoff: once data is in Harper, the decision path stops making calls. Every customer ping at runtime reads from local memory instead of fanning out to five services. You pay for integration cost once at the edge; context proximity pays you back on every request after that. That's the trade, and it's the reason the pattern is worth the investment.

Try it in a few commands

Clone the Repo

With Node 22+ and an Anthropic API key:

git clone <repo-url> nearstore-agent
cd nearstore-agent
npm install
cp .env.example .env   # paste your ANTHROPIC_API_KEY
npm install -g harperdb
npm run dev            # terminal 1
npm run seed           # terminal 2

Open http://localhost:9926/ and click a pin. Full instructions are in the README.

Deploying to Harper Fabric

Sign up for the free tier of Harper Fabric, no credit card required. Create a cluster, copy the credentials into your .env as CLI_TARGET, CLI_TARGET_USERNAME, and CLI_TARGET_PASSWORD, set your API key on the cluster, and run npm run deploy.

If you're already using a coding agent like Claude Code or Cursor, they can handle the deployment for you after you enter your credentials in the .env file. The agent runs the same npm run deploy and fixes any failures itself. For a project this small, the deploy is genuinely one prompt.

Small Lift, Big Impact

The decision engine in this repo — geofence lookup, data assembly, pluggable decision call — is a week of work for a small team. The integrations that keep those tables hydrated scale with your source‑system count, just like any operational data store. What Harper changes: the location lookup runs in your own database, the data assembly is an in‑process read from indexed tables, and the decision layer is whatever you want it to be: a hosted LLM, a self‑hosted open‑source model, or plain code.

The bottom line: location‑aware personalization doesn't have to be expensive, complicated, or slow. A unified runtime like Harper collapses the plumbing — turning geo‑aware personalization at scale into a small engineering lift, with minimal cost per decision and minimal latency at runtime.

Repo: https://github.com/HarperFast/nearstore-agent-demo

Harper: harper.fast · docs · Fabric 

H3 spatial index: h3geo.org 

Inspiration for the spatial pattern: Kyle Bernhardy's geolookup
