
New - Unleash the Power of Federated API Acceleration with Distributed Cache

Announcement
By Harper
September 26, 2023

Discover the future of application acceleration with Distributed Cache - a game-changing solution between CDNs and origin servers. Unmatched cache performance, caching of new data types, SEO dominance, revenue boost, and industry-tailored scalability. Join us in redefining content delivery for a faster, more profitable, user-friendly experience.

We are thrilled to introduce a game-changing solution that's set to redefine the world of application acceleration at scale - HarperDB Distributed Cache. Designed as an intermediary layer between Content Delivery Networks (CDNs) and origin servers, Distributed Cache offers unparalleled flexibility and power, setting new standards for cache performance.


Unmatched Cache Performance

Distributed Cache redefines cache performance as we know it. In Passive mode, where Distributed Cache simply calls an origin API and caches the response, it offers origin offload of up to 99%, ensuring swift content delivery for your users once the cache is built out.
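The passive pattern described above can be sketched as a read-through cache: on a miss, call the origin once, store the response with a TTL, and serve every subsequent request from the cache until it expires. This is a minimal illustration of the pattern, assuming a caller-supplied `fetchOrigin` function - it is not Harper's actual API.

```javascript
// Minimal read-through (passive) cache sketch. fetchOrigin is a
// hypothetical stand-in for a call to the origin API.
const cache = new Map();

async function getCached(key, fetchOrigin, ttlMs = 60_000) {
  const entry = cache.get(key);
  if (entry && entry.expires > Date.now()) {
    return entry.value; // cache hit: the origin is not contacted
  }
  const value = await fetchOrigin(key); // cache miss: one origin call
  cache.set(key, { value, expires: Date.now() + ttlMs });
  return value;
}
```

Once a key is populated, repeated reads within the TTL never touch the origin, which is where the offload comes from.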

For even greater offload and better performance, Active Caching lets you sync your origin data source out to the edge and create your API endpoints directly on top of the data, allowing up to 100% origin offload and near-instant content access.

For those use cases where ultimate performance is desired, HarperDB's Distributed Cache can be provisioned to hold all cached values in RAM, lowering read latency to sub-millisecond levels and delivering an unparalleled experience for your end users.


Pioneering Caching of New Data Types

Distributed Cache isn't just about cache hit rates; it's about unlocking new possibilities. We're proud to introduce the ability to create cache keys from any part of the request, including the POST body, slices of GraphQL payloads, URL query parameters, and even user-specific data such as JWT payloads - all with the core objective of letting you optimize your web applications like never before.


Boost Your Business with Distributed Cache

Distributed Cache isn't just a technological marvel; it's a strategic asset for your business.

  • SEO Dominance: In the world of SEO, speed is paramount. With Distributed Cache, your website's performance will soar, resulting in higher search engine rankings and increased visibility. Say hello to improved organic traffic!
  • Maximize Revenue Potential: Faster content delivery directly translates into higher conversion rates and revenue. By leveraging Distributed Cache, you're not just optimizing content delivery but boosting your bottom line.
  • Elevate User Experiences: Speed and reliability are at the heart of user satisfaction. With Distributed Cache, you'll provide seamless, lightning-fast user experiences, leading to higher retention rates and enhanced customer loyalty.

Custom-Tailored for Industry Titans

Distributed Cache is purpose-built for organizations with colossal catalogs, particularly in the retail and gaming sectors. We understand the unique challenges of handling vast volumes of data. Distributed Cache is built on dedicated distributed cloud infrastructure to meet these challenges head-on, ensuring scalability and reliability on a global scale.


How It Works

Distributed Cache functions as the intermediary layer between your CDN and origin server. It efficiently delivers data for CDN cache misses without requiring frequent callbacks to the origin. Because Distributed Cache replicates cache keys between its geographically distributed nodes, only a single origin call per payload is needed to populate a cache key's value globally.

For CDNs with thousands of POPs, a single expired cache key can trigger thousands of origin hits. With Distributed Cache, the same cached value is replicated globally, with TTLs that can be tailored on a per-key basis. Applied to long-tail catalogs comprising millions of items, the value of Distributed Cache increases exponentially.
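The fan-out described above is worth making concrete with some back-of-the-envelope arithmetic. Without a mid-tier, every POP that misses on an expired key can go to the origin; with a globally replicated mid-tier, one call repopulates the key everywhere. The numbers below are hypothetical, chosen only to show the scale of the difference.

```javascript
// Illustrative math for origin-hit amplification. Without a mid-tier
// cache, each expiry can cost up to one origin hit per POP; with a
// replicated mid-tier, it costs one hit total.
function originHitsPerExpiry(popCount, hasMidTier) {
  return hasMidTier ? 1 : popCount;
}

function dailyOriginHits(popCount, keys, expiriesPerKeyPerDay, hasMidTier) {
  return keys * expiriesPerKeyPerDay * originHitsPerExpiry(popCount, hasMidTier);
}
```

For a hypothetical CDN with 3,000 POPs and a catalog of 1,000,000 keys each expiring once a day, that is up to 3 billion potential origin hits without a mid-tier versus 1 million with one.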


The Future of API Acceleration Is Distributed

Distributed Cache isn't just a cache solution; it's a strategic advantage. Bid farewell to sluggish load times, SEO worries, and missed revenue opportunities. Embrace a future where your content delivery is faster, more profitable, and user-friendly.

Join us on this incredible journey as we redefine content delivery. To discover more about Distributed Cache and how it can turbocharge your business for the upcoming holiday season, visit our website and connect with our team today. 

The future of content delivery is here, and it's more distributed than ever!




Explore Recent Resources

Repo
Edge AI Ops
This repository demonstrates edge AI implementation using Harper as your data layer and compute platform. Instead of sending user data to distant AI services, we run TensorFlow.js models directly within Harper, achieving sub-50ms AI inference while keeping user data local.
Ivan R. Judson, Ph.D., Distinguished Solution Architect
Jan 2026
Blog
Why a Multi-Tier Cache Delivers Better ROI Than a CDN Alone
Learn why a multi-tier caching strategy combining a CDN and mid-tier cache delivers better ROI. Discover how deterministic caching, improved origin offload, lower tail latency, and predictable costs outperform a CDN-only architecture for modern applications.
Aleks Haugom, Senior Manager of GTM & Marketing
Jan 2026
Tutorial
Real-Time Pub/Sub Without the "Stack"
Explore a real-time pub/sub architecture where MQTT, WebSockets, Server-Sent Events, and REST work together with persistent data storage in one end-to-end system, enabling real-time interoperability, stateful messaging, and simplified service-to-device and browser communication.
Ivan R. Judson, Ph.D., Distinguished Solution Architect
Jan 2026