New - Unleash the Power of Federated API Acceleration with Distributed Cache

Discover the future of application acceleration with Distributed Cache - a game-changing solution between CDNs and origin servers. Unmatched cache performance, caching of new data types, stronger SEO, higher revenue, and industry-tailored scalability. Join us in redefining content delivery for a faster, more profitable, more user-friendly experience.
Announcement
News

By Harper
September 26, 2023

We are thrilled to introduce a game-changing solution that's set to redefine the world of application acceleration at scale - HarperDB Distributed Cache. Designed as an intermediary layer between Content Delivery Networks (CDNs) and origin servers, Distributed Cache offers unparalleled flexibility and power, setting new standards for cache performance.

Unmatched Cache Performance

Distributed Cache redefines cache performance as we know it. Passive mode, where Distributed Cache simply calls an origin API and caches the response, offers origin offload of up to 99%, ensuring swift content delivery for your users once the cache is built out.
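Passive mode is essentially a read-through cache. Here is a minimal sketch in JavaScript, assuming an in-memory store and a caller-supplied origin function; neither is Harper's actual API, this only illustrates the pattern:

```javascript
// Minimal read-through (passive) cache sketch. `fetchOrigin` and the
// in-memory Map stand in for the real origin connector and storage layer.
class PassiveCache {
  constructor(fetchOrigin, ttlMs) {
    this.fetchOrigin = fetchOrigin; // called only on a miss or expiry
    this.ttlMs = ttlMs;
    this.store = new Map();
    this.originCalls = 0; // tracks offload: hits never touch the origin
  }

  async get(key) {
    const entry = this.store.get(key);
    if (entry && entry.expires > Date.now()) {
      return entry.value; // cache hit: served without origin traffic
    }
    this.originCalls++; // cache miss: one origin call repopulates the key
    const value = await this.fetchOrigin(key);
    this.store.set(key, { value, expires: Date.now() + this.ttlMs });
    return value;
  }
}
```

Once the cache is warm, repeated reads of the same key skip the origin entirely until the TTL expires, which is where the offload percentage comes from.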

For even greater offload and better performance, Active Caching lets you sync your origin data source out to the edge and create your API endpoints directly on top of the data, allowing up to 100% origin offload and near-instant content access.

For those use cases where ultimate performance is desired, HarperDB's Distributed Cache can be provisioned to hold all cached values in RAM, lowering read latency to sub-millisecond levels and delivering an unparalleled experience for your end users.

Pioneering Caching of New Data Types

Distributed Cache isn't just about cache hit rates; it's about unlocking new possibilities. We're proud to introduce the ability to create cache keys from any part of the request, including the POST body, slices of GraphQL payloads, URL query parameters, and even user-specific data such as JWT payloads, all with the core objective of letting you optimize your web applications like never before.

Boost Your Business with Distributed Cache

Distributed Cache isn't just a technological marvel; it's a strategic asset for your business.

  • SEO Dominance: In the world of SEO, speed is paramount. With Distributed Cache, your website's performance will soar, resulting in higher search engine rankings and increased visibility. Say hello to improved organic traffic!
  • Maximize Revenue Potential: Faster content delivery directly translates into higher conversion rates and revenue. By leveraging Distributed Cache, you're not just optimizing content delivery but boosting your bottom line.
  • Elevate User Experiences: Speed and reliability are at the heart of user satisfaction. With Distributed Cache, you'll provide seamless, lightning-fast user experiences, leading to higher retention rates and enhanced customer loyalty.

Custom-Tailored for Industry Titans

Distributed Cache is purpose-built for organizations with colossal catalogs, particularly in the retail and gaming sectors. We understand the unique challenges of handling vast volumes of data. Distributed Cache is built on dedicated distributed cloud infrastructure to meet these challenges head-on, ensuring scalability and reliability on a global scale.

How It Works

Distributed Cache functions as the intermediary layer between your CDN and origin server, efficiently delivering data for CDN cache misses without requiring frequent callbacks to the origin. Because cached values are replicated between Distributed Cache's geographically distributed nodes, only a single origin call per payload is needed to populate a cache key's value globally.

For CDNs with thousands of POPs, a single expired cache key can trigger thousands of origin hits. With Distributed Cache, the same cached value is replicated globally, with TTLs that can be individually tailored on a per-key basis. Applied to long-tail catalogs comprising millions of items, the value of Distributed Cache grows dramatically.
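The fan-out described above can be sketched as follows. `CacheNode`, `connect`, and `populate` are illustrative names for this sketch, not Harper's replication protocol, and real replication would be asynchronous over the network:

```javascript
// Sketch: one origin fetch on any node fans the value out to every peer,
// carrying the per-key TTL with it, so each payload costs one origin call.
class CacheNode {
  constructor() {
    this.store = new Map();
    this.peers = [];
  }

  connect(peer) {
    this.peers.push(peer);
  }

  setLocal(key, value, ttlMs) {
    this.store.set(key, { value, expires: Date.now() + ttlMs });
  }

  // Called after a single origin fetch; replicates to all peers.
  populate(key, value, ttlMs) {
    this.setLocal(key, value, ttlMs);
    for (const peer of this.peers) peer.setLocal(key, value, ttlMs);
  }

  get(key) {
    const entry = this.store.get(key);
    return entry && entry.expires > Date.now() ? entry.value : undefined;
  }
}
```

Contrast this with independent POP caches, where each of N locations would miss and call the origin separately when the same key expires.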

The Future of API Acceleration Is Distributed

Distributed Cache isn't just a cache solution; it's a strategic advantage. Bid farewell to sluggish load times, SEO worries, and missed revenue opportunities. Embrace a future where your content delivery is faster, more profitable, and user-friendly.

Join us on this incredible journey as we redefine content delivery. To discover more about Distributed Cache and how it can turbocharge your business for the upcoming holiday season, visit our website and connect with our team today. 

The future of content delivery is here, and it's more distributed than ever!



Explore Recent Resources

News
Product Update

Harper 5.0 Is Here: Open Source, RocksDB, and a Runtime Built for the Agentic Era

Harper 5.0 launches with a fully open-source core under Apache 2.0, RocksDB as a native storage engine alongside LMDB, and source-available Harper Pro. This release delivers a unified runtime purpose-built for agentic engineering, from prototype to production.

Aleks Haugom, Senior Manager of GTM & Marketing
Apr 2026

Podcast
Select*

Maintaining Momentum: Versioning, Stability & the Road to Nuxt 5 with Daniel Roe

In this podcast episode, Daniel Roe, lead of the Nuxt framework, shares insights on Nuxt 3, 4, and the upcoming Nuxt 5 release. We discuss open-source development, upgrading Nuxt apps, Vue-powered full-stack web apps, version maintenance, and the future of modern web development.

Austin Akers, Head of Developer Relations
Apr 2026

Blog

Most LLM Calls Are Waste. Here's the Math.

Semantic caching for LLMs can reduce API costs by 20–70% by reusing similar responses. Combined with deterministic routing and improved retrieval, enterprises can significantly lower LLM usage, though effectiveness varies by workload and improves over time.

Aleks Haugom, Senior Manager of GTM & Marketing
Apr 2026

Blog

Build a Conversational AI Agent on Harper in 5 Minutes

Build a conversational AI agent in minutes using Harper's unified platform. This guide shows how to create, deploy, and scale real-time AI agents with built-in database, vector search, and APIs, eliminating infrastructure complexity for faster development.

Stephen Goldberg, CEO & Co-Founder
Apr 2026

Podcast
Select*

Inside PixiJS, AT Protocol, and Modern Game Development with Trezy Who

Trezy shares his journey from professional drummer and filmmaker to software engineer and open source maintainer. Learn about PixiJS, game development, AT Protocol, Bluesky, data sovereignty, and how developers can confidently contribute to open source projects.

Austin Akers, Head of Developer Relations
Mar 2026