Distributed Cache: Stretch Your Memory

Kyle describes the benefits of using distributed cache to lower costs and improve performance.
By Kyle Bernhardy, EVP of Strategy & Co-Founder
February 12, 2018

Data drives our lives. Our apps, our spending, our homes: it is all driven from a datastore somewhere. Given how constantly we ask questions of our devices, it is worth wondering how these systems keep up with the sheer scale of data requests. NoSQL databases are built to scale horizontally natively, as are some relational databases, and monolithic databases are still common. Both options can work, but the cost of this type of scale can be staggering, and keeping it all alive can make a seasoned DevOps engineer shudder. A large part of the answer to this question of scale is memory and caching.

In many applications, the data retrieved from a data store does not change rapidly, so it makes sense to cache the database response for later use. This rapidly improves performance and keeps infrastructure costs down. Or your app might query results from a data warehouse that is refreshed only nightly; since that data will not change throughout the day, it is safe to cache it for easy retrieval. Another application-level caching mechanism is memoization, which caches the results of expensive function calls.
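As an illustrative sketch (not part of the original post), memoization in Python can be as simple as decorating an expensive function so each distinct input is computed only once:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n: int) -> int:
    # Each distinct n is computed once; repeat calls hit the in-memory cache,
    # turning an exponential recursion into a linear one.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(30))  # prints 832040
```

The same idea applies to caching a database response: the cache key is the query, the cached value is the result, and the trade-off is memory for repeated round trips.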

Your Database Remembers

Many databases have internalized these caching needs, whether by being natively in-memory, by offering an in-memory cache, or by simulating in-memory operation with potentially limited features. The range of solutions is broad: there are plenty of simple key-value stores in this space, as well as complex relational databases that offer in-memory caching. The benefit is that this offloads the need to design a caching mechanism on the app side. The downside is that data size can exceed the memory available to the database server, creating out-of-memory exceptions in a critical part of your infrastructure.

Caching the Internet

A few months ago, Stephen discussed the potential end of net neutrality. In a peer-to-peer internet, what would facilitate the transfer of data is an Information-Centric Network (ICN). Instead of a host-centric network, there would be a series of nodes holding caches of data, with the client routed based on the information required. Currently, when a request is sent across the internet, you are ultimately routed to an IP address; in an ICN architecture, the information needed points you to nodes that house data related to the request. Distributing caches in this framework allows for failover and replication of information, enabling high scalability and reliability. Simply hitting data stores over and over again is effective but inefficient: bludgeoning a database for frequently polled data can cause row locks and connection issues, and force costs to soar to keep up with inefficient design patterns. As always, systems at scale require finesse and solutions that maximize the potential of the hardware. Leveraging memory caching, whether through your own application or an in-memory database, lets you use resources already available to you and do more with less.