Microliths — The “Goldilocks” Architecture Between Monoliths and Microservices

Microliths are an emerging software architecture that combines the simplicity and performance of monoliths with the modularity and scalability of microservices, offering vertically integrated units where application logic, database, cache, and APIs coexist in a single deployable runtime. This approach enables ultra-low-latency, secure, and composable systems with significantly less code and operational overhead, making it ideal for modern, data-intensive applications.
By
Aleks Haugom
June 3, 2025
Aleks Haugom
Senior Manager of GTM & Marketing

The longstanding discourse surrounding software architecture has primarily revolved around two opposing models: monoliths and microservices. The merits of each model have been debated for decades, as each presents unique benefits and drawbacks. However, a new, powerful concept is emerging, one that blends the composability and scalability of microservices with the performance and simplicity of monoliths: the microlith.

This idea isn't entirely new. Discussions around microliths have surfaced before, including a detailed exploration by New Zealand IT and coverage in Oracle's Java Magazine. Yet what is especially exciting today is the evolution of microlithic concepts to meet the modern challenges of latency-sensitive, data-intensive applications.

What Is a Microlith?

A microlith is a software architectural model that embodies the best traits of monoliths and microservices. It's small, self-contained, and highly performant — but crucially, it still preserves the deployment flexibility and modular thinking that made microservices popular.

Broadly, there are two interpretations of microlith architecture:

  1. Horizontal Microlith — An application where services are horizontally integrated in the application space but still rely on an external, often separate, database or cache layer.
  2. Vertical Microlith — A vertically integrated application that bundles application logic, database, cache, and APIs all into a single deployable unit aimed at achieving a specific purpose.

This article focuses on the second model: vertically integrated microliths. In many ways, this architecture represents the future for organizations needing low-latency, high-throughput applications that remain simple to manage and deploy.

Why Vertically Integrated Microliths?

In a traditional microservices setup, each service communicates over the network, introducing latency, complexity, and security concerns. By contrast, a vertically integrated microlith eliminates those issues. Services communicate in "function space" within the same runtime rather than over the network.

Key Benefits:

  • Low Latency: No network hops between your application, cache, and database.
  • Higher Security: Eliminates risks like man-in-the-middle attacks by removing network calls altogether.
  • Lower Total Cost of Ownership (TCO): Less infrastructure to manage, fewer points of failure, simpler deployment models.
  • Composability: Easily compose multiple microliths into a larger, scalable system without the brittle complexity of microservices.
  • Less Code: With a fully integrated stack, you can eliminate much of the connective "glue code" that often bloats microservices — think 500 lines reduced to 30-50 lines.
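To make the "function space" idea concrete, here is a minimal sketch contrasting the two call paths. The names (`getProduct`, `productStore`) are illustrative, not a real Harper API:

```javascript
// Microservices style: every lookup is a network round trip.
// async function getProduct(id) {
//   const res = await fetch(`http://catalog-service/products/${id}`);
//   return res.json(); // serialization + network latency on every call
// }

// Microlith style: the store lives in the same runtime, so the lookup
// is a plain in-memory function invocation.
const productStore = new Map([
  ['sku-1', { id: 'sku-1', name: 'Widget', price: 9.99 }],
]);

function getProduct(id) {
  return productStore.get(id); // no network hop, no serialization
}

console.log(getProduct('sku-1').name); // Widget
```

The dozens of lines of connection handling, retries, and serialization that the commented-out version would need simply disappear.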

Microliths in Action: Use Cases

Imagine you are building an application that must deliver ultra-low-latency responses to user queries against massive datasets. Think e-commerce product pages that need to be performance-optimized for SEO, real-time sports betting platforms, or even multiplayer gaming backends.

In a traditional microservices model, every user interaction would trigger a cascade of API calls: front-end to gateway, gateway to service, service to database, database back to service, service to cache, and so on.

With a vertically integrated microlith:

  • The application layer, cache, and database all reside within the same runtime.
  • A user's request traverses in-memory function calls.
  • Persistence and replication are handled by native functionality of the microlith technology (as Harper provides).
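That request path can be sketched in a few lines. The Map-based cache and store below are stand-ins for the storage engine a platform like Harper provides natively:

```javascript
// A request traverses a vertical microlith entirely in memory:
// cache check, then database lookup, all within one runtime.
const db = new Map([['user-42', { id: 'user-42', name: 'Ada' }]]);
const cache = new Map();

function handleRequest(userId) {
  if (cache.has(userId)) return { source: 'cache', data: cache.get(userId) };
  const record = db.get(userId);         // in-memory lookup, not a network call
  if (record) cache.set(userId, record); // populate cache on the same call path
  return { source: 'db', data: record };
}

handleRequest('user-42'); // first hit is served from the "db"
handleRequest('user-42'); // second hit is served from the cache
```

There is no gateway, no serialization, and no cache service to keep in sync; the cache-fill happens on the same call stack as the lookup.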

The result? Milliseconds saved at every turn, happier users, lower operational complexity, and even higher revenue.

Scaling Microliths

A common misconception is that vertically integrated microliths don't scale well. Not true.

Scaling vertically integrated microliths is straightforward:

  • Deploy multiple microliths based on application domains or resource needs.
  • Set up real-time data synchronization between microlith servers so that each holds a complete persistence layer and can respond to clients independently.

To achieve industry-leading speed for product pages, implement a geographically distributed network of microliths. Each microlith must be capable of independently handling page requests. This requires the microlith to contain an API that can identify the requested page, retrieve the pre-rendered product page from memory, and return it to the client.
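A minimal sketch of that page-serving API, assuming pages have already been pre-rendered into memory (the page store and route format are hypothetical):

```javascript
// Each node in the distributed network holds pre-rendered pages in
// memory and answers requests without calling any other service.
const renderedPages = new Map([
  ['/products/widget', '<html><body><h1>Widget</h1></body></html>'],
]);

function servePage(path) {
  const page = renderedPages.get(path); // identify the requested page
  if (!page) return { status: 404, body: 'Not found' };
  return { status: 200, body: page };   // return straight from memory
}
```

Because every node carries the full page store via replication, any node in any region can answer any request on its own.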

In the real world, large-scale deployments have seen hundreds of nodes operating in microlith clusters, each node a fully self-sufficient unit coordinating data replication intelligently.

Composability: The Secret Weapon

One of the most understated advantages of vertically integrated microliths is composability.

Each microlith can be seen as a "Lego brick" — a self-contained unit that you can combine with others to create complex systems without worrying about dependency hell or service orchestration nightmares.

  • Need a new feature? Spin up a new microlith.
  • Need to update a component? Swap it out with zero downtime.
  • Need to scale globally? Replicate microliths to edge locations with replication turned on.

Unlike monoliths, microliths allow granular, targeted upgrades and deployments, but unlike microservices, they don't fragment your architecture into an unmanageable mess.
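The "Lego brick" composition above can be sketched as two self-contained modules wired together by ordinary function calls (both microliths here are hypothetical):

```javascript
// Each "brick" is a self-contained unit exposing plain functions.
const inventoryMicrolith = {
  store: new Map([['sku-1', 5]]),
  stockFor(sku) {
    return this.store.get(sku) ?? 0;
  },
};

const checkoutMicrolith = {
  // Composition point: checkout calls inventory directly, with no
  // gateway, message bus, or orchestration layer in between.
  canFulfill(sku, qty, inventory = inventoryMicrolith) {
    return inventory.stockFor(sku) >= qty;
  },
};
```

Swapping in a different inventory implementation means passing a different brick, not reconfiguring a service mesh.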

Write Less Code, Build More Value

Jaxon Repp, Field CTO at Harper, put it best: with a vertically integrated microlith model, production-grade applications can be built with under 100 lines of code.

How?

  • No boilerplate database connection logic.
  • No complex caching layers to manage.
  • No sprawling API gateway configurations.

Instead, developers focus purely on business logic and proper data indexing. This not only accelerates development cycles but also dramatically reduces the likelihood of bugs creeping in through poorly connected infrastructure code.

The beauty of microliths is in their simplicity. They let you get back to solving real problems, rather than wrestling with the plumbing.

Getting Started

If you're curious about putting microliths into action, the best way to start is with a small project or demo. Build something latency-sensitive, like a consolidated redirect system or a dynamic e-commerce website with personalization.

See for yourself how much code you don't have to write.

And if you're ready to dive deeper, check out Harper — a platform built from the ground up to enable vertically integrated microlith architectures. Harper combines database, cache, APIs, and programmable application logic in one deployable unit, helping you achieve the promise of microliths today.

Learn more and get started with Harper here.

