Myth Busted: Parallelization in our Node.js Database

This article debunks the myth that Node.js can’t handle parallelization because it’s single-threaded. It shows how Harper’s Node-based database uses child processes, clustering, and inter-process messaging to achieve parallel processing and high performance.
By Eli Palmer, Engineer
December 19, 2017

Like any heavily adopted programming language, Node.js has its critics. Some of the criticism is accurate, but there’s a specific one that really grinds my gears.

“JavaScript is single-threaded and therefore can’t do parallelization.” There are variations on this depending on how granular or pedantic the critic gets, but the idea is the same: the event loop is single-threaded, so Node is single-threaded. Since we at Harper have built a Node.js database, it would be pretty crazy if we weren’t able to do parallelization. Comparing a single Node process to a single Java process, the claim is true, and it’s true by design.

However, Node provides a way around this by offering child processes and clustering in its core modules. A little boilerplate code lets us create a cluster of Node processes, one per core of the host machine. With some debatably minor architectural planning, we can parallelize nearly any computation or operation without the hassles of shared memory. What’s better, clustering is not constrained to the host machine’s cores: one can create a cluster of processes, or a cluster of clusters, distributed across a network.

Cluster initialization looks like this:

const cluster = require('cluster');
const http = require('http');
const numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
    console.log(`Master ${process.pid} is running`);

    // Fork workers.
    for (let i = 0; i < numCPUs; i++) {
        cluster.fork();
    }

    cluster.on('exit', (worker, code, signal) => {
        console.log(`worker ${worker.process.pid} died`);
    });
} else {
    // Workers can share any TCP connection
    // In this case it is an HTTP server
    http.createServer((req, res) => {
        res.writeHead(200);
        res.end('hello world\n');
    }).listen(8000);

    console.log(`Worker ${process.pid} started`);
}

It is that easy. This code was taken directly from the cluster docs; you can find the core clustering documentation on the Node.js site. The cluster module duplicates the process running this code as many times as cluster.fork() is called. The master process runs the isMaster branch: it forks the workers and then simply sits and listens on the port, handing each incoming request off to one of the forked workers, which run the else branch and serve the response.

Clusters are excellent when you need to perform the same task repeatedly, such as serving web requests. Clusters run the same code across all processes, and even though we can’t use shared memory, we can send messages between processes.
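
As a quick illustration of that messaging (a minimal sketch of ours, not part of the docs example above), a worker can call process.send() and the master can listen on, and send to, each worker object:

// cluster-messages.js -- a minimal sketch of master/worker messaging
const cluster = require('cluster');

if (cluster.isMaster) {
    const worker = cluster.fork();

    // The master receives anything the worker sends with process.send()
    worker.on('message', (msg) => {
        console.log(`Master got ${JSON.stringify(msg)} from worker ${worker.process.pid}`);
        worker.kill();
    });

    // ...and can push messages down to the worker over the same channel
    worker.send({ task: 'ping' });
} else {
    // In a worker, the IPC channel back to the master hangs off the process object
    process.on('message', (msg) => {
        process.send({ reply: 'pong', pid: process.pid });
    });
}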

But what if we need to run different code? We can’t spawn a cluster every time we need to pass some work off to another thread. Here Node gives us the ability to spawn child processes that execute commands or code specified at initialization. This trivial code will create 4 child processes, each of which will execute the code in hello.js.

// app.js
const child_process = require('child_process');
.
.
.
// For simplicity we hard coded 4 as the number of processes we want to spawn.
// We could also use numCPUs = require('os').cpus().length to query the system
// for the number of cores available.
for (let i = 0; i < 4; i++) {
    let my_process = child_process.fork("hello.js");

    my_process.on('message', (msg) => {
        console.log(`Parent received message from pid ${my_process.pid}`);
        console.log(`${msg}`);
    });
}

As mentioned above, child processes and cluster processes are able to communicate with the parent process via an inter-process communication channel. In our example, each child process will run the hello.js module and send a message back to its parent, identifying itself.

//hello.js

console.log(`Hello world, I am ${process.pid}`);

// Report back to the parent over the IPC channel that fork() set up
process.send(`Hi mom, I am process ${process.pid}`);

Our output shows:

Hello world, I am 1200
Hello world, I am 1198
Parent received message from pid 1200
Hi mom, I am process 1200
Parent received message from pid 1198
Hi mom, I am process 1198
Hello world, I am 1199
Parent received message from pid 1199
Hi mom, I am process 1199
Hello world, I am 1197
Parent received message from pid 1197
Hi mom, I am process 1197

From our output, we can see the results of calling process.send() in our child processes. Node uses the handler we defined in my_process.on() to consume each message and act on it. Rather than using shared memory as we would in a more traditional language, we pass data over this inter-process communication channel to facilitate parallelism. We will get deeper into the details and performance gains of parallelism in a later blog post. However, by giving us more direct control over parallelism and threading, Node.js actually allows us to better control resource usage, which is critical for IoT database use cases, as we discuss in our blog Growing Pains with Industrial IoT.
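
To make that fan-out-and-gather pattern a bit more concrete before the deeper follow-up, here is a minimal sketch of ours (the file names and the toy workload are hypothetical, not from Harper’s codebase): the parent hands each child a job over IPC and collects the results the same way.

// parent.js -- fan jobs out to child processes and gather the results over IPC
const child_process = require('child_process');

const jobs = [1e7, 2e7, 3e7, 4e7];  // toy workloads: how far each child should count
let remaining = jobs.length;

jobs.forEach((n, i) => {
    // One child per job for simplicity; a real pool would cap this at the core count
    const worker = child_process.fork('sum.js');

    worker.on('message', (msg) => {
        console.log(`Job ${i} finished on pid ${msg.pid}: ${msg.total}`);
        worker.kill();
        if (--remaining === 0) console.log('All jobs done');
    });

    worker.send({ n });  // hand the job to the child over the IPC channel
});

// sum.js -- the child does the CPU-bound work and reports back to its parent
process.on('message', ({ n }) => {
    let total = 0;
    for (let i = 0; i < n; i++) total += i;
    process.send({ pid: process.pid, total });
});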
