Myth Busted: Parallelization in our Node.js Database

This article debunks the myth that Node.js can’t handle parallelization because it’s single-threaded. It shows how Harper’s Node-based database uses clustering, child processes, and inter-process communication to achieve parallel processing and high performance.

By Eli Palmer, Engineer
December 19, 2017

Like any heavily adopted programming language, Node.js has its critics. Some of the criticism is accurate, but there’s a specific one that really grinds my gears.

“JavaScript is single-threaded and therefore can’t do parallelization.” There are variations on this depending on how granular or pedantic the critic gets, but the idea is the same: the event loop is single-threaded, so Node is single-threaded. Since we at Harper have built a Node.js database, it would be pretty crazy if we weren’t able to do parallelization. Comparing a single Node process to a single Java process, the claim is true, and it is true by design.

However, Node provides a way around this by offering child processes and clustering in its core modules. A little boilerplate code lets us create a cluster of Node processes, one per core of the machine. With some arguably minor architectural planning, we can parallelize nearly any computation or operation without the hassles of shared memory. What’s better is that clustering is not constrained to the host machine’s cores: one can create a cluster of processes, or a cluster of clusters, distributed across a network.

Cluster initialization looks like this:

const cluster = require('cluster');
const http = require('http');
const numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
    console.log(`Master ${process.pid} is running`);

    // Fork one worker per CPU core.
    for (let i = 0; i < numCPUs; i++) {
        cluster.fork();
    }

    cluster.on('exit', (worker, code, signal) => {
        console.log(`worker ${worker.process.pid} died`);
    });
} else {
    // Workers can share any TCP connection.
    // In this case it is an HTTP server.
    http.createServer((req, res) => {
        res.writeHead(200);
        res.end('hello world\n');
    }).listen(8000);

    console.log(`Worker ${process.pid} started`);
}

It is that easy. This code was taken directly from Node’s core cluster documentation. The cluster module duplicates the process running this code as many times as cluster.fork() is called. The master process runs the code in the if (cluster.isMaster) branch: it forks the workers and then simply sits and waits, passing each incoming request to one of the forked workers. The workers run the else branch, each creating an HTTP server that shares port 8000.
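
To see the distribution for yourself, one small tweak (our own sketch, not part of the docs example) is to include the worker’s pid in the response:

// Inside the else branch, report which worker answered the request.
http.createServer((req, res) => {
    res.writeHead(200);
    res.end(`hello world from worker ${process.pid}\n`);
}).listen(8000);

Hitting http://localhost:8000 a few times with curl should show responses coming back from different worker pids, confirming that requests are being spread across the forked processes.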

Clusters are excellent when you need to perform the same task repeatedly, such as serving web requests. Clusters run the same code across all processes, and even though we can’t utilize shared memory, we can send messages between processes.
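
Message passing between cluster processes is a one-liner in each direction. Here is a minimal sketch of our own (it is not part of the docs example above) in which the master pings a worker and the worker replies with its pid:

const cluster = require('cluster');

if (cluster.isMaster) {
    const worker = cluster.fork();

    // The master can message a specific worker...
    worker.send('ping');

    // ...and listen for whatever that worker sends back.
    worker.on('message', (msg) => {
        console.log(`Master received: ${msg}`);
    });
} else {
    // In the worker, process.send() and process.on('message')
    // talk to the master over the same IPC channel.
    process.on('message', (msg) => {
        process.send(`pong from ${process.pid} (got "${msg}")`);
    });
}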

But what if we need to run different code? We can’t spawn a cluster every time we need to pass some work off to another process. Here Node gives us the ability to spawn child processes that execute a command, or a module, specified at initialization. This trivial code creates four child processes, each of which executes the code in hello.js.

// app.js
const child_process = require('child_process');

// ...

// For simplicity we hard-coded 4 as the number of processes we want to spawn.
// We could also use numCPUs = require('os').cpus().length to query the system
// for the number of cores available.
for (let i = 0; i < 4; i++) {
    let my_process = child_process.fork('hello.js');

    my_process.on('message', (msg) => {
        console.log(`Parent received message from pid ${my_process.pid}`);
        console.log(`${msg}`);
    });
}

As mentioned above, child processes and cluster processes are able to communicate with the parent process via an inter-process communication channel. In our example, each child process runs the hello.js module and sends a message back to its parent, identifying itself.

// hello.js

console.log(`Hello world, I am ${process.pid}`);

// Send a message back to the parent over the IPC channel set up by fork().
process.send(`Hi mom, I am process ${process.pid}`);

Our output shows:

Hello world, I am 1200
Hello world, I am 1198
Parent received message from pid 1200
Hi mom, I am process 1200
Parent received message from pid 1198
Hi mom, I am process 1198
Hello world, I am 1199
Parent received message from pid 1199
Hi mom, I am process 1199
Hello world, I am 1197
Parent received message from pid 1197
Hi mom, I am process 1197

From our output, we can see the results of calling process.send() in our child processes. Node uses the handler we defined in my_process.on() to consume each message and act on it. Rather than using shared memory as we would in a more traditional language, we pass data over this inter-process communication channel to facilitate parallelism. We will get deeper into the details and performance gains of parallelism in a later blog post. However, by giving us more direct control over parallelism and threading, we feel that Node.js actually allows us to better control resource usage. This is critical for IoT database use cases, as we discuss in our blog post Growing Pains with Industrial IoT.
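
To make the trade of shared memory for message passing concrete, here is a rough fan-out/fan-in sketch: a parent forks workers, sends each a slice of an array over IPC, and combines the partial sums they send back. The file name sum.js, the worker count, and the chunking scheme are our own choices for illustration, not something from the example above.

// parent.js -- split the work across child processes and combine their answers
const child_process = require('child_process');

const numbers = Array.from({ length: 1000 }, (_, i) => i + 1);
const workers = 4;
const chunkSize = Math.ceil(numbers.length / workers);
let total = 0;
let finished = 0;

for (let i = 0; i < workers; i++) {
    const child = child_process.fork('sum.js');

    // Send this child its slice of the data over the IPC channel.
    child.send(numbers.slice(i * chunkSize, (i + 1) * chunkSize));

    child.on('message', (partialSum) => {
        total += partialSum;
        if (++finished === workers) {
            console.log(`Total: ${total}`); // 500500
            process.exit(0);
        }
    });
}

// sum.js -- receive a chunk, compute its sum, and report back to the parent
process.on('message', (chunk) => {
    process.send(chunk.reduce((acc, n) => acc + n, 0));
});

Each child only ever sees its own slice of the data; the only coordination is the pair of send() calls.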
