👉 Follow along with the code: https://github.com/HarperFast/mqtt-getting-started
If you want to run everything you see in the video yourself, start with the repository above. It contains a working example of Harper acting as a real-time backbone—brokering MQTT, WebSockets, and Server-Sent Events, while also persisting data in one place.
This post is here to answer a slightly bigger question:
Why would you want to build something like this in the first place?
Problem: Real-Time Is Fragmented
Most real-world systems don’t use just one real-time protocol.
You might have:
- Devices publishing telemetry over MQTT
- Browsers consuming live updates over WebSockets
- Dashboards or services subscribing via HTTP events
- Applications that need the data stored for querying, replay, or recovery
Traditionally, these concerns get split apart:
- A broker handles MQTT
- A database stores the data
- A separate service fans events out to clients
- More glue connects everything together
Each piece works—but the system becomes harder to reason about, harder to scale, and harder to change.
What we’re exploring here is a simpler model:
one runtime that handles messaging, fan-out, and persistence together.
What Harper Is Doing Differently
In this setup, Harper sits in the middle as a real-time backbone, not just a message broker.
That means:
- MQTT clients can publish and subscribe directly to Harper
- WebSocket clients can publish and subscribe to the same resources
- Server-Sent Events clients can subscribe over plain HTTP
- Messages are persisted automatically as structured data
- Every protocol sees the same events, in real time
There’s no translation layer and no external sync process. Once a message arrives, it’s immediately usable everywhere.
Getting Started: Minimal Setup, Real Results
Most of this works with almost no configuration.
In config.yml, enabling REST is enough:
```yaml
rest: true
```
This turns on:
- WebSocket support
- Server-Sent Events
- Resource-based pub/sub
MQTT support is already enabled by default.
From there, we define a couple of tables in schema.graphql:
```graphql
type topics @export {
  id: ID!
}

type sensors {
  id: ID!
  location: String
  temperature: Float
}
```
The exported topics table enables wildcard subscriptions, while sensors gives us a place to persist incoming telemetry.
This is important: messages aren’t ephemeral. They’re stored, queryable, and replayable.
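As a quick illustration, you can read a stored record back over plain HTTP once it has been published. Here's a minimal TypeScript sketch; the port (9926) and the sensor-101 record id come from the example URLs later in this post:

```ts
// Read the persisted sensor record back over Harper's REST interface.
// Port 9926 and the record id are taken from the examples in this post.
const res = await fetch("http://localhost:9926/sensors/sensor-101");
const record = await res.json();

console.log(record); // e.g. { id: "sensor-101", location: "lab", temperature: 97.2 }
```

The same resource path that answers this query is the one the live subscriptions below attach to.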
Publish Once, Use Everywhere
An MQTT publisher sends sensor data:
```json
{
  "id": "sensor-101",
  "location": "lab",
  "temperature": 97.2
}
```
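As a concrete sketch, here's what that publish could look like using the mqtt package from npm. The port (1883, the standard MQTT port) and the sensors/sensor-101 topic path are assumptions based on the resource paths used later in this post; check the repo for the exact values:

```ts
import mqtt from "mqtt";

// Connect to Harper's MQTT listener (1883 assumed; adjust if the repo differs)
const client = mqtt.connect("mqtt://localhost:1883");

client.on("connect", () => {
  // Publishing to sensors/sensor-101 lands the payload in the sensors table as record sensor-101
  client.publish(
    "sensors/sensor-101",
    JSON.stringify({ id: "sensor-101", location: "lab", temperature: 97.2 }),
    { qos: 1 },
    () => client.end()
  );
});
```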
That single publish:
- Updates the `sensors` table
- Notifies MQTT subscribers
- Pushes updates to WebSocket clients
- Streams events to Server-Sent Events clients
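For the MQTT side of that list, a subscriber might look like this sketch (same assumed port and topic path as the publisher above):

```ts
import mqtt from "mqtt";

const sub = mqtt.connect("mqtt://localhost:1883");

sub.on("connect", () => {
  // Subscribe to the same record the publisher writes to (assumed topic layout)
  sub.subscribe("sensors/sensor-101");
});

sub.on("message", (topic, payload) => {
  console.log(topic, JSON.parse(payload.toString()));
});
```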
For example, a WebSocket client can subscribe with:
```
ws://localhost:9926/sensors/sensor-101
```

And an SSE client uses the same resource over HTTP:

```
http://localhost:9926/sensors/sensor-101
```

Same data. Same topic. Different protocols.
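In code, those two subscriptions might look like this sketch, using the standard browser WebSocket and EventSource APIs (assuming Harper delivers JSON-encoded messages on these paths):

```ts
// WebSocket subscription to the sensor resource
const ws = new WebSocket("ws://localhost:9926/sensors/sensor-101");
ws.onmessage = (event) => {
  console.log("WebSocket update:", JSON.parse(event.data));
};

// Server-Sent Events subscription to the same resource over plain HTTP
const es = new EventSource("http://localhost:9926/sensors/sensor-101");
es.onmessage = (event) => {
  console.log("SSE update:", JSON.parse(event.data));
};
```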
Why This Matters in Practice
This pattern shows up in a lot of real systems:
- IoT and edge telemetry, where devices publish over MQTT but humans consume data in browsers
- Operational dashboards, where live data and historical data must stay in sync
- Event-driven applications, where services need both real-time notifications and durable state
- Distributed systems, where reducing moving parts directly improves reliability
By collapsing messaging, storage, and fan-out into one runtime, Harper makes these systems easier to build and easier to operate.
You spend less time wiring infrastructure together—and more time building the application logic that actually matters.
What Comes Next
In the next step (and the next video), we build an application that listens to these table-level events directly inside Harper. That’s where this turns from “broker demo” into a full event-driven application model.
Until then, clone the repo, run it locally, and watch the data flow.
Take a breath. Grab a cup of tea.
And enjoy building simpler real-time systems.