The real story behind “single thread,” the event loop, and how JS secretly gets work done in parallel.

Introduction
JavaScript is “single-threaded.” You’ve heard it in interviews, docs, and conference talks. Then you see your app fetching data, handling clicks, animating, and parsing JSON at the same time… on one thread? 🤔
Here’s the truth: the JavaScript language executes on a single main thread, but the runtime around it (browser or Node.js) uses many threads and processes to do work concurrently — timers, network, file I/O, compression, even CPU-heavy tasks — then hands results back to your single thread through the event loop.
In this guide, we’ll unpack how that works — with real examples in the browser and Node.js, gotchas to avoid, and practical patterns to keep your UI smooth and your servers fast.
1) What “Single-Threaded” Actually Means
When we say JS is single-threaded, we mean:
- The JS call stack (where your code runs) executes one frame at a time on one thread.
- Two JS lines can’t run simultaneously on two CPU cores in the same context (ignoring Workers/isolates).
- Race conditions on shared JS variables can’t happen within that one thread — that’s part of JS’s simplicity.
But the runtime (browser or Node.js) is not single-threaded:
- Browsers are multi-process/multi-threaded (rendering, raster, I/O, networking, storage, audio, etc.).
- Node.js uses libuv with a thread pool (for DNS, FS, crypto) and a multi-phase event loop.
Analogy: You’re a chef (JS thread) taking one order at a time. The kitchen (runtime) has many sous-chefs (threads) prepping ingredients in parallel. They ring a bell when ready; you plate the dish (run the callback) on your single station.
2) The Event Loop: The Traffic Cop
The event loop coordinates what happens next on the main thread:
- Run synchronous code on the call stack until it’s empty.
- Drain the microtask queue (Promises, queueMicrotask, MutationObserver).
- If the stack is empty and microtasks are done, pull one macrotask (timers, UI events, network callbacks) and run it.
- Repeat.
Key ordering:
- Microtasks run before macrotasks. This is why Promise.then(...) can run before a setTimeout(..., 0).
console.log("A");
setTimeout(() => console.log("B (macrotask)"), 0);
Promise.resolve().then(() => console.log("C (microtask)"));
console.log("D");
// A, D, C, B
Even though JS is single-threaded, work enqueued by other threads (network, timers) feeds the queues. That’s how it feels concurrent.
3) Where the “Other Threads” Hide (Browser)
Under the hood, browsers split work across processes/threads:
- Renderer process → your tab (includes the JS main thread).
- Compositor/raster threads → paint and render off the main thread.
- Networking thread → downloads happen concurrently.
- Storage/IndexedDB threads → don’t block JS.
- Audio/Media threads → decoding/playback off main thread.
- Web Workers → your explicit off-main-thread JS.
These workers/threads never mutate your variables directly. They post messages; your main thread handles them later.
Result: You write single-threaded JS, while the browser orchestrates parallelism safely.
4) Where the “Other Threads” Hide (Node.js)
Node’s story:
- One main JS thread executing your code.
- libuv’s thread pool (size defaults to 4, configurable via UV_THREADPOOL_SIZE) for filesystem I/O, crypto, zlib, DNS.
- Event loop phases: timers → pending callbacks → idle/prepare → poll → check → close callbacks.
- Worker Threads: your explicit multi-threading option for CPU-bound tasks.
- Cluster: multi-process scaling across CPU cores with a load balancer.
Takeaway: Node.js is single-threaded where it matters (your JS logic) but leverages threads for I/O and can run multiple JS threads or processes when you ask it to.
5) Concurrency vs Parallelism (The Dev Reality)
- Concurrency: making progress on multiple tasks by interleaving them (JS event loop).
- Parallelism: multiple tasks literally run at the same time (browser threads, Node thread pool, Workers).
JS achieves concurrency by default and parallelism by delegation.
6) The Good, the Bad, and the Janky
Good: Simplicity & Safety
- No shared-memory data races by default; you mutate state in one place.
- You can reason about order — no preemptive thread switching mid-function.
Bad: One Big Bottleneck
- A long sync task blocks everything: rendering, clicks, timers, fetch handlers.
- UI freezes (“jank”) and servers stall if you CPU-spike on the main thread.
Fix: Offload & Chunk Work
- Web Workers (browser) or Worker Threads (Node) for CPU-bound tasks.
- Break heavy loops into chunks with setTimeout or requestAnimationFrame to keep the loop breathing (queueMicrotask won’t help here, since microtasks run before the browser can paint).
- Stream, don’t buffer, giant payloads.
7) Real-World: Blocking vs Non-Blocking (Browser)
❌ Blocking the Main Thread
console.log("Start");
setTimeout(() => console.log("Timer fired"), 0);
// Heavy CPU work (blocks ~300ms)
const end = Date.now() + 300;
while (Date.now() < end) {}
console.log("End");
// Output: Start → End → (after ~300ms) Timer fired
The timer is ready earlier, but can’t run until the stack clears.
✅ Offload to a Web Worker
main.js
const worker = new Worker("./heavy.worker.js", { type: "module" });
worker.onmessage = (e) => console.log("Result:", e.data);
worker.postMessage({ size: 20_000_000 }); // big job
console.log("UI remains responsive");
heavy.worker.js
self.onmessage = (e) => {
  const { size } = e.data;
  let sum = 0;
  for (let i = 0; i < size; i++) sum += i;
  self.postMessage(sum);
};
Now the CPU crunch happens off the main thread; your UI remains smooth.
8) Real-World: Parallel I/O (Node.js)
✅ Let I/O Run Concurrently
// node: parallel fetch-like pattern with Promise.all
const fs = require("fs/promises");
async function readMany(paths) {
  const reads = paths.map(p => fs.readFile(p, "utf8"));
  return Promise.all(reads);
}
libuv’s thread pool and the OS disk scheduler work in parallel while your JS thread stays free.
❌ CPU-Bound on the Main Thread
function fib(n){ return n<2 ? n : fib(n-1)+fib(n-2); }
console.time("cpu");
console.log(fib(43)); // big number, big block
console.timeEnd("cpu"); // Everything else waits 😬
✅ Use a Worker Thread
// worker.js
const { parentPort, workerData } = require("node:worker_threads");
function fib(n){ return n<2 ? n : fib(n-1)+fib(n-2); }
parentPort.postMessage(fib(workerData));
// main.js
const { Worker } = require("node:worker_threads");
function fibAsync(n){
  return new Promise((res, rej) => {
    const w = new Worker(require.resolve("./worker.js"), { workerData: n });
    w.once("message", res);
    w.once("error", rej);
  });
}
console.time("cpu");
fibAsync(43).then(x => {
  console.log(x);
  console.timeEnd("cpu"); // main thread stayed responsive
});
9) Microtasks vs Macrotasks: Subtle but Crucial
Microtasks: Promise.then, queueMicrotask
Macrotasks: setTimeout, setInterval, message channel, some I/O callbacks
Rule: After your current call stack finishes, all microtasks run to completion before the event loop takes the next macrotask.
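A sketch of that rule in action; note that a microtask queued from inside another microtask still runs in the same drain, before the waiting macrotask:

```javascript
// Sketch: microtasks queued *during* microtask processing still run
// before the event loop takes the next macrotask.
const order = [];

order.push("sync");
setTimeout(() => order.push("macrotask"), 0);

Promise.resolve().then(() => {
  order.push("micro 1");
  queueMicrotask(() => order.push("micro 2")); // joins the same drain
});

// resulting order: sync → micro 1 → micro 2 → macrotask
```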
React/State Example (Browser or Node SSR)
setCount(count + 1);
console.log(count); // stale — update is queued
// Use:
setCount(prev => prev + 1);
Understanding that state updates are queued (and that thenables run on the microtask queue) prevents head-scratchers.
10) WebAssembly & SharedArrayBuffer: True Parallelism (Advanced)
Modern browsers (with proper security headers) and Node can share memory between workers using SharedArrayBuffer and coordinate with Atomics for true parallelism.
// main (note: Atomics.wait throws on the browser main thread;
// call it from a worker, from Node, or use Atomics.waitAsync)
const sab = new SharedArrayBuffer(4); // room for one Int32
const view = new Int32Array(sab);
Atomics.store(view, 0, 0); // initialize before handing the buffer off
const worker = new Worker("./atomic.worker.js", { type: "module" });
worker.postMessage(sab);
// later, somewhere Atomics.wait is permitted:
Atomics.wait(view, 0, 0); // blocks until the worker writes a new value
console.log("Worker signaled");
// atomic.worker.js
onmessage = (e) => {
  const view = new Int32Array(e.data);
  // ... do work ...
  Atomics.store(view, 0, 1);
  Atomics.notify(view, 0, 1);
};
This is real multi-threading with shared memory — use carefully; you can reintroduce race conditions and need atomics for correctness.
11) Ordering Guarantees & Gotchas
- JS in one context is sequential: a function runs to completion before another starts (no preemption).
- Queues determine ordering across tasks: microtasks first, then macrotasks.
- Timers aren’t precise: setTimeout(fn, 0) means “as soon as possible after the current job and its microtasks,” not immediately.
- Long tasks break everything: if you block the main thread for more than ~50ms, users feel it; aim for chunks under 16ms to hold 60fps.
12) Practical Patterns (Copy-Paste Ready)
Chunk a Heavy Job (UI-Friendly)
function chunkedProcess(items, chunk = 1000) {
  let i = 0;
  function next() {
    const end = Math.min(i + chunk, items.length);
    while (i < end) processItem(items[i++]); // synchronous slice
    if (i < items.length) setTimeout(next, 0); // yield back to loop
  }
  next();
}
Prioritize Rendering
function measureHeavyWork() {
  requestAnimationFrame(() => { // let browser paint first
    // do light work…
    setTimeout(() => { /* heavier work */ }, 0);
  });
}
Backpressure with Streams (Node)
const fs = require("fs");
const { PassThrough } = require("stream");
const transformStream = new PassThrough(); // stand-in for any Transform stream
fs.createReadStream("big.csv")
  .pipe(transformStream)
  .pipe(fs.createWriteStream("out.csv"));
Avoids buffering the whole file; cooperates with the event loop. (In production, prefer stream.pipeline, which propagates errors and cleans up.)
13) Node.js Event Loop Phases (Speed Tour)
A single “tick” roughly goes:
- Timers: run setTimeout/setInterval callbacks whose time has elapsed.
- Pending callbacks: e.g., some I/O from the previous cycle.
- Idle/Prepare: internal.
- Poll: retrieve new I/O events; execute I/O callbacks.
- Check: setImmediate callbacks.
- Close callbacks: e.g., socket.on('close').
Microtasks (process.nextTick, Promises) run between phases with special priority. (Rule of thumb: don’t overuse nextTick—it can starve I/O.)
14) Why JS Chose This Model (and Why You Benefit)
- History: JavaScript was created to make pages interactive; a single UI thread avoided data races in the DOM.
- Simplicity: No locks, mutexes, or deadlocks in everyday code.
- Scalability: Concurrency via async I/O scales better for many workloads than naive multi-threading.
- Safety: Message passing (workers) prevents a large class of shared-state bugs.
Today we still keep the advantages, and when we need true parallelism, we opt-in (Workers, Worker Threads, WASM threads, cluster).
15) Quick Reference
- Main thread: runs your JS and the UI; keep it light.
- Event loop: run sync code, drain microtasks, take one macrotask, repeat.
- Microtasks: Promise.then, queueMicrotask; run before the next macrotask.
- Macrotasks: setTimeout, setInterval, UI events, some I/O callbacks.
- Web Workers / Worker Threads: opt-in parallel JS via message passing.
- libuv thread pool: FS, crypto, zlib, DNS off Node’s main thread.
- Cluster: multi-process scaling across CPU cores.
- SharedArrayBuffer + Atomics: shared-memory parallelism; handle with care.
16) Checklist: Keep Apps Smooth & Fast
- Never do heavy CPU work on the main thread — use Workers/Worker Threads.
- Break work into micro-chunks (≤16ms) to keep 60fps.
- Prefer Promises/async-await for clarity; understand microtask ordering.
- Use streaming (web streams/Node streams) for large data.
- Profile first; optimize bottlenecks (often parsing/serialization).
- In Node, scale with cluster across cores; share nothing or use message channels.
Conclusion
JavaScript’s execution model is single-threaded, but your apps aren’t confined to a single lane. The runtime pulls off a clever trick: concurrency through the event loop plus parallelism in the layers beneath (OS, threads, workers). You get the ergonomics of simple, sequential code — and the performance of a concurrent system when you use the right tools.
Key takeaways:
- JS code on the main thread is single-threaded; the platform is not.
- Event loop + queues enable concurrency without data races.
- For CPU-bound tasks, offload to Workers (browser) or Worker Threads (Node).
- Microtasks beat macrotasks in scheduling; design with that in mind.
- True parallelism is available (SharedArrayBuffer/Atomics, WASM threads) when you need it.
Pro tip: When something “should run now” but doesn’t, ask: Is my code on the stack, a microtask, or a macrotask — and what’s blocking the loop?
Call to Action
What’s the worst “I blocked the event loop” moment you’ve had — frozen UI or a Node server that stopped responding?
💬 Share the story (and fix) in the comments.
🔖 Bookmark this as your event-loop cheat sheet.
👩‍💻 Share with the teammate who still thinks setTimeout(fn, 0) runs immediately.