Single-Threaded Event Loop Architecture
Node.js runs JavaScript on a single thread. Instead of blocking while waiting for I/O, it registers callbacks and continues executing other code. When the I/O completes, the callback is queued and executed on the main thread.
Think of a restaurant with ONE waiter (main thread) and FOUR kitchen staff (thread pool). The waiter takes everyone's orders without waiting, but if all four cooks are busy making soufflés (CPU-heavy work), even a simple salad order waits in the kitchen queue.
The main thread runs the V8 JavaScript engine and the libuv event loop. When Node encounters an async operation (fs.readFile, http.get), it hands the operation off to libuv, which either delegates it to the OS kernel (for network I/O via epoll/kqueue/IOCP) or to its internal thread pool (for file I/O, crypto, DNS). The event loop continuously polls for completed operations and dispatches their callbacks.
The event loop is implemented in libuv as a loop over six phases. Each iteration (a 'tick') checks: (1) expired timers, (2) pending OS-level callbacks, (3) idle/prepare hooks, (4) poll for new I/O (blocks if queue is empty and timers are not imminent), (5) check phase for setImmediate callbacks, (6) close callbacks. JavaScript execution and V8 GC happen exclusively on the main thread. The 'single-threaded' label applies to JS execution; libuv uses a configurable thread pool (UV_THREADPOOL_SIZE, default 4, max 1024) for operations the OS can't do asynchronously (file I/O, getaddrinfo, CPU-bound crypto like pbkdf2).
Node.js uses a single thread for JavaScript execution backed by libuv for async I/O. Network I/O uses OS-native async primitives (epoll on Linux, kqueue on macOS), so it truly doesn't block a thread. File I/O and CPU-heavy operations like bcrypt run on libuv's internal thread pool (default 4 threads). This design achieves high concurrency for I/O-bound workloads but struggles with CPU-bound tasks that block the main thread.
Senior engineers often say 'Node.js is non-blocking' as if it were categorical, but file I/O in Node uses libuv's thread pool, not kernel async I/O. Four concurrent bcrypt calls will saturate the default pool and queue every subsequent fs, dns.lookup, and crypto operation behind them, causing visible latency.