Streams (high)

pipe() and stream.pipeline()

pipe() connects a Readable to a Writable, automatically managing data flow and backpressure. stream.pipeline() is an improved API that also handles error propagation and cleanup.

Memory anchor

pipe() is duct tape connecting hoses—cheap and fast but if one hose bursts, the others keep spraying water everywhere. pipeline() is professional plumbing with shut-off valves—one burst and everything closes cleanly.

Expected depth

readable.pipe(writable) returns the destination, enabling chaining: readable.pipe(transform).pipe(writable). It pauses the readable when write() returns false and resumes on 'drain'. However, pipe() does not forward errors—an error in any stream in the chain leaves other streams open. stream.pipeline(src, ...transforms, dest, callback) correctly destroys all streams in the pipeline on error and calls the callback with the error.

Deep — senior internals

Node.js 15+ exposes stream.pipeline as a promise-based API via stream/promises: `import { pipeline } from 'stream/promises'`. The pipeline function also accepts async generators as stages, enabling powerful composition: `await pipeline(fsReadStream, async function*(source) { for await (const chunk of source) yield transform(chunk); }, fsWriteStream)`. This pattern avoids Transform class boilerplate while maintaining correct backpressure and error handling. AbortController can be passed to pipeline to cancel it mid-stream.

🎤 Interview-ready answer

pipe() manages backpressure automatically but does not forward errors or clean up the other streams when one fails. stream.pipeline() is the production-safe alternative—it propagates errors, destroys all streams in the chain, and invokes a completion callback. In modern Node.js, `stream/promises` pipeline with async generators is the most ergonomic pattern for complex streaming ETL pipelines.

Common trap

Chaining pipe() without error handlers is a common resource leak. If a gzip transform errors mid-stream, the source file stream and destination writable stay open. In production with many concurrent requests, this accumulates open file descriptors until EMFILE ('too many open files') crashes the process.