• A module method to pipe between streams and generators, forwarding errors, properly cleaning up, and providing a callback when the pipeline is complete.

    import { pipeline } from 'node:stream';
    import fs from 'node:fs';
    import zlib from 'node:zlib';

    // Use the pipeline API to easily pipe a series of streams
    // together and get notified when the pipeline is fully done.

    // A pipeline to gzip a potentially huge tar file efficiently:

    pipeline(
      fs.createReadStream('archive.tar'),
      zlib.createGzip(),
      fs.createWriteStream('archive.tar.gz'),
      (err) => {
        if (err) {
          console.error('Pipeline failed.', err);
        } else {
          console.log('Pipeline succeeded.');
        }
      },
    );

    The pipeline API also provides a promise version, exported from 'node:stream/promises'.
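
    As a minimal sketch, the gzip pipeline above can be rewritten with the promise version; this assumes an ES module context (for top-level await) and Node.js 15 or later, where 'node:stream/promises' is available:

    import { pipeline } from 'node:stream/promises';
    import fs from 'node:fs';
    import zlib from 'node:zlib';

    // Same gzip pipeline as above, but awaited: the returned promise
    // resolves once every stream has finished and rejects on the first error.
    try {
      await pipeline(
        fs.createReadStream('archive.tar'),
        zlib.createGzip(),
        fs.createWriteStream('archive.tar.gz'),
      );
      console.log('Pipeline succeeded.');
    } catch (err) {
      console.error('Pipeline failed.', err);
    }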

    stream.pipeline() will call stream.destroy(err) on all streams except:

    • Readable streams which have emitted 'end' or 'close'.
    • Writable streams which have emitted 'finish' or 'close'.
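
    For instance, here is a minimal sketch of this destroy-on-error behavior, assuming './no-such-file.txt' does not exist:

    import { pipeline, PassThrough } from 'node:stream';
    import fs from 'node:fs';

    const src = fs.createReadStream('./no-such-file.txt'); // errors with ENOENT
    const dst = new PassThrough();

    pipeline(src, dst, (err) => {
      console.error(err.code);    // 'ENOENT'
      // dst never emitted 'finish' or 'close', so pipeline destroyed it:
      console.log(dst.destroyed); // true
    });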

    stream.pipeline() leaves dangling event listeners on the streams after the callback has been invoked. If streams are reused after a failure, this can cause event listener leaks and swallowed errors. If the last stream is readable, however, its dangling event listeners are removed so that it can be consumed later.
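
    As a sketch of that last point, pipeline() returns its final stream, so when that stream is readable (here an assumed gunzip step over an assumed 'archive.tar.gz') its output can still be consumed after the pipeline is wired up:

    import { pipeline } from 'node:stream';
    import fs from 'node:fs';
    import zlib from 'node:zlib';

    // pipeline() returns the last stream in the chain; a Gunzip transform
    // is readable, so its decompressed output can be consumed downstream.
    const gunzipped = pipeline(
      fs.createReadStream('archive.tar.gz'),
      zlib.createGunzip(),
      (err) => {
        if (err) console.error('Pipeline failed.', err);
      },
    );

    gunzipped.on('data', (chunk) => {
      // consume the decompressed bytes here
    });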

    stream.pipeline() closes all the streams when an error is raised. Using it to pipe into an HTTP response can therefore lead to unexpected behavior: on error, pipeline destroys the socket without sending the expected response. See the example below:

    import fs from 'node:fs';
    import http from 'node:http';
    import { pipeline } from 'node:stream';

    const server = http.createServer((req, res) => {
      const fileStream = fs.createReadStream('./fileNotExist.txt');
      pipeline(fileStream, res, (err) => {
        if (err) {
          console.log(err); // No such file
          // this message can't be sent once `pipeline` already destroyed the socket
          return res.end('error!!!');
        }
      });
    });
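
    One possible workaround, sketched here rather than prescribed, is to keep the response out of the pipeline until the file stream has emitted 'open', so that early errors such as a missing file can still be answered over the intact socket:

    import fs from 'node:fs';
    import http from 'node:http';
    import { pipeline } from 'node:stream';

    const server = http.createServer((req, res) => {
      const fileStream = fs.createReadStream('./fileNotExist.txt');
      fileStream.on('error', (err) => {
        // A missing file errors before 'open', so the response socket
        // is still intact and a reply can be sent.
        if (!res.destroyed && !res.headersSent) {
          res.statusCode = 404;
          res.end('error!!!');
        }
      });
      fileStream.on('open', () => {
        // The file is readable; from here on, letting pipeline destroy
        // the socket on a mid-stream error is a reasonable default.
        pipeline(fileStream, res, (err) => {
          if (err) console.error(err);
        });
      });
    });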

    Parameters

    • source: A
    • destination: B
    • callback: PipelineCallback<B>

      Called when the pipeline is fully done.

    Returns B extends WritableStream ? B : WritableStream

    v10.0.0

  • A module method to pipe between streams and generators, forwarding errors, properly cleaning up, and providing a callback when the pipeline is complete. This overload carries the same description and examples as the first overload above; only the signature differs.

    Parameters

    • source: A
    • transform1: T1
    • destination: B
    • callback: PipelineCallback<B>

      Called when the pipeline is fully done.

    Returns B extends WritableStream ? B : WritableStream

    v10.0.0

  • A module method to pipe between streams and generators, forwarding errors, properly cleaning up, and providing a callback when the pipeline is complete. This overload carries the same description and examples as the first overload above; only the signature differs.

    Parameters

    • source: A
    • transform1: T1
    • transform2: T2
    • destination: B
    • callback: PipelineCallback<B>

      Called when the pipeline is fully done.

    Returns B extends WritableStream ? B : WritableStream

    v10.0.0

  • A module method to pipe between streams and generators, forwarding errors, properly cleaning up, and providing a callback when the pipeline is complete. This overload carries the same description and examples as the first overload above; only the signature differs.

    Parameters

    • source: A
    • transform1: T1
    • transform2: T2
    • transform3: T3
    • destination: B
    • callback: PipelineCallback<B>

      Called when the pipeline is fully done.

    Returns B extends WritableStream ? B : WritableStream

    v10.0.0

  • A module method to pipe between streams and generators, forwarding errors, properly cleaning up, and providing a callback when the pipeline is complete. This overload carries the same description and examples as the first overload above; only the signature differs.

    Parameters

    • source: A
    • transform1: T1
    • transform2: T2
    • transform3: T3
    • transform4: T4
    • destination: B
    • callback: PipelineCallback<B>

      Called when the pipeline is fully done.

    Returns B extends WritableStream ? B : WritableStream

    v10.0.0

  • A module method to pipe between streams and generators, forwarding errors, properly cleaning up, and providing a callback when the pipeline is complete. This overload carries the same description and examples as the first overload above; only the signature differs.

    Parameters

    Returns WritableStream

    v10.0.0

  • A module method to pipe between streams and generators, forwarding errors, properly cleaning up, and providing a callback when the pipeline is complete. This overload carries the same description and examples as the first overload above; only the signature differs.

    Parameters

    Returns WritableStream

    v10.0.0