`callback` {Function} Called when the pipeline is fully done.
A module method to pipe between streams and generators, forwarding errors, properly cleaning up, and providing a callback when the pipeline is complete.
```js
import { pipeline } from 'node:stream';
import fs from 'node:fs';
import zlib from 'node:zlib';

// Use the pipeline API to easily pipe a series of streams
// together and get notified when the pipeline is fully done.

// A pipeline to gzip a potentially huge tar file efficiently:
pipeline(
  fs.createReadStream('archive.tar'),
  zlib.createGzip(),
  fs.createWriteStream('archive.tar.gz'),
  (err) => {
    if (err) {
      console.error('Pipeline failed.', err);
    } else {
      console.log('Pipeline succeeded.');
    }
  },
);
```
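Since the description above also mentions generators, here is a minimal sketch (the file names `lowercase.txt` and `uppercase.txt` are placeholders, not from the original example) of using an async generator function as a transform step between two streams:

```js
import { pipeline } from 'node:stream';
import fs from 'node:fs';

// A pipeline whose middle stage is an async generator rather than a
// Transform stream. File names are placeholders for illustration.
pipeline(
  fs.createReadStream('lowercase.txt'),
  async function* (source) {
    source.setEncoding('utf8'); // Work with strings rather than `Buffer`s.
    for await (const chunk of source) {
      yield chunk.toUpperCase();
    }
  },
  fs.createWriteStream('uppercase.txt'),
  (err) => {
    if (err) {
      console.error('Pipeline failed.', err);
    } else {
      console.log('Pipeline succeeded.');
    }
  },
);
```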
The `pipeline` API provides a promise version.
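For illustration (not part of the original snippet), the promise version is exposed from `node:stream/promises` and can be awaited directly, for example at the top level of an ES module or inside an async function:

```js
import { pipeline } from 'node:stream/promises';
import fs from 'node:fs';
import zlib from 'node:zlib';

// The same gzip pipeline as above, awaited instead of using a callback.
// A rejected promise takes the place of the callback's `err` argument.
try {
  await pipeline(
    fs.createReadStream('archive.tar'),
    zlib.createGzip(),
    fs.createWriteStream('archive.tar.gz'),
  );
  console.log('Pipeline succeeded.');
} catch (err) {
  console.error('Pipeline failed.', err);
}
```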
`stream.pipeline()` will call `stream.destroy(err)` on all streams except:

* `Readable` streams which have emitted `'end'` or `'close'`.
* `Writable` streams which have emitted `'finish'` or `'close'`.

`stream.pipeline()` leaves dangling event listeners on the streams after the `callback` has been invoked. In the case of reuse of streams after failure, this can cause event listener leaks and swallowed errors. If the last stream is readable, dangling event listeners will be removed so that the last stream can be consumed later.
`stream.pipeline()` closes all the streams when an error is raised. Using an `IncomingRequest` with `pipeline` can lead to unexpected behavior, because on error `pipeline` destroys the socket without sending the expected response. See the example below:
```js
import fs from 'node:fs';
import http from 'node:http';
import { pipeline } from 'node:stream';

const server = http.createServer((req, res) => {
  const fileStream = fs.createReadStream('./fileNotExist.txt');
  pipeline(fileStream, res, (err) => {
    if (err) {
      console.log(err); // No such file
      // this message can't be sent once `pipeline` already destroyed the socket
      return res.end('error!!!');
    }
  });
});
```
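One way to avoid the destroyed socket (a sketch under the assumption that the source is an `fs.ReadStream`, not a fix taken from the Node.js docs) is to wait for the file stream's `'open'` event before handing the response to `pipeline`, so a failure to open the file can still be answered with a normal error response:

```js
import fs from 'node:fs';
import http from 'node:http';
import { pipeline } from 'node:stream';

const server = http.createServer((req, res) => {
  const fileStream = fs.createReadStream('./fileNotExist.txt');

  const onOpenError = (err) => {
    // The file could not be opened; `pipeline` has not run yet, so the
    // socket is untouched and an error response can still be sent.
    console.log(err); // No such file
    res.statusCode = 404;
    res.end('error!!!');
  };

  fileStream.once('error', onOpenError);
  fileStream.once('open', () => {
    // The file opened successfully; let `pipeline` manage errors and
    // cleanup from here on.
    fileStream.removeListener('error', onOpenError);
    pipeline(fileStream, res, (err) => {
      if (err) {
        console.error('Pipeline failed.', err);
      }
    });
  });
});

server.listen(3000); // Port chosen arbitrarily for the example.
```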