Node.js v4.9.1 Documentation



About this Documentation#

The goal of this documentation is to comprehensively explain the Node.js API, both from a reference as well as a conceptual point of view. Each section describes a built-in module or high-level concept.

Where appropriate, property types, method arguments, and the arguments provided to event handlers are detailed in a list underneath the topic heading.

Every .html document has a corresponding .json document presenting the same information in a structured manner. This feature is experimental, and added for the benefit of IDEs and other utilities that wish to do programmatic things with the documentation.

Every .html and .json file is generated based on the corresponding .md file in the doc/api/ folder in Node.js's source tree. The documentation is generated using the tools/doc/generate.js program. The HTML template is located at doc/template.html.

If you find an error in this documentation, please submit an issue or see the contributing guide for directions on how to submit a patch.

Stability Index#

Throughout the documentation, you will see indications of a section's stability. The Node.js API is still changing and, as it matures, certain parts are more reliable than others. Some are so proven, and so relied upon, that they are unlikely to ever change at all. Others are brand new and experimental, or known to be hazardous and in the process of being redesigned.

The stability indices are as follows:

  • Stability: 0 - Deprecated: This feature is known to be problematic, and changes are planned. Do not rely on it. Use of the feature may cause warnings. Backwards compatibility should not be expected.

  • Stability: 1 - Experimental: This feature is subject to change, and is gated by a command-line flag. It may change or be removed in future versions.

  • Stability: 2 - Stable: The API has proven satisfactory. Compatibility with the npm ecosystem is a high priority, and will not be broken unless absolutely necessary.

  • Stability: 3 - Locked: Only bug fixes, security fixes, and performance improvements will be accepted. Please do not suggest API changes in this area; they will be refused.

JSON Output#

Stability: 1 - Experimental

Every HTML file generated from the markdown sources has a corresponding JSON file with the same data.

This feature was added in Node.js v0.6.12. It is experimental.

Syscalls and man pages#

System calls like open(2) and read(2) define the interface between user programs and the underlying operating system. Node functions which simply wrap a syscall, like fs.open(), will document that. The docs link to the corresponding man pages (short for manual pages) which describe how the syscalls work.
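
For instance, fs.open() is a thin wrapper around open(2). A minimal sketch (the path and flags here are only for illustration):

const fs = require('fs');

// fs.open() maps closely onto the open(2) syscall: the 'r' flag corresponds
// to O_RDONLY, and the callback receives the numeric file descriptor.
fs.open('/etc/hosts', 'r', (err, fd) => {
  if (err) throw err;
  console.log('file descriptor:', fd);
  fs.close(fd, (err) => {
    if (err) throw err;
  });
});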

Caveat: some syscalls, like lchown(2), are BSD-specific. That means, for example, that fs.lchown() only works on Mac OS X and other BSD-derived systems, and is not available on Linux.

Most Unix syscalls have Windows equivalents, but behavior may differ on Windows relative to Linux and OS X. For an example of the subtle ways in which it's sometimes impossible to replace Unix syscall semantics on Windows, see Node issue 4760.

Usage#

node [options] [v8 options] [script.js | -e "script"] [arguments]

Please see the Command Line Options document for information about different options and ways to run scripts with Node.

Example#

An example of a web server written with Node.js which responds with 'Hello World':

const http = require('http');

const hostname = '127.0.0.1';
const port = 3000;

const server = http.createServer((req, res) => {
  res.statusCode = 200;
  res.setHeader('Content-Type', 'text/plain');
  res.end('Hello World\n');
});

server.listen(port, hostname, () => {
  console.log(`Server running at http://${hostname}:${port}/`);
});

To run the server, put the code into a file called example.js and execute it with Node.js:

$ node example.js
Server running at http://127.0.0.1:3000/

All of the examples in the documentation can be run similarly.

C/C++ Addons#

Node.js Addons are dynamically-linked shared objects, written in C or C++, that can be loaded into Node.js using the require() function, and used just as if they were an ordinary Node.js module. They are used primarily to provide an interface between JavaScript running in Node.js and C/C++ libraries.

At the moment, the method for implementing Addons is rather complicated, involving knowledge of several components and APIs:

  • V8: the C++ library Node.js currently uses to provide the JavaScript implementation. V8 provides the mechanisms for creating objects, calling functions, etc. V8's API is documented mostly in the v8.h header file (deps/v8/include/v8.h in the Node.js source tree), which is also available online.

  • libuv: The C library that implements the Node.js event loop, its worker threads and all of the asynchronous behaviors of the platform. It also serves as a cross-platform abstraction library, giving easy, POSIX-like access across all major operating systems to many common system tasks, such as interacting with the filesystem, sockets, timers and system events. libuv also provides a pthreads-like threading abstraction that may be used to power more sophisticated asynchronous Addons that need to move beyond the standard event loop. Addon authors are encouraged to think about how to avoid blocking the event loop with I/O or other time-intensive tasks by off-loading work via libuv to non-blocking system operations, worker threads or a custom use of libuv's threads.

  • Internal Node.js libraries. Node.js itself exports a number of C/C++ APIs that Addons can use — the most important of which is the node::ObjectWrap class.

  • Node.js includes a number of other statically linked libraries including OpenSSL. These other libraries are located in the deps/ directory in the Node.js source tree. Only the V8 and OpenSSL symbols are purposefully re-exported by Node.js and may be used to various extents by Addons. See Linking to Node.js' own dependencies for additional information.

All of the following examples are available for download and may be used as a starting-point for your own Addon.

Hello world#

This "Hello world" example is a simple Addon, written in C++, that is the equivalent of the following JavaScript code:

module.exports.hello = () => 'world';

First, create the file hello.cc:

// hello.cc
#include <node.h>

namespace demo {

using v8::FunctionCallbackInfo;
using v8::Isolate;
using v8::Local;
using v8::Object;
using v8::String;
using v8::Value;

void Method(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();
  args.GetReturnValue().Set(String::NewFromUtf8(isolate, "world"));
}

void init(Local<Object> exports) {
  NODE_SET_METHOD(exports, "hello", Method);
}

NODE_MODULE(addon, init)

}  // namespace demo

Note that all Node.js Addons must export an initialization function following the pattern:

void Initialize(Local<Object> exports);
NODE_MODULE(module_name, Initialize)

There is no semi-colon after NODE_MODULE as it's not a function (see node.h).

The module_name must match the filename of the final binary (excluding the .node suffix).

In the hello.cc example, then, the initialization function is init and the Addon module name is addon.

Building#

Once the source code has been written, it must be compiled into the binary addon.node file. To do so, create a file called binding.gyp in the top-level of the project describing the build configuration of your module using a JSON-like format. This file is used by node-gyp -- a tool written specifically to compile Node.js Addons.

{
  "targets": [
    {
      "target_name": "addon",
      "sources": [ "hello.cc" ]
    }
  ]
}

Note: A version of the node-gyp utility is bundled and distributed with Node.js as part of npm. This version is not made directly available for developers to use and is intended only to support the ability to use the npm install command to compile and install Addons. Developers who wish to use node-gyp directly can install it using the command npm install -g node-gyp. See the node-gyp installation instructions for more information, including platform-specific requirements.

Once the binding.gyp file has been created, use node-gyp configure to generate the appropriate project build files for the current platform. This will generate either a Makefile (on Unix platforms) or a vcxproj file (on Windows) in the build/ directory.

Next, invoke the node-gyp build command to generate the compiled addon.node file. This will be put into the build/Release/ directory.

When using npm install to install a Node.js Addon, npm uses its own bundled version of node-gyp to perform this same set of actions, generating a compiled version of the Addon for the user's platform on demand.

Once built, the binary Addon can be used from within Node.js by pointing require() to the built addon.node module:

// hello.js
const addon = require('./build/Release/addon');

console.log(addon.hello()); // 'world'

Please see the examples below for further information or https://github.com/arturadib/node-qt for an example in production.

Because the exact path to the compiled Addon binary can vary depending on how it is compiled (i.e. sometimes it may be in ./build/Debug/), Addons can use the bindings package to load the compiled module.

Note that while the bindings package implementation is more sophisticated in how it locates Addon modules, it is essentially using a try-catch pattern similar to:

try {
  return require('./build/Release/addon.node');
} catch (err) {
  return require('./build/Debug/addon.node');
}

Linking to Node.js' own dependencies#

Node.js uses a number of statically linked libraries such as V8, libuv and OpenSSL. All Addons are required to link to V8 and may link to any of the other dependencies as well. Typically, this is as simple as including the appropriate #include <...> statements (e.g. #include <v8.h>) and node-gyp will locate the appropriate headers automatically. However, there are a few caveats to be aware of:

  • When node-gyp runs, it will detect the specific release version of Node.js and download either the full source tarball or just the headers. If the full source is downloaded, Addons will have complete access to the full set of Node.js dependencies. However, if only the Node.js headers are downloaded, then only the symbols exported by Node.js will be available.

  • node-gyp can be run using the --nodedir flag pointing at a local Node.js source image. Using this option, the Addon will have access to the full set of dependencies.

Loading Addons using require()#

The filename extension of the compiled Addon binary is .node (as opposed to .dll or .so). The require() function is written to look for files with the .node file extension and initialize those as dynamically-linked libraries.

When calling require(), the .node extension can usually be omitted and Node.js will still find and initialize the Addon. One caveat, however, is that Node.js will first attempt to locate and load modules or JavaScript files that happen to share the same base name. For instance, if there is a file addon.js in the same directory as the binary addon.node, then require('addon') will give precedence to the addon.js file and load it instead.
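
A minimal sketch of both forms, assuming the compiled binary is at ./build/Release/addon.node:

// The '.node' extension can usually be omitted:
const addon = require('./build/Release/addon');

// If a file addon.js existed alongside addon.node, the line above would load
// the JavaScript file instead; spelling out the extension avoids the ambiguity:
const sameAddon = require('./build/Release/addon.node');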

Native Abstractions for Node.js#

Each of the examples illustrated in this document makes direct use of the Node.js and V8 APIs for implementing Addons. It is important to understand that the V8 API can, and has, changed dramatically from one V8 release to the next (and one major Node.js release to the next). With each change, Addons may need to be updated and recompiled in order to continue functioning. The Node.js release schedule is designed to minimize the frequency and impact of such changes, but there is little that Node.js can do currently to ensure stability of the V8 APIs.

The Native Abstractions for Node.js (or nan) provide a set of tools that Addon developers are recommended to use to keep compatibility between past and future releases of V8 and Node.js. See the nan examples for an illustration of how it can be used.

Addon examples#

Following are some example Addons intended to help developers get started. The examples make use of the V8 APIs. Refer to the online V8 reference for help with the various V8 calls, and V8's Embedder's Guide for an explanation of several concepts used such as handles, scopes, function templates, etc.

Each of these examples uses the following binding.gyp file:

{
  "targets": [
    {
      "target_name": "addon",
      "sources": [ "addon.cc" ]
    }
  ]
}

In cases where there is more than one .cc file, simply add the additional filename to the sources array. For example:

"sources": ["addon.cc", "myexample.cc"]

Once the binding.gyp file is ready, the example Addons can be configured and built using node-gyp:

$ node-gyp configure build

Function arguments#

Addons will typically expose objects and functions that can be accessed from JavaScript running within Node.js. When functions are invoked from JavaScript, the input arguments and return value must be mapped to and from the C/C++ code.

The following example illustrates how to read function arguments passed from JavaScript and how to return a result:

// addon.cc
#include <node.h>

namespace demo {

using v8::Exception;
using v8::FunctionCallbackInfo;
using v8::Isolate;
using v8::Local;
using v8::Number;
using v8::Object;
using v8::String;
using v8::Value;

// This is the implementation of the "add" method
// Input arguments are passed using the
// const FunctionCallbackInfo<Value>& args struct
void Add(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();

  // Check the number of arguments passed.
  if (args.Length() < 2) {
    // Throw an Error that is passed back to JavaScript
    isolate->ThrowException(Exception::TypeError(
        String::NewFromUtf8(isolate, "Wrong number of arguments")));
    return;
  }

  // Check the argument types
  if (!args[0]->IsNumber() || !args[1]->IsNumber()) {
    isolate->ThrowException(Exception::TypeError(
        String::NewFromUtf8(isolate, "Wrong arguments")));
    return;
  }

  // Perform the operation
  double value = args[0]->NumberValue() + args[1]->NumberValue();
  Local<Number> num = Number::New(isolate, value);

  // Set the return value (using the passed in
  // FunctionCallbackInfo<Value>&)
  args.GetReturnValue().Set(num);
}

void Init(Local<Object> exports) {
  NODE_SET_METHOD(exports, "add", Add);
}

NODE_MODULE(addon, Init)

}  // namespace demo

Once compiled, the example Addon can be required and used from within Node.js:

// test.js
const addon = require('./build/Release/addon');

console.log('This should be eight:', addon.add(3, 5));

Callbacks#

It is common practice within Addons to pass JavaScript functions to a C++ function and execute them from there. The following example illustrates how to invoke such callbacks:

// addon.cc
#include <node.h>

namespace demo {

using v8::Function;
using v8::FunctionCallbackInfo;
using v8::Isolate;
using v8::Local;
using v8::Null;
using v8::Object;
using v8::String;
using v8::Value;

void RunCallback(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();
  Local<Function> cb = Local<Function>::Cast(args[0]);
  const unsigned argc = 1;
  Local<Value> argv[argc] = { String::NewFromUtf8(isolate, "hello world") };
  cb->Call(Null(isolate), argc, argv);
}

void Init(Local<Object> exports, Local<Object> module) {
  NODE_SET_METHOD(module, "exports", RunCallback);
}

NODE_MODULE(addon, Init)

}  // namespace demo

Note that this example uses a two-argument form of Init() that receives the full module object as the second argument. This allows the Addon to completely overwrite exports with a single function instead of adding the function as a property of exports.

To test it, run the following JavaScript:

// test.js
const addon = require('./build/Release/addon');

addon((msg) => {
  console.log(msg); // 'hello world'
});

Note that, in this example, the callback function is invoked synchronously.
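
As a sketch of what "synchronously" means here, the callback runs before the call to addon() returns (assuming the Addon above has been built):

// test-sync.js (hypothetical file)
const addon = require('./build/Release/addon');

console.log('before');
addon((msg) => {
  console.log(msg); // 'hello world'
});
console.log('after');

// Because the callback is invoked synchronously, the output order is:
//   before
//   hello world
//   after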

Object factory#

Addons can create and return new objects from within a C++ function as illustrated in the following example. An object is created and returned with a property msg that echoes the string passed to createObject():

// addon.cc
#include <node.h>

namespace demo {

using v8::FunctionCallbackInfo;
using v8::Isolate;
using v8::Local;
using v8::Object;
using v8::String;
using v8::Value;

void CreateObject(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();

  Local<Object> obj = Object::New(isolate);
  obj->Set(String::NewFromUtf8(isolate, "msg"), args[0]->ToString());

  args.GetReturnValue().Set(obj);
}

void Init(Local<Object> exports, Local<Object> module) {
  NODE_SET_METHOD(module, "exports", CreateObject);
}

NODE_MODULE(addon, Init)

}  // namespace demo

To test it in JavaScript:

// test.js
const addon = require('./build/Release/addon');

var obj1 = addon('hello');
var obj2 = addon('world');
console.log(obj1.msg, obj2.msg); // 'hello world'

Function factory#

Another common scenario is creating JavaScript functions that wrap C++ functions and returning those back to JavaScript:

// addon.cc
#include <node.h>

namespace demo {

using v8::Function;
using v8::FunctionCallbackInfo;
using v8::FunctionTemplate;
using v8::Isolate;
using v8::Local;
using v8::Object;
using v8::String;
using v8::Value;

void MyFunction(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();
  args.GetReturnValue().Set(String::NewFromUtf8(isolate, "hello world"));
}

void CreateFunction(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();

  Local<FunctionTemplate> tpl = FunctionTemplate::New(isolate, MyFunction);
  Local<Function> fn = tpl->GetFunction();

  // omit this to make it anonymous
  fn->SetName(String::NewFromUtf8(isolate, "theFunction"));

  args.GetReturnValue().Set(fn);
}

void Init(Local<Object> exports, Local<Object> module) {
  NODE_SET_METHOD(module, "exports", CreateFunction);
}

NODE_MODULE(addon, Init)

}  // namespace demo

To test:

// test.js
const addon = require('./build/Release/addon');

var fn = addon();
console.log(fn()); // 'hello world'

Wrapping C++ objects#

It is also possible to wrap C++ objects/classes in a way that allows new instances to be created using the JavaScript new operator:

// addon.cc
#include <node.h>
#include "myobject.h"

namespace demo {

using v8::Local;
using v8::Object;

void InitAll(Local<Object> exports) {
  MyObject::Init(exports);
}

NODE_MODULE(addon, InitAll)

}  // namespace demo

Then, in myobject.h, the wrapper class inherits from node::ObjectWrap:

// myobject.h
#ifndef MYOBJECT_H
#define MYOBJECT_H

#include <node.h>
#include <node_object_wrap.h>

namespace demo {

class MyObject : public node::ObjectWrap {
 public:
  static void Init(v8::Local<v8::Object> exports);

 private:
  explicit MyObject(double value = 0);
  ~MyObject();

  static void New(const v8::FunctionCallbackInfo<v8::Value>& args);
  static void PlusOne(const v8::FunctionCallbackInfo<v8::Value>& args);
  static v8::Persistent<v8::Function> constructor;
  double value_;
};

}  // namespace demo

#endif

In myobject.cc, implement the various methods that are to be exposed. Below, the method plusOne() is exposed by adding it to the constructor's prototype:

// myobject.cc
#include "myobject.h"

namespace demo {

using v8::Context;
using v8::Function;
using v8::FunctionCallbackInfo;
using v8::FunctionTemplate;
using v8::Isolate;
using v8::Local;
using v8::Number;
using v8::Object;
using v8::Persistent;
using v8::String;
using v8::Value;

Persistent<Function> MyObject::constructor;

MyObject::MyObject(double value) : value_(value) {
}

MyObject::~MyObject() {
}

void MyObject::Init(Local<Object> exports) {
  Isolate* isolate = exports->GetIsolate();

  // Prepare constructor template
  Local<FunctionTemplate> tpl = FunctionTemplate::New(isolate, New);
  tpl->SetClassName(String::NewFromUtf8(isolate, "MyObject"));
  tpl->InstanceTemplate()->SetInternalFieldCount(1);

  // Prototype
  NODE_SET_PROTOTYPE_METHOD(tpl, "plusOne", PlusOne);

  constructor.Reset(isolate, tpl->GetFunction());
  exports->Set(String::NewFromUtf8(isolate, "MyObject"),
               tpl->GetFunction());
}

void MyObject::New(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();

  if (args.IsConstructCall()) {
    // Invoked as constructor: `new MyObject(...)`
    double value = args[0]->IsUndefined() ? 0 : args[0]->NumberValue();
    MyObject* obj = new MyObject(value);
    obj->Wrap(args.This());
    args.GetReturnValue().Set(args.This());
  } else {
    // Invoked as plain function `MyObject(...)`, turn into construct call.
    const int argc = 1;
    Local<Value> argv[argc] = { args[0] };
    Local<Context> context = isolate->GetCurrentContext();
    Local<Function> cons = Local<Function>::New(isolate, constructor);
    Local<Object> result =
        cons->NewInstance(context, argc, argv).ToLocalChecked();
    args.GetReturnValue().Set(result);
  }
}

void MyObject::PlusOne(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();

  MyObject* obj = ObjectWrap::Unwrap<MyObject>(args.Holder());
  obj->value_ += 1;

  args.GetReturnValue().Set(Number::New(isolate, obj->value_));
}

}  // namespace demo

To build this example, the myobject.cc file must be added to the binding.gyp:

{
  "targets": [
    {
      "target_name": "addon",
      "sources": [
        "addon.cc",
        "myobject.cc"
      ]
    }
  ]
}

Test it with:

// test.js
const addon = require('./build/Release/addon');

var obj = new addon.MyObject(10);
console.log(obj.plusOne()); // 11
console.log(obj.plusOne()); // 12
console.log(obj.plusOne()); // 13

Factory of wrapped objects#

Alternatively, it is possible to use a factory pattern to avoid explicitly creating object instances using the JavaScript new operator:

var obj = addon.createObject();
// instead of:
// var obj = new addon.Object();

First, the createObject() method is implemented in addon.cc:

// addon.cc
#include <node.h>
#include "myobject.h"

namespace demo {

using v8::FunctionCallbackInfo;
using v8::Isolate;
using v8::Local;
using v8::Object;
using v8::String;
using v8::Value;

void CreateObject(const FunctionCallbackInfo<Value>& args) {
  MyObject::NewInstance(args);
}

void InitAll(Local<Object> exports, Local<Object> module) {
  MyObject::Init(exports->GetIsolate());

  NODE_SET_METHOD(module, "exports", CreateObject);
}

NODE_MODULE(addon, InitAll)

}  // namespace demo

In myobject.h, the static method NewInstance() is added to handle instantiating the object. This method takes the place of using new in JavaScript:

// myobject.h
#ifndef MYOBJECT_H
#define MYOBJECT_H

#include <node.h>
#include <node_object_wrap.h>

namespace demo {

class MyObject : public node::ObjectWrap {
 public:
  static void Init(v8::Isolate* isolate);
  static void NewInstance(const v8::FunctionCallbackInfo<v8::Value>& args);

 private:
  explicit MyObject(double value = 0);
  ~MyObject();

  static void New(const v8::FunctionCallbackInfo<v8::Value>& args);
  static void PlusOne(const v8::FunctionCallbackInfo<v8::Value>& args);
  static v8::Persistent<v8::Function> constructor;
  double value_;
};

}  // namespace demo

#endif

The implementation in myobject.cc is similar to the previous example:

// myobject.cc
#include <node.h>
#include "myobject.h"

namespace demo {

using v8::Context;
using v8::Function;
using v8::FunctionCallbackInfo;
using v8::FunctionTemplate;
using v8::Isolate;
using v8::Local;
using v8::Number;
using v8::Object;
using v8::Persistent;
using v8::String;
using v8::Value;

Persistent<Function> MyObject::constructor;

MyObject::MyObject(double value) : value_(value) {
}

MyObject::~MyObject() {
}

void MyObject::Init(Isolate* isolate) {
  // Prepare constructor template
  Local<FunctionTemplate> tpl = FunctionTemplate::New(isolate, New);
  tpl->SetClassName(String::NewFromUtf8(isolate, "MyObject"));
  tpl->InstanceTemplate()->SetInternalFieldCount(1);

  // Prototype
  NODE_SET_PROTOTYPE_METHOD(tpl, "plusOne", PlusOne);

  constructor.Reset(isolate, tpl->GetFunction());
}

void MyObject::New(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();

  if (args.IsConstructCall()) {
    // Invoked as constructor: `new MyObject(...)`
    double value = args[0]->IsUndefined() ? 0 : args[0]->NumberValue();
    MyObject* obj = new MyObject(value);
    obj->Wrap(args.This());
    args.GetReturnValue().Set(args.This());
  } else {
    // Invoked as plain function `MyObject(...)`, turn into construct call.
    const int argc = 1;
    Local<Value> argv[argc] = { args[0] };
    Local<Function> cons = Local<Function>::New(isolate, constructor);
    Local<Context> context = isolate->GetCurrentContext();
    Local<Object> instance =
        cons->NewInstance(context, argc, argv).ToLocalChecked();
    args.GetReturnValue().Set(instance);
  }
}

void MyObject::NewInstance(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();

  const unsigned argc = 1;
  Local<Value> argv[argc] = { args[0] };
  Local<Function> cons = Local<Function>::New(isolate, constructor);
  Local<Context> context = isolate->GetCurrentContext();
  Local<Object> instance =
      cons->NewInstance(context, argc, argv).ToLocalChecked();

  args.GetReturnValue().Set(instance);
}

void MyObject::PlusOne(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();

  MyObject* obj = ObjectWrap::Unwrap<MyObject>(args.Holder());
  obj->value_ += 1;

  args.GetReturnValue().Set(Number::New(isolate, obj->value_));
}

}  // namespace demo

Once again, to build this example, the myobject.cc file must be added to the binding.gyp:

{
  "targets": [
    {
      "target_name": "addon",
      "sources": [
        "addon.cc",
        "myobject.cc"
      ]
    }
  ]
}

Test it with:

// test.js
const createObject = require('./build/Release/addon');

var obj = createObject(10);
console.log(obj.plusOne()); // 11
console.log(obj.plusOne()); // 12
console.log(obj.plusOne()); // 13

var obj2 = createObject(20);
console.log(obj2.plusOne()); // 21
console.log(obj2.plusOne()); // 22
console.log(obj2.plusOne()); // 23

Passing wrapped objects around#

In addition to wrapping and returning C++ objects, it is possible to pass wrapped objects around by unwrapping them with the Node.js helper function node::ObjectWrap::Unwrap. The following example shows a function add() that can take two MyObject objects as input arguments:

// addon.cc
#include <node.h>
#include <node_object_wrap.h>
#include "myobject.h"

namespace demo {

using v8::FunctionCallbackInfo;
using v8::Isolate;
using v8::Local;
using v8::Number;
using v8::Object;
using v8::String;
using v8::Value;

void CreateObject(const FunctionCallbackInfo<Value>& args) {
  MyObject::NewInstance(args);
}

void Add(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();

  MyObject* obj1 = node::ObjectWrap::Unwrap<MyObject>(
      args[0]->ToObject());
  MyObject* obj2 = node::ObjectWrap::Unwrap<MyObject>(
      args[1]->ToObject());

  double sum = obj1->value() + obj2->value();
  args.GetReturnValue().Set(Number::New(isolate, sum));
}

void InitAll(Local<Object> exports) {
  MyObject::Init(exports->GetIsolate());

  NODE_SET_METHOD(exports, "createObject", CreateObject);
  NODE_SET_METHOD(exports, "add", Add);
}

NODE_MODULE(addon, InitAll)

}  // namespace demo

In myobject.h, a new public method is added to allow access to private values after unwrapping the object.

// myobject.h
#ifndef MYOBJECT_H
#define MYOBJECT_H

#include <node.h>
#include <node_object_wrap.h>

namespace demo {

class MyObject : public node::ObjectWrap {
 public:
  static void Init(v8::Isolate* isolate);
  static void NewInstance(const v8::FunctionCallbackInfo<v8::Value>& args);
  inline double value() const { return value_; }

 private:
  explicit MyObject(double value = 0);
  ~MyObject();

  static void New(const v8::FunctionCallbackInfo<v8::Value>& args);
  static v8::Persistent<v8::Function> constructor;
  double value_;
};

}  // namespace demo

#endif

The implementation of myobject.cc is similar to before:

// myobject.cc
#include <node.h>
#include "myobject.h"

namespace demo {

using v8::Context;
using v8::Function;
using v8::FunctionCallbackInfo;
using v8::FunctionTemplate;
using v8::Isolate;
using v8::Local;
using v8::Object;
using v8::Persistent;
using v8::String;
using v8::Value;

Persistent<Function> MyObject::constructor;

MyObject::MyObject(double value) : value_(value) {
}

MyObject::~MyObject() {
}

void MyObject::Init(Isolate* isolate) {
  // Prepare constructor template
  Local<FunctionTemplate> tpl = FunctionTemplate::New(isolate, New);
  tpl->SetClassName(String::NewFromUtf8(isolate, "MyObject"));
  tpl->InstanceTemplate()->SetInternalFieldCount(1);

  constructor.Reset(isolate, tpl->GetFunction());
}

void MyObject::New(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();

  if (args.IsConstructCall()) {
    // Invoked as constructor: `new MyObject(...)`
    double value = args[0]->IsUndefined() ? 0 : args[0]->NumberValue();
    MyObject* obj = new MyObject(value);
    obj->Wrap(args.This());
    args.GetReturnValue().Set(args.This());
  } else {
    // Invoked as plain function `MyObject(...)`, turn into construct call.
    const int argc = 1;
    Local<Value> argv[argc] = { args[0] };
    Local<Context> context = isolate->GetCurrentContext();
    Local<Function> cons = Local<Function>::New(isolate, constructor);
    Local<Object> instance =
        cons->NewInstance(context, argc, argv).ToLocalChecked();
    args.GetReturnValue().Set(instance);
  }
}

void MyObject::NewInstance(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();

  const unsigned argc = 1;
  Local<Value> argv[argc] = { args[0] };
  Local<Function> cons = Local<Function>::New(isolate, constructor);
  Local<Context> context = isolate->GetCurrentContext();
  Local<Object> instance =
      cons->NewInstance(context, argc, argv).ToLocalChecked();

  args.GetReturnValue().Set(instance);
}

}  // namespace demo

Test it with:

// test.js
const addon = require('./build/Release/addon');

var obj1 = addon.createObject(10);
var obj2 = addon.createObject(20);
var result = addon.add(obj1, obj2);

console.log(result); // 30

AtExit hooks#

An "AtExit" hook is a function that is invoked after the Node.js event loop has ended but before the JavaScript VM is terminated and Node.js shuts down. "AtExit" hooks are registered using the node::AtExit API.

void AtExit(callback, args)#

  • callback: void (*)(void*) - A pointer to the function to call at exit.
  • args: void* - A pointer to pass to the callback at exit.

Registers exit hooks that run after the event loop has ended but before the VM is killed.

AtExit takes two parameters: a pointer to a callback function to run at exit, and a pointer to untyped context data to be passed to that callback.

Callbacks are run in last-in first-out order.

The following addon.cc implements AtExit:

// addon.cc
#undef NDEBUG
#include <assert.h>
#include <stdlib.h>
#include <node.h>

namespace demo {

using node::AtExit;
using v8::HandleScope;
using v8::Isolate;
using v8::Local;
using v8::Object;

static char cookie[] = "yum yum";
static int at_exit_cb1_called = 0;
static int at_exit_cb2_called = 0;

static void at_exit_cb1(void* arg) {
  Isolate* isolate = static_cast<Isolate*>(arg);
  HandleScope scope(isolate);
  Local<Object> obj = Object::New(isolate);
  assert(!obj.IsEmpty()); // assert VM is still alive
  assert(obj->IsObject());
  at_exit_cb1_called++;
}

static void at_exit_cb2(void* arg) {
  assert(arg == static_cast<void*>(cookie));
  at_exit_cb2_called++;
}

static void sanity_check(void*) {
  assert(at_exit_cb1_called == 1);
  assert(at_exit_cb2_called == 2);
}

void init(Local<Object> exports) {
  AtExit(sanity_check);
  AtExit(at_exit_cb2, cookie);
  AtExit(at_exit_cb2, cookie);
  AtExit(at_exit_cb1, exports->GetIsolate());
}

NODE_MODULE(addon, init)

}  // namespace demo

Test in JavaScript by running:

// test.js
const addon = require('./build/Release/addon');

Assert#

Stability: 2 - Stable

The assert module provides a simple set of assertion tests that can be used to test invariants.

assert(value[, message])#

  • value <any>
  • message <any>

An alias of assert.ok().

const assert = require('assert');

assert(true);  // OK
assert(1);     // OK
assert(false);
  // throws "AssertionError: false == true"
assert(0);
  // throws "AssertionError: 0 == true"
assert(false, 'it\'s false');
  // throws "AssertionError: it's false"

assert.deepEqual(actual, expected[, message])#

  • actual <any>
  • expected <any>
  • message <any>

Tests for deep equality between the actual and expected parameters. Primitive values are compared with the equal comparison operator ( == ).

Only enumerable "own" properties are considered. The deepEqual() implementation does not test object prototypes, attached symbols, or non-enumerable properties. This can lead to some potentially surprising results. For example, the following example does not throw an AssertionError because the properties on the Error object are non-enumerable:

// WARNING: This does not throw an AssertionError!
assert.deepEqual(Error('a'), Error('b'));

"Deep" equality means that the enumerable "own" properties of child objects are evaluated also:

const assert = require('assert');

const obj1 = {
  a : {
    b : 1
  }
};
const obj2 = {
  a : {
    b : 2
  }
};
const obj3 = {
  a : {
    b : 1
  }
};
const obj4 = Object.create(obj1);

assert.deepEqual(obj1, obj1);
  // OK, object is equal to itself

assert.deepEqual(obj1, obj2);
  // AssertionError: { a: { b: 1 } } deepEqual { a: { b: 2 } }
  // values of b are different

assert.deepEqual(obj1, obj3);
  // OK, objects are equal

assert.deepEqual(obj1, obj4);
  // AssertionError: { a: { b: 1 } } deepEqual {}
  // Prototypes are ignored

If the values are not equal, an AssertionError is thrown with a message property set equal to the value of the message parameter. If the message parameter is undefined, a default error message is assigned.

assert.deepStrictEqual(actual, expected[, message])#

  • actual <any>
  • expected <any>
  • message <any>

Generally identical to assert.deepEqual() with two exceptions. First, primitive values are compared using the strict equality operator ( === ). Second, object comparisons include a strict equality check of their prototypes.

const assert = require('assert');

assert.deepEqual({a:1}, {a:'1'});
  // OK, because 1 == '1'

assert.deepStrictEqual({a:1}, {a:'1'});
  // AssertionError: { a: 1 } deepStrictEqual { a: '1' }
  // because 1 !== '1' using strict equality

If the values are not equal, an AssertionError is thrown with a message property set equal to the value of the message parameter. If the message parameter is undefined, a default error message is assigned.

assert.doesNotThrow(block[, error][, message])#

Asserts that the function block does not throw an error. See assert.throws() for more details.

When assert.doesNotThrow() is called, it will immediately call the block function.

If an error is thrown and it is the same type as that specified by the error parameter, then an AssertionError is thrown. If the error is of a different type, or if the error parameter is undefined, the error is propagated back to the caller.

The following, for instance, will throw the TypeError because there is no matching error type in the assertion:

assert.doesNotThrow(
  () => {
    throw new TypeError('Wrong value');
  },
  SyntaxError
);

However, the following will result in an AssertionError with the message 'Got unwanted exception (TypeError).':

assert.doesNotThrow(
  () => {
    throw new TypeError('Wrong value');
  },
  TypeError
);

If an AssertionError is thrown and a value is provided for the message parameter, the value of message will be appended to the AssertionError message:

assert.doesNotThrow(
  () => {
    throw new TypeError('Wrong value');
  },
  TypeError,
  'Whoops'
);
// Throws: AssertionError: Got unwanted exception (TypeError). Whoops

assert.equal(actual, expected[, message])#

  • actual <any>
  • expected <any>
  • message <any>

Tests shallow, coercive equality between the actual and expected parameters using the equal comparison operator ( == ).

const assert = require('assert');

assert.equal(1, 1);
  // OK, 1 == 1
assert.equal(1, '1');
  // OK, 1 == '1'

assert.equal(1, 2);
  // AssertionError: 1 == 2
assert.equal({a: {b: 1}}, {a: {b: 1}});
  //AssertionError: { a: { b: 1 } } == { a: { b: 1 } }

If the values are not equal, an AssertionError is thrown with a message property set equal to the value of the message parameter. If the message parameter is undefined, a default error message is assigned.

assert.fail(actual, expected, message, operator)#

  • actual <any>
  • expected <any>
  • message <any>
  • operator <String>

Throws an AssertionError. If message is falsy, the error message is set as the values of actual and expected separated by the provided operator. Otherwise, the error message is the value of message.

const assert = require('assert');

assert.fail(1, 2, undefined, '>');
  // AssertionError: 1 > 2

assert.fail(1, 2, 'whoops', '>');
  // AssertionError: whoops

assert.ifError(value)#

  • value <any>

Throws value if value is truthy. This is useful when testing the error argument in callbacks.

const assert = require('assert');

assert.ifError(0); // OK
assert.ifError(1); // Throws 1
assert.ifError('error') // Throws 'error'
assert.ifError(new Error()); // Throws Error

assert.notDeepEqual(actual, expected[, message])#

  • actual <any>
  • expected <any>
  • message <any>

Tests for any deep inequality. Opposite of assert.deepEqual().

const assert = require('assert');

const obj1 = {
  a : {
    b : 1
  }
};
const obj2 = {
  a : {
    b : 2
  }
};
const obj3 = {
  a : {
    b : 1
  }
};
const obj4 = Object.create(obj1);

assert.notDeepEqual(obj1, obj1);
  // AssertionError: { a: { b: 1 } } notDeepEqual { a: { b: 1 } }

assert.notDeepEqual(obj1, obj2);
  // OK, obj1 and obj2 are not deeply equal

assert.notDeepEqual(obj1, obj3);
  // AssertionError: { a: { b: 1 } } notDeepEqual { a: { b: 1 } }

assert.notDeepEqual(obj1, obj4);
  // OK, obj1 and obj4 are not deeply equal

If the values are deeply equal, an AssertionError is thrown with a message property set equal to the value of the message parameter. If the message parameter is undefined, a default error message is assigned.

assert.notDeepStrictEqual(actual, expected[, message])#

  • actual <any>
  • expected <any>
  • message <any>

Tests for deep strict inequality. Opposite of assert.deepStrictEqual().

const assert = require('assert');

assert.notDeepEqual({a:1}, {a:'1'});
  // AssertionError: { a: 1 } notDeepEqual { a: '1' }

assert.notDeepStrictEqual({a:1}, {a:'1'});
  // OK

If the values are deeply and strictly equal, an AssertionError is thrown with a message property set equal to the value of the message parameter. If the message parameter is undefined, a default error message is assigned.

assert.notEqual(actual, expected[, message])#

  • actual <any>
  • expected <any>
  • message <any>

Tests shallow, coercive inequality with the not equal comparison operator ( != ).

const assert = require('assert');

assert.notEqual(1, 2);
  // OK

assert.notEqual(1, 1);
  // AssertionError: 1 != 1

assert.notEqual(1, '1');
  // AssertionError: 1 != '1'

If the values are equal, an AssertionError is thrown with a message property set equal to the value of the message parameter. If the message parameter is undefined, a default error message is assigned.

assert.notStrictEqual(actual, expected[, message])#

  • actual <any>
  • expected <any>
  • message <any>

Tests strict inequality as determined by the strict not equal operator ( !== ).

const assert = require('assert');

assert.notStrictEqual(1, 2);
  // OK

assert.notStrictEqual(1, 1);
  // AssertionError: 1 !== 1

assert.notStrictEqual(1, '1');
  // OK

If the values are strictly equal, an AssertionError is thrown with a message property set equal to the value of the message parameter. If the message parameter is undefined, a default error message is assigned.

assert.ok(value[, message])#

  • value <any>
  • message <any>

Tests if value is truthy. It is equivalent to assert.equal(!!value, true, message).

If value is not truthy, an AssertionError is thrown with a message property set equal to the value of the message parameter. If the message parameter is undefined, a default error message is assigned.

const assert = require('assert');

assert.ok(true);  // OK
assert.ok(1);     // OK
assert.ok(false);
  // throws "AssertionError: false == true"
assert.ok(0);
  // throws "AssertionError: 0 == true"
assert.ok(false, 'it\'s false');
  // throws "AssertionError: it's false"

assert.strictEqual(actual, expected[, message])#

  • actual <any>
  • expected <any>
  • message <any>

Tests strict equality as determined by the strict equality operator ( === ).

const assert = require('assert');

assert.strictEqual(1, 2);
  // AssertionError: 1 === 2

assert.strictEqual(1, 1);
  // OK

assert.strictEqual(1, '1');
  // AssertionError: 1 === '1'

If the values are not strictly equal, an AssertionError is thrown with a message property set equal to the value of the message parameter. If the message parameter is undefined, a default error message is assigned.

assert.throws(block[, error][, message])#

Expects the function block to throw an error.

If specified, error can be a constructor, RegExp, or validation function.

If specified, message will be the message provided by the AssertionError if the block fails to throw.

Validate instanceof using constructor:

assert.throws(
  () => {
    throw new Error('Wrong value');
  },
  Error
);

Validate error message using RegExp:

assert.throws(
  () => {
    throw new Error('Wrong value');
  },
  /value/
);

Custom error validation:

assert.throws(
  () => {
    throw new Error('Wrong value');
  },
  function(err) {
    if ( (err instanceof Error) && /value/.test(err) ) {
      return true;
    }
  },
  'unexpected error'
);

Note that error can not be a string. If a string is provided as the second argument, then error is assumed to be omitted and the string will be used for message instead. This can lead to easy-to-miss mistakes:

// THIS IS A MISTAKE! DO NOT DO THIS!
assert.throws(myFunction, 'missing foo', 'did not throw with expected message');

// Do this instead.
assert.throws(myFunction, /missing foo/, 'did not throw with expected message');

Buffer#

Stability: 2 - Stable

Prior to the introduction of TypedArray in ECMAScript 2015 (ES6), the JavaScript language had no mechanism for reading or manipulating streams of binary data. The Buffer class was introduced as part of the Node.js API to make it possible to interact with octet streams in the context of things like TCP streams and file system operations.

Now that TypedArray has been added in ES6, the Buffer class implements the Uint8Array API in a manner that is more optimized and suitable for Node.js' use cases.

Instances of the Buffer class are similar to arrays of integers but correspond to fixed-sized, raw memory allocations outside the V8 heap. The size of the Buffer is established when it is created and cannot be resized.

The Buffer class is a global within Node.js, making it unlikely that one would need to ever use require('buffer').Buffer.

const buf1 = new Buffer(10);
  // creates a buffer of length 10
  // This is the same as Buffer.allocUnsafe(10), and the returned
  // Buffer instance might contain old data that needs to be
  // overwritten using either fill() or write().

const buf2 = new Buffer([1,2,3]);
  // creates a buffer containing [01, 02, 03]
  // This is the same as Buffer.from([1,2,3]).

const buf3 = new Buffer('test');
  // creates a buffer containing ASCII bytes [74, 65, 73, 74]
  // This is the same as Buffer.from('test').

const buf4 = new Buffer('tést', 'utf8');
  // creates a buffer containing UTF8 bytes [74, c3, a9, 73, 74]
  // This is the same as Buffer.from('tést', 'utf8').

const buf5 = Buffer.alloc(10);
  // Creates a zero-filled Buffer of length 10.

const buf6 = Buffer.alloc(10, 1);
  // Creates a Buffer of length 10, filled with 0x01.

const buf7 = Buffer.allocUnsafe(10);
  // Creates an uninitialized buffer of length 10.
  // This is faster than calling Buffer.alloc() but the returned
  // Buffer instance might contain old data that needs to be
  // overwritten using either fill() or write().

const buf8 = Buffer.from([1,2,3]);
  // Creates a Buffer containing [01, 02, 03].

const buf9 = Buffer.from('test');
  // Creates a Buffer containing ASCII bytes [74, 65, 73, 74].

const buf10 = Buffer.from('tést', 'utf8');
  // Creates a Buffer containing UTF8 bytes [74, c3, a9, 73, 74].

Buffer.from(), Buffer.alloc(), and Buffer.allocUnsafe()#

Historically, Buffer instances have been created using the Buffer constructor function, which allocates the returned Buffer differently based on what arguments are provided:

  • Passing a number as the first argument to Buffer() (e.g. new Buffer(10)), allocates a new Buffer object of the specified size. The memory allocated for such Buffer instances is not initialized and can contain sensitive data. Such Buffer objects must be initialized manually by using either buf.fill(0) or by writing to the Buffer completely. While this behavior is intentional to improve performance, development experience has demonstrated that a more explicit distinction is required between creating a fast-but-uninitialized Buffer versus creating a slower-but-safer Buffer.
  • Passing a string, array, or Buffer as the first argument copies the passed object's data into the Buffer.
  • Passing an ArrayBuffer returns a Buffer that shares allocated memory with the given ArrayBuffer.

Because the behavior of new Buffer() changes significantly based on the type of value passed as the first argument, applications that do not properly validate the input arguments passed to new Buffer(), or that fail to appropriately initialize newly allocated Buffer content, can inadvertently introduce security and reliability issues into their code.

To make the creation of Buffer objects more reliable and less error prone, new Buffer.from(), Buffer.alloc(), and Buffer.allocUnsafe() methods have been introduced as an alternative means of creating Buffer instances.

Developers should migrate all existing uses of the new Buffer() constructors to one of these new APIs.

Buffer instances returned by Buffer.allocUnsafe(size) may be allocated off a shared internal memory pool if size is less than or equal to half Buffer.poolSize. Instances returned by Buffer.allocUnsafeSlow(size) never use the shared internal memory pool.

What makes Buffer.allocUnsafe(size) and Buffer.allocUnsafeSlow(size) "unsafe"?#

When calling Buffer.allocUnsafe() (and Buffer.allocUnsafeSlow()), the segment of allocated memory is uninitialized (it is not zeroed-out). While this design makes the allocation of memory quite fast, the allocated segment of memory might contain old data that is potentially sensitive. Using a Buffer created by Buffer.allocUnsafe() without completely overwriting the memory can allow this old data to be leaked when the Buffer memory is read.

While there are clear performance advantages to using Buffer.allocUnsafe(), extra care must be taken in order to avoid introducing security vulnerabilities into an application.
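
As a sketch of the safe usage pattern, either zero-fill the allocation or overwrite it completely before handing the Buffer to anything else:

const buf = Buffer.allocUnsafe(10);
console.log(buf);
  // prints whatever bytes happened to be in memory,
  // e.g. <Buffer 78 e0 82 02 01 00 00 00 00 00> (varies every time)

buf.fill(0);
console.log(buf);
  // <Buffer 00 00 00 00 00 00 00 00 00 00>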

Buffers and Character Encodings#

Buffers are commonly used to represent sequences of encoded characters such as UTF8, UCS2, Base64 or even Hex-encoded data. It is possible to convert back and forth between Buffers and ordinary JavaScript string objects by using an explicit encoding method.

const buf = new Buffer('hello world', 'ascii');
console.log(buf.toString('hex'));
  // prints: 68656c6c6f20776f726c64
console.log(buf.toString('base64'));
  // prints: aGVsbG8gd29ybGQ=

The character encodings currently supported by Node.js include:

  • 'ascii' - for 7-bit ASCII data only. This encoding method is very fast and will strip the high bit if set.

  • 'utf8' - Multibyte encoded Unicode characters. Many web pages and other document formats use UTF-8.

  • 'utf16le' - 2 or 4 bytes, little-endian encoded Unicode characters. Surrogate pairs (U+10000 to U+10FFFF) are supported.

  • 'ucs2' - Alias of 'utf16le'.

  • 'base64' - Base64 string encoding. When creating a buffer from a string, this encoding will also correctly accept "URL and Filename Safe Alphabet" as specified in RFC 4648, Section 5.

  • 'binary' - A way of encoding the buffer into a one-byte (latin-1) encoded string. The string 'latin-1' is not supported. Instead, pass 'binary' to use 'latin-1' encoding.

  • 'hex' - Encode each byte as two hexadecimal characters.
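
Complementing the example above, data can also be decoded back from any of these encodings. A brief sketch:

const buf = Buffer.from('hello world', 'utf8');

console.log(buf.toString('hex'));
  // prints: 68656c6c6f20776f726c64
console.log(Buffer.from('68656c6c6f20776f726c64', 'hex').toString('utf8'));
  // prints: hello world
console.log(Buffer.from('aGVsbG8gd29ybGQ=', 'base64').toString('utf8'));
  // prints: hello world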

Buffers and TypedArray#

Buffers are also Uint8Array TypedArray instances. However, there are subtle incompatibilities with the TypedArray specification in ECMAScript 2015. For instance, while ArrayBuffer#slice() creates a copy of the slice, the implementation of Buffer#slice() creates a view over the existing Buffer without copying, making Buffer#slice() far more efficient.
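
A short sketch of the view behavior of Buffer#slice():

const buf = Buffer.from('buffer');
const slice = buf.slice(0, 3);

console.log(slice.toString());
  // prints: buf

// Because the slice is a view, a change to the parent Buffer is visible
// through the slice (no memory is copied):
buf[0] = 0x42; // 'B'
console.log(slice.toString());
  // prints: Buf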

It is also possible to create new TypedArray instances from a Buffer with the following caveats:

  1. The Buffer instance's memory is copied to the TypedArray, not shared.

  2. The Buffer's memory is interpreted as an array of distinct elements, and not as a byte array of the target type. That is, new Uint32Array(new Buffer([1,2,3,4])) creates a 4-element Uint32Array with elements [1,2,3,4], not a Uint32Array with a single element [0x1020304] or [0x4030201]. Both caveats are illustrated in the sketch below.
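
A brief sketch:

const buf = new Buffer([1, 2, 3, 4]);
const arr = new Uint32Array(buf);

console.log(arr.length);
  // prints: 4 (one element per byte, not a single packed 32-bit value)

buf[0] = 5;
console.log(arr[0]);
  // prints: 1 (the memory was copied, so the TypedArray is unaffected)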

It is possible to create a new Buffer that shares the same allocated memory as a TypedArray instance by using the TypedArray object's .buffer property:

const arr = new Uint16Array(2);
arr[0] = 5000;
arr[1] = 4000;

const buf1 = new Buffer(arr); // copies the buffer
const buf2 = new Buffer(arr.buffer); // shares the memory with arr;

console.log(buf1);
  // Prints: <Buffer 88 a0>, copied buffer has only two elements
console.log(buf2);
  // Prints: <Buffer 88 13 a0 0f>

arr[1] = 6000;
console.log(buf1);
  // Prints: <Buffer 88 a0>
console.log(buf2);
  // Prints: <Buffer 88 13 70 17>

Note that when creating a Buffer using a TypedArray's .buffer, it is not currently possible to use only a portion of the underlying ArrayBuffer. To create a Buffer that uses only a part of the ArrayBuffer, use the buf.slice() function after the Buffer is created:

const arr = new Uint16Array(20);
const buf = new Buffer(arr.buffer).slice(0, 16);
console.log(buf.length);
  // Prints: 16

Buffer.from() and TypedArray.from() (e.g. Uint8Array.from()) have different signatures and implementations. Specifically, the TypedArray variants accept a second argument that is a mapping function that is invoked on every element of the typed array:

  • TypedArray.from(source[, mapFn[, thisArg]])

The Buffer.from() method, however, does not support the use of a mapping function.
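
For example, a brief sketch: the mapping has to be applied to the source array before the call (or to the Buffer afterwards):

// TypedArray.from() accepts a mapping function as its second argument:
const uint8 = Uint8Array.from([1, 2, 3], (x) => x * 2);
console.log(uint8[0], uint8[1], uint8[2]);
  // prints: 2 4 6

// Buffer.from() has no such parameter; map the source array first instead:
const buf = Buffer.from([1, 2, 3].map((x) => x * 2));
console.log(buf);
  // <Buffer 02 04 06>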

Buffers and ES6 iteration#

Buffers can be iterated over using the ECMAScript 2015 (ES6) for..of syntax:

const buf = new Buffer([1, 2, 3]);

for (var b of buf)
  console.log(b)

// Prints:
//   1
//   2
//   3

Additionally, the buf.values(), buf.keys(), and buf.entries() methods can be used to create iterators.
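
For example, a short sketch using buf.entries():

const buf = Buffer.from('abc');

for (var pair of buf.entries())
  console.log(pair);

// Prints:
//   [ 0, 97 ]
//   [ 1, 98 ]
//   [ 2, 99 ]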

The --zero-fill-buffers command line option#

Node.js can be started using the --zero-fill-buffers command line option to force all newly allocated Buffer and SlowBuffer instances created using either new Buffer(size) or new SlowBuffer(size) to be automatically zero-filled upon creation. Use of this flag changes the default behavior of these methods and can have a significant impact on performance. Use of the --zero-fill-buffers option is recommended only when absolutely necessary to enforce that newly allocated Buffer instances cannot contain potentially sensitive data.

$ node --zero-fill-buffers
> Buffer(5);
<Buffer 00 00 00 00 00>

Class: Buffer#

The Buffer class is a global type for dealing with binary data directly. It can be constructed in a variety of ways.

new Buffer(array)#

Allocates a new Buffer using an array of octets.

const buf = new Buffer([0x62,0x75,0x66,0x66,0x65,0x72]);
  // creates a new Buffer containing ASCII bytes
  // ['b','u','f','f','e','r']

new Buffer(buffer)#

Copies the passed buffer data onto a new Buffer instance.

const buf1 = new Buffer('buffer');
const buf2 = new Buffer(buf1);

buf1[0] = 0x61;
console.log(buf1.toString());
  // 'auffer'
console.log(buf2.toString());
  // 'buffer' (copy is not changed)

new Buffer(arrayBuffer)#

  • arrayBuffer - The .buffer property of a TypedArray or a new ArrayBuffer()

When passed a reference to the .buffer property of a TypedArray instance, the newly created Buffer will share the same allocated memory as the TypedArray.

const arr = new Uint16Array(2);
arr[0] = 5000;
arr[1] = 4000;

const buf = new Buffer(arr.buffer); // shares the memory with arr;

console.log(buf);
  // Prints: <Buffer 88 13 a0 0f>

// changing the TypedArray changes the Buffer also
arr[1] = 6000;

console.log(buf);
  // Prints: <Buffer 88 13 70 17>

new Buffer(size)#

Allocates a new Buffer of size bytes. The size must be less than or equal to the value of require('buffer').kMaxLength (on 64-bit architectures, kMaxLength is (2^31)-1). Otherwise, a RangeError is thrown. If a size less than 0 is specified, a zero-length Buffer will be created.

Unlike ArrayBuffers, the underlying memory for Buffer instances created in this way is not initialized. The contents of a newly created Buffer are unknown and could contain sensitive data. Use buf.fill(0) to initialize a Buffer to zeroes.

const buf = new Buffer(5);
console.log(buf);
  // <Buffer 78 e0 82 02 01>
  // (octets will be different, every time)
buf.fill(0);
console.log(buf);
  // <Buffer 00 00 00 00 00>

new Buffer(str[, encoding])#

Creates a new Buffer containing the given JavaScript string str. If provided, the encoding parameter identifies the string's character encoding.

const buf1 = new Buffer('this is a tést');
console.log(buf1.toString());
  // prints: this is a tést
console.log(buf1.toString('ascii'));
  // prints: this is a tC)st

const buf2 = new Buffer('7468697320697320612074c3a97374', 'hex');
console.log(buf2.toString());
  // prints: this is a tést

Class Method: Buffer.alloc(size[, fill[, encoding]])#

Allocates a new Buffer of size bytes. If fill is undefined, the Buffer will be zero-filled.

const buf = Buffer.alloc(5);
console.log(buf);
  // <Buffer 00 00 00 00 00>

The size must be less than or equal to the value of require('buffer').kMaxLength (on 64-bit architectures, kMaxLength is (2^31)-1). Otherwise, a RangeError is thrown. If a size less than 0 is specified, a zero-length Buffer will be created.

If fill is specified, the allocated Buffer will be initialized by calling buf.fill(fill). See buf.fill() for more information.

const buf = Buffer.alloc(5, 'a');
console.log(buf);
  // <Buffer 61 61 61 61 61>

If both fill and encoding are specified, the allocated Buffer will be initialized by calling buf.fill(fill, encoding). For example:

const buf = Buffer.alloc(11, 'aGVsbG8gd29ybGQ=', 'base64');
console.log(buf);
  // <Buffer 68 65 6c 6c 6f 20 77 6f 72 6c 64>

Calling Buffer.alloc(size) can be significantly slower than the alternative Buffer.allocUnsafe(size) but ensures that the newly created Buffer instance contents will never contain sensitive data.

A TypeError will be thrown if size is not a number.

Class Method: Buffer.allocUnsafe(size)#

Allocates a new non-zero-filled Buffer of size bytes. The size must be less than or equal to the value of require('buffer').kMaxLength (on 64-bit architectures, kMaxLength is (2^31)-1). Otherwise, a RangeError is thrown. If a size less than 0 is specified, a zero-length Buffer will be created.

The underlying memory for Buffer instances created in this way is not initialized. The contents of the newly created Buffer are unknown and may contain sensitive data. Use buf.fill(0) to initialize such Buffer instances to zeroes.

const buf = Buffer.allocUnsafe(5);
console.log(buf);
  // <Buffer 78 e0 82 02 01>
  // (octets will be different, every time)
buf.fill(0);
console.log(buf);
  // <Buffer 00 00 00 00 00>

A TypeError will be thrown if size is not a number.

Note that the Buffer module pre-allocates an internal Buffer instance of size Buffer.poolSize that is used as a pool for the fast allocation of new Buffer instances created using Buffer.allocUnsafe(size) (and the new Buffer(size) constructor) only when size is less than or equal to Buffer.poolSize >> 1 (floor of Buffer.poolSize divided by two). The default value of Buffer.poolSize is 8192 but can be modified.

Use of this pre-allocated internal memory pool is a key difference between calling Buffer.alloc(size, fill) vs. Buffer.allocUnsafe(size).fill(fill). Specifically, Buffer.alloc(size, fill) will never use the internal Buffer pool, while Buffer.allocUnsafe(size).fill(fill) will use the internal Buffer pool if size is less than or equal to half Buffer.poolSize. The difference is subtle but can be important when an application requires the additional performance that Buffer.allocUnsafe(size) provides.
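
For example, the following minimal sketch contrasts the two approaches (assuming the default Buffer.poolSize of 8192): both calls produce an identical zero-filled Buffer of 4096 bytes, but only the Buffer.allocUnsafe() allocation may be sliced from the internal pool:

const pooled = Buffer.allocUnsafe(4096).fill(0);  // may be sliced from the pool
const unpooled = Buffer.alloc(4096, 0);           // never uses the pool

console.log(pooled.equals(unpooled));
  // Prints: true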

Class Method: Buffer.allocUnsafeSlow(size)#

Allocates a new non-zero-filled and non-pooled Buffer of size bytes. The size must be less than or equal to the value of require('buffer').kMaxLength (on 64-bit architectures, kMaxLength is (2^31)-1). Otherwise, a RangeError is thrown. If a size less than 0 is specified, a zero-length Buffer will be created.

The underlying memory for Buffer instances created in this way is not initialized. The contents of the newly created Buffer are unknown and may contain sensitive data. Use buf.fill(0) to initialize such Buffer instances to zeroes.

When using Buffer.allocUnsafe() to allocate new Buffer instances, allocations under 4KB are, by default, sliced from a single pre-allocated Buffer. This allows applications to avoid the garbage collection overhead of creating many individually allocated Buffers. This approach improves both performance and memory usage by eliminating the need to track and clean up as many Persistent objects.

However, in the case where a developer may need to retain a small chunk of memory from a pool for an indeterminate amount of time, it may be appropriate to create an un-pooled Buffer instance using Buffer.allocUnsafeSlow() then copy out the relevant bits.

// need to keep around a few small chunks of memory
const store = [];

socket.on('readable', () => {
  const data = socket.read();
  // allocate for retained data
  const sb = Buffer.allocUnsafeSlow(10);
  // copy the data into the new allocation
  data.copy(sb, 0, 0, 10);
  store.push(sb);
});

Buffer.allocUnsafeSlow() should be used only as a last resort, after a developer has observed undue memory retention in their applications.

A TypeError will be thrown if size is not a number.

Class Method: Buffer.byteLength(string[, encoding])#

Returns the actual byte length of a string. This is not the same as String.prototype.length since that returns the number of characters in a string.

Note that for 'base64' and 'hex', this function assumes valid input. For strings that contain non-Base64/Hex-encoded data (e.g. whitespace), the return value might be greater than the length of a Buffer created from the string.

Example:

const str = '\u00bd + \u00bc = \u00be';

console.log(`${str}: ${str.length} characters, ` +
            `${Buffer.byteLength(str, 'utf8')} bytes`);

// ½ + ¼ = ¾: 9 characters, 12 bytes

Class Method: Buffer.compare(buf1, buf2)#

Compares buf1 to buf2, typically for the purpose of sorting arrays of Buffers. This is equivalent to calling buf1.compare(buf2).

const arr = [Buffer('1234'), Buffer('0123')];
arr.sort(Buffer.compare);

Class Method: Buffer.concat(list[, totalLength])#

  • list <Array> List of Buffer objects to concat
  • totalLength <Number> Total length of the Buffers in the list when concatenated
  • Returns: <Buffer>

Returns a new Buffer which is the result of concatenating all the Buffers in the list together.

If the list has no items, or if the totalLength is 0, then a new zero-length Buffer is returned.

If totalLength is not provided, it is calculated from the Buffers in the list. This, however, adds an additional loop to the function, so it is faster to provide the length explicitly.

Example: build a single Buffer from a list of three Buffers:

const buf1 = new Buffer(10).fill(0);
const buf2 = new Buffer(14).fill(0);
const buf3 = new Buffer(18).fill(0);
const totalLength = buf1.length + buf2.length + buf3.length;

console.log(totalLength);
const bufA = Buffer.concat([buf1, buf2, buf3], totalLength);
console.log(bufA);
console.log(bufA.length);

// 42
// <Buffer 00 00 00 00 ...>
// 42

Class Method: Buffer.from(array)#

Allocates a new Buffer using an array of octets.

const buf = Buffer.from([0x62,0x75,0x66,0x66,0x65,0x72]);
  // creates a new Buffer containing ASCII bytes
  // ['b','u','f','f','e','r']

A TypeError will be thrown if array is not an Array.

Class Method: Buffer.from(arrayBuffer)#

  • arrayBuffer <ArrayBuffer> The .buffer property of a TypedArray or a new ArrayBuffer()

When passed a reference to the .buffer property of a TypedArray instance, the newly created Buffer will share the same allocated memory as the TypedArray.

const arr = new Uint16Array(2);
arr[0] = 5000;
arr[1] = 4000;

const buf = Buffer.from(arr.buffer); // shares the memory with arr;

console.log(buf);
  // Prints: <Buffer 88 13 a0 0f>

// changing the TypedArray changes the Buffer also
arr[1] = 6000;

console.log(buf);
  // Prints: <Buffer 88 13 70 17>

A TypeError will be thrown if arrayBuffer is not an ArrayBuffer.

Class Method: Buffer.from(buffer)#

Copies the passed buffer data onto a new Buffer instance.

const buf1 = Buffer.from('buffer');
const buf2 = Buffer.from(buf1);

buf1[0] = 0x61;
console.log(buf1.toString());
  // 'auffer'
console.log(buf2.toString());
  // 'buffer' (copy is not changed)

A TypeError will be thrown if buffer is not a Buffer.

Class Method: Buffer.from(str[, encoding])#

  • str <String> String to encode.
  • encoding <String> Encoding to use, Default: 'utf8'

Creates a new Buffer containing the given JavaScript string str. If provided, the encoding parameter identifies the character encoding. If not provided, encoding defaults to 'utf8'.

const buf1 = Buffer.from('this is a tést');
console.log(buf1.toString());
  // prints: this is a tést
console.log(buf1.toString('ascii'));
  // prints: this is a tC)st

const buf2 = Buffer.from('7468697320697320612074c3a97374', 'hex');
console.log(buf2.toString());
  // prints: this is a tést

A TypeError will be thrown if str is not a string.

Class Method: Buffer.isBuffer(obj)#

Returns true if obj is a Buffer, or false otherwise.
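
For example:

Buffer.isBuffer(new Buffer(4));
  // Returns: true
Buffer.isBuffer('not a buffer');
  // Returns: false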

Class Method: Buffer.isEncoding(encoding)#

Returns true if the encoding is a valid encoding argument, or false otherwise.
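
For example:

Buffer.isEncoding('utf8');
  // Returns: true
Buffer.isEncoding('hex');
  // Returns: true
Buffer.isEncoding('utf/8');
  // Returns: false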

buf[index]#

The index operator [index] can be used to get and set the octet at position index in the Buffer. The values refer to individual bytes, so the legal value range is between 0x00 and 0xFF (hex) or 0 and 255 (decimal).

Example: copy an ASCII string into a Buffer, one byte at a time:

const str = "Node.js";
const buf = new Buffer(str.length);

for (var i = 0; i < str.length ; i++) {
  buf[i] = str.charCodeAt(i);
}

console.log(buf.toString('ascii'));
  // Prints: Node.js

buf.compare(otherBuffer)#

Compares two Buffer instances and returns a number indicating whether buf comes before, after, or is the same as the otherBuffer in sort order. Comparison is based on the actual sequence of bytes in each Buffer.

  • 0 is returned if otherBuffer is the same as buf
  • 1 is returned if otherBuffer should come before buf when sorted.
  • -1 is returned if otherBuffer should come after buf when sorted.
const buf1 = new Buffer('ABC');
const buf2 = new Buffer('BCD');
const buf3 = new Buffer('ABCD');

console.log(buf1.compare(buf1));
  // Prints: 0
console.log(buf1.compare(buf2));
  // Prints: -1
console.log(buf1.compare(buf3));
  // Prints: -1
console.log(buf2.compare(buf1));
  // Prints: 1
console.log(buf2.compare(buf3));
  // Prints: 1

[buf1, buf2, buf3].sort(Buffer.compare);
  // produces sort order [buf1, buf3, buf2]

buf.copy(targetBuffer[, targetStart[, sourceStart[, sourceEnd]]])#

Copies data from a region of this Buffer to a region in the target Buffer even if the target memory region overlaps with the source.

Example: build two Buffers, then copy buf1 from byte 16 through byte 19 into buf2, starting at the 8th byte in buf2.

const buf1 = new Buffer(26);
const buf2 = new Buffer(26).fill('!');

for (var i = 0 ; i < 26 ; i++) {
  buf1[i] = i + 97; // 97 is ASCII a
}

buf1.copy(buf2, 8, 16, 20);
console.log(buf2.toString('ascii', 0, 25));
  // Prints: !!!!!!!!qrst!!!!!!!!!!!!!

Example: Build a single Buffer, then copy data from one region to an overlapping region in the same Buffer

const buf = new Buffer(26);

for (var i = 0 ; i < 26 ; i++) {
  buf[i] = i + 97; // 97 is ASCII a
}

buf.copy(buf, 0, 4, 10);
console.log(buf.toString());

// efghijghijklmnopqrstuvwxyz

buf.entries()#

  • Returns: <Iterator>

Creates and returns an iterator of [index, byte] pairs from the Buffer contents.

const buf = new Buffer('buffer');
for (var pair of buf.entries()) {
  console.log(pair);
}
// prints:
//   [0, 98]
//   [1, 117]
//   [2, 102]
//   [3, 102]
//   [4, 101]
//   [5, 114]

buf.equals(otherBuffer)#

Returns a boolean indicating whether this and otherBuffer have exactly the same bytes.

const buf1 = new Buffer('ABC');
const buf2 = new Buffer('414243', 'hex');
const buf3 = new Buffer('ABCD');

console.log(buf1.equals(buf2));
  // Prints: true
console.log(buf1.equals(buf3));
  // Prints: false

buf.fill(value[, offset[, end]])#

Fills the Buffer with the specified value. If the offset and end are not given it will fill the entire Buffer. The method returns a reference to the Buffer so calls can be chained.

const b = new Buffer(50).fill('h');
console.log(b.toString());
  // Prints: hhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhh

buf.indexOf(value[, byteOffset][, encoding])#

Operates similarly to Array#indexOf() in that it returns either the starting index position of value in the Buffer or -1 if the Buffer does not contain value. The value can be a String, Buffer or Number. Strings are by default interpreted as UTF8. Buffers will use the entire Buffer (to compare a partial Buffer use buf.slice()). Numbers will be interpreted as unsigned 8-bit integer values between 0 and 255.

const buf = new Buffer('this is a buffer');

buf.indexOf('this');
  // returns 0
buf.indexOf('is');
  // returns 2
buf.indexOf(new Buffer('a buffer'));
  // returns 8
buf.indexOf(97); // ascii for 'a'
  // returns 8
buf.indexOf(new Buffer('a buffer example'));
  // returns -1
buf.indexOf(new Buffer('a buffer example').slice(0,8));
  // returns 8

const utf16Buffer = new Buffer('\u039a\u0391\u03a3\u03a3\u0395', 'ucs2');

utf16Buffer.indexOf('\u03a3',  0, 'ucs2');
  // returns 4
utf16Buffer.indexOf('\u03a3', -4, 'ucs2');
  // returns 6

buf.keys()#

  • Returns: <Iterator>

Creates and returns an iterator of Buffer keys (indices).

const buf = new Buffer('buffer');
for (var key of buf.keys()) {
  console.log(key);
}
// prints:
//   0
//   1
//   2
//   3
//   4
//   5

buf.length#

Returns the amount of memory allocated for the Buffer in bytes. Note that this does not necessarily reflect the amount of usable data within the Buffer. For instance, in the example below, a Buffer with 1234 bytes is allocated, but only 11 ASCII bytes are written.

const buf = new Buffer(1234);

console.log(buf.length);
  // Prints: 1234

buf.write('some string', 0, 'ascii');
console.log(buf.length);
  // Prints: 1234

While the length property is not immutable, changing the value of length can result in undefined and inconsistent behavior. Applications that wish to modify the length of a Buffer should therefore treat length as read-only and use buf.slice() to create a new Buffer.

var buf = new Buffer(10);
buf.write('abcdefghj', 0, 'ascii');
console.log(buf.length);
  // Prints: 10
buf = buf.slice(0,5);
console.log(buf.length);
  // Prints: 5

buf.readDoubleBE(offset[, noAssert])#

buf.readDoubleLE(offset[, noAssert])#

Reads a 64-bit double from the Buffer at the specified offset with specified endian format (readDoubleBE() returns big endian, readDoubleLE() returns little endian).

Setting noAssert to true skips validation of the offset. This allows the offset to be beyond the end of the Buffer.

const buf = new Buffer([1,2,3,4,5,6,7,8]);

buf.readDoubleBE();
  // Returns: 8.20788039913184e-304
buf.readDoubleLE();
  // Returns: 5.447603722011605e-270
buf.readDoubleLE(1);
  // throws RangeError: Index out of range

buf.readDoubleLE(1, true); // Warning: reads past the end of the buffer!
  // Segmentation fault! don't do this!

buf.readFloatBE(offset[, noAssert])#

buf.readFloatLE(offset[, noAssert])#

Reads a 32-bit float from the Buffer at the specified offset with specified endian format (readFloatBE() returns big endian, readFloatLE() returns little endian).

Setting noAssert to true skips validation of the offset. This allows the offset to be beyond the end of the Buffer.

const buf = new Buffer([1,2,3,4]);

buf.readFloatBE();
  // Returns: 2.387939260590663e-38
buf.readFloatLE();
  // Returns: 1.539989614439558e-36
buf.readFloatLE(1);
  // throws RangeError: Index out of range

buf.readFloatLE(1, true); // Warning: reads past the end of the buffer!
  // Segmentation fault! don't do this!

buf.readInt8(offset[, noAssert])#

Reads a signed 8-bit integer from the Buffer at the specified offset.

Setting noAssert to true skips validation of the offset. This allows the offset to be beyond the end of the Buffer.

Integers read from the Buffer are interpreted as two's complement signed values.

const buf = new Buffer([1,-2,3,4]);

buf.readInt8(0);
  // returns 1
buf.readInt8(1);
  // returns -2

buf.readInt16BE(offset[, noAssert])#

buf.readInt16LE(offset[, noAssert])#

Reads a signed 16-bit integer from the Buffer at the specified offset with the specified endian format (readInt16BE() returns big endian, readInt16LE() returns little endian).

Setting noAssert to true skips validation of the offset. This allows the offset to be beyond the end of the Buffer.

Integers read from the Buffer are interpreted as two's complement signed values.

const buf = new Buffer([1,-2,3,4]);

buf.readInt16BE();
  // returns 510
buf.readInt16LE(1);
  // returns 1022

buf.readInt32BE(offset[, noAssert])#

buf.readInt32LE(offset[, noAssert])#

Reads a signed 32-bit integer from the Buffer at the specified offset with the specified endian format (readInt32BE() returns big endian, readInt32LE() returns little endian).

Setting noAssert to true skips validation of the offset. This allows the offset to be beyond the end of the Buffer.

Integers read from the Buffer are interpreted as two's complement signed values.

const buf = new Buffer([1,-2,3,4]);

buf.readInt32BE();
  // returns 33424132
buf.readInt32LE();
  // returns 67370497
buf.readInt32LE(1);
  // throws RangeError: Index out of range

buf.readIntBE(offset, byteLength[, noAssert])#

buf.readIntLE(offset, byteLength[, noAssert])#

Reads byteLength number of bytes from the Buffer at the specified offset and interprets the result as a two's complement signed value. Supports up to 48 bits of accuracy. For example:

const buf = new Buffer(6);
buf.writeUInt16LE(0x90ab, 0);
buf.writeUInt32LE(0x12345678, 2);
buf.readIntLE(0, 6).toString(16);  // Specify 6 bytes (48 bits)
// Returns: '1234567890ab'

buf.readIntBE(0, 6).toString(16);
// Returns: '-546f87a9cbee'

Setting noAssert to true skips validation of the offset. This allows the offset to be beyond the end of the Buffer.

buf.readUInt8(offset[, noAssert])#

Reads an unsigned 8-bit integer from the Buffer at the specified offset.

Setting noAssert to true skips validation of the offset. This allows the offset to be beyond the end of the Buffer.

const buf = new Buffer([1,-2,3,4]);

buf.readUInt8(0);
  // returns 1
buf.readUInt8(1);
  // returns 254

buf.readUInt16BE(offset[, noAssert])#

buf.readUInt16LE(offset[, noAssert])#

Reads an unsigned 16-bit integer from the Buffer at the specified offset with specified endian format (readUInt16BE() returns big endian, readUInt16LE() returns little endian).

Setting noAssert to true skips validation of the offset. This allows the offset to be beyond the end of the Buffer.

Example:

const buf = new Buffer([0x3, 0x4, 0x23, 0x42]);

buf.readUInt16BE(0);
  // Returns: 0x0304
buf.readUInt16LE(0);
  // Returns: 0x0403
buf.readUInt16BE(1);
  // Returns: 0x0423
buf.readUInt16LE(1);
  // Returns: 0x2304
buf.readUInt16BE(2);
  // Returns: 0x2342
buf.readUInt16LE(2);
  // Returns: 0x4223

buf.readUInt32BE(offset[, noAssert])#

buf.readUInt32LE(offset[, noAssert])#

Reads an unsigned 32-bit integer from the Buffer at the specified offset with specified endian format (readUInt32BE() returns big endian, readUInt32LE() returns little endian).

Setting noAssert to true skips validation of the offset. This allows the offset to be beyond the end of the Buffer.

Example:

const buf = new Buffer([0x3, 0x4, 0x23, 0x42]);

buf.readUInt32BE(0);
  // Returns: 0x03042342
console.log(buf.readUInt32LE(0));
  // Returns: 0x42230403

buf.readUIntBE(offset, byteLength[, noAssert])#

buf.readUIntLE(offset, byteLength[, noAssert])#

Reads byteLength number of bytes from the Buffer at the specified offset and interprets the result as an unsigned integer. Supports up to 48 bits of accuracy. For example:

const buf = new Buffer(6);
buf.writeUInt16LE(0x90ab, 0);
buf.writeUInt32LE(0x12345678, 2);
buf.readUIntLE(0, 6).toString(16);  // Specify 6 bytes (48 bits)
// Returns: '1234567890ab'

buf.readUIntBE(0, 6).toString(16);
// Returns: 'ab9078563412'

Setting noAssert to true skips validation of the offset. This allows the offset to be beyond the end of the Buffer.

buf.slice([start[, end]])#

Returns a new Buffer that references the same memory as the original, but offset and cropped by the start and end indices.

Note that modifying the new Buffer slice will modify the memory in the original Buffer because the allocated memory of the two objects overlap.

Example: build a Buffer with the ASCII alphabet, take a slice, then modify one byte from the original Buffer.

const buf1 = new Buffer(26);

for (var i = 0 ; i < 26 ; i++) {
  buf1[i] = i + 97; // 97 is ASCII a
}

const buf2 = buf1.slice(0, 3);
buf2.toString('ascii', 0, buf2.length);
  // Returns: 'abc'
buf1[0] = 33;
buf2.toString('ascii', 0, buf2.length);
  // Returns : '!bc'

Specifying negative indexes causes the slice to be generated relative to the end of the Buffer rather than the beginning.

const buf = new Buffer('buffer');

buf.slice(-6, -1).toString();
  // Returns 'buffe', equivalent to buf.slice(0, 5)
buf.slice(-6, -2).toString();
  // Returns 'buff', equivalent to buf.slice(0, 4)
buf.slice(-5, -2).toString();
  // Returns 'uff', equivalent to buf.slice(1, 4)

buf.toString([encoding[, start[, end]]])#

Decodes and returns a string from the Buffer data using the specified character set encoding.

const buf = new Buffer(26);
for (var i = 0 ; i < 26 ; i++) {
  buf[i] = i + 97; // 97 is ASCII a
}
buf.toString('ascii');
  // Returns: 'abcdefghijklmnopqrstuvwxyz'
buf.toString('ascii',0,5);
  // Returns: 'abcde'
buf.toString('utf8',0,5);
  // Returns: 'abcde'
buf.toString(undefined,0,5);
  // Returns: 'abcde', encoding defaults to 'utf8'

buf.toJSON()#

Returns a JSON representation of the Buffer instance. JSON.stringify() implicitly calls this function when stringifying a Buffer instance.

Example:

const buf = new Buffer('test');
const json = JSON.stringify(buf);

console.log(json);
// Prints: '{"type":"Buffer","data":[116,101,115,116]}'

const copy = JSON.parse(json, (key, value) => {
    return value && value.type === 'Buffer'
      ? new Buffer(value.data)
      : value;
  });

console.log(copy.toString());
// Prints: 'test'

buf.values()#

  • Returns: <Iterator>

Creates and returns an iterator for Buffer values (bytes). This function is called automatically when the Buffer is used in a for..of statement.

const buf = new Buffer('buffer');
for (var value of buf.values()) {
  console.log(value);
}
// prints:
//   98
//   117
//   102
//   102
//   101
//   114

for (var value of buf) {
  console.log(value);
}
// prints:
//   98
//   117
//   102
//   102
//   101
//   114

buf.write(string[, offset[, length]][, encoding])#

Writes string to the Buffer at offset using the given encoding. The length parameter is the number of bytes to write. If the Buffer does not contain enough space to fit the entire string, only part of the string will be written; however, partially encoded characters will not be written.

const buf = new Buffer(256);
const len = buf.write('\u00bd + \u00bc = \u00be', 0);
console.log(`${len} bytes: ${buf.toString('utf8', 0, len)}`);
  // Prints: 12 bytes: ½ + ¼ = ¾

buf.writeDoubleBE(value, offset[, noAssert])#

buf.writeDoubleLE(value, offset[, noAssert])#

  • value <Number> Bytes to be written to Buffer
  • offset <Number> 0 <= offset <= buf.length - 8
  • noAssert <Boolean> Default: false
  • Returns: <Number> The offset plus the number of written bytes

Writes value to the Buffer at the specified offset with specified endian format (writeDoubleBE() writes big endian, writeDoubleLE() writes little endian). The value argument should be a valid 64-bit double. Behavior is not defined when value is anything other than a 64-bit double.

Set noAssert to true to skip validation of value and offset. This means that value may be too large for the specific function and offset may be beyond the end of the Buffer leading to the values being silently dropped. This should not be used unless you are certain of correctness.

Example:

const buf = new Buffer(8);
buf.writeDoubleBE(0xdeadbeefcafebabe, 0);

console.log(buf);
  // Prints: <Buffer 43 eb d5 b7 dd f9 5f d7>

buf.writeDoubleLE(0xdeadbeefcafebabe, 0);

console.log(buf);
  // Prints: <Buffer d7 5f f9 dd b7 d5 eb 43>

buf.writeFloatBE(value, offset[, noAssert])#

buf.writeFloatLE(value, offset[, noAssert])#

  • value <Number> Bytes to be written to Buffer
  • offset <Number> 0 <= offset <= buf.length - 4
  • noAssert <Boolean> Default: false
  • Returns: <Number> The offset plus the number of written bytes

Writes value to the Buffer at the specified offset with specified endian format (writeFloatBE() writes big endian, writeFloatLE() writes little endian). Behavior is not defined when value is anything other than a 32-bit float.

Set noAssert to true to skip validation of value and offset. This means that value may be too large for the specific function and offset may be beyond the end of the Buffer leading to the values being silently dropped. This should not be used unless you are certain of correctness.

Example:

const buf = new Buffer(4);
buf.writeFloatBE(0xcafebabe, 0);

console.log(buf);
  // Prints: <Buffer 4f 4a fe bb>

buf.writeFloatLE(0xcafebabe, 0);

console.log(buf);
  // Prints: <Buffer bb fe 4a 4f>

buf.writeInt8(value, offset[, noAssert])#

  • value <Number> Bytes to be written to Buffer
  • offset <Number> 0 <= offset <= buf.length - 1
  • noAssert <Boolean> Default: false
  • Returns: <Number> The offset plus the number of written bytes

Writes value to the Buffer at the specified offset. The value should be a valid signed 8-bit integer. Behavior is not defined when value is anything other than a signed 8-bit integer.

Set noAssert to true to skip validation of value and offset. This means that value may be too large for the specific function and offset may be beyond the end of the Buffer leading to the values being silently dropped. This should not be used unless you are certain of correctness.

The value is interpreted and written as a two's complement signed integer.

const buf = new Buffer(2);
buf.writeInt8(2, 0);
buf.writeInt8(-2, 1);
console.log(buf);
  // Prints: <Buffer 02 fe>

buf.writeInt16BE(value, offset[, noAssert])#

buf.writeInt16LE(value, offset[, noAssert])#

  • value <Number> Bytes to be written to Buffer
  • offset <Number> 0 <= offset <= buf.length - 2
  • noAssert <Boolean> Default: false
  • Returns: <Number> The offset plus the number of written bytes

Writes value to the Buffer at the specified offset with specified endian format (writeInt16BE() writes big endian, writeInt16LE() writes little endian). The value should be a valid signed 16-bit integer. Behavior is not defined when value is anything other than a signed 16-bit integer.

Set noAssert to true to skip validation of value and offset. This means that value may be too large for the specific function and offset may be beyond the end of the Buffer leading to the values being silently dropped. This should not be used unless you are certain of correctness.

The value is interpreted and written as a two's complement signed integer.

const buf = new Buffer(4);
buf.writeInt16BE(0x0102,0);
buf.writeInt16LE(0x0304,2);
console.log(buf);
  // Prints: <Buffer 01 02 04 03>

buf.writeInt32BE(value, offset[, noAssert])#

buf.writeInt32LE(value, offset[, noAssert])#

  • value <Number> Bytes to be written to Buffer
  • offset <Number> 0 <= offset <= buf.length - 4
  • noAssert <Boolean> Default: false
  • Returns: <Number> The offset plus the number of written bytes

Writes value to the Buffer at the specified offset with specified endian format (writeInt32BE() writes big endian, writeInt32LE() writes little endian). The value should be a valid signed 32-bit integer. Behavior is not defined when value is anything other than a signed 32-bit integer.

Set noAssert to true to skip validation of value and offset. This means that value may be too large for the specific function and offset may be beyond the end of the Buffer leading to the values being silently dropped. This should not be used unless you are certain of correctness.

The value is interpreted and written as a two's complement signed integer.

const buf = new Buffer(8);
buf.writeInt32BE(0x01020304,0);
buf.writeInt32LE(0x05060708,4);
console.log(buf);
  // Prints: <Buffer 01 02 03 04 08 07 06 05>

buf.writeIntBE(value, offset, byteLength[, noAssert])#

buf.writeIntLE(value, offset, byteLength[, noAssert])#

  • value <Number> Bytes to be written to Buffer
  • offset <Number> 0 <= offset <= buf.length - byteLength
  • byteLength <Number> 0 < byteLength <= 6
  • noAssert <Boolean> Default: false
  • Returns: <Number> The offset plus the number of written bytes

Writes value to the Buffer at the specified offset and byteLength. Supports up to 48 bits of accuracy. For example:

const buf1 = new Buffer(6);
buf1.writeUIntBE(0x1234567890ab, 0, 6);
console.log(buf1);
  // Prints: <Buffer 12 34 56 78 90 ab>

const buf2 = new Buffer(6);
buf2.writeUIntLE(0x1234567890ab, 0, 6);
console.log(buf2);
  // Prints: <Buffer ab 90 78 56 34 12>

Set noAssert to true to skip validation of value and offset. This means that value may be too large for the specific function and offset may be beyond the end of the Buffer leading to the values being silently dropped. This should not be used unless you are certain of correctness.

Behavior is not defined when value is anything other than an integer.

buf.writeUInt8(value, offset[, noAssert])#

  • value <Number> Bytes to be written to Buffer
  • offset <Number> 0 <= offset <= buf.length - 1
  • noAssert <Boolean> Default: false
  • Returns: <Number> The offset plus the number of written bytes

Writes value to the Buffer at the specified offset. The value should be a valid unsigned 8-bit integer. Behavior is not defined when value is anything other than an unsigned 8-bit integer.

Set noAssert to true to skip validation of value and offset. This means that value may be too large for the specific function and offset may be beyond the end of the Buffer leading to the values being silently dropped. This should not be used unless you are certain of correctness.

Example:

const buf = new Buffer(4);
buf.writeUInt8(0x3, 0);
buf.writeUInt8(0x4, 1);
buf.writeUInt8(0x23, 2);
buf.writeUInt8(0x42, 3);

console.log(buf);
  // Prints: <Buffer 03 04 23 42>

buf.writeUInt16BE(value, offset[, noAssert])#

buf.writeUInt16LE(value, offset[, noAssert])#

  • value <Number> Bytes to be written to Buffer
  • offset <Number> 0 <= offset <= buf.length - 2
  • noAssert <Boolean> Default: false
  • Returns: <Number> The offset plus the number of written bytes

Writes value to the Buffer at the specified offset with specified endian format (writeUInt16BE() writes big endian, writeUInt16LE() writes little endian). The value should be a valid unsigned 16-bit integer. Behavior is not defined when value is anything other than an unsigned 16-bit integer.

Set noAssert to true to skip validation of value and offset. This means that value may be too large for the specific function and offset may be beyond the end of the Buffer leading to the values being silently dropped. This should not be used unless you are certain of correctness.

Example:

const buf = new Buffer(4);
buf.writeUInt16BE(0xdead, 0);
buf.writeUInt16BE(0xbeef, 2);

console.log(buf);
  // Prints: <Buffer de ad be ef>

buf.writeUInt16LE(0xdead, 0);
buf.writeUInt16LE(0xbeef, 2);

console.log(buf);
  // Prints: <Buffer ad de ef be>

buf.writeUInt32BE(value, offset[, noAssert])#

buf.writeUInt32LE(value, offset[, noAssert])#

  • value <Number> Bytes to be written to Buffer
  • offset <Number> 0 <= offset <= buf.length - 4
  • noAssert <Boolean> Default: false
  • Returns: <Number> The offset plus the number of written bytes

Writes value to the Buffer at the specified offset with specified endian format (writeUInt32BE() writes big endian, writeUInt32LE() writes little endian). The value should be a valid unsigned 32-bit integer. Behavior is not defined when value is anything other than an unsigned 32-bit integer.

Set noAssert to true to skip validation of value and offset. This means that value may be too large for the specific function and offset may be beyond the end of the Buffer leading to the values being silently dropped. This should not be used unless you are certain of correctness.

Example:

const buf = new Buffer(4);
buf.writeUInt32BE(0xfeedface, 0);

console.log(buf);
  // Prints: <Buffer fe ed fa ce>

buf.writeUInt32LE(0xfeedface, 0);

console.log(buf);
  // Prints: <Buffer ce fa ed fe>

buf.writeUIntBE(value, offset, byteLength[, noAssert])#

buf.writeUIntLE(value, offset, byteLength[, noAssert])#

  • value <Number> Bytes to be written to Buffer
  • offset <Number> 0 <= offset <= buf.length - byteLength
  • byteLength <Number> 0 < byteLength <= 6
  • noAssert <Boolean> Default: false
  • Returns: <Number> The offset plus the number of written bytes

Writes value to the Buffer at the specified offset and byteLength. Supports up to 48 bits of accuracy. For example:

const buf = new Buffer(6);
buf.writeUIntBE(0x1234567890ab, 0, 6);
console.log(buf);
  // Prints: <Buffer 12 34 56 78 90 ab>

Set noAssert to true to skip validation of value and offset. This means that value may be too large for the specific function and offset may be beyond the end of the Buffer leading to the values being silently dropped. This should not be used unless you are certain of correctness.

Behavior is not defined when value is anything other than an unsigned integer.

buffer.INSPECT_MAX_BYTES#

Returns the maximum number of bytes that will be returned when buffer.inspect() is called. This can be overridden by user modules. See util.inspect() for more details on buffer.inspect() behavior.

Note that this is a property on the buffer module as returned by require('buffer'), not on the Buffer global or a Buffer instance.
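
For example, a minimal sketch that lowers the limit before inspecting a Buffer:

const buffer = require('buffer');
buffer.INSPECT_MAX_BYTES = 4;

console.log(new Buffer(10).fill(1));
  // Prints the first 4 bytes followed by '...', e.g. <Buffer 01 01 01 01 ... >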

Class: SlowBuffer#

Returns an un-pooled Buffer.

In order to avoid the garbage collection overhead of creating many individually allocated Buffers, by default allocations under 4KB are sliced from a single larger allocated object. This approach improves both performance and memory usage since v8 does not need to track and clean up as many Persistent objects.

In the case where a developer may need to retain a small chunk of memory from a pool for an indeterminate amount of time, it may be appropriate to create an un-pooled Buffer instance using SlowBuffer then copy out the relevant bits.

// need to keep around a few small chunks of memory
const store = [];

socket.on('readable', () => {
  var data = socket.read();
  // allocate for retained data
  var sb = new SlowBuffer(10);
  // copy the data into the new allocation
  data.copy(sb, 0, 0, 10);
  store.push(sb);
});

SlowBuffer should be used only as a last resort, after a developer has observed undue memory retention in their applications.

new SlowBuffer(size)#

  • size Number

Allocates a new SlowBuffer of size bytes. The size must be less than or equal to the value of require('buffer').kMaxLength (on 64-bit architectures, kMaxLength is (2^31)-1). Otherwise, a RangeError is thrown. If a size less than 0 is specified, a zero-length SlowBuffer will be created.

The underlying memory for SlowBuffer instances is not initialized. The contents of a newly created SlowBuffer are unknown and could contain sensitive data. Use buf.fill(0) to initialize a SlowBuffer to zeroes.

const SlowBuffer = require('buffer').SlowBuffer;
const buf = new SlowBuffer(5);
console.log(buf);
  // <Buffer 78 e0 82 02 01>
  // (octets will be different, every time)
buf.fill(0);
console.log(buf);
  // <Buffer 00 00 00 00 00>

Child Process#

Stability: 2 - Stable

The child_process module provides the ability to spawn child processes in a manner that is similar, but not identical, to popen(3). This capability is primarily provided by the child_process.spawn() function:

const spawn = require('child_process').spawn;
const ls = spawn('ls', ['-lh', '/usr']);

ls.stdout.on('data', (data) => {
  console.log(`stdout: ${data}`);
});

ls.stderr.on('data', (data) => {
  console.log(`stderr: ${data}`);
});

ls.on('close', (code) => {
  console.log(`child process exited with code ${code}`);
});

By default, pipes for stdin, stdout and stderr are established between the parent Node.js process and the spawned child. It is possible to stream data through these pipes in a non-blocking way. Note, however, that some programs use line-buffered I/O internally. While that does not affect Node.js, it can mean that data sent to the child process may not be immediately consumed.

The child_process.spawn() method spawns the child process asynchronously, without blocking the Node.js event loop. The child_process.spawnSync() function provides equivalent functionality in a synchronous manner that blocks the event loop until the spawned process either exits or is terminated.

For convenience, the child_process module provides a handful of synchronous and asynchronous alternatives to child_process.spawn() and child_process.spawnSync(). Note that each of these alternatives is implemented on top of child_process.spawn() or child_process.spawnSync().

  • child_process.exec(): spawns a shell and runs a command within that shell, passing the stdout and stderr to a callback function when complete.
  • child_process.execFile(): similar to child_process.exec() except that it spawns the command directly without first spawning a shell.
  • child_process.fork(): spawns a new Node.js process and invokes a specified module with an IPC communication channel established that allows sending messages between parent and child.
  • child_process.execSync(): a synchronous version of child_process.exec() that will block the Node.js event loop.
  • child_process.execFileSync(): a synchronous version of child_process.execFile() that will block the Node.js event loop.

For certain use cases, such as automating shell scripts, the synchronous counterparts may be more convenient. In many cases, however, the synchronous methods can have significant impact on performance due to stalling the event loop while spawned processes complete.

Asynchronous Process Creation#

The child_process.spawn(), child_process.fork(), child_process.exec(), and child_process.execFile() methods all follow the idiomatic asynchronous programming pattern typical of other Node.js APIs.

Each of the methods returns a ChildProcess instance. These objects implement the Node.js EventEmitter API, allowing the parent process to register listener functions that are called when certain events occur during the life cycle of the child process.

The child_process.exec() and child_process.execFile() methods additionally allow for an optional callback function to be specified that is invoked when the child process terminates.

Spawning .bat and .cmd files on Windows#

The importance of the distinction between child_process.exec() and child_process.execFile() can vary based on platform. On Unix-type operating systems (Unix, Linux, OSX) child_process.execFile() can be more efficient because it does not spawn a shell. On Windows, however, .bat and .cmd files are not executable on their own without a terminal, and therefore cannot be launched using child_process.execFile(). When running on Windows, .bat and .cmd files can be invoked using child_process.spawn() with the shell option set, with child_process.exec(), or by spawning cmd.exe and passing the .bat or .cmd file as an argument (which is what the shell option and child_process.exec() do).

// On Windows Only ...
const spawn = require('child_process').spawn;
const bat = spawn('cmd.exe', ['/c', 'my.bat']);

bat.stdout.on('data', (data) => {
  console.log(data);
});

bat.stderr.on('data', (data) => {
  console.log(data);
});

bat.on('exit', (code) => {
  console.log(`Child exited with code ${code}`);
});

// OR...
const exec = require('child_process').exec;
exec('my.bat', (err, stdout, stderr) => {
  if (err) {
    console.error(err);
    return;
  }
  console.log(stdout);
});

child_process.exec(command[, options][, callback])#

  • command <String> The command to run, with space-separated arguments
  • options <Object>
    • cwd <String> Current working directory of the child process
    • env <Object> Environment key-value pairs
    • encoding <String> (Default: 'utf8')
    • shell <String> Shell to execute the command with. (Default: '/bin/sh' on UNIX, 'cmd.exe' on Windows. The shell should understand the -c switch on UNIX or /s /c on Windows. On Windows, command line parsing should be compatible with cmd.exe.)
    • timeout <Number> (Default: 0)
    • maxBuffer <Number> largest amount of data (in bytes) allowed on stdout or stderr - if exceeded child process is killed (Default: 200*1024)
    • killSignal <String> | <Integer> (Default: 'SIGTERM')
    • uid <Number> Sets the user identity of the process. (See setuid(2).)
    • gid <Number> Sets the group identity of the process. (See setgid(2).)
  • callback <Function> called with the output when process terminates
  • Returns: <ChildProcess>

Spawns a shell then executes the command within that shell, buffering any generated output.

Note: Never pass unsanitised user input to this function. Any input containing shell metacharacters may be used to trigger arbitrary command execution.

const exec = require('child_process').exec;
exec('cat *.js bad_file | wc -l', (error, stdout, stderr) => {
  if (error) {
    console.error(`exec error: ${error}`);
    return;
  }
  console.log(`stdout: ${stdout}`);
  console.log(`stderr: ${stderr}`);
});

If a callback function is provided, it is called with the arguments (error, stdout, stderr). On success, error will be null. On error, error will be an instance of Error. The error.code property will be the exit code of the child process while error.signal will be set to the signal that terminated the process. Any exit code other than 0 is considered to be an error.

The stdout and stderr arguments passed to the callback will contain the stdout and stderr output of the child process. By default, Node.js will decode the output as UTF-8 and pass strings to the callback. The encoding option can be used to specify the character encoding used to decode the stdout and stderr output. If encoding is 'buffer', or an unrecognized character encoding, Buffer objects will be passed to the callback instead.

The options argument may be passed as the second argument to customize how the process is spawned. The default options are:

{
  encoding: 'utf8',
  timeout: 0,
  maxBuffer: 200*1024,
  killSignal: 'SIGTERM',
  cwd: null,
  env: null
}

If timeout is greater than 0, the parent will send the signal identified by the killSignal property (the default is 'SIGTERM') if the child runs longer than timeout milliseconds.

The maxBuffer option specifies the largest amount of data (in bytes) allowed on stdout or stderr - if this value is exceeded then the child process is terminated.

Note: Unlike the exec() POSIX system call, child_process.exec() does not replace the existing process and uses a shell to execute the command.

child_process.execFile(file[, args][, options][, callback])#

The child_process.execFile() function is similar to child_process.exec() except that it does not spawn a shell. Rather, the specified executable file is spawned directly as a new process making it slightly more efficient than child_process.exec().

The same options as child_process.exec() are supported. Since a shell is not spawned, behaviors such as I/O redirection and file globbing are not supported.

const execFile = require('child_process').execFile;
const child = execFile('node', ['--version'], (error, stdout, stderr) => {
  if (error) {
    throw error;
  }
  console.log(stdout);
});

The stdout and stderr arguments passed to the callback will contain the stdout and stderr output of the child process. By default, Node.js will decode the output as UTF-8 and pass strings to the callback. The encoding option can be used to specify the character encoding used to decode the stdout and stderr output. If encoding is 'buffer', or an unrecognized character encoding, Buffer objects will be passed to the callback instead.

child_process.fork(modulePath[, args][, options])#

  • modulePath <String> The module to run in the child
  • args <Array> List of string arguments
  • options <Object>
    • cwd <String> Current working directory of the child process
    • env <Object> Environment key-value pairs
    • execPath <String> Executable used to create the child process
    • execArgv <Array> List of string arguments passed to the executable (Default: process.execArgv)
    • silent <Boolean> If true, stdin, stdout, and stderr of the child will be piped to the parent, otherwise they will be inherited from the parent, see the 'pipe' and 'inherit' options for child_process.spawn()'s stdio for more details (default is false)
    • uid <Number> Sets the user identity of the process. (See setuid(2).)
    • gid <Number> Sets the group identity of the process. (See setgid(2).)
  • Returns: <ChildProcess>

The child_process.fork() method is a special case of child_process.spawn() used specifically to spawn new Node.js processes. Like child_process.spawn(), a ChildProcess object is returned. The returned ChildProcess will have an additional communication channel built-in that allows messages to be passed back and forth between the parent and child. See ChildProcess#send() for details.

It is important to keep in mind that spawned Node.js child processes are independent of the parent with the exception of the IPC communication channel that is established between the two. Each process has its own memory, with its own V8 instance. Because of the additional resource allocations required, spawning a large number of child Node.js processes is not recommended.

By default, child_process.fork() will spawn new Node.js instances using the process.execPath of the parent process. The execPath property in the options object allows for an alternative execution path to be used.

Node.js processes launched with a custom execPath will communicate with the parent process using the file descriptor (fd) identified using the environment variable NODE_CHANNEL_FD on the child process. The input and output on this fd are expected to be line-delimited JSON objects.

Note: Unlike the fork() POSIX system call, child_process.fork() does not clone the current process.
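
As a minimal sketch, the parent process below forks a hypothetical child.js module (one that calls process.send() and listens via process.on('message')) and exchanges messages with it over the IPC channel:

const fork = require('child_process').fork;

// fork a hypothetical Node.js module; an IPC channel is established automatically
const child = fork('child.js');

child.on('message', (message) => {
  console.log(`message from child: ${message}`);
});

child.send('hello from the parent');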

child_process.spawn(command[, args][, options])#

  • command <String> The command to run
  • args <Array> List of string arguments
  • options <Object>
    • cwd <String> Current working directory of the child process
    • env <Object> Environment key-value pairs
    • stdio <Array> | <String> Child's stdio configuration. (See options.stdio)
    • detached <Boolean> Prepare child to run independently of its parent process. Specific behavior depends on the platform, see options.detached)
    • uid <Number> Sets the user identity of the process. (See setuid(2).)
    • gid <Number> Sets the group identity of the process. (See setgid(2).)
    • shell <Boolean> | <String> If true, runs command inside of a shell. Uses '/bin/sh' on UNIX, and 'cmd.exe' on Windows. A different shell can be specified as a string. The shell should understand the -c switch on UNIX, or /s /c on Windows. Defaults to false (no shell).
  • Returns: <ChildProcess>

The child_process.spawn() method spawns a new process using the given command, with command line arguments in args. If omitted, args defaults to an empty array.

Note: If the shell option is enabled, do not pass unsanitised user input to this function. Any input containing shell metacharacters may be used to trigger arbitrary command execution.

A third argument may be used to specify additional options, with these defaults:

{
  cwd: undefined,
  env: process.env
}

Use cwd to specify the working directory from which the process is spawned. If not given, the default is to inherit the current working directory.

Use env to specify environment variables that will be visible to the new process, the default is process.env.

Example of running ls -lh /usr, capturing stdout, stderr, and the exit code:

const spawn = require('child_process').spawn;
const ls = spawn('ls', ['-lh', '/usr']);

ls.stdout.on('data', (data) => {
  console.log(`stdout: ${data}`);
});

ls.stderr.on('data', (data) => {
  console.log(`stderr: ${data}`);
});

ls.on('close', (code) => {
  console.log(`child process exited with code ${code}`);
});

Example: A very elaborate way to run 'ps ax | grep ssh'

const spawn = require('child_process').spawn;
const ps = spawn('ps', ['ax']);
const grep = spawn('grep', ['ssh']);

ps.stdout.on('data', (data) => {
  grep.stdin.write(data);
});

ps.stderr.on('data', (data) => {
  console.log(`ps stderr: ${data}`);
});

ps.on('close', (code) => {
  if (code !== 0) {
    console.log(`ps process exited with code ${code}`);
  }
  grep.stdin.end();
});

grep.stdout.on('data', (data) => {
  console.log(`${data}`);
});

grep.stderr.on('data', (data) => {
  console.log(`grep stderr: ${data}`);
});

grep.on('close', (code) => {
  if (code !== 0) {
    console.log(`grep process exited with code ${code}`);
  }
});

Example of checking for failed exec:

const spawn = require('child_process').spawn;
const subprocess = spawn('bad_command');

subprocess.on('error', (err) => {
  console.log('Failed to start subprocess.');
});

options.detached#

On Windows, setting options.detached to true makes it possible for the child process to continue running after the parent exits. The child will have its own console window. Once enabled for a child process, it cannot be disabled.

On non-Windows platforms, if options.detached is set to true, the child process will be made the leader of a new process group and session. Note that child processes may continue running after the parent exits regardless of whether they are detached or not. See setsid(2) for more information.

By default, the parent will wait for the detached child to exit. To prevent the parent from waiting for a given subprocess, use the subprocess.unref() method. Doing so will cause the parent's event loop to not include the child in its reference count, allowing the parent to exit independently of the child, unless there is an established IPC channel between the child and parent.

When using the detached option to start a long-running process, the process will not stay running in the background after the parent exits unless it is provided with a stdio configuration that is not connected to the parent. If the parent's stdio is inherited, the child will remain attached to the controlling terminal.

Example of a long-running process, by detaching and also ignoring its parent stdio file descriptors, in order to ignore the parent's termination:

const spawn = require('child_process').spawn;

const subprocess = spawn(process.argv[0], ['child_program.js'], {
  detached: true,
  stdio: 'ignore'
});

subprocess.unref();

Alternatively one can redirect the child process' output into files:

const fs = require('fs');
const spawn = require('child_process').spawn;
const out = fs.openSync('./out.log', 'a');
const err = fs.openSync('./out.log', 'a');

const subprocess = spawn('prg', [], {
 detached: true,
 stdio: [ 'ignore', out, err ]
});

subprocess.unref();

options.stdio#

The options.stdio option is used to configure the pipes that are established between the parent and child process. By default, the child's stdin, stdout, and stderr are redirected to corresponding subprocess.stdin, subprocess.stdout, and subprocess.stderr streams on the ChildProcess object. This is equivalent to setting the options.stdio equal to ['pipe', 'pipe', 'pipe'].

For convenience, options.stdio may be one of the following strings:

  • 'pipe' - equivalent to ['pipe', 'pipe', 'pipe'] (the default)
  • 'ignore' - equivalent to ['ignore', 'ignore', 'ignore']
  • 'inherit' - equivalent to [process.stdin, process.stdout, process.stderr] or [0,1,2]

Otherwise, the value of options.stdio is an array where each index corresponds to an fd in the child. The fds 0, 1, and 2 correspond to stdin, stdout, and stderr, respectively. Additional fds can be specified to create additional pipes between the parent and child. The value is one of the following:

  1. 'pipe' - Create a pipe between the child process and the parent process. The parent end of the pipe is exposed to the parent as a property on the child_process object as ChildProcess.stdio[fd]. Pipes created for fds 0 - 2 are also available as ChildProcess.stdin, ChildProcess.stdout and ChildProcess.stderr, respectively.
  2. 'ipc' - Create an IPC channel for passing messages/file descriptors between parent and child. A ChildProcess may have at most one IPC stdio file descriptor. Setting this option enables the ChildProcess.send() method. If the child writes JSON messages to this file descriptor, the ChildProcess.on('message') event handler will be triggered in the parent. If the child is a Node.js process, the presence of an IPC channel will enable process.send(), process.disconnect(), process.on('disconnect'), and process.on('message') within the child.
  3. 'ignore' - Instructs Node.js to ignore the fd in the child. While Node.js will always open fds 0 - 2 for the processes it spawns, setting the fd to 'ignore' will cause Node.js to open /dev/null and attach it to the child's fd.
  4. Stream object - Share a readable or writable stream that refers to a tty, file, socket, or a pipe with the child process. The stream's underlying file descriptor is duplicated in the child process to the fd that corresponds to the index in the stdio array. Note that the stream must have an underlying descriptor (file streams do not until the 'open' event has occurred).
  5. Positive integer - The integer value is interpreted as a file descriptor that is currently open in the parent process. It is shared with the child process, similar to how Stream objects can be shared.
  6. null, undefined - Use default value. For stdio fds 0, 1 and 2 (in other words, stdin, stdout, and stderr) a pipe is created. For fd 3 and up, the default is 'ignore'.

Example:

const spawn = require('child_process').spawn;

// Child will use parent's stdios
spawn('prg', [], { stdio: 'inherit' });

// Spawn child sharing only stderr
spawn('prg', [], { stdio: ['pipe', 'pipe', process.stderr] });

// Open an extra fd=4, to interact with programs presenting a
// startd-style interface.
spawn('prg', [], { stdio: ['pipe', null, null, null, 'pipe'] });

It is worth noting that when an IPC channel is established between the parent and child processes, and the child is a Node.js process, the child is launched with the IPC channel unreferenced (using unref()) until the child registers an event handler for the process.on('disconnect') event or the process.on('message') event. This allows the child to exit normally without the process being held open by the open IPC channel.

See also: child_process.exec() and child_process.fork()

Synchronous Process Creation#

The child_process.spawnSync(), child_process.execSync(), and child_process.execFileSync() methods are synchronous and WILL block the Node.js event loop, pausing execution of any additional code until the spawned process exits.

Blocking calls like these are mostly useful for simplifying general-purpose scripting tasks and for loading or processing application configuration at startup.

child_process.execFileSync(file[, args][, options])#

  • file <String> The name or path of the executable file to run
  • args <Array> List of string arguments
  • options <Object>
    • cwd <String> Current working directory of the child process
    • input <String> | <Buffer> The value which will be passed as stdin to the spawned process
      • supplying this value will override stdio[0]
    • stdio <String> | <Array> Child's stdio configuration. (Default: 'pipe')
      • stderr by default will be output to the parent process' stderr unless stdio is specified
    • env <Object> Environment key-value pairs
    • uid <Number> Sets the user identity of the process. (See setuid(2).)
    • gid <Number> Sets the group identity of the process. (See setgid(2).)
    • timeout <Number> The maximum amount of time the process is allowed to run, in milliseconds. (Default: undefined)
    • killSignal <String> | <Integer> The signal value to be used when the spawned process will be killed. (Default: 'SIGTERM')
    • maxBuffer <Number> largest amount of data (in bytes) allowed on stdout or stderr - if exceeded child process is killed
    • encoding <String> The encoding used for all stdio inputs and outputs. (Default: 'buffer')
  • Returns: <Buffer> | <String> The stdout from the command

The child_process.execFileSync() method is generally identical to child_process.execFile() with the exception that the method will not return until the child process has fully closed. When a timeout has been encountered and killSignal is sent, the method won't return until the process has completely exited. Note that if the child process intercepts and handles the SIGTERM signal and does not exit, the parent process will still wait until the child process has exited.

If the process times out, or has a non-zero exit code, this method will throw. The Error object will contain the entire result from child_process.spawnSync().
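
For example, a minimal sketch that runs node --version synchronously, decoding stdout as UTF-8, and catches the error thrown on a timeout or non-zero exit code:

const execFileSync = require('child_process').execFileSync;

try {
  const stdout = execFileSync('node', ['--version'], { encoding: 'utf8' });
  console.log(`node version: ${stdout}`);
} catch (err) {
  // thrown when the process times out or exits with a non-zero code
  console.error(err);
}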

child_process.execSync(command[, options])#

  • command <String> The command to run
  • options <Object>
    • cwd <String> Current working directory of the child process
    • input <String> | <Buffer> The value which will be passed as stdin to the spawned process
      • supplying this value will override stdio[0]
    • stdio <String> | <Array> Child's stdio configuration. (Default: 'pipe')
      • stderr by default will be output to the parent process' stderr unless stdio is specified
    • env <Object> Environment key-value pairs
    • shell <String> Shell to execute the command with. (Default: '/bin/sh' on UNIX, 'cmd.exe' on Windows. The shell should understand the -c switch on UNIX or /s /c on Windows. On Windows, command line parsing should be compatible with cmd.exe.)
    • uid <Number> Sets the user identity of the process. (See setuid(2).)
    • gid <Number> Sets the group identity of the process. (See setgid(2).)
    • timeout <Number> The maximum amount of time the process is allowed to run, in milliseconds. (Default: undefined)
    • killSignal <String> | <Integer> The signal value to be used when the spawned process will be killed. (Default: 'SIGTERM')
    • maxBuffer <Number> largest amount of data (in bytes) allowed on stdout or stderr - if exceeded child process is killed
    • encoding <String> The encoding used for all stdio inputs and outputs. (Default: 'buffer')
  • Returns: <Buffer> | <String> The stdout from the command

The child_process.execSync() method is generally identical to child_process.exec() with the exception that the method will not return until the child process has fully closed. When a timeout has been encountered and killSignal is sent, the method won't return until the process has completely exited. Note that if the child process intercepts and handles the SIGTERM signal and doesn't exit, the parent process will wait until the child process has exited.

If the process times out, or has a non-zero exit code, this method will throw. The Error object will contain the entire result from child_process.spawnSync().

Note: Never pass unsanitised user input to this function. Any input containing shell metacharacters may be used to trigger arbitrary command execution.
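
For example, a minimal sketch using a fixed, trusted command string:

const execSync = require('child_process').execSync;

// the command is a fixed string; never build it from unsanitised user input
const stdout = execSync('ls -lh /usr', { encoding: 'utf8' });
console.log(stdout);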

child_process.spawnSync(command[, args][, options])#

  • command <String> The command to run
  • args <Array> List of string arguments
  • options <Object>
    • cwd <String> Current working directory of the child process
    • input <String> | <Buffer> The value which will be passed as stdin to the spawned process
      • supplying this value will override stdio[0]
    • stdio <String> | <Array> Child's stdio configuration. (Default: 'pipe')
    • env <Object> Environment key-value pairs
    • uid <Number> Sets the user identity of the process. (See setuid(2).)
    • gid <Number> Sets the group identity of the process. (See setgid(2).)
    • timeout <Number> The maximum amount of time, in milliseconds, that the process is allowed to run. (Default: undefined)
    • killSignal <String> | <Integer> The signal value to be used when the spawned process will be killed. (Default: 'SIGTERM')
    • maxBuffer <Number> The largest amount of data (in bytes) allowed on stdout or stderr; if exceeded, the child process is killed.
    • encoding <String> The encoding used for all stdio inputs and outputs. (Default: 'buffer')
    • shell <Boolean> | <String> If true, runs command inside of a shell. Uses '/bin/sh' on UNIX, and 'cmd.exe' on Windows. A different shell can be specified as a string. The shell should understand the -c switch on UNIX, or /s /c on Windows. Defaults to false (no shell).
  • Returns: <Object>
    • pid <Number> Pid of the child process
    • output <Array> Array of results from stdio output
    • stdout <Buffer> | <String> The contents of output[1]
    • stderr <Buffer> | <String> The contents of output[2]
    • status <Number> The exit code of the child process
    • signal <String> The signal used to kill the child process
    • error <Error> The error object if the child process failed or timed out

The child_process.spawnSync() method is generally identical to child_process.spawn() with the exception that the function will not return until the child process has fully closed. When a timeout has been encountered and killSignal is sent, the method won't return until the process has completely exited. Note that if the process intercepts and handles the SIGTERM signal and doesn't exit, the parent process will wait until the child process has exited.

Note: If the shell option is enabled, do not pass unsanitised user input to this function. Any input containing shell metacharacters may be used to trigger arbitrary command execution.
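
A minimal sketch of inspecting the result object returned by child_process.spawnSync(), assuming a Unix-like system where ls is available:

const spawnSync = require('child_process').spawnSync;

// Spawn `ls` directly (no shell) and inspect the result object.
const result = spawnSync('ls', ['-lh', '/usr']);

if (result.error) {
  // The child process could not be spawned (e.g. the executable was not found).
  console.error('Failed to spawn child:', result.error);
} else if (result.status !== 0) {
  console.error(`Child exited with code ${result.status}`);
  console.error(result.stderr.toString());
} else {
  console.log(result.stdout.toString());
}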

Class: ChildProcess#

Instances of the ChildProcess class are EventEmitters that represent spawned child processes.

Instances of ChildProcess are not intended to be created directly. Rather, use the child_process.spawn(), child_process.exec(), child_process.execFile(), or child_process.fork() methods to create instances of ChildProcess.

Event: 'close'#

  • code <Number> the exit code if the child exited on its own.
  • signal <String> the signal by which the child process was terminated.

The 'close' event is emitted when the stdio streams of a child process have been closed. This is distinct from the 'exit' event, since multiple processes might share the same stdio streams.
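
A minimal sketch of listening for the 'close' event, assuming ls exists on the system:

const spawn = require('child_process').spawn;
const ls = spawn('ls', ['-lh', '/usr']);

ls.stdout.on('data', (data) => {
  console.log(`stdout: ${data}`);
});

ls.on('close', (code) => {
  // All of the child's stdio streams have been closed at this point.
  console.log(`child process closed all stdio with code ${code}`);
});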

Event: 'disconnect'#

The 'disconnect' event is emitted after calling the ChildProcess.disconnect() method in the parent or child process. After disconnecting it is no longer possible to send or receive messages, and the ChildProcess.connected property is false.

Event: 'error'#

The 'error' event is emitted whenever:

  1. The process could not be spawned, or
  2. The process could not be killed, or
  3. Sending a message to the child process failed.

Note that the 'exit' event may or may not fire after an error has occurred. If you are listening to both the 'exit' and 'error' events, it is important to guard against accidentally invoking handler functions multiple times.

See also ChildProcess#kill() and ChildProcess#send().
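
One possible way to apply that guard is sketched below; the executable name is deliberately fictitious so that the 'error' event fires:

const spawn = require('child_process').spawn;

const child = spawn('does-not-exist');

let reported = false;

child.on('error', (err) => {
  if (reported) return;
  reported = true;
  console.error('Failed to start child process:', err.message);
});

child.on('exit', (code, signal) => {
  if (reported) return;
  reported = true;
  console.log(`child exited with code ${code}, signal ${signal}`);
});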

Event: 'exit'#

  • code <Number> the exit code if the child exited on its own.
  • signal <String> the signal by which the child process was terminated.

The 'exit' event is emitted after the child process ends. If the process exited, code is the final exit code of the process, otherwise null. If the process terminated due to receipt of a signal, signal is the string name of the signal, otherwise null. One of the two will always be non-null.

Note that when the 'exit' event is triggered, child process stdio streams might still be open.

Also, note that Node.js establishes signal handlers for SIGINT and SIGTERM and Node.js processes will not terminate immediately due to receipt of those signals. Rather, Node.js will perform a sequence of cleanup actions and then will re-raise the handled signal.

See waitpid(2).
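
For example, a handler might distinguish the two cases like this (a sketch assuming ls exists on the system):

const spawn = require('child_process').spawn;
const child = spawn('ls', ['-lh', '/usr']);

child.on('exit', (code, signal) => {
  if (code !== null) {
    console.log(`child exited on its own with code ${code}`);
  } else {
    console.log(`child was terminated by signal ${signal}`);
  }
});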

Event: 'message'#

The 'message' event is triggered when a child process uses process.send() to send messages.
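
For instance, a parent created with child_process.fork() could listen for messages as sketched below; 'worker.js' is a hypothetical child script that calls process.send() itself:

const fork = require('child_process').fork;

// 'worker.js' is a placeholder for a script that sends messages via process.send().
const child = fork('./worker.js');

child.on('message', (message) => {
  console.log('Message from child:', message);
});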

subprocess.connected#

  • <Boolean> Set to false after subprocess.disconnect() is called

The subprocess.connected property indicates whether it is still possible to send and receive messages from a child process. When subprocess.connected is false, it is no longer possible to send or receive messages.

subprocess.disconnect()#

Closes the IPC channel between parent and child, allowing the child to exit gracefully once there are no other connections keeping it alive. After calling this method the subprocess.connected and process.connected properties in both the parent and child (respectively) will be set to false, and it will be no longer possible to pass messages between the processes.

The 'disconnect' event will be emitted when there are no messages in the process of being received. This will most often be triggered immediately after calling subprocess.disconnect().

Note that when the child process is a Node.js instance (e.g. spawned using child_process.fork()), the process.disconnect() method can be invoked within the child process to close the IPC channel as well.
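
A minimal sketch of closing the IPC channel from the parent; 'worker.js' is a hypothetical child script that communicates over IPC:

const fork = require('child_process').fork;

const child = fork('./worker.js');

child.on('disconnect', () => {
  console.log('IPC channel closed; connected =', child.connected);
});

// Once no further messages are needed, close the channel so the child
// can exit gracefully when nothing else is keeping it alive.
child.disconnect();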

subprocess.kill([signal])#