This module provides a "transport" for pino that simply forwards messages to Kafka.
You should install pino-kafka globally for ease of use:
```sh
$ npm install --production -g pino-kafka
### or with yarn
$ yarn global add pino-kafka
```

This library depends on node-rdkafka. Have a look at the node-rdkafka requirements.
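Because node-rdkafka compiles librdkafka and its native bindings during installation, a C/C++ toolchain and Python (for node-gyp) are typically needed. The package names below are an assumption for Debian/Ubuntu; check the node-rdkafka documentation for the authoritative list:

```sh
# Hypothetical build prerequisites on Debian/Ubuntu (verify against the node-rdkafka docs)
$ sudo apt-get install -y build-essential python3
```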
Given an application `foo` that logs via pino, and a Kafka broker listening on 10.10.10.5:9200, you would use pino-kafka as:

```sh
$ node foo | pino-kafka -b 10.10.10.5:9200
```

Initialize pino-kafka and pass it to pino:
```js
const pino = require('pino')
const pkafka = require('pino-kafka')

const logger = pino({}, pkafka({ brokers: "10.10.10.5:9200" }))
logger.info('hello world')
```

The following options are available:

- `--brokers` (`-b`): broker list for the Kafka producer, comma separated.
- `--defaultTopic` (`-d`): default topic name for Kafka.
- `--timeout` (`-t`): timeout for the initial broker connection, in milliseconds. Default: `10000`.
- `--echo` (`-e`): echo the received messages to stdout. Default: `false`.
- `--settings`: path to a config JSON file. Have a look at the Settings JSON file section for details and examples.
- `--kafka.$config`: any Kafka configuration can be passed with the prefix `kafka.`. See the node-rdkafka configuration documentation for the available options. Note that only producer and global configuration properties are used. Have a look at the Kafka Settings section for details and examples.
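The switches can be combined; the topic name and timeout below are illustrative values, not defaults:

```sh
# Forward foo's logs to one broker, write to a custom topic,
# and allow 30 seconds for the initial broker connection
$ node foo | pino-kafka -b 10.10.10.5:9200 -d app-logs -t 30000
```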
The `--settings` switch can be used to specify a JSON file that contains
a hash of settings for the application. A full settings file is:
```json
{
  "brokers": "10.6.25.11:9092, 10.6.25.12:9092",
  "defaultTopic": "blackbox",
  "kafka": {
    "compression.codec": "none",
    "enable.idempotence": "true",
    "max.in.flight.requests.per.connection": 4,
    "message.send.max.retries": 10000000,
    "acks": "all"
  }
}
```

Note that command line switches take precedence over settings in a settings file. For example, given the settings file:
```json
{
  "brokers": "my.broker",
  "defaultTopic": "test"
}
```

and the command line:

```sh
$ yes | pino-kafka -s ./settings.json -b 10.10.10.11:9200
```

the connection will be made to address 10.10.10.11:9200 with the default topic `test`.
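In other words, the effective configuration for that invocation is equivalent to:

```json
{
  "brokers": "10.10.10.11:9200",
  "defaultTopic": "test"
}
```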
You can pass node-rdkafka producer configuration by prefixing the property with `kafka.`. For example:
```sh
$ yes | pino-kafka --kafka.retries=5 --kafka.retry.backoff.ms=500
```

In the settings JSON file you can use the following:
```json
{
  "kafka": {
    "retries": "5",
    "retry.backoff.ms": "500"
  }
}
```

The following will also work:
```json
{
  "kafka": {
    "retries": "5",
    "retry": {
      "backoff": {
        "ms": "500"
      }
    }
  }
}
```

You can access the node-rdkafka producer from the pino stream via the `_kafka` property.
For example:
```js
const pino = require('pino')
const pkafka = require('pino-kafka')

const pKafkaStream = pkafka({ brokers: "10.10.10.5:9200" })
const logger = pino({}, pKafkaStream)

// From the pino-kafka instance
pKafkaStream._kafka.getMetadata({}, (err, data) => {
  //...
})

// From the logger
logger[pino.symbols.streamSym]._kafka.getMetadata({}, (err, data) => {
  //...
})
```

To run the tests, make sure you have installed the dependencies with `npm install` or `yarn` and have a running Kafka broker.
Alternatively, if you have docker and docker-compose installed, you can start one with the following:
```sh
$ cd pino-kafka
$ docker-compose up -d
```

Have a look at the docker-compose file for more details.
Once everything is set up, just run the test command:
```sh
$ npm run test
# or with yarn
$ yarn test
```

NOTE: If you use your own Kafka setup, you may need to adjust the test configuration to your needs (IP, topic, etc.).