Success! kcat is now built
Usage: ./kcat <options> [file1 file2 .. | topic1 topic2 ..]
kcat - Apache Kafka producer and consumer tool
https://github.com/edenhill/kcat
Copyright (c) 2014-2021, Magnus Edenhill
Version 1.7.0 (JSON, Avro, Transactions, IncrementalAssign, JSONVerbatim, librdkafka 1.7.0 builtin.features=snappy,ssl,sasl,regex,lz4,sasl_plain,sasl_scram,plugins,sasl_oauthbearer)
General options:
-C | -P | -L | -Q Mode: Consume, Produce, Metadata List, Query mode
-G <group-id> Mode: High-level KafkaConsumer (Kafka >=0.9 balanced consumer groups)
Expects a list of topics to subscribe to
-t <topic> Topic to consume from, produce to, or list
-p <partition> Partition
-b <brokers,..> Bootstrap broker(s) (host[:port])
-D <delim> Message delimiter string:
a-z | \r | \n | \t | \xNN ..
Default: \n
-K <delim> Key delimiter (same format as -D)
-c <cnt> Limit message count
-m <seconds> Metadata (et.al.) request timeout.
This limits how long kcat will block
while waiting for initial metadata to be
retrieved from the Kafka cluster.
It also sets the timeout for the producer's
transaction commits, init, aborts, etc.
Default: 5 seconds.
-F <config-file> Read configuration properties from file,
file format is "property=value".
The KCAT_CONFIG=path environment variable can also be used, but -F takes precedence.
The default configuration file is $HOME/.config/kcat.conf
-X list List available librdkafka configuration properties
-X prop=val Set librdkafka configuration property.
Properties prefixed with "topic." are
applied as topic properties.
-X schema.registry.prop=val Set libserdes configuration property for the Avro/Schema-Registry client.
-X dump Dump configuration and exit.
-d <dbg1,...> Enable librdkafka debugging:
all,generic,broker,topic,metadata,feature,queue,msg,protocol,cgrp,security,fetch,interceptor,plugin,consumer,admin,eos,mock,assignor,conf
-q Be quiet (verbosity set to 0)
-v Increase verbosity
-E Do not exit on non-fatal error
-V Print version
-h Print usage help
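
As a quick illustration of -F, a minimal configuration file might look like this (a sketch assuming a SASL/SSL cluster; the broker address and credentials are placeholders, not part of the help output):

# librdkafka properties, one per line, as "property=value"
bootstrap.servers=broker1:9092
security.protocol=SASL_SSL
sasl.mechanisms=PLAIN
sasl.username=myuser
sasl.password=mypassword

Save it as $HOME/.config/kcat.conf to have kcat pick it up automatically, or point at it explicitly with -F.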
Producer options:
-z snappy|gzip|lz4 Message compression. Default: none
-p -1 Use random partitioner
-D <delim> Delimiter to split input into messages
-K <delim> Delimiter to split input key and message
-k <str> Use a fixed key for all messages.
If combined with -K, per-message keys
take precedence.
-H <header=value> Add Message Headers (may be specified multiple times)
-l Send messages from a file separated by
delimiter, as with stdin.
(only one file allowed)
-T Output sent messages to stdout, acting like tee.
-c <cnt> Exit after producing this number of messages
-Z Send empty messages as NULL messages
file1 file2.. Read messages from files.
With -l, only one file permitted.
Otherwise, the entire file contents will
be sent as one single message.
-X transactional.id=.. Enable transactions and send all
messages in a single transaction which
is committed when stdin is closed or the
input file(s) are fully read.
If kcat is terminated through Ctrl-C
(et.al) the transaction will be aborted.
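
For example, a keyed produce and a transactional produce could look like this (a sketch of the options above; the broker, topic, and transactional.id are placeholders):

# One message, key "user42" split from the value at the ":" delimiter
echo "user42:hello world" | kcat -P -b broker1:9092 -t mytopic -K :
# Everything read from stdin goes into one transaction, committed on EOF
kcat -P -b broker1:9092 -t mytopic -X transactional.id=kcat-demo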
Consumer options:
-o <offset> Offset to start consuming from:
beginning | end | stored |
<value> (absolute offset) |
-<value> (relative offset from end)
s@<value> (timestamp in ms to start at)
e@<value> (timestamp in ms to stop at (not included))
-e Exit successfully when last message received
-f <fmt..> Output formatting string, see below.
Takes precedence over -D and -K.
-J Output with JSON envelope
-s key=<serdes> Deserialize non-NULL keys using <serdes>.
-s value=<serdes> Deserialize non-NULL values using <serdes>.
-s <serdes> Deserialize non-NULL keys and values using <serdes>.
Available deserializers (<serdes>):
<pack-str> - A combination of:
<: little-endian,
>: big-endian (recommended),
b: signed 8-bit integer
B: unsigned 8-bit integer
h: signed 16-bit integer
H: unsigned 16-bit integer
i: signed 32-bit integer
I: unsigned 32-bit integer
q: signed 64-bit integer
Q: unsigned 64-bit integer
c: ASCII character
s: remaining data is string
$: match end-of-input (no more bytes remaining or a parse error is raised).
Not including this token skips any
remaining data after the pack-str is
exhausted.
avro - Avro-formatted with schema in Schema-Registry (requires -r)
E.g.: -s key=i -s value=avro - key is 32-bit integer, value is Avro.
or: -s avro - both key and value are Avro-serialized
-r <url> Schema registry URL (when avro deserializer is used).
-D <delim> Delimiter to separate messages on output
-K <delim> Print message keys prefixing the message
with specified delimiter.
-O Print message offset using -K delimiter
-c <cnt> Exit after consuming this number of messages
-Z Print NULL values and keys as "NULL" instead of empty.
For JSON (-J) the nullstr is always null.
-u Unbuffered output
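
A couple of consumer sketches tying these options together (broker, topic, and timestamps are placeholders):

# Replay the topic from the start, exit at the last message
kcat -C -b broker1:9092 -t mytopic -o beginning -e
# Consume only messages produced between two millisecond timestamps
kcat -C -b broker1:9092 -t mytopic -o s@1609459200000 -o e@1609545600000

The second command uses the s@/e@ offset forms described above to bound consumption by message timestamp.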
Metadata options (-L):
-t <topic> Topic to list info for. Only one -t allowed.
Query options (-Q):
-t <t>:<p>:<ts> Get offset for topic <t>,
partition <p>, timestamp <ts>.
Timestamp is the number of milliseconds
since epoch UTC.
Requires broker >= 0.10.0.0 and librdkafka >= 0.9.3.
Multiple -t .. are allowed but a partition
must only occur once.
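
For instance, to find the offset in partition 0 of a topic at midnight UTC on 2021-01-01 (broker and topic names are placeholders):

kcat -Q -b broker1:9092 -t mytopic:0:1609459200000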
Format string tokens:
%s Message payload
%S Message payload length (or -1 for NULL)
%R Message payload length (or -1 for NULL) serialized
as a binary big endian 32-bit signed integer
%k Message key
%K Message key length (or -1 for NULL)
%T Message timestamp (milliseconds since epoch UTC)
%h Message headers (n=v CSV)
%t Topic
%p Partition
%o Message offset
\n \r \t Newlines, tab
\xXX \xNNN Any ASCII character
Example:
-f 'Topic %t [%p] at offset %o: key %k: %s\n'
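
Putting the tokens together, a full consumer invocation using that format string might be (broker and topic are placeholders):

kcat -C -b broker1:9092 -t mytopic -f 'Topic %t [%p] at offset %o: key %k: %s\n'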
{ "topic": str, "partition": int, "offset": int,
"tstype": "create|logappend|unknown", "ts": int, // timestamp in milliseconds since epoch
"broker": int,
"headers": { "
"key": str|json, "payload": str|json,
"key_error": str, "payload_error": str, //optional
"key_schema_id": int, "value_schema_id": int //optional
}
notes:
- key_error and payload_error are only included if deserialization fails.
- key_schema_id and value_schema_id are included for successfully deserialized Avro messages.
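
In practice the JSON envelope pairs well with jq; for example (assuming jq is installed, with placeholder broker and topic):

kcat -C -b broker1:9092 -t mytopic -J | jq -r '.payload'

prints just each message's payload while the rest of the envelope remains available for filtering.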
Consumer mode (writes messages to stdout):
kcat -b <broker> -t <topic> -p <partition>
or:
kcat -C -b ...
High-level KafkaConsumer mode:
kcat -b <broker> -G <group-id> topic1 top2 ^aregex\d+
Producer mode (reads messages from stdin):
... | kcat -b <broker> -t <topic> -p <partition>
or:
kcat -P -b ...
Metadata listing:
kcat -L -b <broker> [-t <topic>]
Query offset by timestamp:
kcat -Q -b broker -t <topic>:<partition>:<timestamp>