
Kafka monitoring integration

The New Relic Kafka on-host integration reports metrics and configuration data from your Kafka service. We instrument all the key elements of your cluster, including brokers (discovered via ZooKeeper or Bootstrap), producers, consumers, and topics.

Read on to install the Kafka integration, and to see what data it collects. To monitor Kafka with our Java agent, see Instrument Kafka message queues.

Compatibility and requirements

Our integration is compatible with Kafka versions 0.8 or higher.

Before installing the integration, make sure that you meet the following requirements:

  • A New Relic account. Don't have one? Sign up for free! No credit card required.
  • If Kafka is not running on Kubernetes or Amazon ECS, you must install the infrastructure agent on a host that's running Kafka. Additionally:
  • Java 8 or higher
  • JMX enabled on all brokers
  • Java-based consumers and producers only, and with JMX enabled
  • The total number of monitored topics must be fewer than 10,000

For Kafka running on Kubernetes, see the Kubernetes requirements.

Prepare for the installation

Kafka is a complex piece of software that is built as a distributed system. For this reason, you’ll need to ensure that the integration can contact all the required hosts and services so the data is collected correctly.

Autodiscovery

Given the distributed nature of Kafka, the number and list of brokers is usually not fixed by the configuration; instead, it is quite dynamic. For this reason, the Kafka integration offers two mechanisms to automatically discover the list of brokers in the cluster: Bootstrap and Zookeeper. The mechanism you use depends on the setup of the Kafka cluster being monitored.

Bootstrap

With the bootstrap mechanism, the integration uses a bootstrap broker to perform autodiscovery. This is a broker whose address is well known and which will be asked for any other brokers it is aware of. For bootstrap discovery to work, the integration needs to be able to contact this broker at the address provided in the bootstrap_broker_host parameter.
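As a minimal sketch, bootstrap discovery could be configured like this in the integration's YAML file (the cluster name, host, and port are placeholders to adapt to your environment):

integrations:
  - name: nri-kafka
    env:
      CLUSTER_NAME: my_cluster
      AUTODISCOVER_STRATEGY: bootstrap
      # The well-known broker that is asked for the rest of the cluster
      BOOTSTRAP_BROKER_HOST: kafka0.example.com
      BOOTSTRAP_BROKER_KAFKA_PORT: 9092
      BOOTSTRAP_BROKER_KAFKA_PROTOCOL: PLAINTEXT
      METRICS: "true"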

Zookeeper

Alternatively, the Kafka integration can also talk to a Zookeeper server to obtain the list of brokers. To do this, the integration needs to be provided with the following:

  • The list of Zookeeper hosts to contact (zookeeper_hosts).
  • The proper authentication secrets to connect with the hosts.

Together with the list of brokers it knows about, Zookeeper will also advertise which connection mechanisms are supported by each broker.

You can configure the Kafka integration to try one of these mechanisms directly with the preferred_listener parameter. If this parameter is not provided, the integration will try to contact the brokers with all the advertised configurations until one of them succeeds.
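As an illustrative sketch, Zookeeper-based discovery combines these settings as follows (the host names are hypothetical, and PREFERRED_LISTENER is optional):

integrations:
  - name: nri-kafka
    env:
      CLUSTER_NAME: my_cluster
      AUTODISCOVER_STRATEGY: zookeeper
      # JSON list of ZooKeeper nodes that will be asked for the broker list
      ZOOKEEPER_HOSTS: '[{"host": "zk0.example.com", "port": 2181}, {"host": "zk1.example.com", "port": 2181}]'
      # Optional: connect using this specific listener instead of trying all advertised ones
      PREFERRED_LISTENER: PLAINTEXT
      METRICS: "true"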

Tip

The integration will use Zookeeper only for discovering brokers and will not retrieve metrics from it.

Topic listing

To correctly list the topics processed by the brokers, the integration needs to contact the brokers over the Kafka protocol. Depending on how the brokers are configured, this might require setting up SSL and/or SASL to match the broker configuration. The integration must have DESCRIBE access to the topics.
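Topic collection itself is controlled by the topic_mode settings described later in this document. For example, a fragment of the env section restricting collection to an explicit list might look like this (the topic names are placeholders):

    env:
      TOPIC_MODE: list
      # JSON array of topic names; the integration needs DESCRIBE access to each
      TOPIC_LIST: '["transactions", "audit-events"]'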

Broker monitoring (JMX)

The Kafka integration queries JMX, a standard Java extension for exchanging metrics in Java applications. JMX is not enabled by default in Kafka brokers, and you need to enable it for metrics collection to work properly. JMX requires RMI to be enabled, and the RMI port needs to be set to the same port as JMX.

You can configure JMX to use username/password authentication, as well as SSL. If such features have been enabled in the broker's JMX settings, you need to configure the integration accordingly.

If autodiscovery is set to bootstrap, the JMX settings defined for the bootstrap broker are applied to all other discovered brokers, so the port and other settings should be the same on all brokers.
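For example, a fragment of the env section for bootstrap discovery with authenticated JMX might look like this (the port and credentials are placeholders; the same values are reused for every discovered broker):

    env:
      BOOTSTRAP_BROKER_JMX_PORT: 9999
      BOOTSTRAP_BROKER_JMX_USER: jmx_user
      BOOTSTRAP_BROKER_JMX_PASSWORD: jmx_password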

Important

We do not recommend enabling anonymous and/or unencrypted JMX/RMI access on public or untrusted network segments, as this poses a significant security risk.

Consumer offset

The offsets of the consumers and consumer groups of the topics, as well as their lag, can be retrieved as KafkaOffsetSamples by setting the CONSUMER_OFFSET=true flag. This flag should be used in a separate instance, because when it is activated the instance will not collect other samples.
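A sketch of such a dedicated instance might look like this (the hosts and the group regex are placeholders):

integrations:
  - name: nri-kafka
    env:
      CLUSTER_NAME: my_cluster
      AUTODISCOVER_STRATEGY: bootstrap
      BOOTSTRAP_BROKER_HOST: kafka0.example.com
      BOOTSTRAP_BROKER_KAFKA_PORT: 9092
      # This instance collects only KafkaOffsetSamples
      CONSUMER_OFFSET: "true"
      # Required when CONSUMER_OFFSET is true
      CONSUMER_GROUP_REGEX: '.*'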

Producer/consumer monitoring (JMX)

Producers and consumers written in Java can also be monitored through the same mechanism (JMX) to get more specific metadata, generating KafkaConsumerSamples and KafkaProducerSamples. JMX is not enabled by default in these applications, so you need to enable and configure it.

Non-Java producers and consumers do not support JMX and are therefore not supported by the Kafka integration.
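As a sketch, producer/consumer monitoring is configured by listing the applications and their JMX endpoints (the names, hosts, and ports are placeholders; omitted fields fall back to the DEFAULT_JMX_* settings described later):

integrations:
  - name: nri-kafka
    env:
      CLUSTER_NAME: my_cluster
      # JSON lists of Java applications exposing JMX
      PRODUCERS: '[{"name": "myProducer", "host": "producer1.example.com", "port": 9999}]'
      CONSUMERS: '[{"name": "myConsumer", "host": "consumer1.example.com", "port": 9999}]'
      METRICS: "true"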

Connectivity requirements

In summary, the integration needs to be configured and allowed to connect to:

  • Hosts listed in zookeeper_hosts over the Zookeeper protocol, using the Zookeeper authentication mechanism (if autodiscover_strategy is set to zookeeper).
  • Hosts defined in bootstrap_broker_host over the Kafka protocol, using the Kafka broker’s authentication/transport mechanisms (if autodiscover_strategy is set to bootstrap).
  • All brokers in the cluster over the Kafka protocol and port, using the Kafka brokers' authentication/transport mechanisms.
  • All brokers in the cluster over the JMX protocol and port, using the authentication/transport mechanisms specified in the JMX configuration of the brokers.
  • All producers/consumers specified in producers and consumers over the JMX protocol and port, if you want producer/consumer monitoring. JMX settings for the consumer must be the same as for the brokers.

Important

For the cloud: Security Groups in AWS (and their equivalents in other cloud providers) do not have the required ports open by default. JMX requires two ports in order to work: the JMX port and the RMI port. These can be set to the same value when configuring the JVM to enable JMX, and both must be open for the integration to connect to and collect metrics from brokers.

Install and activate

To install the Kafka integration, choose your setup:


Configure the integration

There are several ways to configure the integration, depending on how it was installed:

An integration's YAML-format configuration is where you can place required login credentials and configure how data is collected. Which options you change depend on your setup and preference. You can monitor the entire environment remotely, or on any node in that environment.

The configuration file has common settings applicable to all integrations, such as interval, timeout, and inventory_source. To read all about these common settings, refer to our Configuration Format document.

Important

If you are still using our legacy configuration/definition files, please refer to this document for help.

Specific settings related to Kafka are defined in the env section of the configuration file. These settings control the connection to your brokers, ZooKeeper, and JMX, as well as other security settings and features. The list of valid settings is described in the next section of this document.
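For example, the overall shape of the file is sketched below (the values are placeholders; the env settings come from the tables in the next section):

integrations:
  - name: nri-kafka
    env:
      # Kafka-specific settings go here
      CLUSTER_NAME: my_cluster
      AUTODISCOVER_STRATEGY: bootstrap
      BOOTSTRAP_BROKER_HOST: kafka0.example.com
      METRICS: "true"
    # Common integration settings
    interval: 30s
    labels:
      env: production
    inventory_source: config/kafka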

Important

The integration has two mutually exclusive modes of operation: core collection and consumer offset collection, controlled by the CONSUMER_OFFSET parameter.

These modes are separated because consumer offset collection takes a long time to run and has high performance requirements.

The values for these settings can be defined in several ways:

  • Adding the value directly in the config file. This is the most common way.
  • Replacing the values with environment variables using the {{}} notation, as sketched below. This requires infrastructure agent v1.14.0 or higher. Read more here.
  • Using secrets management. Use this to protect sensitive information, such as passwords, that would otherwise be exposed in plain text in the configuration file. For more information, see Secrets management.
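For example, a JMX password could be taken from an environment variable instead of being written in the file (MY_JMX_PASSWORD is a placeholder variable name):

    env:
      BOOTSTRAP_BROKER_JMX_PASSWORD: "{{MY_JMX_PASSWORD}}"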

Labels/Custom Attributes

Environment variables can be used to control config settings, such as your license key, and are then passed through to the Infrastructure agent. For instructions on how to use this feature, see Configure the Infrastructure agent.

You can further decorate your metrics using labels. Labels allow you to add key-value pair attributes to your metrics, which you can then use to query, filter, or group your metrics on.
Our default sample config file includes examples of labels; as they are not mandatory, you can remove, modify, or add new ones of your choice.

labels:
  env: production
  role: kafka

For more about the general structure of on-host integration configuration, see Configuration.

Configure KafkaBrokerSample and KafkaTopicSample collection

The Kafka integration collects both metrics (M) and inventory (I) information. The "Applies to" note on each setting below indicates which type of collection it can be used for:

  • CLUSTER_NAME: User-defined name to uniquely identify the cluster being monitored. Required. Default: N/A. Applies to: M/I.
  • KAFKA_VERSION: The version of the Kafka broker you're connecting to, used for setting optimum API versions. It must match, or be lower than, the version running on the broker. Versions older than 1.0.0 may be missing some features. Note that if the broker binary name is kafka_2.12-2.7.0, the Kafka API version to use is 2.7.0; the preceding 2.12 is the Scala language version. Default: 1.0.0. Applies to: M/I.
  • AUTODISCOVER_STRATEGY: The method of discovering brokers. Options are zookeeper or bootstrap. Default: zookeeper. Applies to: M/I.
  • METRICS: Set to true to enable metrics-only collection. Default: false.
  • INVENTORY: Set to true to enable inventory-only collection. Default: false.

Zookeeper autodiscovery arguments (only relevant when autodiscover_strategy is zookeeper):

  • ZOOKEEPER_HOSTS: The list of Apache ZooKeeper hosts (in JSON format) that the integration should connect to. If CONSUMER_OFFSET is set to false, KafkaBrokerSamples and KafkaTopicSamples will be collected. Default: []. Applies to: M/I.
  • ZOOKEEPER_AUTH_SCHEME: The ZooKeeper authentication scheme used to connect. Currently, the only supported value is digest. If omitted, no authentication is used. Default: N/A. Applies to: M/I.
  • ZOOKEEPER_AUTH_SECRET: The ZooKeeper authentication secret used to connect. Should be of the form username:password. Only required if zookeeper_auth_scheme is specified. Default: N/A. Applies to: M/I.
  • ZOOKEEPER_PATH: The ZooKeeper node under which the Kafka configuration resides. Default: /. Applies to: M/I.
  • PREFERRED_LISTENER: Use a specific listener to connect to a broker. If unset, the first listener that passes a successful test connection is used. Supported values are PLAINTEXT, SASL_PLAINTEXT, SSL, and SASL_SSL. Note: the SASL_* protocols only support Kerberos (GSSAPI) authentication. Default: N/A. Applies to: M/I.

Bootstrap broker discovery arguments (only relevant when autodiscover_strategy is bootstrap):

  • BOOTSTRAP_BROKER_HOST: The host for the bootstrap broker. If CONSUMER_OFFSET is set to false, KafkaBrokerSamples and KafkaTopicSamples will be collected. Default: N/A. Applies to: M/I.
  • BOOTSTRAP_BROKER_KAFKA_PORT: The Kafka port for the bootstrap broker. Default: N/A. Applies to: M/I.
  • BOOTSTRAP_BROKER_KAFKA_PROTOCOL: The protocol to use to connect to the bootstrap broker. Supported values are PLAINTEXT, SASL_PLAINTEXT, SSL, and SASL_SSL. Note: the SASL_* protocols only support Kerberos (GSSAPI) authentication. Default: PLAINTEXT. Applies to: M/I.
  • BOOTSTRAP_BROKER_JMX_PORT: The JMX port to use for collection on each broker in the cluster. Note that all discovered brokers should have JMX active on this port. Default: N/A. Applies to: M/I.
  • BOOTSTRAP_BROKER_JMX_USER: The JMX user to use for collection on each broker in the cluster. Default: N/A. Applies to: M/I.
  • BOOTSTRAP_BROKER_JMX_PASSWORD: The JMX password to use for collection on each broker in the cluster. Default: N/A. Applies to: M/I.

JMX options (Applies to all JMX connections on the instance):

  • KEY_STORE: The filepath of the keystore containing the JMX client's SSL certificate. Default: N/A. Applies to: M/I.
  • KEY_STORE_PASSWORD: The password for the JMX SSL keystore. Default: N/A. Applies to: M/I.
  • TRUST_STORE: The filepath of the truststore containing the JMX server's SSL certificate. Default: N/A. Applies to: M/I.
  • TRUST_STORE_PASSWORD: The password for the JMX truststore. Default: N/A. Applies to: M/I.
  • DEFAULT_JMX_USER: The default user that connects to the JMX host to collect metrics. If the username field is omitted for a JMX host, this value will be used. Default: admin. Applies to: M/I.
  • DEFAULT_JMX_PASSWORD: The default password to connect to the JMX host. If the password field is omitted for a JMX host, this value will be used. Default: admin. Applies to: M/I.
  • TIMEOUT: The timeout for individual JMX queries, in milliseconds. Default: 10000. Applies to: M/I.

Broker TLS connection options (Needed if the broker protocol is SSL or SASL_SSL):

  • TLS_CA_FILE: The certificate authority file for SSL and SASL_SSL listeners, in PEM format. Default: N/A. Applies to: M/I.
  • TLS_CERT_FILE: The client certificate file for SSL and SASL_SSL listeners, in PEM format. Default: N/A. Applies to: M/I.
  • TLS_KEY_FILE: The client key file for SSL and SASL_SSL listeners, in PEM format. Default: N/A. Applies to: M/I.
  • TLS_INSECURE_SKIP_VERIFY: Skip verifying the server's certificate chain and host name. Default: false. Applies to: M/I.

Broker SASL and Kerberos connection options (Needed if the broker protocol is SASL_PLAINTEXT or SASL_SSL):

  • SASL_MECHANISM: The type of SASL authentication to use. Supported options are SCRAM-SHA-512, SCRAM-SHA-256, PLAIN, and GSSAPI. Default: N/A. Applies to: M/I.
  • SASL_USERNAME: SASL username, required with the PLAIN and SCRAM mechanisms. Default: N/A. Applies to: M/I.
  • SASL_PASSWORD: SASL password, required with the PLAIN and SCRAM mechanisms. Default: N/A. Applies to: M/I.
  • SASL_GSSAPI_REALM: Kerberos realm, required with the GSSAPI mechanism. Default: N/A. Applies to: M/I.
  • SASL_GSSAPI_SERVICE_NAME: Kerberos service name, required with the GSSAPI mechanism. Default: N/A. Applies to: M/I.
  • SASL_GSSAPI_USERNAME: Kerberos username, required with the GSSAPI mechanism. Default: N/A. Applies to: M/I.
  • SASL_GSSAPI_KEY_TAB_PATH: Kerberos keytab path, required with the GSSAPI mechanism. Default: N/A. Applies to: M/I.
  • SASL_GSSAPI_KERBEROS_CONFIG_PATH: Kerberos config path, required with the GSSAPI mechanism. Default: /etc/krb5.conf. Applies to: M/I.
  • SASL_GSSAPI_DISABLE_FAST_NEGOTIATION: Disable FAST negotiation. Default: false. Applies to: M/I.

Broker Collection filtering:

  • LOCAL_ONLY_COLLECTION: Collect only the metrics related to the configured bootstrap broker. Only used if autodiscover_strategy is bootstrap. In environments that use discovery (for example, Kubernetes), this must be set to true; otherwise brokers will be discovered twice (by the integration and by the discovery mechanism), leading to duplicate data. Note that activating this flag skips KafkaTopicSample collection. Default: false. Applies to: M/I.
  • TOPIC_MODE: Determines how many topics we collect. Options are all, none, list, or regex. Default: none. Applies to: M/I.
  • TOPIC_LIST: JSON array of topic names to monitor. Only in effect if topic_mode is set to list. Default: []. Applies to: M/I.
  • TOPIC_REGEX: Regex pattern that matches the topic names to monitor. Only in effect if topic_mode is set to regex. Default: N/A. Applies to: M/I.
  • TOPIC_BUCKET: Used to split topic collection across multiple instances. Should be of the form <bucket number>/<number of buckets>. Default: 1/1. Applies to: M/I.
  • COLLECT_TOPIC_SIZE: Collect the metric Topic size. Options are true or false. This is a resource-intensive metric to collect, especially against many topics. Default: false. Applies to: M/I.
  • COLLECT_TOPIC_OFFSET: Collect the metric Topic offset. Options are true or false. This is a resource-intensive metric to collect, especially against many topics. Default: false. Applies to: M/I.

Configure KafkaConsumerSample and KafkaProducerSample collection

The Kafka integration collects both metrics (M) and inventory (I) information. The "Applies to" note on each setting below indicates which type of collection it can be used for:

  • CLUSTER_NAME: User-defined name to uniquely identify the cluster being monitored. Required. Default: N/A. Applies to: M/I.
  • PRODUCERS: Producers to collect. For each producer, a name, hostname, port, username, and password can be provided in JSON form. name is the producer's name as it appears in Kafka. hostname, port, username, and password are the optional JMX settings and use the defaults if unspecified. Required to produce KafkaProducerSamples. Example: [{"name": "myProducer", "host": "localhost", "port": 24, "username": "me", "password": "secret"}]. Default: []. Applies to: M/I.
  • CONSUMERS: Consumers to collect. For each consumer, a name, hostname, port, username, and password can be specified in JSON form. name is the consumer's name as it appears in Kafka. hostname, port, username, and password are the optional JMX settings and use the defaults if unspecified. Required to produce KafkaConsumerSamples. Example: [{"name": "myConsumer", "host": "localhost", "port": 24, "username": "me", "password": "secret"}]. Default: []. Applies to: M/I.
  • DEFAULT_JMX_HOST: The default host for collecting JMX metrics. If the host field is omitted from a producer or consumer configuration, this value will be used. Default: localhost. Applies to: M/I.
  • DEFAULT_JMX_PORT: The default port for collecting JMX metrics. If the port field is omitted from a producer or consumer configuration, this value will be used. Default: 9999. Applies to: M/I.
  • DEFAULT_JMX_USER: The default user that connects to the JMX host to collect metrics. If the username field is omitted from a producer or consumer configuration, this value will be used. Default: admin. Applies to: M/I.
  • DEFAULT_JMX_PASSWORD: The default password to connect to the JMX host. If the password field is omitted from a producer or consumer configuration, this value will be used. Default: admin. Applies to: M/I.
  • METRICS: Set to true to enable metrics-only collection. Default: false.
  • INVENTORY: Set to true to enable inventory-only collection. Default: false.

JMX SSL and timeout options (Applies to all JMX connections on the instance):

  • KEY_STORE: The filepath of the keystore containing the JMX client's SSL certificate. Default: N/A. Applies to: M/I.
  • KEY_STORE_PASSWORD: The password for the JMX SSL keystore. Default: N/A. Applies to: M/I.
  • TRUST_STORE: The filepath of the truststore containing the JMX server's SSL certificate. Default: N/A. Applies to: M/I.
  • TRUST_STORE_PASSWORD: The password for the JMX truststore. Default: N/A. Applies to: M/I.
  • TIMEOUT: The timeout for individual JMX queries, in milliseconds. Default: 10000. Applies to: M/I.

Configure KafkaOffsetSample collection

The Kafka integration collects both metrics (M) and inventory (I) information. The "Applies to" note on each setting below indicates which type of collection it can be used for:

  • CLUSTER_NAME: User-defined name to uniquely identify the cluster being monitored. Required. Default: N/A. Applies to: M/I.
  • KAFKA_VERSION: The version of the Kafka broker you're connecting to, used for setting optimum API versions. It must match, or be lower than, the version running on the broker. Versions older than 1.0.0 may be missing some features. Note that if the broker binary name is kafka_2.12-2.7.0, the Kafka API version to use is 2.7.0; the preceding 2.12 is the Scala language version. Default: 1.0.0. Applies to: M/I.
  • AUTODISCOVER_STRATEGY: The method of discovering brokers. Options are zookeeper or bootstrap. Default: zookeeper. Applies to: M/I.
  • CONSUMER_OFFSET: Populate consumer offset data in KafkaOffsetSample if set to true. Note that this option skips broker/consumer/producer collection and only collects KafkaOffsetSamples. Default: false. Applies to: M/I.
  • CONSUMER_GROUP_REGEX: Regex pattern that matches the consumer groups to collect offset statistics for. This is limited to collecting statistics for 300 consumer groups. Note: consumer_groups has been deprecated; use this argument instead. This option must be set when CONSUMER_OFFSET is true. Default: N/A. Applies to: M/I.
  • METRICS: Set to true to enable metrics-only collection. Default: false.
  • INVENTORY: Set to true to enable inventory-only collection. Default: false.

Zookeeper autodiscovery arguments (only relevant when autodiscover_strategy is zookeeper):

  • ZOOKEEPER_HOSTS: The list of Apache ZooKeeper hosts (in JSON format) that the integration should connect to. If CONSUMER_OFFSET is set to false, KafkaBrokerSamples and KafkaTopicSamples will be collected. Default: []. Applies to: M/I.
  • ZOOKEEPER_AUTH_SCHEME: The ZooKeeper authentication scheme used to connect. Currently, the only supported value is digest. If omitted, no authentication is used. Default: N/A. Applies to: M/I.
  • ZOOKEEPER_AUTH_SECRET: The ZooKeeper authentication secret used to connect. Should be of the form username:password. Only required if zookeeper_auth_scheme is specified. Default: N/A. Applies to: M/I.
  • ZOOKEEPER_PATH: The ZooKeeper node under which the Kafka configuration resides. Default: /. Applies to: M/I.
  • PREFERRED_LISTENER: Use a specific listener to connect to a broker. If unset, the first listener that passes a successful test connection is used. Supported values are PLAINTEXT, SASL_PLAINTEXT, SSL, and SASL_SSL. Note: the SASL_* protocols only support Kerberos (GSSAPI) authentication. Default: N/A. Applies to: M/I.

Bootstrap broker discovery arguments (only relevant when autodiscover_strategy is bootstrap):

  • BOOTSTRAP_BROKER_HOST: The host for the bootstrap broker. If CONSUMER_OFFSET is set to false, KafkaBrokerSamples and KafkaTopicSamples will be collected. Default: N/A. Applies to: M/I.
  • BOOTSTRAP_BROKER_KAFKA_PORT: The Kafka port for the bootstrap broker. Default: N/A. Applies to: M/I.
  • BOOTSTRAP_BROKER_KAFKA_PROTOCOL: The protocol to use to connect to the bootstrap broker. Supported values are PLAINTEXT, SASL_PLAINTEXT, SSL, and SASL_SSL. Note: the SASL_* protocols only support Kerberos (GSSAPI) authentication. Default: PLAINTEXT. Applies to: M/I.
  • BOOTSTRAP_BROKER_JMX_PORT: The JMX port to use for collection on each broker in the cluster. Note that all discovered brokers should have JMX active on this port. Default: N/A. Applies to: M/I.
  • BOOTSTRAP_BROKER_JMX_USER: The JMX user to use for collection on each broker in the cluster. Default: N/A. Applies to: M/I.
  • BOOTSTRAP_BROKER_JMX_PASSWORD: The JMX password to use for collection on each broker in the cluster. Default: N/A. Applies to: M/I.

JMX SSL and timeout options (Applies to all JMX connections on an instance):

  • KEY_STORE: The filepath of the keystore containing the JMX client's SSL certificate. Default: N/A. Applies to: M/I.
  • KEY_STORE_PASSWORD: The password for the JMX SSL keystore. Default: N/A. Applies to: M/I.
  • TRUST_STORE: The filepath of the truststore containing the JMX server's SSL certificate. Default: N/A. Applies to: M/I.
  • TRUST_STORE_PASSWORD: The password for the JMX truststore. Default: N/A. Applies to: M/I.
  • DEFAULT_JMX_USER: The default user that connects to the JMX host to collect metrics. If the username field is omitted for a JMX host, this value will be used. Default: admin. Applies to: M/I.
  • DEFAULT_JMX_PASSWORD: The default password to connect to the JMX host. If the password field is omitted for a JMX host, this value will be used. Default: admin. Applies to: M/I.
  • TIMEOUT: The timeout for individual JMX queries, in milliseconds. Default: 10000. Applies to: M/I.

Broker TLS connection options (Needed if the broker protocol is SSL or SASL_SSL):

  • TLS_CA_FILE: The certificate authority file for SSL and SASL_SSL listeners, in PEM format. Default: N/A. Applies to: M/I.
  • TLS_CERT_FILE: The client certificate file for SSL and SASL_SSL listeners, in PEM format. Default: N/A. Applies to: M/I.
  • TLS_KEY_FILE: The client key file for SSL and SASL_SSL listeners, in PEM format. Default: N/A. Applies to: M/I.
  • TLS_INSECURE_SKIP_VERIFY: Skip verifying the server's certificate chain and host name. Default: false. Applies to: M/I.

Broker SASL and Kerberos connection options (Needed if the broker protocol is SASL_PLAINTEXT or SASL_SSL):

  • SASL_MECHANISM: The type of SASL authentication to use. Supported options are SCRAM-SHA-512, SCRAM-SHA-256, PLAIN, and GSSAPI. Default: N/A. Applies to: M/I.
  • SASL_USERNAME: SASL username, required with the PLAIN and SCRAM mechanisms. Default: N/A. Applies to: M/I.
  • SASL_PASSWORD: SASL password, required with the PLAIN and SCRAM mechanisms. Default: N/A. Applies to: M/I.
  • SASL_GSSAPI_REALM: Kerberos realm, required with the GSSAPI mechanism. Default: N/A. Applies to: M/I.
  • SASL_GSSAPI_SERVICE_NAME: Kerberos service name, required with the GSSAPI mechanism. Default: N/A. Applies to: M/I.
  • SASL_GSSAPI_USERNAME: Kerberos username, required with the GSSAPI mechanism. Default: N/A. Applies to: M/I.
  • SASL_GSSAPI_KEY_TAB_PATH: Kerberos keytab path, required with the GSSAPI mechanism. Default: N/A. Applies to: M/I.
  • SASL_GSSAPI_KERBEROS_CONFIG_PATH: Kerberos config path, required with the GSSAPI mechanism. Default: /etc/krb5.conf. Applies to: M/I.
  • SASL_GSSAPI_DISABLE_FAST_NEGOTIATION: Disable FAST negotiation. Default: false. Applies to: M/I.

Example configurations
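As an illustrative sketch (all hosts, ports, file paths, and credentials are placeholders), the following file defines two instances: one performing core collection against an SSL listener, and a second one dedicated to consumer offset collection:

integrations:
  # Instance 1: core collection (KafkaBrokerSample and KafkaTopicSample)
  - name: nri-kafka
    env:
      CLUSTER_NAME: my_cluster
      AUTODISCOVER_STRATEGY: bootstrap
      BOOTSTRAP_BROKER_HOST: kafka0.example.com
      BOOTSTRAP_BROKER_KAFKA_PORT: 9093
      BOOTSTRAP_BROKER_KAFKA_PROTOCOL: SSL
      BOOTSTRAP_BROKER_JMX_PORT: 9999
      TLS_CA_FILE: /etc/kafka/ssl/ca.pem
      TLS_CERT_FILE: /etc/kafka/ssl/client.pem
      TLS_KEY_FILE: /etc/kafka/ssl/client.key
      TOPIC_MODE: all
      METRICS: "true"
    interval: 30s
    labels:
      env: production
      role: kafka
    inventory_source: config/kafka
  # Instance 2: consumer offset collection only (KafkaOffsetSample)
  - name: nri-kafka
    env:
      CLUSTER_NAME: my_cluster
      AUTODISCOVER_STRATEGY: bootstrap
      BOOTSTRAP_BROKER_HOST: kafka0.example.com
      BOOTSTRAP_BROKER_KAFKA_PORT: 9093
      BOOTSTRAP_BROKER_KAFKA_PROTOCOL: SSL
      CONSUMER_OFFSET: "true"
      CONSUMER_GROUP_REGEX: '.*'
    interval: 30s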

Find and use data

Data from this service is reported to an integration dashboard.

Kafka data is attached to the following event types: KafkaBrokerSample, KafkaTopicSample, KafkaProducerSample, KafkaConsumerSample, and KafkaOffsetSample.

You can query this data for troubleshooting purposes or to create charts and dashboards.

For more on how to find and use your data, see Understand integration data.

Metric data

The Kafka integration collects the following metric data attributes. Each metric name is prefixed with a category indicator and a period, such as broker. or consumer..

KafkaBrokerSample event

  • broker.bytesWrittenToTopicPerSecond: Number of bytes written to a topic by the broker per second.
  • broker.IOInPerSecond: Network IO into brokers in the cluster in bytes per second.
  • broker.IOOutPerSecond: Network IO out of brokers in the cluster in bytes per second.
  • broker.logFlushPerSecond: Log flush rate.
  • broker.messagesInPerSecond: Incoming messages per second.
  • follower.requestExpirationPerSecond: Rate of request expiration on followers in evictions per second.
  • net.bytesRejectedPerSecond: Rejected bytes per second.
  • replication.isrExpandsPerSecond: Rate of replicas joining the ISR pool.
  • replication.isrShrinksPerSecond: Rate of replicas leaving the ISR pool.
  • replication.leaderElectionPerSecond: Leader election rate.
  • replication.uncleanLeaderElectionPerSecond: Unclean leader election rate.
  • replication.unreplicatedPartitions: Number of unreplicated partitions.
  • request.avgTimeFetch: Average time per fetch request in milliseconds.
  • request.avgTimeMetadata: Average time for metadata requests in milliseconds.
  • request.avgTimeMetadata99Percentile: 99th percentile time for metadata requests in milliseconds.
  • request.avgTimeOffset: Average time for an offset request in milliseconds.
  • request.avgTimeOffset99Percentile: 99th percentile time for offset requests in milliseconds.
  • request.avgTimeProduceRequest: Average time for a produce request in milliseconds.
  • request.avgTimeUpdateMetadata: Average time for a request to update metadata in milliseconds.
  • request.avgTimeUpdateMetadata99Percentile: 99th percentile time for update metadata requests in milliseconds.
  • request.clientFetchesFailedPerSecond: Client fetch request failures per second.
  • request.fetchTime99Percentile: 99th percentile time for fetch requests in milliseconds.
  • request.handlerIdle: Average fraction of time the request handler threads are idle.
  • request.produceRequestsFailedPerSecond: Failed produce requests per second.
  • request.produceTime99Percentile: 99th percentile time for produce requests.
  • topic.diskSize: Topic size on disk. Only present if COLLECT_TOPIC_SIZE is enabled.
  • topic.offset: Topic offset. Only present if COLLECT_TOPIC_OFFSET is enabled.

KafkaConsumerSample event

  • consumer.avgFetchSizeInBytes: Average number of bytes fetched per request for a specific topic.
  • consumer.avgRecordConsumedPerTopic: Average number of records in each request for a specific topic.
  • consumer.avgRecordConsumedPerTopicPerSecond: Average number of records consumed per second for a specific topic, in records per second.
  • consumer.bytesInPerSecond: Consumer bytes per second.
  • consumer.fetchPerSecond: The minimum rate at which the consumer sends fetch requests to a broker, in requests per second.
  • consumer.maxFetchSizeInBytes: Maximum number of bytes fetched per request for a specific topic.
  • consumer.maxLag: Maximum consumer lag.
  • consumer.messageConsumptionPerSecond: Rate of consumer message consumption in messages per second.
  • consumer.offsetKafkaCommitsPerSecond: Rate of offset commits to Kafka in commits per second.
  • consumer.offsetZooKeeperCommitsPerSecond: Rate of offset commits to ZooKeeper in writes per second.
  • consumer.requestsExpiredPerSecond: Rate of delayed consumer request expiration in evictions per second.

KafkaProducerSample event

  • producer.ageMetadataUsedInMilliseconds: Age in seconds of the current producer metadata being used.
  • producer.availableBufferInBytes: Total amount of buffer memory that is not being used, in bytes.
  • producer.avgBytesSentPerRequestInBytes: Average number of bytes sent per partition per request.
  • producer.avgCompressionRateRecordBatches: Average compression rate of record batches.
  • producer.avgRecordAccumulatorsInMilliseconds: Average time in milliseconds that record batches spent in the record accumulator.
  • producer.avgRecordSizeInBytes: Average record size in bytes.
  • producer.avgRecordsSentPerSecond: Average number of records sent per second.
  • producer.avgRecordsSentPerTopicPerSecond: Average number of records sent per second for a topic.
  • producer.AvgRequestLatencyPerSecond: Producer average request latency.
  • producer.avgThrottleTime: Average time that a request was throttled by a broker, in milliseconds.
  • producer.bufferMemoryAvailableInBytes: Maximum amount of buffer memory the client can use, in bytes.
  • producer.bufferpoolWaitTime: Fraction of time an appender waits for space allocation.
  • producer.bytesOutPerSecond: Producer bytes per second out.
  • producer.compressionRateRecordBatches: Average compression rate of record batches for a topic.
  • producer.iOWaitTime: Producer I/O wait time in milliseconds.
  • producer.maxBytesSentPerRequestInBytes: Maximum number of bytes sent per partition per request.
  • producer.maxRecordSizeInBytes: Maximum record size in bytes.
  • producer.maxRequestLatencyInMilliseconds: Maximum request latency in milliseconds.
  • producer.maxThrottleTime: Maximum time a request was throttled by a broker, in milliseconds.
  • producer.messageRatePerSecond: Producer messages per second.
  • producer.responsePerSecond: Number of producer responses per second.
  • producer.requestPerSecond: Number of producer requests per second.
  • producer.requestsWaitingResponse: Current number of in-flight requests awaiting a response.
  • producer.threadsWaiting: Number of user threads blocked waiting for buffer memory to enqueue their records.

KafkaTopicSample event

  • topic.diskSize: Current topic disk size per broker in bytes.
  • topic.partitionsWithNonPreferredLeader: Number of partitions per topic that are not being led by their preferred replica.
  • topic.respondMetaData: Number of topics responding to metadata requests.
  • topic.retentionSizeOrTime: Whether a partition is retained by size or by both size and time. A value of 0 = time and a value of 1 = both size and time.
  • topic.underReplicatedPartitions: Number of partitions per topic that are under-replicated.

KafkaOffsetSample event

  • consumer.offset: The last consumed offset on a partition by the consumer group.
  • consumer.lag: The difference between a broker's high water mark and the consumer's offset (consumer.hwm - consumer.offset).
  • consumer.hwm: The offset of the last message written to a partition (high water mark).
  • consumer.totalLag: The sum of lags across partitions consumed by a consumer.
  • consumerGroup.totalLag: The sum of lags across all partitions consumed by a consumer group.
  • consumerGroup.maxLag: The maximum lag across all partitions consumed by a consumer group.

Inventory data

The Kafka integration captures the non-default broker and topic configuration parameters, and collects the topic partition schemes as reported by ZooKeeper. The data is available on the Inventory UI page under the config/kafka source.

Troubleshooting

Troubleshooting tips:

Check the source code

This integration is open source software. That means you can browse its source code and send improvements or create your own fork and build it.

For more help

If you need more help, check out these support and learning resources:
