Install Kafka Lag Exporter on GCP with Aiven Broker

Mohammad Humayun Khan
4 min read · Mar 2, 2024

In this post, we’ll set up kafka-lag-exporter against Aiven Kafka brokers (secured with SASL over SSL) on an Ubuntu-based GCP instance.


Without wasting time, let’s go through the steps:

1. Download and unpack the Kafka Lag Exporter from a GitHub release.

You can download the latest release of Kafka Lag Exporter from its GitHub releases page. Use the wget command to download it and the tar command to unpack it.

wget https://github.com/lightbend/kafka-lag-exporter/releases/download/v0.6.8/kafka-lag-exporter-0.6.8.tgz
tar -xvf kafka-lag-exporter-0.6.8.tgz

2. Configure the Kafka Lag Exporter by editing the application.conf file.

Inside the Kafka Lag Exporter directory, there should be a file named application.conf. This file contains the configuration for Kafka Lag Exporter. Open this file in a text editor and modify the settings to match your Kafka cluster.

sudo nano kafka-lag-exporter-0.6.8/conf/application.conf

In this file, you need to specify the details of your Kafka cluster, such as the bootstrap brokers, security protocol, SASL mechanism, and the location of the truststore file. For generating the truststore file, check steps 4 and 5.

kafka-lag-exporter {
  reporters.prometheus.port = 8000
  clusters = [
    {
      name = "<obtain it from aiven console connection info>"
      bootstrap-brokers = "<obtain it from aiven console connection info>"
      consumer-properties = {
        security.protocol = "SASL_SSL"
        sasl.mechanism = "SCRAM-SHA-512"
        sasl.jaas.config = "org.apache.kafka.common.security.scram.ScramLoginModule required username='<obtain it from aiven console connection info>' password='<obtain it from aiven console connection info>';"
        ssl.truststore.location = "/home/ubuntu/kafka-lag-exporter/truststore.jks"
        ssl.truststore.password = "<add password from step 5>"
        ssl.endpoint.identification.algorithm = "" // to disable domain checking of host
      }
      admin-client-properties = {
        security.protocol = "SASL_SSL"
        sasl.mechanism = "SCRAM-SHA-512"
        sasl.jaas.config = "org.apache.kafka.common.security.scram.ScramLoginModule required username='<obtain it from aiven console connection info>' password='<obtain it from aiven console connection info>';"
        ssl.truststore.location = "/home/ubuntu/kafka-lag-exporter/truststore.jks"
        ssl.truststore.password = "<add password from step 5>"
        ssl.endpoint.identification.algorithm = "" // to disable domain checking of host
      }
    }
  ]
}

3. Edit the docker-compose.yaml file: Make sure it looks something like this:

version: '3'
services:
  kafka-lag-exporter:
    image: lightbend/kafka-lag-exporter:0.6.8
    ports:
      - "8000:8000"
    volumes:
      - /home/ubuntu/kafka-lag-exporter:/home/ubuntu/kafka-lag-exporter
      - /home/ubuntu/kafka-lag-exporter/conf:/opt/docker/conf/
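Before moving on, it's worth sanity-checking the compose file. Assuming Docker Compose is installed, a quick validation looks like this (the file path matches the directory layout used in this guide):

```shell
# Validate and print the resolved docker-compose.yaml;
# a non-zero exit code means the file has a syntax error.
sudo docker-compose -f /home/ubuntu/kafka-lag-exporter/docker-compose.yaml config
```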

4. Place the truststore file and the CA certificate in the kafka-lag-exporter directory: The CA certificate is provided by Aiven and should be downloaded from the Aiven console. Paste its contents into a file named ca.pem inside the kafka-lag-exporter directory.
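If you happen to have the Aiven CLI (avn) installed and authenticated, you can also fetch the project CA certificate directly from the terminal instead of copy-pasting from the console; this is an optional alternative, not part of the original setup:

```shell
# Download the project CA certificate into the kafka-lag-exporter directory
# (assumes the avn client is configured for your Aiven project)
avn project ca-get --target-filepath /home/ubuntu/kafka-lag-exporter/ca.pem
```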

5. Generate a truststore file from the CA certificate: You can use the keytool command (which comes with the JDK) to generate a truststore file from the CA certificate. Make sure to use the same JDK that is being used by the Kafka Lag Exporter; otherwise, you’ll see errors.
Our Kafka Lag Exporter used java-8-openjdk-amd64, so we used that.

/usr/lib/jvm/java-8-openjdk-amd64/bin/keytool -import -trustcacerts -keystore /path/to/your/keystore -storepass your-keystore-password -noprompt -alias your-alias -file /path/to/server/certificate

Replace /path/to/your/keystore with the path to your Java keystore, your-keystore-password with the password for your keystore, your-alias with a unique alias for the certificate, and /path/to/server/certificate with the path to the server’s certificate.
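With the paths used in this setup, the command would look roughly like the following; the alias and password here are example values, so substitute your own:

```shell
# Import Aiven's CA certificate into a new truststore
/usr/lib/jvm/java-8-openjdk-amd64/bin/keytool -import -trustcacerts \
  -keystore /home/ubuntu/kafka-lag-exporter/truststore.jks \
  -storepass '<your-truststore-password>' \
  -noprompt -alias aiven-ca \
  -file /home/ubuntu/kafka-lag-exporter/ca.pem

# Verify the certificate landed in the truststore
/usr/lib/jvm/java-8-openjdk-amd64/bin/keytool -list \
  -keystore /home/ubuntu/kafka-lag-exporter/truststore.jks \
  -storepass '<your-truststore-password>'
```

Remember to put the same password into ssl.truststore.password in application.conf.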
Now, this should be the directory structure:

ubuntu@testing-gcp:~/kafka-lag-exporter$ ls
Chart.yaml ca.pem conf docker-compose.yaml kafka-lag-exporter templates truststore.jks values.yaml
ubuntu@testing-gcp:~/kafka-lag-exporter$ ls conf
application.conf

6. Test the setup: Run the Kafka Lag Exporter using Docker Compose.

  • Start: sudo docker-compose up -d
  • Stop: sudo docker-compose down
  • To check the logs for errors and make sure everything works as expected: sudo docker-compose logs -f
  • If your setup is working fine, you can expect to see something like this:
kafka-lag-exporter_1  | 2024-03-02 07:42:38,332 INFO  c.l.k.ConsumerGroupCollector$ akka://kafka-lag-exporter/user/consumer-group-collector-host - Polling in 30 seconds 
kafka-lag-exporter_1 | 2024-03-02 07:43:08,349 INFO c.l.k.ConsumerGroupCollector$ akka://kafka-lag-exporter/user/consumer-group-collector-host - Collecting offsets
kafka-lag-exporter_1 | 2024-03-02 07:43:08,413 INFO c.l.k.ConsumerGroupCollector$ akka://kafka-lag-exporter/user/consumer-group-collector-host - Updating lookup tables
kafka-lag-exporter_1 | 2024-03-02 07:43:08,413 INFO c.l.k.ConsumerGroupCollector$ akka://kafka-lag-exporter/user/consumer-group-collector-host - Reporting offsets
kafka-lag-exporter_1 | 2024-03-02 07:43:08,414 INFO c.l.k.ConsumerGroupCollector$ akka://kafka-lag-exporter/user/consumer-group-collector-host - Clearing evicted metrics

Otherwise, check for errors and fix them.
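Once the container is up, you can also confirm the metrics endpoint responds. Assuming the exporter listens on port 8000 as configured above:

```shell
# Fetch the first few exported metrics; kafka_consumergroup_* series
# should appear once a poll cycle has completed.
curl -s http://localhost:8000/metrics | head -n 20
```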

7. Use Supervisor to manage the Docker Compose process: First, install Supervisor (skip this if it’s already installed) by running the following command in the terminal:

sudo apt-get install -y supervisor

Then, you need to create a Supervisor configuration file for Kafka Lag Exporter. The file should be located in the /etc/supervisor/conf.d/ directory and should look something like this:

[program:kafka-lag-exporter]
command=/usr/local/bin/docker-compose -f /path/to/kafka-lag-exporter/docker-compose.yaml up
directory=/path/to/kafka-lag-exporter
autostart=true
autorestart=true
stderr_logfile=/var/log/kafka-lag-exporter.err.log
stdout_logfile=/var/log/kafka-lag-exporter.out.log

Replace /path/to/kafka-lag-exporter with the actual path to your Kafka Lag Exporter directory.

Finally, you can update Supervisor and start Kafka Lag Exporter by running the following commands in the terminal:

sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl start kafka-lag-exporter

To check for logs:

tail -n 100 /var/log/kafka-lag-exporter.out.log

tail -n 100 /var/log/kafka-lag-exporter.err.log (for errors)

8. Open the necessary ports in your GCP firewall: Ports 8000 and 27416 need to be open for Kafka Lag Exporter.
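This can be done from the GCP console, or with the gcloud CLI along these lines; the rule name and network here are example values, so adjust them to your project:

```shell
# Allow inbound traffic to the exporter's metrics port
gcloud compute firewall-rules create allow-kafka-lag-exporter \
  --network=default \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:8000
```

Repeat (or extend --rules) for any other port your setup requires.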

9. After these steps, you should have a running Kafka Lag Exporter that is managed by Supervisor and accessible from the internet.

10. Prometheus Scrape Config for your exporter: Kafka Lag Exporter exposes a set of predefined metrics related to Kafka consumer group lag. These metrics are available at the /metrics endpoint. Your Prometheus scrape configuration for the Kafka Lag Exporter should look like this:

- job_name: 'kafka-lag-exporter'
  metrics_path: '/metrics'
  static_configs:
    - targets: ['<kafka-lag-exporter-machine-ip>:8000']

This configuration tells Prometheus to scrape metrics from the Kafka Lag Exporter’s /metrics endpoint without any additional parameters. Save the changes and restart Prometheus:

sudo systemctl restart prometheus

Check if Prometheus is working fine after your change:

sudo systemctl status prometheus
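You can also validate the configuration file itself with promtool, which ships with Prometheus (the config path below is the usual default and may differ on your machine):

```shell
# Validate the Prometheus configuration; run this before restarting
# to catch YAML or scrape-config errors early.
promtool check config /etc/prometheus/prometheus.yml
```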

I hope you find the info useful. Have a great day!
