[Repost] Kafka Dashboard

Tags: kafka, dashboard

Editor's Summary

This dashboard gives an overview of Kafka resource usage and consumer lag. Metrics are collected by attaching the Prometheus jmx_exporter Java agent to each Kafka broker and scraping it from a Prometheus server. The setup has three steps: start Kafka with the agent attached (a startwithagent.bat script on Windows, or a start.sh script on Linux, that sets KAFKA_OPTS to load jmx_prometheus_javaagent-0.18.0.jar on port 6660 with config.yml), define the metric renaming rules in config.yml (note that the emulated summary metrics have no _sum series), and add a "kafka-jmx" scrape job to prometheus.yml whose static target points at the agent endpoint on port 6660. The exported metrics can then be checked at http://{Prometheus Server}:6660/metrics and visualized with the Grafana dashboard.

Main Content

https://grafana.com/grafana/dashboards/18276-kafka-dashboard/

 

Kafka resource usage and consumer lag overview


Use jmx_exporter to monitor Kafka and collect its metrics.

Copy jmx_prometheus_javaagent-0.18.0.jar into the Kafka installation directory and edit config.yml.
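If the agent jar is not already on the broker host, a minimal sketch for fetching it with curl is shown below; the Maven Central URL follows the io.prometheus.jmx coordinates and is an assumption not taken from the original post, so verify it against the jmx_exporter release page, and adjust the placeholder KAFKA_HOME path to your installation:

# KAFKA_HOME is a placeholder path; change it to your Kafka installation.
KAFKA_HOME=/opt/kafka_2.12-2.8.1
# Fetch the agent jar and place it next to the rules file under KAFKA_HOME.
curl -L -o "$KAFKA_HOME/jmx_prometheus_javaagent-0.18.0.jar" \
  "https://repo1.maven.org/maven2/io/prometheus/jmx/jmx_prometheus_javaagent/0.18.0/jmx_prometheus_javaagent-0.18.0.jar"
cp config.yml "$KAFKA_HOME/config.yml"
Shell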

On Windows, create a startup script such as startwithagent.bat that attaches the agent when Kafka starts, for example:

SET KAFKA_HOME=E:\kafka_2.12-2.8.1
REM Load the JMX exporter agent on port 6660 with the rules in config.yml
SET KAFKA_OPTS=-javaagent:%KAFKA_HOME%\jmx_prometheus_javaagent-0.18.0.jar=6660:%KAFKA_HOME%\config.yml

cd %KAFKA_HOME%\bin\windows

kafka-server-start.bat %KAFKA_HOME%\config\server.properties
Batch
 

Contents of config.yml (if no rules are added, the exporter falls back to its defaults):

lowercaseOutputName: true

rules:
# Special cases and very specific rules
- pattern : kafka.server<type=(.+), name=(.+), clientId=(.+), topic=(.+), partition=(.*)><>Value
  name: kafka_server_$1_$2
  type: GAUGE
  labels:
    clientId: "$3"
    topic: "$4"
    partition: "$5"
- pattern : kafka.server<type=(.+), name=(.+), clientId=(.+), brokerHost=(.+), brokerPort=(.+)><>Value
  name: kafka_server_$1_$2
  type: GAUGE
  labels:
    clientId: "$3"
    broker: "$4:$5"
- pattern : kafka.coordinator.(\w+)<type=(.+), name=(.+)><>Value
  name: kafka_coordinator_$1_$2_$3
  type: GAUGE

# Generic per-second counters with 0-2 key/value pairs
- pattern: kafka.(\w+)<type=(.+), name=(.+)PerSec\w*, (.+)=(.+), (.+)=(.+)><>Count
  name: kafka_$1_$2_$3_total
  type: COUNTER
  labels:
    "$4": "$5"
    "$6": "$7"
- pattern: kafka.(\w+)<type=(.+), name=(.+)PerSec\w*, (.+)=(.+)><>Count
  name: kafka_$1_$2_$3_total
  type: COUNTER
  labels:
    "$4": "$5"
- pattern: kafka.(\w+)<type=(.+), name=(.+)PerSec\w*><>Count
  name: kafka_$1_$2_$3_total
  type: COUNTER

- pattern: kafka.server<type=(.+), client-id=(.+)><>([a-z-]+)
  name: kafka_server_quota_$3
  type: GAUGE
  labels:
    resource: "$1"
    clientId: "$2"

- pattern: kafka.server<type=(.+), user=(.+), client-id=(.+)><>([a-z-]+)
  name: kafka_server_quota_$4
  type: GAUGE
  labels:
    resource: "$1"
    user: "$2"
    clientId: "$3"

# Generic gauges with 0-2 key/value pairs
- pattern: kafka.(\w+)<type=(.+), name=(.+), (.+)=(.+), (.+)=(.+)><>Value
  name: kafka_$1_$2_$3
  type: GAUGE
  labels:
    "$4": "$5"
    "$6": "$7"
- pattern: kafka.(\w+)<type=(.+), name=(.+), (.+)=(.+)><>Value
  name: kafka_$1_$2_$3
  type: GAUGE
  labels:
    "$4": "$5"
- pattern: kafka.(\w+)<type=(.+), name=(.+)><>Value
  name: kafka_$1_$2_$3
  type: GAUGE

# Emulate Prometheus 'Summary' metrics for the exported 'Histogram's.
#
# Note that these are missing the '_sum' metric!
- pattern: kafka.(\w+)<type=(.+), name=(.+), (.+)=(.+), (.+)=(.+)><>Count
  name: kafka_$1_$2_$3_count
  type: COUNTER
  labels:
    "$4": "$5"
    "$6": "$7"
- pattern: kafka.(\w+)<type=(.+), name=(.+), (.+)=(.*), (.+)=(.+)><>(\d+)thPercentile
  name: kafka_$1_$2_$3
  type: GAUGE
  labels:
    "$4": "$5"
    "$6": "$7"
    quantile: "0.$8"
- pattern: kafka.(\w+)<type=(.+), name=(.+), (.+)=(.+)><>Count
  name: kafka_$1_$2_$3_count
  type: COUNTER
  labels:
    "$4": "$5"
- pattern: kafka.(\w+)<type=(.+), name=(.+), (.+)=(.*)><>(\d+)thPercentile
  name: kafka_$1_$2_$3
  type: GAUGE
  labels:
    "$4": "$5"
    quantile: "0.$6"
- pattern: kafka.(\w+)<type=(.+), name=(.+)><>Count
  name: kafka_$1_$2_$3_count
  type: COUNTER
- pattern: kafka.(\w+)<type=(.+), name=(.+)><>(\d+)thPercentile
  name: kafka_$1_$2_$3
  type: GAUGE
  labels:
    quantile: "0.$4"
YAML
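As a quick sanity check of the rules above: with lowercaseOutputName enabled, the generic per-second counter rule should turn the broker MBean kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec,topic=<topic> into kafka_server_brokertopicmetrics_messagesinpersec_total{topic="<topic>"}. A minimal sketch, assuming the agent is already listening on port 6660 of the local broker:

# Look for the renamed per-second counter in the exporter output.
# Adjust host and port if the agent runs elsewhere.
curl -s http://localhost:6660/metrics | grep -i 'messagesinpersec_total' | head
Shell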
 

On Linux, add a similar wrapper script or modify the startup script, e.g. start.sh:

KAFKA_HOME=/xxx/kafka_2.12-2.8.1
# Load the JMX exporter agent on port 6660 with the rules in config.yml
export KAFKA_OPTS="-javaagent:$KAFKA_HOME/jmx_prometheus_javaagent-0.18.0.jar=6660:$KAFKA_HOME/config.yml"

$KAFKA_HOME/bin/kafka-server-start.sh -daemon $KAFKA_HOME/config/server.properties
Shell
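To confirm the broker actually came up with the agent attached, a quick check on the broker host (a local shell and the default port 6660 are assumed):

# The agent should be listening on the port configured in KAFKA_OPTS (6660 here).
ss -ltn | grep 6660
# The endpoint should return a plain-text Prometheus exposition page.
curl -s http://localhost:6660/metrics | head -n 20
Shell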
 

Add the scrape configuration to Prometheus: in prometheus.yml, add the following job:

- job_name: "kafka-jmx"

  # metrics_path defaults to '/metrics'
  # scheme defaults to 'http'.

  static_configs:
    # 6660 is the port the JMX exporter agent listens on
    - targets: ["{Prometheus Server}:6660"]
YAML
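Before restarting Prometheus it may be worth validating the edited file; the reload call below only works if Prometheus was started with --web.enable-lifecycle, otherwise restart the process instead. The config path and local address are assumptions:

# Validate prometheus.yml syntax.
promtool check config /etc/prometheus/prometheus.yml
# Ask a lifecycle-enabled Prometheus to reload its configuration in place.
curl -X POST http://localhost:9090/-/reload
Shell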
 

Open http://{Prometheus Server}:6660/metrics to check the exported Kafka metrics.
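Once Prometheus has scraped the job at least once, the target can be confirmed healthy from the Prometheus HTTP API before importing dashboard 18276 into Grafana; the Prometheus address below is an assumption:

# up{job="kafka-jmx"} should return 1 once the exporter endpoint is scraped successfully.
curl -sG http://localhost:9090/api/v1/query --data-urlencode 'query=up{job="kafka-jmx"}'
Shell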
