Configuring Kafka with SASL/SCRAM on Windows 11



1. Download and install apache-zookeeper-3.9.2

Configure \conf\zoo.cfg:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=D:/server/data/zookeeper
dataLogDir=D:/server/data/zookeeperLog
# admin server port
#admin.serverPort=8887
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

## Metrics Providers
#
# https://prometheus.io Metrics Exporter
#metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
#metricsProvider.httpHost=0.0.0.0
#metricsProvider.httpPort=7000
#metricsProvider.exportJvmInfo=true
2. Start zkServer.cmd
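
To confirm ZooKeeper is listening before moving on, you can connect with the CLI shipped in the same distribution (a quick smoke test, not part of the original steps; adjust the path to your unpacked directory):

\apache-zookeeper-3.9.2\bin\zkCli.cmd -server 127.0.0.1:2181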

3. Download Kafka and rename the extracted directory to kafka2.12.3.8.0

4. Edit \config\server.properties to use SCRAM-SHA-256:
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

#
# This configuration file is intended for use in ZK-based mode, where Apache ZooKeeper is required.
# See kafka.server.KafkaConfig for additional details and defaults
#

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0

############################# Socket Server Settings #############################

# The address the socket server listens on. If not configured, the host name will be equal to the value of
# java.net.InetAddress.getCanonicalHostName(), with PLAINTEXT listener name, and port 9092.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092

# Enable the SASL mechanism
sasl.enabled.mechanisms=SCRAM-SHA-256
# SASL mechanism used for inter-broker communication
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
security.inter.broker.protocol=SASL_PLAINTEXT
# Enforce authentication: deny access when no ACL is found
allow.everyone.if.no.acl.found=false

# JAAS configuration for the Kafka user
listener.name.sasl_plaintext.scram-sha-256.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="kafka" \
  password="kafkaAdmin#20240304";

# If using SSL, configure the SSL file paths (optional)
# ssl.keystore.location=/path/to/keystore.jks
# ssl.keystore.password=your_keystore_password
# ssl.key.password=your_key_password

# Kafka listener address and port
listeners=SASL_PLAINTEXT://127.0.0.1:9092
advertised.listeners=SASL_PLAINTEXT://127.0.0.1:9092

# Listener name, hostname and port the broker will advertise to clients.
# If not set, it uses the value for "listeners".
#advertised.listeners=PLAINTEXT://your.host.name:9092

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

############################# Log Basics #############################

# A comma separated list of directories under which to store log files
# Kafka data log directory
log.dirs=D:/server/data/kafka-logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings  #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
#log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
# ZooKeeper address
zookeeper.connect=localhost:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=18000

############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0

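One caveat about the ACL setting above (this note and snippet are additions, not part of the original file): allow.everyone.if.no.acl.found only has an effect when an authorizer is configured; without authorizer.class.name the broker performs no ACL checks at all for authenticated clients. For ZK-based Kafka 3.x, enforcement would look roughly like:

# assumed additions for ACL enforcement (not in the original config)
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
# let the broker's own user bypass ACLs so inter-broker traffic keeps working
super.users=User:kafka
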
Configure the JAAS file \kafka2.12.3.8.0\config\kafka_server_jaas.conf:

KafkaServer {
    org.apache.kafka.common.security.scram.ScramLoginModule required
    username="kafka"
    password="kafkaAdmin#20240304";
};

Client {
   org.apache.kafka.common.security.scram.ScramLoginModule required
   username="kafka"
   password="kafkaAdmin#20240304";
};

Set the environment variable (optional: this JAAS file is an alternative to the sasl.jaas.config entry already set in server.properties)

set KAFKA_OPTS=-Djava.security.auth.login.config=D:/server/kafka2.12.3.8.0/config/kafka_server_jaas.conf

Create SCRAM credentials for the kafka user (the command below registers both SCRAM-SHA-256 and SCRAM-SHA-512):

\kafka2.12.3.8.0\bin\windows\kafka-configs --zookeeper localhost:2181 --alter --add-config SCRAM-SHA-256=[iterations=4096,password=kafkaAdmin#20240304],SCRAM-SHA-512=[password=kafkaAdmin#20240304] --entity-type users --entity-name kafka
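
To verify that the credentials landed in ZooKeeper, the same tool supports a describe query; it should list the SCRAM mechanisms registered for the user:

\kafka2.12.3.8.0\bin\windows\kafka-configs --zookeeper localhost:2181 --describe --entity-type users --entity-name kafka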

Configure the Kafka client \kafka2.12.3.8.0\config\client.properties:

security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256

sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="kafka" \
  password="kafkaAdmin#20240304";

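Applications pass the same three settings programmatically. Below is a minimal sketch with the standard Java client (an illustration, not part of the original steps; it assumes the kafka-clients dependency is on the classpath, and the class name and record key/value are made up):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ScramProducerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "127.0.0.1:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        // Same three SASL settings as client.properties above
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "SCRAM-SHA-256");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.scram.ScramLoginModule required "
                + "username=\"kafka\" password=\"kafkaAdmin#20240304\";");

        // Send one record and flush so the broker acknowledges it before exit
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test-topic", "key1", "hello via SCRAM"));
            producer.flush();
        }
    }
}
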
5. Start Kafka: from \kafka2.12.3.8.0\bin\windows, run kafka-server-start.bat ..\..\config\server.properties
6. Test the configuration

Create a topic:

kafka-topics --create --topic test-topic --bootstrap-server localhost:9092 --partitions 1 --replication-factor 1 --command-config ../../config/client.properties
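
As a quick check (not in the original steps), the new topic should appear in a list query using the same client config:

kafka-topics --list --bootstrap-server localhost:9092 --command-config ../../config/client.properties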
 

Start a producer and enter some data:

kafka-console-producer --topic test-topic --bootstrap-server localhost:9092 --producer.config  ../../config/client.properties
 

Start a consumer to display the producer's data:

kafka-console-consumer --topic test-topic --bootstrap-server localhost:9092 --from-beginning --consumer.config ../../config/client.properties
 

The test succeeds.
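
The console consumer above has a direct programmatic counterpart; a minimal sketch in the same vein (again illustrative: the group id and poll timeout are arbitrary choices, not from the original):

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ScramConsumerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "127.0.0.1:9092");
        props.put("group.id", "scram-demo");        // illustrative group id
        props.put("auto.offset.reset", "earliest"); // mirrors --from-beginning
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "SCRAM-SHA-256");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.scram.ScramLoginModule required "
                + "username=\"kafka\" password=\"kafkaAdmin#20240304\";");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("test-topic"));
            // One bounded poll is enough for a smoke test
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("%s: %s%n", record.key(), record.value());
            }
        }
    }
}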

7. Install and configure a Kafka UI message-viewing tool

Download the source from Git and build kafdrop-4.0.3-SNAPSHOT.jar (skip the tests during the build); place it under D:\soft\kafka
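
Assuming Kafdrop's standard Maven build, skipping the tests looks like:

mvn clean package -DskipTests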

Configure the UI client connection file kafka.properties:

security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="kafka" password="kafkaAdmin#20240304";

Start the UI from the command line:

java --add-opens=java.base/sun.nio.ch=ALL-UNNAMED -jar kafdrop-4.0.3-SNAPSHOT.jar  --kafka.brokerConnect=127.0.0.1:9092 --kafka.properties=kafka.properties

Or specify the console port explicitly:

java --add-opens=java.base/sun.nio.ch=ALL-UNNAMED -jar kafdrop-4.0.3-SNAPSHOT.jar  --kafka.brokerConnect=localhost:9092 --server.port=9999 --management.server.port=9999 --kafka.properties=kafka.properties
 

Open the console at http://localhost:9000/ (Kafdrop's default port; use http://localhost:9999/ if --server.port=9999 was set as above) to view the previously created topic test-topic.

Note: when Kafka was configured under Windows 11 WSL, it was only usable from inside WSL; SCRAM-SHA-256 access from the Windows host failed, so that approach was abandoned.

It seems WSL is only suited to simple tool usage; configurations involving more security- and network-related setup do not work well there.