
In a highly available Kafka cluster, why can't consumers consume data after one node goes down?

Updated: 2023-04-08 23:44:09

1. Assume a Kafka cluster with 3 brokers

     kafka01  kafka02  kafka03

2. Create topic test (3 partitions, 3 replicas)

kafka-topics.sh --create --topic 'test' --zookeeper 'hadoop01:2181,hadoop02:2181,hadoop03:2181'  --partitions 3 --replication-factor 3

3. Scenario

3.1 The producer writes data

kafka-console-producer.sh --broker-list 'hadoop01:9092,hadoop02:9092,hadoop03:9092' --topic 'test'
... Cluster ID: qdP2jzDLRcautzTjQ4Lvfg
>12
>22

3.2 The consumer reads data

Consumer group groupid:2222 is consuming data from topic test:

kafka-console-consumer.sh --topic 'test' --bootstrap-server kafka01:9092,kafka02:9092,kafka03:9092 --group 2222
2020-11-12T17:11:15,594 INFO [main] org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=console-consumer-11985] Resetting offset for partition flinkkafka333-2 to offset 8.
12
22

Now kafka01 goes down. Keep producing:

kafka-console-producer.sh --broker-list 'hadoop01:9092,hadoop02:9092,hadoop03:9092' --topic 'test'
... Cluster ID: qdP2jzDLRcautzTjQ4Lvfg
>3344

The consumer stops receiving data and logs the following warnings:

2020-11-12T17:11:43,568 WARN [kafka-coordinator-heartbeat-thread | console-consumer-11985] org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=console-consumer-11985] Connection to node 2147483646 (/192.168.70.115:9092) could not be established. Broker may not be available.
2020-11-12T17:11:43,569 INFO [kafka-coordinator-heartbeat-thread | console-consumer-11985] org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=console-consumer-11985] Group coordinator 192.168.70.115:9092 (id: 2147483646 rack: null) is unavailable or invalid, will attempt rediscovery
2020-11-12T17:11:43,572 INFO [main] org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=console-consumer-11985] Discovered group coordinator 192.168.70.115:9092 (id: 2147483646 rack: null)

4. Why does this happen?

4.1 Why can't the consumer read topic test?

Kafka's internal topic __consumer_offsets defaults to a replication factor of 1, and in this cluster all 50 of its partitions (0-49) happen to sit on kafka01. With kafka01 down, consumer group groupid:2222 can no longer reach its group coordinator or read back its committed offsets:

[root@hadoop01 kafka-logs]# ls
cdn_events-0                      kafka-test-0
cleaner-offset-checkpoint         log-start-offset-checkpoint
meta.properties                   mysqlSinkTest-0
recovery-point-offset-checkpoint  replication-offset-checkpoint
test_log-0                        wordCount_input-1
wordCount_output-1
__consumer_offsets-0  __consumer_offsets-1  ...  __consumer_offsets-49   (all 50 partition directories)
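Which __consumer_offsets partition (and thus which broker) acts as a group's coordinator is deterministic: Kafka hashes the group id with Java's String.hashCode, masks it non-negative, and takes it modulo the partition count (50 by default). A minimal Python sketch of that lookup, assuming default settings (the group id 2222 is the one used above; function names are mine):

```python
def java_string_hashcode(s: str) -> int:
    """Java String.hashCode(): h = 31*h + c over the chars, with 32-bit signed overflow."""
    h = 0
    for c in s:
        h = (31 * h + ord(c)) & 0xFFFFFFFF
    # reinterpret as a signed 32-bit integer, as Java would
    return h - 0x100000000 if h >= 0x80000000 else h


def coordinator_partition(group_id: str, num_partitions: int = 50) -> int:
    """Partition of __consumer_offsets that stores this group's offsets.

    Kafka masks the hash to a non-negative value before the modulo,
    then the leader of that partition is the group coordinator.
    """
    return (java_string_hashcode(group_id) & 0x7FFFFFFF) % num_partitions


print(coordinator_partition("2222"))  # → 0
```

So group 2222 depends on __consumer_offsets-0; with every partition replica on kafka01 and replication factor 1, losing that broker takes the coordinator down with it.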

4.2 Why can the producer still write to topic test?

Topic test has a replication factor of 3, so even though its replicas on kafka01 are gone, copies remain on kafka02 and kafka03 and a new leader can be elected from them. Writes to topic test therefore still succeed.
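The contrast between the two topics can be captured in a toy availability model (this is not Kafka code, just an illustration; the function and its parameters are made up):

```python
def write_succeeds(replication_factor: int, brokers_down: int,
                   min_insync_replicas: int = 1, acks: str = "all") -> bool:
    """Toy model: a partition stays writable if at least one replica is alive
    (so a leader can be elected) and, with acks=all, the surviving replica
    count still meets min.insync.replicas."""
    alive = replication_factor - brokers_down
    if alive <= 0:
        return False          # no replica left, no leader to elect
    if acks == "all":
        return alive >= min_insync_replicas
    return True               # acks=0/1 only needs a live leader

# topic test: RF=3, one broker down -> still writable
print(write_succeeds(3, 1))   # → True
# __consumer_offsets: RF=1 and its only replica is on the dead broker
print(write_succeeds(1, 1))   # → False
```

The same model also shows why min.insync.replicas matters: with RF=3 and min.insync.replicas=2, losing two brokers makes acks=all writes fail even though a leader still exists.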

4.3 After restarting kafka01, the consumer recovers

Once kafka01 is back up, the consumer rediscovers the group coordinator and resumes consuming:

2020-11-12T17:05:28,033 INFO [main] org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=2222] Discovered group coordinator 192.168.70.115:9092 (id: 2147483646 rack: null)
2020-11-12T17:05:28,034 INFO [main] org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=2222] (Re-)joining group
2020-11-12T17:05:28,141 INFO [main] org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=2222] (Re-)joining group
2020-11-12T17:05:28,243 INFO [main] org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=2222] (Re-)joining group
2020-11-12T17:05:28,250 INFO [main] org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=2222] (Re-)joining group
2020-11-12T17:05:38,254 INFO [main] org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=2222] Successfully joined group with generation 4
2020-11-12T17:05:38,255 INFO [main] org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=2222] Setting newly assigned partitions: flinkkafka333-1, flinkkafka333-2, flinkkafka333-0
3344

5. Solution

The defaults in Kafka's server.properties:

############################# Internal Topic Settings #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state".
# For anything other than development testing, a value greater than 1 is recommended to ensure availability, such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

Change them to:

############################# Internal Topic Settings #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state".
# For anything other than development testing, a value greater than 1 is recommended to ensure availability, such as 3.
# The settings below jointly determine the cluster's availability.
# Replication factor of the internal topic __consumer_offsets (default 1)
offsets.topic.replication.factor=3
# Replication factor of the internal topic __transaction_state (default 1)
transaction.state.log.replication.factor=3
# Minimum number of in-sync replicas for __transaction_state (default 2)
transaction.state.log.min.isr=2
# min.insync.replicas sets the minimum ISR size (default 1); it only takes effect when
# offsets.commit.required.acks is set to -1. If the ISR shrinks below this value, the client
# gets: org.apache.kafka.common.errors.NotEnoughReplicasException: Messages are rejected
# since there are fewer in-sync replicas than required.
#min.insync.replicas=2
# Number of fetcher threads each follower uses to replicate from the leader (default 1).
# More threads raise follower I/O parallelism but also increase the load on the leader,
# so weigh this against the machine's hardware resources.
num.replica.fetchers=2
# Default replication factor for newly created topics (default 1); can also be set per topic at creation time
default.replication.factor=2
# Default partition count for newly created topics (default 1); can also be set per topic at creation time
num.partitions=2

6. Restart Kafka and verify

[root@hadoop01 log]# kafka-topics.sh --describe --zookeeper hadoop01:2181,hadoop02:2181,hadoop03:2181 --topic __consumer_offsets
2020-11-12T18:13:09,825 INFO [main] kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient] Connected.
Topic:__consumer_offsets	PartitionCount:50	ReplicationFactor:1	Configs:segment.bytes=104857600,cleanup.policy=compact,compression.type=producer
	Topic: __consumer_offsets	Partition: 0	Leader: -1	Replicas: 1	Isr: 1
	Topic: __consumer_offsets	Partition: 1	Leader: -1	Replicas: 1	Isr: 1
	... (partitions 2 through 49 are identical: Leader: -1, Replicas: 1, Isr: 1)

The change has no effect. Once the __consumer_offsets topic already exists, modifying offsets.topic.replication.factor does not retroactively change its replica count; the only option is to increase the replicas of __consumer_offsets manually via partition reassignment.
For details, see: Kafka: dynamically increasing a topic's replicas
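As a sketch of that manual step: the kafka-reassign-partitions.sh tool takes a JSON file describing the target replica assignment. Assuming broker ids 1, 2, 3 for kafka01-03 (an assumption; check your broker.id values), the file would look like this, with only partition 0 shown — the real file needs an entry for each of the 50 partitions:

```json
{"version": 1,
 "partitions": [
   {"topic": "__consumer_offsets", "partition": 0, "replicas": [1, 2, 3]}
 ]}
```

Run it with kafka-reassign-partitions.sh --zookeeper hadoop01:2181,hadoop02:2181,hadoop03:2181 --reassignment-json-file increase.json --execute, then confirm with --verify.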
        
