I run a production Kafka environment with this layout: 3 ZooKeeper servers, 3 Kafka brokers, and 2 Kafka Connect workers. The tmp directory sits side by side with my main Kafka folder. Everything runs on a remote Ubuntu host, not in Docker.
While operating the cluster I hit an error telling me the disk was nearly full. When I checked, the Kafka tmp folder had grown to almost 2/3 of my disk size, which brought the whole Kafka cluster down.
I inspected each Kafka log folder and found the following (the command I used is sketched after the list):
- 25 connect_offset partition folders from worker no. 1, ~21 MB each
- 25 connect_offset2 partition folders from worker no. 2, ~21 MB each
- 25 connect_status partition folders from worker no. 1, ~21 MB each
- 25 connect_status2 partition folders from worker no. 2, ~21 MB each
- 50 __consumer_offsets partition folders from both workers, ~21 MB each
- topic partition folders at ~21 MB each; with 2 topics (3 partitions each) I have 6 of them
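This is roughly how I measured the per-partition folder sizes (a minimal sketch using standard GNU coreutils on Ubuntu; the path matches log.dir from the broker config below):
# total size of every partition folder, largest first
du -sh /home/xxx/tmp/kafka_log1/* | sort -rh | head -n 20
# just the __consumer_offsets partitions
du -sh /home/xxx/tmp/kafka_log1/__consumer_offsets-*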
The problem is that __consumer_offsets consumes more disk than the other offset topics, and my Kafka configuration does not keep it in check. This is my broker configuration:
broker.id=101
port=9099
listeners=PLAINTEXT://0.0.0.0:9099
advertised.listeners=PLAINTEXT://127.0.0.1:9099
num.partitions=3
offsets.topic.replication.factor=3
log.dir=/home/xxx/tmp/kafka_log1
log.cleaner.enable=true
log.cleanup.policy=delete
log.retention.bytes=1073741824
log.segment.bytes=1073741824
log.retention.check.interval.ms=60000
message.max.bytes=1073741824
zookeeper.connect=xxx:2185,xxx:2186,xxx:2187
zookeeper.connection.timeout.ms=7200000
session.time.out.ms=30000
delete.topic.enable=true
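As I understand it, __consumer_offsets is an internal compacted topic, so broker-level log.retention.bytes does not shrink it the way it does a delete-policy topic. Here is a sketch of how I would inspect its per-topic overrides with the ZooKeeper-based kafka-configs.sh that matches my setup; the segment.bytes=104857600 (100 MB) override is only an illustrative value I have not actually applied:
# show any per-topic overrides on the offsets topic
kafka-configs.sh --zookeeper xxx:2185,xxx:2186,xxx:2187 --describe --entity-type topics --entity-name __consumer_offsets
# hypothetical override: roll smaller segments so the cleaner can compact sooner
kafka-configs.sh --zookeeper xxx:2185,xxx:2186,xxx:2187 --alter --entity-type topics --entity-name __consumer_offsets --add-config segment.bytes=104857600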
And each topic is created with this config:
kafka-topics.sh --create --zookeeper xxx:2185,xxx:2186,xxx:2187 --replication-factor 3 --partitions 3 --topic $topic_name --config cleanup.policy=delete --config retention.ms=86400000 --config min.insync.replicas=2 --config compression.type=gzip
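To verify the per-topic settings were actually applied, I describe the topic afterwards (same ZooKeeper quorum as above):
kafka-topics.sh --describe --zookeeper xxx:2185,xxx:2186,xxx:2187 --topic $topic_name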
And the Connect worker config looks like this (both workers share an identical config except for the REST port and the offset/status topic names):
bootstrap.servers=XXX:9099,XXX:9098,XXX:9097
group.id=XXX
key.converter.schemas.enable=true
value.converter.schemas.enable=true
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
offset.storage.topic=connect-offsets
offset.storage.replication.factor=3
config.storage.topic=connect-configs
config.storage.replication.factor=3
status.storage.topic=connect-status
status.storage.replication.factor=3
offset.flush.timeout.ms=300000
rest.host.name=xxx
rest.port=8090
connector.client.config.override.policy=All
producer.max.request.size=1073741824
producer.ack=all
producer.enable.idempotence=true
consumer.max.partition.fetch.bytes=1073741824
consumer.auto.offset.reset=latest
consumer.enable.auto.commit=true
consumer.max.poll.interval.ms=5000000
plugin.path=/xxx/connectors
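For a broker-side view of which topics use the most space, I believe the bundled kafka-log-dirs.sh tool (shipped since Kafka 1.0) can be used roughly like this; it prints per-partition sizes as JSON, and the bootstrap address is one of my brokers from above:
kafka-log-dirs.sh --bootstrap-server XXX:9099 --describe --topic-list __consumer_offsets,connect-offsets,connect-status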
According to several pieces of documentation I have read, Kafka should not need a large amount of disk space (the largest tmp size I have seen reported is 36 GB). So what in my configuration is letting __consumer_offsets grow like this, and how can I keep the disk usage under control?