Overview

Kafka's out-of-the-box defaults favour quick setup over production safety (for example, a replication factor of 1). This config targets a 3-broker cluster handling millions of events per day in a fintech environment, where durability and ordering take priority over raw throughput.

server.properties

# ── Broker identity ───────────────────────────────────────────
broker.id=0                         # unique per broker: 0, 1, 2
listeners=PLAINTEXT://:9092
advertised.listeners=PLAINTEXT://broker0.internal:9092

# ── Zookeeper / KRaft ─────────────────────────────────────────
# For Kafka 3.3+ KRaft mode (no ZooKeeper):
# process.roles=broker,controller
# node.id=0
# controller.quorum.voters=0@broker0:9093,1@broker1:9093,2@broker2:9093

# ── Log retention ─────────────────────────────────────────────
log.retention.hours=168             # 7 days
log.retention.bytes=107374182400    # 100 GB per partition
log.segment.bytes=1073741824        # 1 GB segments
log.cleanup.policy=delete

# ── Replication (durability) ──────────────────────────────────
default.replication.factor=3
min.insync.replicas=2               # acks=all requires 2 of 3 replicas
offsets.topic.replication.factor=3
transaction.state.log.replication.factor=3
transaction.state.log.min.isr=2
unclean.leader.election.enable=false   # never elect out-of-sync replica
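The replication settings above mean an acks=all write succeeds while at most one replica is down; if the in-sync replica set shrinks below min.insync.replicas, the partition rejects writes (NotEnoughReplicas) rather than risk losing acknowledged data. A minimal sketch of that decision:

```python
REPLICATION_FACTOR = 3     # default.replication.factor
MIN_INSYNC_REPLICAS = 2    # min.insync.replicas

def accepts_acks_all_writes(in_sync_replicas: int) -> bool:
    """acks=all writes succeed only while the ISR meets min.insync.replicas."""
    return in_sync_replicas >= MIN_INSYNC_REPLICAS

# ISR = 3 (healthy) or 2 (one broker down): writes succeed.
# ISR = 1 (two brokers down): writes are rejected to protect durability.
for isr in (3, 2, 1):
    print(isr, accepts_acks_all_writes(isr))
```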

# ── Network & IO ──────────────────────────────────────────────
num.network.threads=6               # = number of vCPUs
num.io.threads=12                   # = 2x vCPUs for disk-heavy workloads
socket.send.buffer.bytes=1048576    # 1 MB
socket.receive.buffer.bytes=1048576
socket.request.max.bytes=104857600  # 100 MB max request size (per-message limit is message.max.bytes)

# ── Flush & compression ───────────────────────────────────────
compression.type=lz4                # matches the producer codec below, so no broker-side recompression
log.flush.interval.messages=50000
log.flush.interval.ms=1000          # force fsync at most every 1 s; replication remains the primary durability guard

# ── Auto topic creation (disable for production) ──────────────
auto.create.topics.enable=false

Producer config (acks=all, idempotent)

bootstrap.servers=broker0:9092,broker1:9092,broker2:9092
acks=all
enable.idempotence=true
max.in.flight.requests.per.connection=5   # 5 is the maximum that preserves ordering with idempotence enabled
retries=2147483647
compression.type=lz4
batch.size=65536
linger.ms=5
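With batch.size=65536 and linger.ms=5, a batch is sent when either it fills or 5 ms elapses, whichever comes first. A sketch of which trigger fires at different traffic levels (the record sizes and rates are illustrative assumptions):

```python
BATCH_SIZE = 65_536      # batch.size (bytes)
LINGER_MS = 5            # linger.ms

def batch_send_trigger(record_bytes: int, records_per_second: float) -> str:
    """Does a batch fill before linger.ms expires at a steady per-partition rate?"""
    records_to_fill = BATCH_SIZE / record_bytes
    ms_to_fill = records_to_fill / records_per_second * 1000
    return "batch full" if ms_to_fill <= LINGER_MS else "linger expired"

# 1 KB records at 50k msg/s: 64 records fill a batch in ~1.3 ms -> size triggers.
print(batch_send_trigger(1024, 50_000))   # batch full
# 1 KB records at 1k msg/s: filling takes ~64 ms -> the 5 ms linger fires first.
print(batch_send_trigger(1024, 1_000))    # linger expired
```

At low traffic the linger bound caps added latency at 5 ms; at high traffic batches fill quickly and linger rarely matters.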

Consumer config

bootstrap.servers=broker0:9092,broker1:9092,broker2:9092
group.id=my-consumer-group
auto.offset.reset=earliest
enable.auto.commit=false           # manual commit gives at-least-once; exactly-once needs transactions
max.poll.records=500
fetch.min.bytes=1048576            # 1 MB - broker waits for this much data (or fetch.max.wait.ms), trading latency for throughput
fetch.max.wait.ms=500
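The manual-commit pattern is process-first, commit-after: if the consumer crashes mid-batch, uncommitted records are redelivered, so duplicates are possible but loss is not. A minimal in-memory sketch of that ordering (stand-in `poll`/`handle` functions, not a real Kafka client):

```python
# Hypothetical in-memory stand-ins for a consumer's poll/commit cycle.
committed_offset = 0
processed = []

def poll(log, offset, max_records=500):   # mirrors max.poll.records
    """Return the next slice of records after the committed offset."""
    return log[offset:offset + max_records]

def handle(record):
    processed.append(record)

log = ["evt-%d" % i for i in range(7)]

# Process the whole batch FIRST, then advance the committed offset.
batch = poll(log, committed_offset, max_records=5)
for record in batch:
    handle(record)
committed_offset += len(batch)   # commit only after every record is handled

print(committed_offset)   # 5 - a crash before this line would replay evt-0..evt-4
```

Committing before processing would invert the guarantee: a crash after commit but before processing silently drops records (at-most-once).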

JVM flags (KAFKA_HEAP_OPTS)

export KAFKA_HEAP_OPTS="-Xmx6g -Xms6g"
export KAFKA_JVM_PERFORMANCE_OPTS="-server -XX:+UseG1GC -XX:MaxGCPauseMillis=20"