Kafka Consumer - Kafka commitId: unknown


#1

Hello Friends,

Can someone help me with this Kafka issue on Cloudera version 2.6.0-cdh5.12.0?
I am trying to publish messages with the steps below:

# create the topic:
kafka-topics --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic ayx # creates topic ayx
kafka-topics --list --zookeeper localhost:2181 # lists the existing topics

18/06/14 01:09:01 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
18/06/14 01:09:01 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
18/06/14 01:09:01 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
18/06/14 01:09:01 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
18/06/14 01:09:01 INFO zookeeper.ZooKeeper: Client environment:os.version=2.6.32-573.el6.x86_64
18/06/14 01:09:01 INFO zookeeper.ZooKeeper: Client environment:user.name=cloudera
18/06/14 01:09:01 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/cloudera
18/06/14 01:09:01 INFO zookeeper.ZooKeeper: Client environment:user.dir=/usr/bin
18/06/14 01:09:01 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=quickstart.cloudera:2181 sessionTimeout=30000 watcher=org.I0Itec.zkclient.ZkClient@5d9f6996
18/06/14 01:09:01 INFO zkclient.ZkClient: Waiting for keeper state SyncConnected
18/06/14 01:09:01 INFO zookeeper.ClientCnxn: Opening socket connection to server quickstart.cloudera/10.0.2.15:2181. Will not attempt to authenticate using SASL (unknown error)
18/06/14 01:09:01 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /10.0.2.15:33976, server: quickstart.cloudera/10.0.2.15:2181
18/06/14 01:09:01 INFO zookeeper.ClientCnxn: Session establishment complete on server quickstart.cloudera/10.0.2.15:2181, sessionid = 0x163f7af328701e3, negotiated timeout = 30000
18/06/14 01:09:01 INFO zkclient.ZkClient: zookeeper state changed (SyncConnected)
Digan
__consumer_offsets
alteryx
**ayx**
ayx1
18/06/14 01:09:01 INFO zkclient.ZkEventThread: Terminate ZkClient event thread.
18/06/14 01:09:01 INFO zookeeper.ClientCnxn: EventThread shut down
18/06/14 01:09:01 INFO zookeeper.ZooKeeper: Session: 0x163f7af328701e3 closed

# trying to publish messages; the output below appears:
[cloudera@quickstart bin]$ kafka-console-producer --broker-list localhost:9092 --topic ayx
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/lib/kafka/libs/slf4j-log4j12-1.7.21.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/lib/kafka/libs/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
18/06/14 01:12:38 INFO producer.ProducerConfig: ProducerConfig values:
acks = 1
batch.size = 16384
block.on.buffer.full = false
bootstrap.servers = [localhost:9092]
buffer.memory = 33554432
client.id = console-producer
compression.type = none
connections.max.idle.ms = 540000
interceptor.classes = null
key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
linger.ms = 1000
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.fetch.timeout.ms = 60000
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.ms = 50
request.timeout.ms = 1500
retries = 3
retry.backoff.ms = 100
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 102400
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
timeout.ms = 30000
value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer

18/06/14 01:12:38 INFO utils.AppInfoParser: Kafka version : 0.10.2-kafka-2.2.0
18/06/14 01:12:38 INFO utils.AppInfoParser: Kafka commitId : unknown

Kafka server.properties:
broker.id=0
delete.topic.enable=true
listeners=PLAINTEXT://:9092
advertised.listeners=PLAINTEXT://localhost:9092

log.dirs=/tmp/kafka-logs

num.partitions=1

num.recovery.threads.per.data.dir=1

Screenshot of kafka-logs:

Thanks in Advance


#2

@koushikmln will help you solve your issue.


#3

Hello Digan,

The commit ID here is not the commit offset, which Kafka uses to keep track of the messages in your topic.

It is the commit ID of the Kafka source tree from which the binary was built. "Kafka commitId : unknown" is not an error, just an informational message. You should still be able to send messages with kafka-console-producer, and you can verify them with kafka-console-consumer.
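For example, reusing the broker, ZooKeeper, and topic names from your post, a quick round trip might look like this (a sketch, not output from your cluster; it assumes the quickstart VM defaults of localhost:9092 and localhost:2181):

```shell
# Produce a couple of test messages to the "ayx" topic.
# Each line typed at the ">" prompt is sent as one message; exit with Ctrl+C.
kafka-console-producer --broker-list localhost:9092 --topic ayx
> hello
> world

# In another terminal, read the topic from the beginning.
# --from-beginning replays all retained messages, not just new ones.
kafka-console-consumer --zookeeper localhost:2181 --topic ayx --from-beginning
```

On Kafka 0.10.2 the console consumer also accepts --bootstrap-server localhost:9092 (the new consumer) in place of --zookeeper. If "hello" and "world" come back, the producer is working and the commitId message can be ignored.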

Regards,
Koushik