Error while using Couchbase Kafka connector

I am trying to run the latest Couchbase Kafka connector samples from the link mentioned below.

However, when I run the Kafka producer, I see the following error:

INFO - Client environment:java.io.tmpdir=/var/folders/4w/xs06ksps51xb5ygrwtrr9xg80000gq/T/
INFO - Client environment:java.compiler=<NA>
INFO - Client environment:os.name=Mac OS X
INFO - Client environment:os.arch=x86_64
INFO - Client environment:os.version=10.11.2
INFO - Client environment:user.name=kadhambari
INFO - Client environment:user.home=/Users/kadhambari
INFO - Client environment:user.dir=/Users/kadhambari/Documents/couchbase-kafka-connector/samples/producer
INFO - Initiating client connection, connectString=192.168.244.52 sessionTimeout=4000 watcher=org.I0Itec.zkclient.ZkClient@5f282abb
INFO - Opening socket connection to server 192.168.244.52/192.168.244.52:2181. Will not attempt to authenticate using SASL (unknown error)
INFO - Socket connection established to 192.168.244.52/192.168.244.52:2181, initiating session
INFO - Session establishment complete on server 192.168.244.52/192.168.244.52:2181, sessionid = 0x15262921fcd0002, negotiated timeout = 4000
INFO - zookeeper state changed (SyncConnected)
INFO - Verifying properties
INFO - Property key.serializer.class is overridden to kafka.serializer.StringEncoder
INFO - Property metadata.broker.list is overridden to
INFO - Property serializer.class is overridden to example.SampleEncoder
INFO - Connected to Node 192.168.244.94
ERROR - Error while subscribing to bucket config stream.

Please help me understand what could be causing this issue.

Seems like it is unable to connect to Couchbase. Could you check your configuration and/or provide your settings here?

The addresses are hardcoded in the main(), so you have to either change them or check that you have started Couchbase and Kafka from the env scripts. See the sketch below for where those addresses live.
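
For reference, this is roughly what the sample's main() looks like; a sketch based on the documented connector example, with the addresses from the log above substituted in. The builder method names are assumptions taken from the 2.x connector docs and may differ in your sample's version:

import com.couchbase.kafka.CouchbaseKafkaConnector;
import com.couchbase.kafka.DefaultCouchbaseKafkaEnvironment;

public class Example {
    public static void main(String[] args) {
        // These hardcoded addresses are what you need to change:
        // the Couchbase data node and the ZooKeeper host from your environment.
        DefaultCouchbaseKafkaEnvironment.Builder builder =
                (DefaultCouchbaseKafkaEnvironment.Builder) DefaultCouchbaseKafkaEnvironment
                        .builder()
                        .kafkaValueSerializerClass("example.SampleEncoder")
                        .couchbaseNodes("192.168.244.94")        // Couchbase data node
                        .couchbaseBucket("default")              // bucket to stream from
                        .kafkaZookeeperAddress("192.168.244.52") // ZooKeeper host
                        .kafkaTopic("default")                   // Kafka topic to publish to
                        .dcpEnabled(true);                       // connector consumes DCP streams
        CouchbaseKafkaConnector connector = CouchbaseKafkaConnector.create(builder.build());
        connector.run();
    }
}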

Thank you so much for assisting, @avsej. Here are a few more points I would like to share. I am not using Vagrant, hence I have hardcoded the IP addresses. The Couchbase server and Kafka are up and running. I was finally able to connect to Couchbase from Kafka when I ran the Data, Index, and Query services on the same node. However, I still face the same issue when I connect to a cluster where the Data, Index, and Query services run on different nodes. The following is the configuration I currently have for the cluster:

192.168.244.94 - Data
192.168.244.117 - Index/Query

Is there anything else I need to do in order to make it work?

Hi @kadhambari,
The Kafka Connector works with DCP streams, i.e., mutations on documents. I would only expect the Kafka Connector to work against nodes that are running the data service, not against nodes that have the data service disabled. Any changes to documents take place in the data service, so you should not actually need to hook up Kafka to any nodes that are not running it.
Best,
-Will

Thanks for pointing this out, @WillGardella. Yes, it makes complete sense to hook up Kafka to a data node.

But if you look at my error log, I have hooked up Kafka to a node (192.168.244.94) where the data service runs. I also tried connecting Kafka to a node (192.168.244.117) where the Index/Query services run, to verify whether that was causing the issue. However, I ended up getting the same error in both scenarios.

Do you have a bucket called default on the Couchbase Server you’re trying to connect to?

Is the bucket you're trying to connect to empty? I'm not sure what will happen if you try to connect to an empty bucket with the Kafka producer. The only way I have ever run it is to start with the generator and make sure that it can write data into a bucket (or to start with a bucket that already has data in it).

You can verify your ability to connect by trying the generator first and seeing if that works.
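
Alternatively, a direct check with the Java SDK can confirm that the data node and bucket are reachable from your machine. This is a minimal sketch assuming the 2.x Java SDK; the node address is the data node from your log, and "default" is a placeholder for one of your bucket names:

import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.Cluster;
import com.couchbase.client.java.CouchbaseCluster;
import com.couchbase.client.java.document.JsonDocument;
import com.couchbase.client.java.document.json.JsonObject;

public class ConnectivityCheck {
    public static void main(String[] args) {
        // Bootstrap against the data node the connector points at.
        Cluster cluster = CouchbaseCluster.create("192.168.244.94");
        // Open the bucket the connector is configured to stream from.
        Bucket bucket = cluster.openBucket("default");
        // Write and read back a document to prove the data service is reachable.
        bucket.upsert(JsonDocument.create("connectivity-check",
                JsonObject.create().put("ok", true)));
        System.out.println(bucket.get("connectivity-check"));
        cluster.disconnect();
    }
}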

Is running Vagrant with the pre-configured images an option for you? It's an easy way to see if this will work in any scenario, and you can use VirtualBox to inspect the Couchbase image you're trying to connect to.

No, I do not have a default bucket in the cluster. We have a set of buckets that we created manually, and they are not empty. I don't think this has anything to do with an empty bucket, because when the nodes ran all the services I was able to successfully connect and stream data. I face this issue only when I configure the nodes to handle specific services.

As per your suggestion, I shall verify the same scenario with the Vagrant setup and get back to you on that.

@WillGardella As per your suggestion, I ran Vagrant with the preconfigured image. It created a Couchbase cluster with a single node running the data service, and that worked perfectly fine with the application. But once I added another node running the Index and Query services, the Kafka connector started throwing the same error mentioned in my previous post. These are my observations:

1. A single node running the data service alone: works fine.
2. Any number of nodes with all services configured on each node: works fine.
3. Multiple nodes with the Data and Index/Query services configured on different nodes: does not work.

It would be helpful if you could give us some insight into what could be causing this issue.

I filed a ticket for this to investigate further.