Hi @avsej / Team,
I have installed the kafka-connect-couchbase connector 3.0.0 and am running it in standalone mode. All my logs were syncing properly to my Kafka topic, but it suddenly stopped working, and there was nothing in the logs either.
When I try to restart it, it fails with an OutOfMemoryError for Java heap space. My bucket has around 10 lakh (1 million) documents, and on restart the connector tries to pick up all of them.
What settings do I need to change to avoid the out-of-memory exception?
Also, why does my connector stop working after running properly for some days?
Please help me with these issues.
Thanks in advance!
Could you show the logs from the Couchbase connector?
These are the logs from my connector when I restart it:
[2017-12-04 10:39:54,595] INFO Connected to Node /10.4.0.155 (com.couchbase.client.dcp.conductor.DcpChannel:120)
[2017-12-04 10:39:54,729] INFO Reflections took 2719 ms to scan 68 urls, producing 3813 keys and 27176 values (org.reflections.Reflections:229)
[2017-12-04 10:39:54,792] INFO Poll returns 16 result(s) (com.couchbase.connect.kafka.CouchbaseSourceTask:170)
[2017-12-04 10:39:54,846] INFO Poll returns 86 result(s) (com.couchbase.connect.kafka.CouchbaseSourceTask:170)
[2017-12-04 10:39:56,976] INFO Poll returns 22380 result(s) (com.couchbase.connect.kafka.CouchbaseSourceTask:170)
[2017-12-04 10:39:57,822] INFO Poll returns 61672 result(s) (com.couchbase.connect.kafka.CouchbaseSourceTask:170)
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "kafka-producer-network-thread | producer-2"
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "pool-3-thread-1"
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "kafka-producer-network-thread | producer-1"
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "pool-1-thread-1"
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "pool-1-thread-2"
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "org.eclipse.jetty.server.session.HashSessionManager@43599640Timer"
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "qtp770189387-19"
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "qtp770189387-20"
Oh, you said you are using 3.0.0. Could you try a more recent release? There have been a lot of bug fixes since then: https://developer.couchbase.com/documentation/server/5.0/connectors/kafka-3.2/release-notes.html
I am using Couchbase Server version 4.6.2-3905. Is it compatible with the Kafka connector 3.2.1?
Yes, 3.2.1 is the latest stable version of the connector.
Hi Manya,
Upgrading to 3.2.1 like @avsej suggested is a great idea. If you’re still seeing the issue after the upgrade…
What is the value of your `use_snapshots` config property? If it’s `true`, the connector might be running out of memory because the first snapshot consists of most of the database change history, and the connector is trying to buffer all of it in memory. If that’s the case, I’d recommend setting `use_snapshots` to `false` while the connector catches up. Let it run for about a minute afterwards (or whatever you have `offset.flush.interval.ms` set to) so the Kafka Connect framework saves the source offsets. Then you should be able to switch `use_snapshots` back to `true`.
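In properties-file terms, the two-step toggle described above might look like this (a sketch; `use_snapshots` goes in the connector’s own properties file, while `offset.flush.interval.ms` lives in the worker config such as `connect-standalone.properties`, and the 60000 ms value shown is just the common default):

```properties
# Step 1 -- connector properties file:
# disable snapshot buffering while the connector catches up
use_snapshots=false

# Worker properties file (e.g. connect-standalone.properties):
# source offsets are flushed to storage this often
offset.flush.interval.ms=60000

# Step 2 -- after at least one flush interval has passed,
# re-enable snapshots in the connector properties and restart:
# use_snapshots=true
```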
Thanks,
David
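A side note on the original heap-space error: independent of the connector settings, the Connect worker’s JVM heap can be raised via the standard `KAFKA_HEAP_OPTS` environment variable that the stock Kafka launch scripts honor. A sketch, where the 2 GB figure and both file paths are examples only, not values from this thread:

```sh
# Give the standalone Connect worker a larger heap before launching it.
# KAFKA_HEAP_OPTS is read by kafka-run-class.sh, which the stock launch
# scripts (including connect-standalone.sh) delegate to.
export KAFKA_HEAP_OPTS="-Xms512m -Xmx2g"
bin/connect-standalone.sh config/connect-standalone.properties \
    config/my-couchbase-source.properties   # example paths
```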
Thanks @avsej / @david.nault.
I will upgrade my version to 3.2.1.
Though I will need to upgrade my Couchbase for it, as connector 3.2.1 is not available with Couchbase 4.6.2.
What do you mean by “is not available with couchbase 4.6.2”? We don’t bundle connectors with the server distribution.
Hi David,
I know this is an old topic, but I would still like to ask: how can I stream out all the records in one bucket to create the first snapshot? I have already tried restarting the connector, and restarting ZooKeeper + the Kafka server along with the connector, but neither works. I have also tried setting use_snapshots to true, which didn’t help either.
The kafka-connect-couchbase version I’m using is 3.1.3.
Thank you very much.
It’s not an error… I just want to create a first snapshot that contains all the IDs from the target bucket. Is there any way to do this?
When I restart the connector now, it only streams out 2 records.
Thanks.
Try setting these config properties and restarting the connector. Does that give you the result you want?
use_snapshots=false
couchbase.stream_from=BEGINNING
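For context, both properties go in the connector’s own properties file, the one passed as the second argument to `connect-standalone.sh`. A minimal sketch, where the name, topic, and connection values are placeholders and the exact property names should be checked against the sample config that ships with your connector version:

```properties
name=couchbase-source
connector.class=com.couchbase.connect.kafka.CouchbaseSourceConnector
tasks.max=2

# placeholders -- replace with your topic, cluster address, and bucket
topic.name=my-topic
connection.cluster_address=127.0.0.1
connection.bucket=my-bucket

# the two properties suggested above
use_snapshots=false
couchbase.stream_from=BEGINNING
```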
Thanks,
David
It should not be necessary to restart the Kafka server, though you can try that if all else fails.
Thanks,
David
Hi @david.nault
I tried it both ways, restarting the Kafka server and restarting only the connector, with that config, but I’m still not able to stream out all the documents.
Thanks.