This afternoon I started getting a lot of exceptions in Java on a test server where I use the Java SDK (v2.7.3) to connect to a Couchbase cluster (v6.0, Community Edition).
After some research I found that the errors were related to a query on records with embedded images, which means the documents were ~100KB on average. When I query these types of documents, the query runs out of memory once it returns around 600-700 documents or more.
I’m about to refine the queries to avoid this. However, I’m curious whether there is a way to adjust the server or the SDK so I don’t hit this boundary?
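For reference, the query goes through the blocking API, roughly like this sketch (the bucket name, the filter, and the surrounding method are placeholders, not my actual code):

```java
import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.document.json.JsonObject;
import com.couchbase.client.java.query.N1qlQuery;
import com.couchbase.client.java.query.N1qlQueryResult;
import com.couchbase.client.java.query.N1qlQueryRow;

// Blocking query: the SDK materializes the full result set before this
// method returns, so ~700 documents x ~100KB each must fit in the heap at once.
static void queryAll(Bucket bucket) {
    N1qlQueryResult result = bucket.query(N1qlQuery.simple(
            "SELECT * FROM `myBucket` WHERE type = 'imageDoc'"));
    for (N1qlQueryRow row : result) {
        JsonObject doc = row.value();
        // ... handle the (large) document ...
    }
}
```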
Just to let you know, improvements in this area are among the big new features of the next generation of the Java SDK, which we’re busy polishing up at the moment. It’s going to support backpressure, which will automatically adjust the rate at which your application consumes incoming query data to the rate it can handle, and prevent these kinds of OOM errors.
Does that mean that the result of a N1QL query is no longer fully loaded into memory?
We currently work around it by letting N1QL queries return only ids (meta().id) and then building an Iterable which fetches small chunks of those ids using multi-get. That way, only a small amount of data is kept in memory at a time; a condensed sketch follows below.
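In case it helps others, here is roughly what the pattern looks like (bucket name, filter, and chunk size are placeholders, and error handling is omitted; our real version wraps step 2 in an Iterable):

```java
import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.document.JsonDocument;
import com.couchbase.client.java.query.N1qlQuery;
import com.couchbase.client.java.query.N1qlQueryRow;
import rx.Observable;

import java.util.ArrayList;
import java.util.List;

// Step 1: query for ids only, which keeps the buffered result set small.
static List<String> fetchIds(Bucket bucket) {
    List<String> ids = new ArrayList<>();
    for (N1qlQueryRow row : bucket.query(N1qlQuery.simple(
            "SELECT meta().id AS id FROM `myBucket` WHERE type = 'imageDoc'"))) {
        ids.add(row.value().getString("id"));
    }
    return ids;
}

// Step 2: fetch the heavy documents in small chunks via async multi-get,
// so only `chunkSize` full documents are held in memory at a time.
static void processInChunks(Bucket bucket, List<String> ids, int chunkSize) {
    for (int i = 0; i < ids.size(); i += chunkSize) {
        List<String> chunk = ids.subList(i, Math.min(i + chunkSize, ids.size()));
        List<JsonDocument> docs = Observable.from(chunk)
                .flatMap(id -> bucket.async().get(id))
                .toList()
                .toBlocking()
                .single();
        // ... handle this chunk, then let it become garbage ...
    }
}
```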
Yes, that’s right. To be more explicit, we will have three Java API variants, one of which will provide an interface based around reactive streams from Project Reactor. This one will ensure that only a small portion of the N1QL result is kept in memory at a time. If your app can’t keep up with the incoming data, backpressure will ensure that we stop reading from the server until it catches up; see the sketch below.
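As a rough sketch of the reactive variant (the SDK is still pre-release, so the exact names here are illustrative and may change):

```java
import com.couchbase.client.java.Cluster;
import com.couchbase.client.java.json.JsonObject;
import reactor.core.publisher.Flux;

static Flux<JsonObject> streamRows(Cluster cluster) {
    // Rows are emitted as they arrive from the server; Reactor's demand
    // signalling (backpressure) pauses reads if the subscriber falls
    // behind, so the full result set is never buffered in memory.
    return cluster.reactive()
            .query("SELECT * FROM `myBucket` WHERE type = 'imageDoc'")
            .flatMapMany(result -> result.rowsAsObject());
}
```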
Your workaround is a good one, but soon you won’t need it anymore.