We are testing the Couchbase analytics service and ran into an issue. When making several requests within a short period of time (1 second), the requests go into a queue, causing the execution time to increase to unacceptable levels. We tried increasing the RAM and CPU limits, and also used clustering (3 servers), but this did not help.
Our test query involves a collection with 130,000 documents, joined with a collection of 2,000 documents, then joined with another collection of 200 documents, and includes sorting. We noticed that the query execution time significantly increases when using JOIN, regardless of the size of the joined collection.
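For illustration, the test query has roughly the following shape (dataset and field names here are placeholders, not our real schema):

```python
# Rough shape of the test query; dataset and field names are placeholders.
# "orders" ~ 130,000 docs, "customers" ~ 2,000 docs, "regions" ~ 200 docs.
statement = """
SELECT o.id, c.name, r.code
FROM orders o
JOIN customers c ON o.customer_id = c.id
JOIN regions r ON c.region_id = r.id
WHERE ARRAY_LENGTH(ARRAY_INTERSECT(o.tags, [$alias])) > 0  -- parameterized filter (simplified)
ORDER BY o.created_at DESC
LIMIT 100
"""
```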
Could someone explain how the Couchbase analytics service handles request queues? How is the queue formed, and what can be done to improve the performance of the service? Also, are there common mistakes in configuring RAM, CPU limits, or clustering that we might be making?
Use a single Cluster object for all the queries. The SDK will round-robin the requests across the analytics nodes and use multiple connections to nodes as needed.
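A minimal sketch with the Python SDK (connection string and credentials are placeholders): create the Cluster once and reuse it for every analytics request rather than connecting per query.

```python
# Minimal sketch (Python SDK): one shared Cluster for all analytics queries.
# Connection string and credentials are placeholders.
from couchbase.auth import PasswordAuthenticator
from couchbase.cluster import Cluster
from couchbase.options import ClusterOptions

# Create the Cluster once at application startup and reuse it everywhere;
# the SDK distributes analytics requests across the analytics nodes for you.
cluster = Cluster("couchbase://localhost",
                  ClusterOptions(PasswordAuthenticator("user", "password")))

def run_analytics(statement):
    # Every call goes through the same shared Cluster; do not create a new
    # Cluster per request.
    return list(cluster.analytics_query(statement).rows())
```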
The more concurrent queries there are, the longer the response times will be. To decrease the response time, investigate optimizing the query and increasing the number of analytics nodes. The “Explain” button on the Analytics Workbench will show the analytics query plan, which may be useful for optimization.
Since [ $alias ] is always a single element, it seems like CONTAINS could be used instead of ARRAY_INTERSECT (?)
Using multiple connections was more of an experiment, aimed at understanding how Couchbase handles analytics queries; it has almost no effect on query execution time. As for optimizing the query: in real queries an array of values will be passed, and the single value is only used for testing.
During testing we noticed that Couchbase processes 2-3 of these queries in parallel. We created a 3-node cluster, rebalanced it, and increased the RAM/CPU limits and quotas, but those changes only made things worse.
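The test harness is roughly the following (a simplified sketch; connection string, credentials, and the statement shown are placeholders): N identical analytics queries are fired concurrently from one shared Cluster and each one’s elapsed time is logged.

```python
# Simplified sketch of the concurrency test: fire N identical analytics queries
# at once from a single shared Cluster and log each query's elapsed time.
# Connection string, credentials, and the statement are placeholders.
import time
from concurrent.futures import ThreadPoolExecutor

from couchbase.auth import PasswordAuthenticator
from couchbase.cluster import Cluster
from couchbase.options import ClusterOptions

cluster = Cluster("couchbase://localhost",
                  ClusterOptions(PasswordAuthenticator("user", "password")))

STATEMENT = "SELECT COUNT(*) AS cnt FROM orders o JOIN customers c ON o.customer_id = c.id"

def timed_query(i):
    start = time.perf_counter()
    rows = list(cluster.analytics_query(STATEMENT).rows())
    elapsed = time.perf_counter() - start
    print(f"query {i}: {elapsed:.2f}s, {len(rows)} row(s)")
    return elapsed

# With ~10 concurrent requests, only 2-3 complete quickly; the rest appear to
# wait in a queue, so their measured times grow.
with ThreadPoolExecutor(max_workers=10) as pool:
    list(pool.map(timed_query, range(10)))
```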
The goal of this discussion is to understand how the analytics service processes queries and to find any mistakes in how we are scaling (changing RAM/CPU quotas, adding nodes).
Our assumption is that Couchbase estimates the resources required to process incoming queries before executing them: it calculates how many queries can run in parallel based on the RAM/CPU quotas, and the remaining queries are placed in a queue. What is unclear is how it calculates the number of workers, how much RAM a query needs, and how load is distributed between the cluster nodes. We need to understand this to find out why scaling brings no improvement.
This blog post will be helpful here. The number of concurrent queries the Analytics service will execute is determined by a combination of your cluster resources and the complexity of the queries.
Scaling up RAM, node count, and core count should increase the possible concurrency, and I am surprised you found that not to be the case.
There are tunables that will increase the number of query workers (coresMultiplier) and ones that will reduce the amount of RAM allocated to each query, but per the blog post this can backfire, as it forces the cluster to do more work than it may be capable of. Something to experiment with in your testing, I’d say.
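If you want to experiment with coresMultiplier, something like the sketch below should work. Host, credentials, and values are placeholders, and I’m going from memory of the Analytics Settings REST API, so double-check the endpoint and parameter names against the docs for your server version; the change also needs an Analytics cluster restart to take effect.

```python
# Sketch only: adjust the Analytics coresMultiplier via the Analytics Settings
# REST API, then restart the Analytics cluster so the change takes effect.
# Host, port, credentials, and the chosen value are placeholders; verify the
# endpoint and parameter names against the docs for your Couchbase version.
import requests

BASE = "http://localhost:8095/analytics"
AUTH = ("Administrator", "password")

# Read the current service-level settings.
current = requests.get(f"{BASE}/config/service", auth=AUTH).json()
print("current coresMultiplier:", current.get("coresMultiplier"))

# Raise the multiplier to allow more query workers per core (can backfire
# under memory pressure, as the blog post warns).
requests.put(f"{BASE}/config/service", auth=AUTH, data={"coresMultiplier": 3})

# Restart the Analytics cluster to apply the new setting.
requests.post(f"{BASE}/cluster/restart", auth=AUTH)
```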