Hi,
I’m currently running some tests on Couchbase, and I wanted to know whether the numbers I’m getting for N1QL requests are what I should expect, or whether I should be getting better ones.
Using an average cluster, I’m getting at most 2,500 N1QL requests per second (around 2 ms response time each) before maxing out the servers’ CPU. Is that normal?
I’m running Couchbase Enterprise Edition, version 4.5. The client code is in C#, using the official Couchbase SDK.
I’m running a test cluster on MS Azure, with 6 nodes:
- 2 “data” nodes (16 cores / 40 GB)
- 1 “data + query” node (8 cores / 24 GB)
- 3 “index + query” nodes (2 × 8 cores / 24 GB, 1 × 16 cores / 40 GB)
All data is stored on SSD drives.
The cluster contains 40 million documents, for a total size of approximately 100 GB.
When testing key/value access, I’m able to get more than 180K reads per second at 30% CPU, with an average response time under 0.2 ms.
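For reference, the key/value test is essentially a loop of direct gets, something like this simplified sketch (assuming the 2.x .NET SDK; the document key format is illustrative, not my actual keys):

using Couchbase;

var cluster = new Cluster();                 // default configuration, for illustration
var bucket = cluster.OpenBucket("catalog");

// Direct key/value read; "product::<ean>" is a hypothetical key format.
var getResult = bucket.Get<dynamic>("product::1234567890123");
if (getResult.Success)
{
    var productDoc = getResult.Value;        // the product document
}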
I’m using indexes, with the following definitions:
CREATE INDEX idxType ON catalog(type)
CREATE INDEX idxEan ON catalog(ean) WHERE (type = "Product")
(“catalog” is the bucket name)
The test request looks like this:
select catalog.* from catalog where type = 'Product' and ean = '{ean}'
where the “ean” field value changes for each request.
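The client side of each test query looks roughly like this - a simplified sketch assuming the 2.x .NET SDK, not the exact test harness (which runs many of these in parallel):

using Couchbase;
using Couchbase.N1QL;

var cluster = new Cluster();                 // default configuration, for illustration
var bucket = cluster.OpenBucket("catalog");

// "ean" changes for each request.
string ean = "1234567890123";
var request = QueryRequest.Create(
    $"select catalog.* from catalog where type = 'Product' and ean = '{ean}'");

var result = bucket.Query<dynamic>(request);
foreach (var row in result.Rows)
{
    // each row is a matching product document
}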
I expected a performance hit when going from key access to N1QL, but I’m surprised that it’s so huge - in fact, I’m so surprised that I think I must have done something wrong.
I tried to change the topology of the cluster (adding more query servers, for example), but to no avail - each query server seems to be able to handle around 700-800 N1QL queries per second at best, which seems very low. The bottleneck is very clearly the query nodes’ CPU.
I have tried using EXPLAIN, but it does not tell me much - the idxEan index is used, as expected.
I also tried to create an index containing the ean field and the doc ID, and to select only the doc ID, but I did not notice any significant performance improvement.
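The doc-ID-only variant looks roughly like this (again a sketch, reusing the bucket and ean variables from the sketch above; it ran against the index on ean plus the doc ID):

// Select only the document ID instead of the whole document.
var idRequest = QueryRequest.Create(
    $"select meta(catalog).id from catalog where type = 'Product' and ean = '{ean}'");
var idResult = bucket.Query<dynamic>(idRequest);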
I’m at a loss about what I could try next…
If any additional information is needed to understand the case, please tell me; I will be happy to provide it.
Thanks for your help,
Quentin.