Unexplained difference between the elapsed time client side and the metrics

Hi

I’m still doing some tests with Couchbase, and I’m running into another issue that I can’t explain.
I’ve already detailed most of my configuration in another thread: https://www.couchbase.com/forums/t/n1ql-performances-are-2-5k-requests-by-second-at-2ms-each-an-expected-average

For this test, I’m running a very simple piece of client code, which I can give in its entirety here:

            // Using directives required by this snippet (Couchbase .NET SDK 2.x):
            // using System; using System.Collections.Generic; using System.Diagnostics;
            // using System.Linq; using Couchbase; using Couchbase.Configuration.Client; using Couchbase.N1QL;
            var bucket = new Cluster(new ClientConfiguration()
            {                
                Servers = new List<Uri>
                    {
                        new Uri("http://172.25.83.136:8091/pools")
                    }
            }).OpenBucket("catalog");

            for (var i = 0; i < 15; i++)
            {
                // N1QL Request
                var query = new QueryRequest()
                    .Statement($"select catalog.* from catalog where type = 'Product' and ean = '9782742715572' limit 1")
                    .Metrics(true);

                var sw = Stopwatch.StartNew();
                var result = bucket.Query<Product>(query);
                Console.Write($"watch: {sw.Elapsed.TotalMilliseconds} ms / metrics: {result.Metrics.ElaspedTime} | {result.Metrics.ExecutionTime}");

                // For comparison, I'm doing a GET request on the same document
                var id = result.First().DocId;
                sw.Restart();
                var doc = bucket.Get<Product>($"Product,,{id}");
                Console.WriteLine($" - watch GET: {sw.Elapsed.TotalMilliseconds} ms");
             
                // To be sure that the same document is read and processed
                if (doc.Value.DocId != id)
                {
                    throw new Exception();
                }
            }

Here is the result of running that test:

watch: 143.1797 ms / metrics: 7.1795ms | 7.0735ms - watch GET: 36.0626 ms
watch: 10.8588 ms / metrics: 7.912117ms | 7.799014ms - watch GET: 0.7447 ms
watch: 47.3252 ms / metrics: 6.21117ms | 6.000764ms - watch GET: 0.7551 ms
watch: 9.6933 ms / metrics: 6.806686ms | 6.692383ms - watch GET: 0.8618 ms
watch: 46.9134 ms / metrics: 5.82196ms | 5.769658ms - watch GET: 0.9948 ms
watch: 10.9039 ms / metrics: 8.229678ms | 8.154777ms - watch GET: 0.7501 ms
watch: 8.9132 ms / metrics: 6.212758ms | 6.149458ms - watch GET: 0.8672 ms
watch: 48.1615 ms / metrics: 6.498861ms | 6.451461ms - watch GET: 0.8513 ms
watch: 46.723 ms / metrics: 5.558152ms | 5.48675ms - watch GET: 0.8383 ms
watch: 9.2646 ms / metrics: 6.501361ms | 6.435361ms - watch GET: 0.6954 ms
watch: 46.583 ms / metrics: 5.327646ms | 5.274444ms - watch GET: 0.7096 ms
watch: 7.475 ms / metrics: 5.151701ms | 5.096201ms - watch GET: 0.8223 ms
watch: 46.6962 ms / metrics: 5.5509ms | 5.5134ms - watch GET: 0.8153 ms
watch: 47.4582 ms / metrics: 5.9329ms | 5.9043ms - watch GET: 0.7256 ms
watch: 47.4179 ms / metrics: 5.7938ms | 5.7344ms - watch GET: 0.8471 ms

That timing pattern is very common (and consistent): some of the requests finish in less than 10 ms, and most of the rest take 45 to 50 ms. The elapsed time according to the metrics is constant, around 6 ms.

There are multiple things that I don’t understand here:

  • What is taking the extra 2 to 4 ms in the “quick” requests?
  • Why do I have so many “long” requests?
  • Why is the response time not constant client side, when the request metrics say that the server-side time is constant?

For each N1QL request, I’m also running a GET request. Those run in constant time, at around 0.8 ms. The fact that I’m able to fetch and deserialize a document in negligible time is (I think) proof that this is not where the problem lies.

I tried to do the same thing with the Java client, and I got roughly the same result. Interestingly enough, the timings are more regular with the Java client: 1 out of 3 requests takes around 50 ms, and the others take less than 10.

I tried changing the cluster topology and moving the indexes (I still have two, one on the “type” field and one on the “ean” field), but I didn’t notice any significant change.
The baseline index topology has the “type” index on one server and the “ean” index on another.

The only topology change that brings a noticeably different outcome is removing (failing over) the only node without any index. In that case, especially with the Java client, I go from 1 request out of 3 at 50 ms to 1 request out of 2 at 50 ms (and I have no idea why).

If any of you have any clue as to what is happening to my requests, that would be great :)
Thanks for your help,
Regards

If you run this query in the Query Workbench, does it always return the same result?

The best index in this case is on (ean, type). Have you tried running that way?

If you do an EXPLAIN on this query, what index is being selected to run the query?

The obvious difference is your app-local Stopwatch measurements include, in addition to the metric timing:

  • The time to encode the request (Couchbase SDK)
  • Time to write to the local network socket (OS/local hardware)
  • Time “on the wire” (in-flight to the server)
  • Time on the server’s network receive path (remote server hardware/OS)

… then the exact same in reverse for the server-sent response.

The server can only start its stopwatch when it receives the request from the Operating System, and it has to stop its stopwatch once it passes the response to the OS to send.

You do give timings for a K/V GET request, but that’s a much simpler (and hence faster) path - while the network costs may be similar, the cost in the SDK to encode / decode the request / response are likely quite different.

Another experiment you could do would be a raw HTTP request (for the N1QL query) - and maybe a tcpdump of the request so you can see the network timings - that would remove any application-side (SDK) costs.
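
For reference, a minimal, hedged sketch of what that raw-HTTP experiment could look like in C# (this assumes the query service is listening on its default port 8093 on the node from your test; adjust the address to whichever node actually runs the query service):

    using System;
    using System.Collections.Generic;
    using System.Diagnostics;
    using System.Net.Http;
    using System.Threading.Tasks;

    class RawN1qlTimer
    {
        static async Task Main()
        {
            using (var http = new HttpClient())
            {
                // Same statement as the SDK test, posted straight to the query REST endpoint.
                var body = new FormUrlEncodedContent(new[]
                {
                    new KeyValuePair<string, string>("statement",
                        "select catalog.* from catalog where type = 'Product' and ean = '9782742715572' limit 1")
                });

                var sw = Stopwatch.StartNew();
                var response = await http.PostAsync("http://172.25.83.136:8093/query/service", body);
                var json = await response.Content.ReadAsStringAsync();
                Console.WriteLine($"raw HTTP: {sw.Elapsed.TotalMilliseconds} ms ({json.Length} bytes)");
            }
        }
    }

Comparing the Stopwatch value here against the metrics embedded in the JSON response would isolate the network cost from any SDK cost.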

If you run this query in the Query Workbench, does it always return the same result?

Yes.

The best index in this case is on (ean, type). Have you tried running that way?

No, I did not try that. But in that case, I won’t get the whole document, right?

If you do an EXPLAIN on this query, what index is being selected to run the query?

idxEan, as expected.

You would. The index stores the index key and the document id. But the query plan will include a document fetch after the index probe to get the document itself.
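
To make that concrete, here is a rough sketch against the same bucket as the test above (the index name idx_ean_type is illustrative, not something from this thread): with an index on (ean, type), a query that only references the indexed fields and meta().id can be answered from the index alone, whereas select catalog.* always adds a Fetch step after the IndexScan.

    // Illustrative only: the composite index suggested above.
    bucket.Query<dynamic>(new QueryRequest()
        .Statement("CREATE INDEX idx_ean_type ON catalog(ean, type)"));

    // Covered: only ean, type and meta().id are referenced, so the plan should contain no Fetch step.
    var covered = bucket.Query<dynamic>(new QueryRequest()
        .Statement("EXPLAIN SELECT meta(catalog).id, ean FROM catalog WHERE ean = '9782742715572' AND type = 'Product'"));
    Console.WriteLine(covered.Rows.First());

    // Not covered: selecting catalog.* forces a fetch of the full document after the IndexScan.
    var fetched = bucket.Query<dynamic>(new QueryRequest()
        .Statement("EXPLAIN SELECT catalog.* FROM catalog WHERE ean = '9782742715572' AND type = 'Product' LIMIT 1"));
    Console.WriteLine(fetched.Rows.First());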

The obvious difference is your app-local Stopwatch measurements include, in addition to the metric timing:
[…]
You do give timings for a K/V GET request, but that’s a much simpler (and hence faster) path - while the network costs may be similar, the cost in the SDK to encode / decode the request / response are likely quite different.

I expected a difference, but not one this big.
The full round trip for a GET request is less than 1 ms. Assuming that the server-side compute time is negligible, it means that at worst the client-side network/CPU overhead is about 1 ms. I can accept that an N1QL request will take more time to write to the socket and more processing client side, but at best 2 to 4 times more than a GET request?

The other thing that is really bugging me is the regularity with which I get different response times (with the Java client, not the C# one). When I have 3 query+index nodes, precisely 1 request out of 3 is slow, and when I have 1 node, it’s 1 out of 2…

I’ll try to see if I can correlate the “slow” requests with a call to a specific node.

@quentin.a,

When bucket.Get is used, you make a direct connection to the KV service and get the data, so it is faster.

As @johan_larson mentioned, N1QL needs to do multiple operations. The whole data also has to make multiple network transfers (KV → N1QL query node → Client).

Based on your SDK program, I have the following suggestions to improve the N1QL query timings.

  1. Use prepared statements instead of preparing the statement every time (see the sketch after this list).
  2. As you’re doing LIMIT 1, and if you already know the document id, use USE KEYS to avoid the index scan; this is the same as bucket.Get, but through N1QL.
    SELECT RAW catalog FROM catalog USE KEYS […]
  3. If you need to use an IndexScan, create the following index:
    CREATE INDEX ix1 ON catalog(ean) WHERE type = 'Product';
  4. Set pretty=false in the query settings (see the sketch after this list); this will reduce the result size by approximately 2/3. By default N1QL sends the results with pretty formatting.
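
A rough sketch of suggestions 1, 2 and 4 combined, using the .NET SDK 2.x QueryRequest from the original program (assuming the SDK version in use exposes AdHoc, Pretty and AddNamedParameter; the document key follows the format from the original loop):

    var byKey = new QueryRequest()
        .Statement("SELECT RAW catalog FROM catalog USE KEYS [$key]")
        .AddNamedParameter("key", $"Product,,{id}")  // id as in the original loop
        .AdHoc(false)    // let the SDK prepare the statement once and reuse the plan
        .Pretty(false);  // unformatted (smaller) response body

    var product = bucket.Query<Product>(byKey).Rows.FirstOrDefault();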

Or if you literally just want a single key and the catalog element in the JSON doc, then just use the K/V Sub-document API (SUBDOC_GET) - that’ll always be faster.
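
With the .NET SDK 2.x, that would look roughly like the sketch below (the path "name" is purely illustrative; use whichever element of the document you actually need):

    // Sub-document lookup: only the requested fragment crosses the network, not the whole document.
    var frag = bucket.LookupIn<dynamic>($"Product,,{id}")
        .Get("name")              // illustrative path inside the JSON document
        .Execute();

    if (frag.Success)
    {
        Console.WriteLine(frag.Content<string>("name"));
    }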