@Han_Chris1 query performance is usually determined by the server side, not so much by the client settings. Can you tell us more about the query itself: how long it runs, and did you tune the indexes for it?
Also, your config properties are not a good idea. Do not tune the idle HTTP connection timeout; that won't help you. And if you want to tune the request tracer, do it via the ThresholdRequestTracerConfig setting instead of manually providing one (the null in the builder gives you a hint that something is a bit odd there ;))
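For reference, a minimal sketch of what that could look like with the Java SDK 3.x environment builder (endpoint, credentials, and the 2-second threshold here are placeholder assumptions, not values from this thread):

```java
import java.time.Duration;

import com.couchbase.client.core.env.ThresholdRequestTracerConfig;
import com.couchbase.client.java.Cluster;
import com.couchbase.client.java.ClusterOptions;
import com.couchbase.client.java.env.ClusterEnvironment;

public class TracerConfigExample {
    public static void main(String[] args) {
        // Tune the tracer through its config block instead of supplying
        // a tracer instance manually.
        ClusterEnvironment env = ClusterEnvironment.builder()
                .thresholdRequestTracerConfig(ThresholdRequestTracerConfig.builder()
                        // only report queries slower than 2 seconds (arbitrary example value)
                        .queryThreshold(Duration.ofSeconds(2)))
                .build();

        Cluster cluster = Cluster.connect("127.0.0.1",
                ClusterOptions.clusterOptions("user", "pass").environment(env));
    }
}
```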
So, I would not start with tuning the env but rather tuning your query.
My query is quite simple, using a key; it takes only around 7 ms when I execute it from the Couchbase UI.
But when I run the same query from my Java code, it takes around 100-200 ms per request.
Okay, I can remove those configs, but is there any other config I can set in my code?
@Han_Chris1 can you show us the code where you actually perform the query and do something with the results? Also, do you take JVM warmup into account (i.e. run a couple hundred of those queries before measuring, vs. just one)?
Also note that if the query above is really exactly that, you should use KV operations instead if you already know the IDs.
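To illustrate the point about KV access: when the document ID is already known, a direct key-value get goes straight to the data service and skips the query engine entirely. A sketch, assuming a document key like "user::123" and placeholder connection details:

```java
import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.Cluster;
import com.couchbase.client.java.Collection;
import com.couchbase.client.java.kv.GetResult;

public class KvVsQuery {
    public static void main(String[] args) {
        Cluster cluster = Cluster.connect("127.0.0.1", "user", "pass");
        Bucket bucket = cluster.bucket("myBucket");
        Collection collection = bucket.defaultCollection();

        // Direct KV fetch: no round trip through the query service.
        GetResult result = collection.get("user::123");

        // The N1QL equivalent pays an extra hop through the query engine:
        // cluster.query("SELECT b.* FROM myBucket b USE KEYS [\"user::123\"]");
    }
}
```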
Actually I have several N1QL queries, not only KV lookups. I tried this query specifically to rule out issues on the index side.
I'm not taking JVM warmup into account; even a single manual request via Postman takes around 100-200 ms.
I’m using the Quarkus framework to expose a REST API.
The process flow is like this:
initialize a singleton connection during @PostConstruct, then run a query for each request.
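That flow could look roughly like the following CDI bean (class name, endpoint, and credentials are hypothetical; the thread does not show the actual code):

```java
import javax.annotation.PostConstruct;
import javax.enterprise.context.ApplicationScoped;

import com.couchbase.client.java.Cluster;

@ApplicationScoped
public class CouchbaseService {

    private Cluster cluster;

    @PostConstruct
    void init() {
        // One cluster connection for the whole application,
        // reused by every incoming REST request.
        cluster = Cluster.connect("127.0.0.1", "user", "pass");
    }

    public Cluster cluster() {
        return cluster;
    }
}
```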
@Han_Chris1 it’s a little hard to say in isolation - would you be able to provide a quarkus project that demonstrates the issue which I can use to reproduce locally?
@Han_Chris1 it must be something environmental. I cloned your repository, and the only thing I changed was the properties file, to point to a node on localhost with a different bucket name / user - I also had to change the query from myBucket to my bucket name…
I started the app with mvn compile quarkus:dev
One thing I noted is that quarkus opens the connections lazily, so the first query really takes longer until the client is fully bootstrapped - maybe there is a way in quarkus to load the resources eagerly?
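One way to force eager bootstrapping in Quarkus is to observe the StartupEvent and touch the cluster there; a sketch, assuming placeholder connection details and an arbitrary 10-second readiness timeout:

```java
import java.time.Duration;

import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.event.Observes;

import com.couchbase.client.java.Cluster;
import io.quarkus.runtime.StartupEvent;

@ApplicationScoped
public class EagerInit {

    void onStart(@Observes StartupEvent ev) {
        // Connecting here moves the SDK bootstrap cost to application
        // startup instead of penalizing the first HTTP request.
        Cluster cluster = Cluster.connect("127.0.0.1", "user", "pass");
        cluster.waitUntilReady(Duration.ofSeconds(10));
    }
}
```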
Once the first request went through, I used the wrk benchmarking tool - one thread, one connection - to test the latency. Note that my bucket was empty, since I wanted to make sure there wasn’t much contributing to the perf on the SDK side (vs. e.g. a longer-running N1QL query).
Thanks for your quick response.
Yes, it only initializes the connection on the first request; after that it reuses the same connection.
So, do you mean there’s no issue with my code?
I’m afraid the issue is the latency itself.
But I’ve deployed it to a server in the same network segment, so I assume there’s no connectivity issue.
Still curious: since I need to expose my REST API to run N1QL queries synchronously,
what’s the best practice for maxHttpConnections or other configs to optimize performance?
The max HTTP connections setting really comes down to what kinds of queries you run. If you have more long-running queries, they might end up using more connections at the same time, and it can make sense to bump it up. But note that this is more a server-side question too: just bumping up the connections does not help you if you are limited by the server’s query processing latency. I would stick with the default first, and if that does not meet your performance criteria AND you know that this is the bottleneck, then tune it higher. It also depends on how many app servers, how many query nodes, etc. you have, since the sum of it matters too. If queries are too slow, I would first try to add more query nodes to the cluster to speed up the parallel processing.
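If, after measuring, you do decide to raise it, a sketch of how that would be set via the SDK 3.x IoConfig (the value 16 is an arbitrary example, not a recommendation):

```java
import com.couchbase.client.core.env.IoConfig;
import com.couchbase.client.java.env.ClusterEnvironment;

public class HttpPoolTuning {
    public static void main(String[] args) {
        // Only raise this once you've verified the client-side HTTP pool,
        // not server-side query latency, is the bottleneck.
        ClusterEnvironment env = ClusterEnvironment.builder()
                .ioConfig(IoConfig.maxHttpConnections(16))
                .build();
    }
}
```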
The first query takes 100-200 ms, and the following ones are fast - except when a whole second passes with no request. Then the next query takes 100-200 ms again, while the ones after it are fast. That is because idle_http_connection_timeout defaults to 1000 (1 second): as long as you receive a request at least every second, your queries stay fast. If you set idle_http_connection_timeout to 5000 (5 seconds), the first query takes 100-200 ms but every query within that 5-second window is fast; only when NO request arrives within those 5 seconds does the connection get recreated, and only the first query after that pays the cost again.
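In Java SDK 3.x, that timeout could be set roughly like this (sketching the suggestion above; whether raising it is advisable is debated earlier in this thread):

```java
import java.time.Duration;

import com.couchbase.client.core.env.IoConfig;
import com.couchbase.client.java.env.ClusterEnvironment;

public class IdleTimeoutTuning {
    public static void main(String[] args) {
        // Keep idle HTTP connections alive for 5 seconds, so requests
        // arriving a few seconds apart can still reuse the warm connection.
        ClusterEnvironment env = ClusterEnvironment.builder()
                .ioConfig(IoConfig.idleHttpConnectionTimeout(Duration.ofSeconds(5)))
                .build();
    }
}
```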
If I recall correctly, that closing of the HTTP connection after 1 second was a mitigation to the Slowloris attack, which affects any HTTP TLS connection. With Capella, I don’t think you can ‘prevent’ this. It’s not a function of the node SDK, but something the cluster-side does to drop the connection. We drop the SDK connection slightly quicker to avoid lots of noise in the logs.
What you could do is issue a keepalive request of some sort on a periodic basis. A healthcheck ping against the query service would do this.
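A sketch of such a periodic keepalive in the Java SDK, pinging only the query service on a scheduler (the 30-second interval and connection details are placeholder assumptions):

```java
import java.util.EnumSet;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import com.couchbase.client.core.diagnostics.PingResult;
import com.couchbase.client.core.service.ServiceType;
import com.couchbase.client.java.Cluster;
import com.couchbase.client.java.diagnostics.PingOptions;

public class QueryKeepAlive {
    public static void main(String[] args) {
        Cluster cluster = Cluster.connect("127.0.0.1", "user", "pass");

        ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            // Ping only the query service to keep its HTTP connection warm.
            PingResult ping = cluster.ping(PingOptions.pingOptions()
                    .serviceTypes(EnumSet.of(ServiceType.QUERY)));
        }, 0, 30, TimeUnit.SECONDS);
    }
}
```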
For a production application, it might not be worth it since you’ll likely have many requests happening.