Hi,
Below is my document.
{
  "_class": "xxxxxxxxxxx",
  "effectiveDateTime": 1556770291352,
  "entity": {
    "phase": "COMPLEX",
    "locationDSL": {
      "lookupType": "SEARCH",
      "dsl": "API",
      "parameters": {
        "stores": [
          "GB:xxxxxx"
        ]
      },
      "dataType": "LOCATION"
    },
    "stores": [
      "GB:xxxxxx"
    ],
    "shortDescription": "xxxxxx",
    "domainEvents": [],
    "endDateTime": 5852131200000,
    "rewardRules": [
      {
        "applyTo": {
          "conditions": [
            "1"
          ],
          "cheapest": "FALSE"
        },
        "discount": xxxxx,
        "_class": "xxxxxx",
        "type": "xxxxxx"
      }
    ],
    "condition": {
      "productDSL": {
        "lookupType": "SEARCH",
        "dsl": "API",
        "parameters": {
          "gtins": [
            "Xxxxxxx"
          ]
        },
        "dataType": "PRODUCT"
      },
      "productGtins": [
        "Xxxxxxx"
      ],
      "cheapest": "FALSE",
      "requiredQuantityMin": "xxxxxx",
      "_class": "xxxxxxx",
      "id": "1",
      "type": "xxxxxxx"
    },
    "startDateTime": 5851958400000,
    "name": "xxxxxxx",
    "id": "xxxx-xxxx-xxxx",
    "state": "xxx"
  }
}
I'm running the query below on these documents:
SELECT * FROM abcdef
WHERE _class = 'xxxxxxxx'
  AND effectiveDateTime <= STR_TO_MILLIS(NOW_STR())
  AND ANY store IN entity.stores SATISFIES store = 'GB:xxxx' END
  AND ANY gtin IN entity.condition.productGtins SATISFIES gtin = 'xxxxxxxxx' END
There is an index defined as follows:
CREATE INDEX xxxxxxx
ON abcdef (
  effectiveDateTime,
  DISTINCT ARRAY g FOR g IN entity.condition.productGtins END,
  entity.stores
)
WHERE _class = "xxxxxxx"
USING GSI WITH { "num_replica": 1 };
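For reference, this is how I can inspect the plan to confirm the query actually uses that index (just a sketch, reusing the same placeholder bucket name and values as above):

EXPLAIN SELECT * FROM abcdef
WHERE _class = 'xxxxxxxx'
  AND effectiveDateTime <= STR_TO_MILLIS(NOW_STR())
  AND ANY store IN entity.stores SATISFIES store = 'GB:xxxx' END
  AND ANY gtin IN entity.condition.productGtins SATISFIES gtin = 'xxxxxxxxx' END;

If the plan shows an index scan on the index above rather than a primary scan, the index is being used.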
The problem I'm facing is that query response times seem to increase as the number of documents grows. Response time is around 10 ms when the bucket has 1,000 documents, but for a sample bucket of 30,000 documents the same query takes about 30 ms.
Can someone please suggest what should be done to keep query response times low as the number of documents increases?
I'm running 3 data nodes and 4 query nodes.
Thanks
Vishnu