I am developing an integration solution for a small bank, with about 800 thousand records in an Oracle database and Excel files. I decided to use Couchbase 4.1 as the integration database. In the test environment I have a node on a VM with 10 GB of RAM and 2 Xeon 64-bit CPUs. I wrote a small test program to load the data into the Couchbase server; everything went fine and write performance was good. But when I query the converted data with aggregations like MAX and MIN, everything falls apart: CPU usage goes over 90%, memory usage reaches 80%, and query time exceeds one minute! Here is my document structure:
{
"region": "45",
"branchCode": "16",
"pursuitCode": 94068952380,
"facilitiesID": 1011778,
"requesterType": "1",
"nationalID": 2751906532,
"registrationNo": null,
"registrationDate": null,
"registrationLocationCode": null,
"requestType": "1",
"amount": 150000000,
"currencyCode": "060",
"lastState": "1",
"proceeds": "0",
"year": "",
"issuer": "Hossein",
"guid": "513aaecf-e732-4b03-96d5-81ffcff83c97",
"issueDate": "1394/03/11",
"isFiltered": "false",
"title": "-",
"groupId": "da3d8cc1-7cf8-a949-d5bf-40a4d94691dc",
"type": "pursuitsBank"
}
I created two indexes, on the type and issuer fields, but query time is the same…
Please help me find a suitable way to query this data.
Thanks
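For reference, the indexes and the kind of aggregation involved look roughly like this (a sketch; the bucket name `pursuits` and the exact query are placeholders, since the original post does not include them):

```sql
-- Secondary indexes on the two fields mentioned (bucket name is illustrative):
CREATE INDEX idx_type ON pursuits(type);
CREATE INDEX idx_issuer ON pursuits(issuer);

-- The general shape of the slow aggregation query:
SELECT MAX(amount) AS maxAmount, MIN(amount) AS minAmount
FROM pursuits
WHERE type = 'pursuitsBank';
```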
Can you post your N1QL query and the EXPLAIN [SELECT statement] result here?
Queries with EXPLAIN will help. Typically, if your query times did not change after adding suitable indexes, that points to the queries not actually using those indexes. You can tell which index is being used by looking at the index scan operator in the EXPLAIN output.
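As an illustration (bucket and field names here are assumptions based on the document shown above, not taken from an actual EXPLAIN output):

```sql
-- Run EXPLAIN on the slow query and check the plan:
-- an IndexScan operator means a secondary index is used,
-- a PrimaryScan means the query is scanning the whole bucket.
EXPLAIN SELECT MAX(amount)
FROM pursuits
WHERE type = 'pursuitsBank';

-- A composite index on the filter field plus the aggregated field
-- can serve such a query more efficiently than an index on type alone:
CREATE INDEX idx_type_amount ON pursuits(type, amount);
```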
Thanks,
-cihan