Hi All,
We are seeing comparatively low throughput (~1k ops/second) while inserting 1,000,000 records into a 3-node cluster.
These are the cluster details:
3 x (4 cores, 5 GB RAM, CentOS VMs)
Bucket settings:
Is there any tuning we are supposed to do here?
This is from a Java client using the following item writer for Spring Batch.
protected void writeItems(List<? extends T> items) {
    if (CollectionUtils.isEmpty(items)) {
        logger.warn("no items to write to couchbase. list is empty or null");
    } else if (delete) {
        // remove the whole chunk
        couchbaseOperations.remove(items);
    } else if (overrideDocuments) {
        // save/upsert: overwrites documents that already exist
        couchbaseOperations.save(items);
    } else {
        // insert: fails for documents that already exist
        couchbaseOperations.insert(items);
    }
}
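For what it's worth, one direction we could try if the per-document blocking calls through the template are the bottleneck is pushing each chunk through the reactive API with bounded concurrency. This is only a sketch and assumes the Couchbase Java SDK 3.x ReactiveCollection API plus Project Reactor; the idOf function and the concurrency value are placeholders, and we have not benchmarked it yet.

import com.couchbase.client.java.ReactiveCollection;
import reactor.core.publisher.Flux;

import java.time.Duration;
import java.util.List;
import java.util.function.Function;

public class BulkWriter<T> {

    private final ReactiveCollection collection;
    private final Function<T, String> idOf;  // placeholder: how to derive the document key from an item
    private final int concurrency;           // max in-flight KV ops per chunk, e.g. 64-256

    public BulkWriter(ReactiveCollection collection, Function<T, String> idOf, int concurrency) {
        this.collection = collection;
        this.idOf = idOf;
        this.concurrency = concurrency;
    }

    // Upserts the whole chunk with bounded concurrency instead of one blocking call per document.
    public void write(List<? extends T> items) {
        Flux.fromIterable(items)
            .flatMap(item -> collection.upsert(idOf.apply(item), item), concurrency)
            .blockLast(Duration.ofMinutes(1)); // wait until the whole chunk is acknowledged
    }
}

Would moving the bulk load to something like this be the recommended approach, or should the template calls be able to reach much higher throughput with the right bucket/client settings?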
The document is very simple, like this:
{
    "part1": "ssjku",
    "amount": "10",
    "json_attributes": "{\"pqr\":\"Jane\",\"asdff\":\"Doe\"}",
    "type": "order",
    "part2": "jjjmki",
    "quantity": 30
}
Are we missing something, or is there any optimization we should apply here?
Any help on this matter is really appreciated.