We have 3 nodes, each with 16GB of memory and 100GB of disk for data. Each node hosts around 30M documents, and each document has 6 fields or so. We have one view and no index:
function (doc, meta) {
  // Only index documents of the class we care about
  if (doc.className == "something") {
    emit(doc.ui, {ui: doc.ui, ts: doc.ts});
  }
}
Looking at the memory consumption, the metadata ("other data") seems to explode. Is this weird, or is it expected, i.e. are we really running low on resources?
I have also started getting memory warnings: "Metadata overhead warning. Over 50% of RAM allocated to bucket 'dump' on node '172.x.x.x' is taken up by keys and metadata."
It looks like you don't have replicas configured; assuming a key length of, say, 20 bytes plus roughly 100 bytes of per-item metadata, that would come out at approximately:
30,000,000 × (20 + 100) bytes ≈ 3.6GB of metadata per node.
Given you have an ~11GB per-node Bucket Quota (of which ~50% is currently used), my rough numbers would put you at around 60% of the RAM in use taken up by metadata (roughly 3.6GB out of ~5.5GB).
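To make the arithmetic concrete, here is a small JavaScript sketch of the same estimate; the item count, key size, and ~100-byte metadata figure are the assumptions above, not measured values:

const itemsPerNode = 30e6;      // ~30M active items per node
const keyBytes = 20;            // assumed average key length
const metaBytesPerItem = 100;   // assumed per-item metadata overhead

// Total key + metadata memory per node
const metadataBytes = itemsPerNode * (keyBytes + metaBytesPerItem);
console.log((metadataBytes / 1e9).toFixed(1) + " GB of metadata per node"); // ~3.6 GB

// Against an ~11GB per-node quota of which ~50% (~5.5GB) is in use,
// metadata works out to roughly 3.6 / 5.5, i.e. the "around 60%" above.
const usedGB = 11 * 0.5;
console.log(Math.round(metadataBytes / 1e9 / usedGB * 100) + "% of RAM in use"); // ~65%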
This isn't necessarily an issue, assuming your dataset isn't growing; it's just highlighting that a large percentage of memory is used for storing metadata. A couple of suggestions (see the sketch after this list):
- Increase the bucket quota to make more memory available for values.
- Look into using Full Eviction instead of Value Eviction; note there are trade-offs with each mode.
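If it helps, both changes can be made through the cluster REST API (or via the UI / couchbase-cli bucket-edit). This is only a rough sketch: the host, credentials, and the 12288MB quota are placeholder values, and the parameter names (ramQuotaMB, evictionPolicy) are my assumption of the standard bucket-edit call rather than anything taken from your setup. It assumes Node 18+ for the built-in fetch:

// Bump the "dump" bucket's per-node RAM quota and switch it to Full Eviction
const params = new URLSearchParams({
  ramQuotaMB: "12288",            // example value only; size to your nodes
  evictionPolicy: "fullEviction"  // or "valueOnly" (the default)
});

fetch("http://172.x.x.x:8091/pools/default/buckets/dump", {
  method: "POST",
  headers: {
    "Authorization": "Basic " + Buffer.from("Administrator:password").toString("base64"),
    "Content-Type": "application/x-www-form-urlencoded"
  },
  body: params.toString()
})
  .then(res => console.log("bucket update HTTP status:", res.status))
  .catch(err => console.error("bucket update failed:", err));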