I encountered the following errors when I inserted a huge number of documents.
[07:47:01] - Hard Out Of Memory Error. Bucket "default" on node 172.31.9.215 is full. All memory allocated to this bucket is used for metadata.
[07:47:01] - Hard Out Of Memory Error. Bucket "default" on node 172.31.15.241 is full. All memory allocated to this bucket is used for metadata.
[07:47:01] - Hard Out Of Memory Error. Bucket "default" on node 172.31.9.214 is full. All memory allocated to this bucket is used for metadata.
[07:47:01] - Hard Out Of Memory Error. Bucket "default" on node 172.31.15.242 is full. All memory allocated to this bucket is used for metadata.
A ‘hard out of memory’ error happens when you’ve tried to insert more items than will fit in the bucket’s metadata memory quota. No data is lost, as long as you were handling the result of each operation when putting items in.
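For example, here is a minimal sketch of that kind of per-operation error handling using the Python SDK (4.x-style API; the connection details and documents are placeholders):

```python
from couchbase.auth import PasswordAuthenticator
from couchbase.cluster import Cluster
from couchbase.exceptions import CouchbaseException
from couchbase.options import ClusterOptions

# Placeholder connection details.
cluster = Cluster("couchbase://127.0.0.1",
                  ClusterOptions(PasswordAuthenticator("user", "password")))
collection = cluster.bucket("default").default_collection()

# Placeholder data to insert.
docs = {f"doc::{i}": {"value": i} for i in range(100_000)}

failed = {}
for key, doc in docs.items():
    try:
        collection.upsert(key, doc)
    except CouchbaseException as err:
        # Keep the item so it can be retried once memory pressure eases,
        # instead of silently losing it.
        failed[key] = (doc, err)

print(f"{len(failed)} inserts failed and should be retried")
```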
You may want to revisit the sizing of your cluster. Couchbase also has a full-ejection mode, which evicts metadata from memory along with values, so capacity is limited only by what the disk will hold. The docs describe the options.
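If it helps, here is a sketch of switching a bucket to full ejection through the cluster REST API (the `evictionPolicy` parameter and `fullEviction` value are the documented names; the host and credentials are placeholders, and depending on your server version you may need to resupply other bucket settings such as the RAM quota in the same request). Note that changing the ejection policy restarts the bucket:

```python
# Sketch: switch the "default" bucket to full ejection via the REST API.
import requests

resp = requests.post(
    "http://127.0.0.1:8091/pools/default/buckets/default",
    data={"evictionPolicy": "fullEviction"},
    auth=("Administrator", "password"),
)
resp.raise_for_status()  # raises if the cluster rejected the change
```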
Please post a new topic so the questions may be found by others. Quick answers:
Lower-priority buckets get less IO scheduling priority than higher-priority ones.
You can have as many index service nodes as you need; it depends on your dataset, rate of change, and expected workload. See the docs for some pointers.
Do you mean how to size it? You may need to experiment a bit or review the sizing docs; size is partly determined by your workload.
Also, note that if you have or are considering an Enterprise subscription, the Couchbase folks may be able to help analyze and size your deployment.
I will post a new topic.
I have a follow-up question about this.
Can a N1QL index have replicas? I think the index on document keys is replicated.
Is that true?
Out of interest, what kind of sets/second were you achieving when you were inserting your data?
It is possible that your sets/second were far outstripping the ~3000 mutations/second limit of the indexer, and as a result your items were being queued in the projector's DCP queue, resulting in the hard OOM error that you saw.
This could also be why using a new bucket with no indexes on it helped, as the changes did not need to be projected to the indexer.
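If that is what happened, one client-side mitigation is to throttle writes to stay below what the indexer can absorb. A minimal sketch, reusing the `collection` and `docs` from the sketch further up; the target rate here is purely illustrative:

```python
import time

TARGET_OPS_PER_SEC = 2000           # illustrative: stays under ~3000 mutations/s
INTERVAL = 1.0 / TARGET_OPS_PER_SEC

for key, doc in docs.items():
    start = time.monotonic()
    collection.upsert(key, doc)
    # Sleep off whatever remains of this operation's time slice so the
    # sustained rate stays at or below TARGET_OPS_PER_SEC.
    remaining = INTERVAL - (time.monotonic() - start)
    if remaining > 0:
        time.sleep(remaining)
```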