[Couchbase 2.1.1 community edition (build-764) 64bit, Windows]
I have set up a Couchbase server on my local machine and created a bucket with half a million records, averaging approximately 2KB each. I then wrote a client app to test read performance.
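For reference, the test client is essentially just a tight get() loop; here's a minimal sketch using the 1.x Java SDK (the bucket name, key scheme and read count are placeholders for my actual setup):

```java
import com.couchbase.client.CouchbaseClient;
import java.net.URI;
import java.util.Arrays;

public class ReadBench {
    public static void main(String[] args) throws Exception {
        // Connect to the local node; "test-bucket" is a placeholder name
        CouchbaseClient client = new CouchbaseClient(
                Arrays.asList(URI.create("http://localhost:8091/pools")),
                "test-bucket", "");

        int reads = 100_000;
        long start = System.currentTimeMillis();
        for (int i = 0; i < reads; i++) {
            // Keys are assumed to follow a "doc-<n>" scheme
            client.get("doc-" + (i % 500_000));
        }
        long elapsed = System.currentTimeMillis() - start;

        System.out.printf("%d reads in %d ms (%.0f docs/sec)%n",
                reads, elapsed, reads * 1000.0 / elapsed);
        client.shutdown();
    }
}
```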
The data in the bucket is 385MB on disk, and I have 1GB of RAM configured for the cache, so I assumed the entire database would fit in the cache.
After thoroughly warming the cache, I'm still seeing a large number of cache misses, and I was wondering if anyone has any idea why that might be.
I'm reading approximately 8,000 documents a second, which is pretty good, but I notice that when there are no cache misses, throughput easily climbs to between 15,000 and 20,000 documents a second. If I could sustain that level of performance, it would be great.
Here is an image showing the performance monitor during a run of the test client…
If you look at the graph, memory used is 819MB and the low water mark is 805MB, so Couchbase is ejecting some of your documents' values from memory (the keys and metadata stay in memory). The active docs resident % graph (the percentage of your items that have key, metadata and value in memory) shows 83.4%. If you add another node, or allocate more memory to the bucket, so that your working set fits below the low water mark, you will eventually reach 100% of your data in memory as the items are fetched back from disk.
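If you want to watch those numbers outside the admin console, you can poll the bucket's REST stats endpoint. A rough sketch (the bucket name is a placeholder, and a SASL-protected bucket would also need credentials); look for fields like mem_used, ep_mem_low_wat, ep_mem_high_wat and vb_active_resident_items_ratio in the returned samples. The cbstats tool against port 11210 exposes the same counters.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class BucketStats {
    public static void main(String[] args) throws Exception {
        // "test-bucket" is a placeholder; adjust host/bucket to your setup
        URL url = new URL(
            "http://localhost:8091/pools/default/buckets/test-bucket/stats");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);   // one big JSON document of samples
            }
        }
        conn.disconnect();
    }
}
```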
Okey dokey, thanks househippo. So basically the 819MB isn't really a true reflection of the amount of memory needed, since it doesn't include all the internal data structures used to manage the data, which, as you say, include the keys and metadata. I'm fairly new to Couchbase, so is there a way to see that additional memory usage?
I was definitely going to try increasing the available memory, but wanted to understand why the 1GB was insufficient when memory used was only 819MB.
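For what it's worth, here's the rough back-of-envelope I did afterwards. The key size and per-item metadata overhead are guesses, and the 385MB on-disk figure suggests the true in-memory average may be lower than 2KB, so treat this as an upper bound:

```java
public class FootprintEstimate {
    public static void main(String[] args) {
        // Assumptions, not measured figures: the ~2KB average also holds
        // in memory, keys are ~40 bytes, and per-item metadata overhead
        // is on the order of 64 bytes in Couchbase 2.x
        long docs = 500_000L;
        long bytesPerDoc = 2_048 + 40 + 64;   // value + key + metadata
        double totalMb = docs * bytesPerDoc / (1024.0 * 1024.0);
        // ~1026MB -- over the 1GB quota, let alone the 805MB low water
        // mark, so some values would have to be ejected
        System.out.printf("Estimated resident footprint: ~%.0fMB%n", totalMb);
    }
}
```

Even as a rough upper bound, that lands above the quota, which would line up with the ~83% resident ratio I'm seeing.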
It makes sense though, and I’ll do as you suggest.