Hi all.
I use Couchbase 4.0's Counter to accumulate sums of some data. Here is my environment:
There are 5 machines in the Couchbase cluster, and I created a Couchbase bucket with 2 replicas. The bucket has 20 GB of RAM per node.
My app does about 15,000-20,000 Counter writes per second to Couchbase.
When it runs, the Couchbase nodes show very high CPU usage (500% for memcached and 150% for beam.smp). It shouldn't be that high for memcached. How can I figure out what is going on?
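For reference, the write path is basically just counter increments, roughly like this (a minimal sketch assuming the Java SDK 2.x; the host, bucket name, and key are placeholders, not my real code):

```java
// Sketch only: assumes the Couchbase Java SDK 2.x; host, bucket name, and key are placeholders.
import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.Cluster;
import com.couchbase.client.java.CouchbaseCluster;

public class CounterWriteSketch {
    public static void main(String[] args) {
        Cluster cluster = CouchbaseCluster.create("cb-node1.example.com");
        Bucket bucket = cluster.openBucket("counters");

        // Each request atomically adds 1 to a counter document,
        // creating it with an initial value of 0 if it does not exist yet.
        long newValue = bucket.counter("stats::page::views", 1, 0).content();
        System.out.println("counter is now " + newValue);

        cluster.disconnect();
    }
}
```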
Hi @yqylovy,
You can start by working through the steps in the Couchbase Server troubleshooting documentation:
http://developer.couchbase.com/documentation/server/4.1/troubleshooting/troubleshooting-intro.html
Even if you don’t have Couchbase support, you can look through your own logs yourself as a first step; running a log collection gathers them all into one place (a collectinfo zip file). The page linked above explains what the various logs are, and you can search them for errors, if any.
Also, if you don’t need to stay on 4.0, you might want to upgrade to 4.1. It’s possible you’re hitting an issue that has already been fixed.
The only other thing I notice is that running 2 replicas on a 5-node cluster is a bit high. You may get better performance with 1 replica instead.
Edit: Just jumped back in to correct my earlier message - if you’re using CE, 4.1 is not available yet so that advice wouldn’t be of any use to you. My apologies.
@WillGardella, thanks a lot. I will try it later.
I am using CE. With 2 replicas, I want the cluster to stay available even if 2 machines go down.
Is there a benchmark for CE 4.0 that I can compare my results against?
Thank you again!
No, unfortunately we don’t have benchmarks for CE 4.0. If it’s not too difficult, you might look at the performance with a single replica and see if that makes a difference. It may be totally irrelevant.
Also note that there is “cbc pillowfight” in the libcouchbase distribution and Roadrunner for Java. These aren’t so much benchmarks as they are validation tools, but they may be useful starting points for you @yqylovy.
Roadrunner in particular was written to help people validate a cluster setup when they’re not sure whether the problem is in the cluster or in their own application code. You can use it as a “known good” workload generator. It’s a bit dated, but useful.
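If it helps while you try those, even a trivial generator along these lines can serve as a rough stand-in (a sketch only, not Roadrunner itself; it assumes the Java SDK 2.x, and the host, bucket, thread count, and duration are placeholders):

```java
// Rough sketch of a "known good" counter workload, not Roadrunner itself.
// Assumes the Couchbase Java SDK 2.x; host, bucket, thread count, and duration are placeholders.
import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.Cluster;
import com.couchbase.client.java.CouchbaseCluster;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class CounterLoadSketch {
    public static void main(String[] args) throws InterruptedException {
        Cluster cluster = CouchbaseCluster.create("cb-node1.example.com");
        final Bucket bucket = cluster.openBucket("counters");

        final int threads = 8;
        final long runSeconds = 60;
        final AtomicLong ops = new AtomicLong();
        final long deadline = System.nanoTime() + TimeUnit.SECONDS.toNanos(runSeconds);

        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int t = 0; t < threads; t++) {
            final int threadId = t;
            pool.execute(new Runnable() {
                public void run() {
                    long i = 0;
                    while (System.nanoTime() < deadline) {
                        // Same operation shape as the app: atomic increment, create at 0 if missing.
                        bucket.counter("load::" + threadId + "::" + (i++ % 1000), 1, 0);
                        ops.incrementAndGet();
                    }
                }
            });
        }

        pool.shutdown();
        pool.awaitTermination(runSeconds + 30, TimeUnit.SECONDS);
        System.out.println("approx ops/sec: " + ops.get() / runSeconds);

        cluster.disconnect();
    }
}
```

If a simple loop like this reaches similar throughput with the same CPU profile on the servers, the issue is more likely cluster sizing or configuration than your application code.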