We are seeing failures on our mobile clients during synchronization when targeting large numbers of channels. If a response to the _changes request does not come back within 10 seconds, sync fails repeatedly and never actually starts or completes. What is the solution to this?
That's not a lot of documents for _changes to process.
What's the longest it takes to get a response back from the _changes feed if you just hit the REST endpoint as a user through a browser?
Also, during the _changes feed request, do you see Couchbase Server doing lots of key GET()s?
If so, you might want to leverage the SG channel cache more.
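The channel cache is tuned per database in the Sync Gateway config file. As a rough sketch (bucket and server names are placeholders, and the exact option set may vary by SG version), raising the per-channel cache length looks something like:

```json
{
  "databases": {
    "chat_db": {
      "server": "http://localhost:8091",
      "bucket": "chat-bucket",
      "cache": {
        "channel_cache_max_length": 100000
      }
    }
  }
}
```

With a larger cache, repeated _changes requests for the same channels can be served from memory instead of triggering key GET()s against Couchbase Server.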
The bucket I am most worried about doesn't have many channels, so I can even keep 100k entries in the cache.
But this I will need to keep in mind for one of the buckets, which will hold chat data between some 5,000 users. The chat data will only be kept for a few weeks; then we will purge it, so the volume per channel there won't be much either. And of course it will always have continuous sync, so I guess this optimisation will be the best thing.
In the following, I have used only channel_cache_max_length and increased the value to 100k. I have read what the other settings mean in the documentation, but they are still not clear to me. If possible, can you let me know which ones will best help pull replication?
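For reference, the cache settings commonly documented alongside channel_cache_max_length are channel_cache_min_length (entries retained even after expiry) and channel_cache_expiry (how long, in seconds, a channel cache is kept). A hedged sketch of how they might sit together for a continuously-syncing pull workload (values here are illustrative, not recommendations):

```json
{
  "cache": {
    "channel_cache_max_length": 100000,
    "channel_cache_min_length": 50,
    "channel_cache_expiry": 300
  }
}
```

For continuous pull replication, a longer expiry and a reasonable min length tend to matter most, since they keep active channels warm between _changes polls; max length mainly matters for channels with a large backlog of revisions.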
Another thing: right now I have 9 buckets, but in production we will have only 4-5. Is it OK for all buckets to be served by every Sync Gateway, and then to load-balance the Sync Gateways using NGINX or an AWS Elastic Load Balancer?
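An NGINX front end for multiple Sync Gateways might look roughly like the following (hostnames are placeholders; the key points are keepalive to the upstreams and a read timeout long enough for long-poll/continuous _changes connections, which is worth checking given the 10-second failures described above):

```nginx
# hypothetical sketch: two SG nodes behind one NGINX
upstream sync_gateway {
    server sg1.example.com:4984;
    server sg2.example.com:4984;
    keepalive 64;
}

server {
    listen 80;
    location / {
        proxy_pass http://sync_gateway;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        # _changes long-poll/continuous requests hold connections open
        # far longer than NGINX's default 60s read timeout
        proxy_read_timeout 300s;
        proxy_buffering off;
    }
}
```

The same considerations apply to an AWS ELB: its idle connection timeout must be raised above the _changes heartbeat interval, or long-running replications will be cut off.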
Another point: what if we run Sync Gateway in Docker containers managed by Kubernetes? That could give us much more flexibility in provisioning new nodes.
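A minimal Kubernetes sketch for that idea, assuming the official couchbase/sync-gateway image and a ConfigMap holding the config file (all names and the replica count are placeholders):

```yaml
# hypothetical sketch: Sync Gateway as a scalable Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sync-gateway
spec:
  replicas: 3
  selector:
    matchLabels:
      app: sync-gateway
  template:
    metadata:
      labels:
        app: sync-gateway
    spec:
      containers:
      - name: sync-gateway
        image: couchbase/sync-gateway
        ports:
        - containerPort: 4984
        volumeMounts:
        - name: config
          mountPath: /etc/sync_gateway
      volumes:
      - name: config
        configMap:
          name: sync-gateway-config
---
# Service fronting the pods; a LoadBalancer or Ingress would sit above this
apiVersion: v1
kind: Service
metadata:
  name: sync-gateway
spec:
  selector:
    app: sync-gateway
  ports:
  - port: 4984
    targetPort: 4984
```

Scaling out is then just a change to `replicas`, though the load balancer in front still needs the same long-timeout settings as any other SG deployment.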