The client is constantly communicating with the cluster, and as the cluster topology changes, those changes are propagated to the client. If all nodes in the cluster are down, the next operation (get, set, etc.) will fail with an exception.
Creating the client doesn’t actually “create the connection”; it initializes a connection pool internally based on your configuration, which by default starts at a minimum of 10 connections and will grow to 20 under usage demands. It also opens a “streaming connection” to the cluster that picks up changes to the cluster (nodes added or removed, vbuckets moved, etc.), which are then reflected in the client when a change is detected. When this happens, the connection pool is recreated based on the new cluster configuration.
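For illustration, here is a minimal sketch of sizing that pool explicitly; it assumes the 1.x .NET client’s CouchbaseClientConfiguration and its SocketPool settings, and the URL and bucket name are placeholders:

```csharp
using System;
using Couchbase;
using Couchbase.Configuration;

class Bootstrap
{
    static CouchbaseClient CreateClient()
    {
        var config = new CouchbaseClientConfiguration();

        // Entry point the client bootstraps and streams cluster updates from (placeholder host).
        config.Urls.Add(new Uri("http://127.0.0.1:8091/pools"));
        config.Bucket = "default";

        // Pool sizes matching the defaults described above.
        config.SocketPool.MinPoolSize = 10;
        config.SocketPool.MaxPoolSize = 20;

        return new CouchbaseClient(config);
    }
}
```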
So the answer is no, you do not need to recreate the client if a change occurs in the cluster. In fact, the client should be a long-lived object that is created when the process or application starts up and destroyed when it shuts down.
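One common way to do that is to hold a single instance in a static wrapper for the lifetime of the process. This is only a sketch; the CouchbaseManager class name is hypothetical:

```csharp
using Couchbase;
using Enyim.Caching.Memcached;

// Hypothetical wrapper that keeps one CouchbaseClient alive for the whole process.
public static class CouchbaseManager
{
    // Created once at startup; the client manages its own pool and cluster updates.
    private static readonly CouchbaseClient _client = new CouchbaseClient();

    public static CouchbaseClient Client
    {
        get { return _client; }
    }
}

// Usage elsewhere in the application:
//   CouchbaseManager.Client.Store(StoreMode.Set, "key", "value");
//   var value = CouchbaseManager.Client.Get("key");
```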
If I initially have the CouchBaseServer service stopped and I run the code provided, operations like “store” or “get” fail with an exception and return false.
If I later start CouchBaseServer and try to “store” or “get” with the same client instance, the result is the same: it fails with the same exception. I need to recreate the instance of CouchbaseClient.
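Roughly what I am running is sketched below; it assumes the 1.x .NET client’s ExecuteStore method, which wraps the result and exposes the failure message and exception instead of a bare bool:

```csharp
using System;
using Couchbase;
using Enyim.Caching.Memcached;

class Repro
{
    static void Main()
    {
        var client = new CouchbaseClient();

        // ExecuteStore returns an operation result, so the underlying
        // exception is visible when no node is reachable.
        var result = client.ExecuteStore(StoreMode.Set, "mykey", "myvalue");
        if (!result.Success)
        {
            Console.WriteLine("Store failed: " + result.Message);
            if (result.Exception != null)
            {
                Console.WriteLine(result.Exception);
            }
        }
    }
}
```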
Yeah, that is a completely different scenario, and I am not sure how the client would handle it. The client assumes that at least one node of the cluster is always up and running.
I am curious, in what use case would the cluster be brought down and then brought back up?