Couchbase Server on Kubernetes. Which one?

Hello everyone,

I’m trying to set up a Couchbase (Community 4.1) cluster on Kubernetes running on Google Cloud Platform, and decided to look for some guides/tutorials to help kickstart things.

However, I’ve come across two different articles about running Couchbase on Kubernetes, and they take different approaches (as described in their repos’ READMEs), even though both are from Couchbase or Couchbase employees.

While one of them is on the Kubernetes site, the other only mentions Amazon AWS, yet both point to the same repo.
Their last commits are also about four months apart.

So, the GCP/Kubernetes article is here, and the repo instructions are here.
The official Couchbase repo is here.

So… which one is it? Do I need to set up an etcd node or not?

Help please?

@celso.santos Please refer to the article on the Kubernetes site, as that has the latest information and shows how to set up a scalable cluster without installing any additional components.

I’ll work on syncing the official repo as well.
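At a high level, the steps from the article boil down to something like this (a sketch only; the exact manifest file names and the RC name in the repo’s cluster/ directory may differ):

# Sketch of the deployment steps from the article
# (manifest file names and the RC name are illustrative)
git clone https://github.com/arun-gupta/couchbase-kubernetes.git
cd couchbase-kubernetes/cluster

# service in front of the "master" pod, then the master replication controller
kubectl create -f couchbase-master-service.yml
kubectl create -f couchbase-master-controller.yml

# worker replication controller, scaled to the desired number of nodes
kubectl create -f couchbase-worker-controller.yml
kubectl scale rc couchbase-worker-rc --replicas=2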


Hello @arungupta, thank you for your help. It’s confusing to have things in two different places.

@celso.santos agreed, we will get that fixed!

Hello @arungupta,

I’m having some difficulties/questions which I hope you’ll be able to clear up quickly:

My cluster is running on GCP, no problems there.
I can also access Couchbase’s web UI without any issues.

However, I’m having a couple of issues which I’ve been unable to get past:
1 - Developing locally and trying to connect to/query my GCP-hosted DB doesn’t give a connection error; however, when trying to fetch data I never get a response (eventually the request times out).
1.1 - I’d like to develop locally against the remote DB, so I guess I need to add those firewall rules.
2 - Application services running on GCP in a different cluster do get data back, but the request takes more than 30s to respond.

I’ve opened port 8091 as well as all the other ports mentioned when installing Couchbase locally:

Please note that you have to update your firewall configuration to
allow connections to the following ports:
4369, 8091 to 8094, 9100 to 9105, 9998, 9999, 11209 to 11211,
11214, 11215, 18091 to 18093, and from 21100 to 21299.
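To open those ports on GCP, the firewall rule I’m adding looks roughly like this (a sketch; the network name and source range are placeholders for my setup):

# Allow the Couchbase ports listed above, only from my development machine
gcloud compute firewall-rules create allow-couchbase \
  --network default \
  --source-ranges 203.0.113.10/32 \
  --allow tcp:4369,tcp:8091-8094,tcp:9100-9105,tcp:9998,tcp:9999,tcp:11209-11211,tcp:11214,tcp:11215,tcp:18091-18093,tcp:21100-21299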

How can I solve 1 and 2?
This is a big issue in my development/staging environments right now, and I should be going to production by the end of next week.

Best Regards,

Hello @arungupta,

Thank you so much for your post on running a Couchbase cluster in Kubernetes. I had been using the etcd-based solution before, but since it introduces a single point of failure, your post came at the perfect time for me to move to a more stable solution.

I do have a question though. If I understand correctly, you create a Service which links to the master RC, but the workers don’t listen behind this Service. What happens when the master dies? If it cannot be restarted immediately by the Kubernetes cluster, then the CB cluster will effectively be down for the application. Would it be a good idea to put all the workers behind that same Service and let any of them answer requests? Or am I missing something?

Thanks again!
Laura

I too have the same question as you.

I think this may be one of the causes of the issues I’m having with long response times, as posted in Connection issues on Google Cloud Platform.

@celso.santos The client SDK downloads the cluster map and communicates directly with the server that has the data. So the “master” is only for bootstrapping and viewing the state of the cluster. But in order for a client application to communicate with the cluster, the app needs to be deployed to the Kubernetes cluster as well; this way the addresses in any cluster map downloaded by the client will be reachable.
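For example, the client application can be deployed into the same Kubernetes cluster with something along these lines (a sketch; the image name, environment variable, and service name are placeholders):

# Run the client app inside the same Kubernetes cluster so that the
# node addresses in the downloaded cluster map are reachable
kubectl run my-app --image=gcr.io/my-project/my-app:latest \
  --env="COUCHBASE_HOST=couchbase-master-service"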

kubernetes-java-sample/maven at master · arun-gupta/kubernetes-java-sample · GitHub is a sample that shows how to use a Spring Boot application to access a Couchbase node running in Kubernetes. I’ll update this sample to work against a cluster this week.

Regarding the 30s response time: is the client application running in the same GCP data center? What language is the application written in?

@lauraherrera If the master dies, then Kubernetes will restart it because it is inside a Replication Controller. The client application downloads the cluster map anyway and communicates directly with the server where the data is stored. So the “master” is only for bootstrapping the cluster and is not a SPOF in this case.

Did you face any issue with this setup?

Hi @arungupta,

Thanks for your reply. So far I haven’t had any problems with the setup, but with the previous cluster it happened that Kubernetes wasn’t able to restart the master for an external reason, so I am worried this may lead to downtime in my application if all communication with the CB cluster goes via the master. One of the advantages of CB, as I understand it, is that my application can make requests to any CB node, making use of CB’s high availability.

Laura

@lauraherrera You are right, “master” is only for bootstrapping the cluster. Communication with the cluster does not have to go through the master. The client SDK has a cluster map and knows the exact server in the cluster where the data exists, or needs to go, and communicates with it directly. That’s why it’s required that the client application has access to all nodes in the cluster, as opposed to just the master.

Hi @arungupta,

I see; it’s just that all of my microservices talk to each other using a Service, so what I did was create another Service that can reach all CB nodes. That seems to be working OK.
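The extra Service is really just a selector that matches every Couchbase pod, master and workers alike; roughly this (a sketch; the label and port list are from my setup and may need adjusting):

# Sketch of the Service that fronts all Couchbase pods (label/ports are placeholders)
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: couchbase-all-nodes
spec:
  selector:
    app: couchbase
  ports:
  - name: admin
    port: 8091
  - name: query
    port: 8093
  - name: data
    port: 11210
EOF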

Now, as a test, I have just killed the master node. Kubernetes restarted it as expected, but now the cluster doesn’t contain the rest of the CB nodes… only the newly created master. Any ideas? I guess when a new master gets created, the existing nodes should re-run their configuration to be added to the “new” cluster?

Ta
Laura

@lauraherrera [quote=“lauraherrera, post:12, topic:10221, full:true”]
Now, as a test, I have just killed the master node. Kubernetes restarted it as expected, but now the cluster doesn’t contain the rest of the CB nodes… only the newly created master. Any ideas? I guess when a new master gets created, the existing nodes should re-run their configuration to be added to the “new” cluster?
[/quote]

How do you know that the cluster doesn’t contain the rest of the CB nodes? Are the other nodes talking to the “master” Couchbase using the service name?

Hi @arungupta,

No, I am querying it with the Couchbase CLI, like this:

couchbase-cli server-list --cluster=my-cluster:8091 -u my-user -p my-pw

When I run that command on the new master, I get only the master itself in the server list:
ns_1@new-master-ip new-master-ip:8091 healthy active

Hello @arungupta, yes, the services are running in the same data-center, just a different cluster (I want to keep DB and application in separate clusters).

@lauraherrera I think I may be facing the same issue. Did you have to add the worker nodes to the cluster manually? I had to add them manually in the Admin UI (which was weird), and I think that may be the cause of your issue: nodes don’t get added to the cluster automatically.

Thanks @celso.santos,

I am not able to access the GUI; this is by design. Everything in my environment has to happen automatically, so I cannot change anything manually through the GUI.

If I try to add the existing node to the “new” cluster manually using the CLI, I get an error:

Error: Failed to add server old-node-ip:8091: Prepare join failed. Node is already part of cluster.
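For reference, the command I’m running is roughly this (user, password, and IPs are placeholders):

# Try to re-add the existing worker to the restarted "master", then rebalance
couchbase-cli server-add --cluster=new-master-ip:8091 -u my-user -p my-pw \
  --server-add=old-node-ip:8091 \
  --server-add-username=my-user --server-add-password=my-pw
couchbase-cli rebalance --cluster=new-master-ip:8091 -u my-user -p my-pw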

Also, from the GUI:

Warning – Adding a server to this cluster means all data on that server will be removed.

@arungupta, do you have any ideas on how to have the dying pods re-join the existing cluster?

Thanks a lot
Laura

@celso.santos @lauraherrera Just wanted to let you know that I’ll be looking at this thread today/tomorrow. Got a working setup as described at couchbase-kubernetes/cluster at master · arun-gupta/couchbase-kubernetes · GitHub.

Hello @arungupta,

Thank you. I’m already using that sample in my cluster.
Is there any other article that deals with persistent storage?

At the moment I’m only using one persistent disk, attached to my master, but I suspect this won’t be enough if the master node fails, since the workers don’t have any persistence configured.
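For reference, the disk is attached to the master pod roughly like this (a sketch; the disk name, image tag, and mount path are placeholders from my setup, and in reality this sits inside the RC’s pod template):

# Sketch: GCE persistent disk mounted into the Couchbase master pod
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: couchbase-master
spec:
  containers:
  - name: couchbase
    image: couchbase:community-4.1.0
    volumeMounts:
    - name: couchbase-data
      mountPath: /opt/couchbase/var
  volumes:
  - name: couchbase-data
    gcePersistentDisk:
      pdName: couchbase-master-disk
      fsType: ext4
EOF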

Also, it seems like traditional persistent storage (at least on Google) can only be written to by one node.
Don’t the worker nodes need to write to disk as well?

If so, I think the correct path for persistent storage on a Couchbase cluster is to use GlusterFS, since common persistent disks do not support writes from multiple pods.

Are you able to confirm this information?

Best Regards,

@celso.santos I wrote a blog about using Amazon EBS with Kubernetes at Stateful Containers on Kubernetes using Persistent Volume and Amazon EBS - The Couchbase Blog. Does that help?