The operator seems to create a couple of different cluster services for the server cluster: cb and cb-srv.
Can anyone tell us what the difference is between these two services? If we build an API microservice that runs in the same k8s cluster, which one would it use – and what is the other one for?
$ minikube service list
|-----------|--------|-----------------------------|
| NAMESPACE | NAME   | URL                         |
|-----------|--------|-----------------------------|
| default   | cb     | No node port                |
| default   | cb-srv | No node port                |
| default   | cb-ui  | http://172.16.129.128:31562 |
|           |        | http://172.16.129.128:30703 |
|-----------|--------|-----------------------------|
Fairly simple! The plain cb service references all nodes in the cluster and is used to establish stable DNS names for the nodes. The cb-srv service only references the nodes running the data service and creates an SRV record for it. That gives you a stable couchbase://cb-srv.default.svc to use in your client connection string, which performs service discovery and client bootstrap via the data service.
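For example, an API microservice running in the same k8s cluster could bootstrap against that SRV record with the Go SDK. A minimal sketch, assuming the gocb v2 API; the credentials and bucket name are placeholders:

```go
package main

import (
	"log"
	"time"

	"github.com/couchbase/gocb/v2"
)

func main() {
	// The cb-srv service publishes an SRV record, so this single stable
	// hostname is enough for the SDK to discover every data-service node.
	cluster, err := gocb.Connect("couchbase://cb-srv.default.svc", gocb.ClusterOptions{
		Username: "Administrator", // placeholder credentials
		Password: "password",
	})
	if err != nil {
		log.Fatal(err)
	}

	// Placeholder bucket name; wait for the bootstrap to complete.
	bucket := cluster.Bucket("default")
	if err := bucket.WaitUntilReady(5*time.Second, nil); err != nil {
		log.Fatal(err)
	}
	log.Println("connected via cb-srv")
}
```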
Magic!
Simon,
Is there a way to properly reach the cluster from outside k8s using just a NodePort? We're doing development and have a 3-node Couchbase cluster spun up in k8s. We're trying to write some integration tests against it from outside the cluster using the Golang SDK. We exposed the cb service using a NodePort and can reach the endpoint from outside the cluster, but when we use the SDK we get operation has timed out, similar to this SO article: https://stackoverflow.com/q/49197790
Is there a way around this?
No, that won't be possible with public addressing; the node IP reported will always be a private IP… Even if the node did have a public IP, we'd need to do a cluster-level nodes lookup, which is an admin-level privilege, and everyone would complain. That, and enabling TLS would be next to impossible; we'd not allow plaintext on the net.
You should be able to establish a VPN tunnel between your client and the kube node network. That would work using node IPs, so long as they are routable. You seem to have already managed this. Have you enabled the spec.exposedFeatures: client option yet? That would populate the service discovery records in Couchbase correctly for the client.
See https://docs.couchbase.com/operator/1.2/network-requirements.html#out-of-cluster-networking-with-private-ip-based-addressing. That should give you what you need.
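Once the tunnel is up and that feature is enabled, the client can bootstrap against the now-routable private addresses directly. A hedged sketch with the Go SDK, where the pod IPs are hypothetical stand-ins for your own network:

```go
package main

import (
	"log"

	"github.com/couchbase/gocb/v2"
)

func main() {
	// Hypothetical pod IPs, routable from the client over the VPN tunnel;
	// substitute the addresses of your own Couchbase pods. Note the SDK
	// will also dial every node it finds in the cluster map, so all of
	// them must be reachable, not just the bootstrap node.
	cluster, err := gocb.Connect("couchbase://10.244.0.10,10.244.0.11,10.244.0.12",
		gocb.ClusterOptions{
			Username: "Administrator", // placeholder credentials
			Password: "password",
		})
	if err != nil {
		log.Fatal(err)
	}
	defer cluster.Close(nil)
}
```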
Hello Simon. Thank you for the response. I think maybe I wasn't clear. I'm not trying to expose the cb cluster service externally on the public Internet; for that, the LoadBalancer type would work fine. The Out-of-cluster Networking with Private IP-based Addressing article is something I already have set up, I believe. I can, for example, reach the cb-ui web app from outside the cluster just fine, as it has a NodePort, and I created a NodePort for the cb service as well. When we connect from the Go SDK, however, it fails because it cannot directly address each individual node.
What we have is a minikube setup with a 3-node Couchbase cluster inside it. We want to run golang integration tests for our API against the minikube cluster (outside the cluster, on the same localhost); e.g. the minikube service cb command will print the NodePort endpoint that is addressable from outside the cluster.
I haven't looked at the spec.exposedFeatures: client option – maybe that is the piece I'm missing? I'll look into that.
Hi Simon, we enabled spec.exposedFeatures for both client and xdcr, and here's a look at the output from minikube service list.
In order for us to talk to the Couchbase cluster from the golang SDK, can we just expose the cb-srv service via a NodePort?
Don't forget the routes from your host into the minikube node network (next-hop 192.168.99.100). You may need a return route added with minikube ssh.
So the SRV stuff is internal to the k8s cluster, unless you forward all DNS traffic to the kube-dns server… (ugh!) To initiate a connection, it's best to follow the XDCR guide to get a connection string: https://docs.couchbase.com/operator/1.2/xdcr.html#xdcr-to-a-different-kubernetes-cluster-with-overlay-networking-2. Yeah, it's not the nicest, but I'm hoping to make it better one day; my hands are tied by the ever-shifting sands of Kubernetes.
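The end result of that guide is an ordinary connection string the SDK can bootstrap from outside the cluster. A rough sketch: the address and NodePort below are hypothetical values you would read from the services the operator creates once exposedFeatures is enabled, and the network=external option (assumed here, for SDK versions with alternate-address support) asks the SDK to prefer the exposed addresses from the cluster map:

```go
package main

import (
	"log"

	"github.com/couchbase/gocb/v2"
)

func main() {
	// Hypothetical externally reachable address and NodePort; substitute
	// the values from your own exposed services. network=external tells
	// the SDK to use the exposed "alternate" addresses from the cluster
	// map rather than the internal pod addresses.
	connStr := "couchbase://172.16.129.128:30123?network=external"
	cluster, err := gocb.Connect(connStr, gocb.ClusterOptions{
		Username: "Administrator", // placeholder credentials
		Password: "password",
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cluster.Close(nil)
}
```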
We did get it to work by enabling the xdcr service in the cluster config. That was the missing piece of the puzzle. Thank you for the help and explanation.
In all honesty, we'll probably go all in on services and do this by default (making it unconfigurable to the end user) in order to accommodate service meshes in the near future – if technically feasible, that is.