Hi,
I am trying to get rid of the issue of the IP addresses constantly changing for the CB nodes in the cluster (see the image below).
I want the SDK to connect directly using a DNS name or hostname, but I am not sure what that name would be or how to use it.
I have a 4-node cluster in Kubernetes, installed using the default Operator and cluster Helm packages.
I have no hostname, domain, or sub-domain specification in my cluster YAML file.
I assume I need to use some headless service domain address, but I don't know how to achieve that.
I read this but am not sure how to apply the concept: DNS for Services and Pods | Kubernetes
My connection string for connecting to the bucket and opening it looks like the one below. (The IP address is a fake one for security; it is one of the addresses from node 1, with the exposed NodePort for the 11210 data service.)
COUCHBASE_CONNSTR = "couchbase://10.80.67.123:30853"
I have a headless service running in Kubernetes like below:
cb-revstrat-ilcb-srv ClusterIP None 11210/TCP,11207/TCP
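If I understand the docs correctly, a headless service should give the pods a stable in-cluster DNS name I could use instead of a NodePort IP. A small sketch of the name I think that service would resolve to (assuming the default `cluster.local` cluster domain; the service name and `bi-cb` namespace are from my setup above):

```python
# Sketch: the in-cluster DNS name a headless service provides.
# Assumptions: default "cluster.local" cluster domain; service
# cb-revstrat-ilcb-srv lives in the bi-cb namespace (as above).
service = "cb-revstrat-ilcb-srv"
namespace = "bi-cb"
cluster_domain = "cluster.local"

# A headless service resolves to the pod IPs behind it, so the SDK
# could bootstrap by name rather than by a NodePort IP that changes.
connstr = f"couchbase://{service}.{namespace}.svc.{cluster_domain}"
print(connstr)
# couchbase://cb-revstrat-ilcb-srv.bi-cb.svc.cluster.local
```

Is a connection string like this what I should be passing to the SDK, or does it only work for clients running inside the same cluster?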
When I look at one of the nodes from the Admin UI console I see:
Below are the first few lines shown when editing the existing cluster:
apiVersion: couchbase.com/v1
kind: CouchbaseCluster
metadata:
  creationTimestamp: "2020-04-03T19:03:51Z"
  generation: 94
  name: cb-revstrat-ilcb
  namespace: bi-cb
  resourceVersion: "215400320"
  selfLink: /apis/couchbase.com/v1/namespaces/bi-cb/couchbaseclusters/cb-revstrat-ilcb
  uid: cb4e0097-3015-47c1-860e-c0c27c95dca3
spec:
  adminConsoleServiceType: NodePort
  adminConsoleServices:
  - data
  authSecret: youthful-mule-cb-revstrat-ilcb
  baseImage: couchbase/server
  buckets:
  - compressionMode: passive
    conflictResolution: seqno
    enableFlush: true
    evictionPolicy: fullEviction
    ioPriority: high
    memoryQuota: 128
    name: default
    replicas: 1
    type: couchbase
  cluster:
    analyticsServiceMemoryQuota: 1024
    autoFailoverMaxCount: 3
    autoFailoverOnDataDiskIssues: true
    autoFailoverOnDataDiskIssuesTimePeriod: 120
    autoFailoverServerGroup: false
    autoFailoverTimeout: 120
    clusterName: ""
    dataServiceMemoryQuota: 2048
    eventingServiceMemoryQuota: 1024
    indexServiceMemoryQuota: 2048
    indexStorageSetting: plasma
I looked at the post below from @simon.murray; however, I can't figure out what network change would be needed and what to ask the Network team:
Now the problem with using node ports is that if a node goes away or changes address, the clients will break. If the pod that generates the node port goes away, the clients will break. As you are using IP addresses and NodePorts you cannot encrypt the traffic. Be aware of these limitations.
The correct way to connect will be described in the upcoming Operator 2.0 documentation. The short version is that your clients talk to a DNS server that forwards the DNS zone %namespace%.svc.cluster.local to the remote Kubernetes DNS server where the cluster lives. The remote cluster must be using flat networking (no overlays). The client can then connect to couchbase://%clustername%.%namespace%.svc (and must have at least a cluster.local search domain configured for its stub resolver). This gives you high-availability, service discovery and the option of using TLS.
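If I read that advice right, the connection string for my cluster would be built from the cluster name and namespace in the YAML above. A quick sketch of what I think that would look like (assuming my client's stub resolver has the `cluster.local` search domain configured, as the quote requires):

```python
# Sketch of the connection string described in the quoted advice,
# filled in with the cluster name and namespace from my YAML above.
# Assumes the client resolver forwards the bi-cb.svc.cluster.local
# zone to the remote Kubernetes DNS and has a cluster.local search
# domain, per the quote.
clustername = "cb-revstrat-ilcb"
namespace = "bi-cb"

connstr = f"couchbase://{clustername}.{namespace}.svc"
print(connstr)
# couchbase://cb-revstrat-ilcb.bi-cb.svc
```

So the remaining question for my Network team would be how to forward that DNS zone from our client network to the Kubernetes DNS server, and whether our network is flat (no overlays) as required. Is that the right ask?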