Hi CB Team
I did a single-node data load using cbimport in a simple Docker container for Couchbase and it was successful.
Now we have a 4-node cluster on a K8s platform; I can get the list of NodePorts with 'kubectl get svc' and the list of pod IPs from 'kubectl describe pods | grep IP'.
My problem is that the same cbimport does not work the way it did against the single node.
What is the recommended approach to using cbimport in K8s? Currently I am stuck on the NodePort/IP definition (whether I need to use only the IP or include the Kubernetes port as well). And how can I use multiple Couchbase nodes in the cluster to do the cbimport?
This is the simple cbimport command I am running.
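For illustration, the shape of it is roughly the following (the bucket name, credentials, and data file are placeholders, not my real values):

# single node running locally in Docker
cbimport json -c couchbase://127.0.0.1 \
  -u Administrator -p password \
  -b mybucket \
  -d file:///data/import.json -f lines \
  -g key::#MONO_INCR#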
Mostly correct. We add an SRV record for you to do service discovery, so you just need to change your connection string to read couchbase://${clustername}-srv.${namespace}.svc. Full documentation is provided here: https://docs.couchbase.com/operator/1.2/couchbase-clients.html
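For example, with a cluster named cb-example deployed in the default namespace, an import run from a pod inside the same Kubernetes cluster would look roughly like this (bucket, credentials, and data file are placeholders):

# connection string follows the couchbase://${clustername}-srv.${namespace}.svc format
cbimport json -c couchbase://cb-example-srv.default.svc \
  -u Administrator -p password \
  -b mybucket \
  -d file:///data/import.json -f lines \
  -g key::#MONO_INCR#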
Thanks Simon. I still need a little clarity on the cluster connection definition.
This is the cluster I am in:
kubectl get cbc
NAME AGE
tccb-cluster 28d
This is what is showing in the Web Admin Console:
Name: tccb-cluster-0000.tccb-cluster.bi-cb.svc
Name: tccb-cluster-0001.tccb-cluster.bi-cb.svc
Name: tccb-cluster-0002.tccb-cluster.bi-cb.svc
These are the namespaces:
SELECT * FROM system:namespaces
id name
default default
I used the below and it doesn't work:
cluster = Cluster('couchbase://tccb-cluster-srv.bi-cb.svc.cluster.local')
I am not sure, from the documents, what the (cluster_name) part of the format means (my reading is sketched below).
The above works if my client / app pods are in the same K8s cluster…
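My reading of the documented format, using my cluster name (tccb-cluster) and namespace (bi-cb), is the following; the exact host form is my assumption, and the bucket and credentials are placeholders:

# ${clustername} = tccb-cluster, ${namespace} = bi-cb
cbimport json -c couchbase://tccb-cluster-srv.bi-cb.svc \
  -u Administrator -p password \
  -b mybucket \
  -d file:///data/import.json -f lines \
  -g key::#MONO_INCR#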
But my question is: if this pod is killed, then 0000 will no longer be assigned by K8s and it will spin up something with the name 0009; then my cluster connection will fail, correct?
How do I handle that situation?
Also, I am looking for a more generic cluster definition for access from outside the pod, including from a different K8s cluster. How can this be done?
With the above cluster definition N1QL works, but for access from outside the K8s cluster I need to expose port 8093 for the query service, no?
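For instance, if 8093 were reachable from outside (via a NodePort or LoadBalancer, which is my assumption here), I would expect to be able to hit the query service REST endpoint directly, something like this (host, port, and credentials are placeholders):

# send a N1QL statement to the query service REST API
curl -u Administrator:password \
  'http://EXTERNAL_HOST:8093/query/service' \
  --data-urlencode 'statement=SELECT * FROM system:namespaces'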
@arungupta @geraldss can you please help with N1QL query access as well for the K8s infrastructure?
Hi
I think it's not working for my client.
Do you mind sending me the details on how the container service looks for you?
Does it appear like the below, as a headless service, when you execute 'kubectl get svc'?
cb-example-srv.default.svc.cluster.local
In my case, this is what shows in 'kubectl get svc':
tccbadc-cluster-srv ClusterIP None 11210/TCP,11207/TCP
And inside my client pod I am using dig as below, and I assume this is not successful. So what am I missing here?
Here you can see the base domain is definitely cluster.local for this particular cluster. If nothing seems out of the ordinary I’d consult whoever provides your Kubernetes DNS to check that SRV records are in fact created for you like they should be.
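One quick check from inside a client pod, assuming the operator names the 11210 port couchbase on that headless service (which is what the SDK's DNS SRV lookup relies on), is to query the SRV record directly; NAMESPACE below is a placeholder for wherever tccbadc-cluster is deployed:

# should return one entry per Couchbase pod (-0000, -0001, ...)
dig +short SRV _couchbase._tcp.tccbadc-cluster-srv.NAMESPACE.svc.cluster.local

If that comes back empty, the DNS provider is not creating SRV records for the headless service as expected.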