Node Map from server to an external client returns INTERNAL DNS names

Hi:

I am attempting to connect to a Couchbase server from a client OUTSIDE the cluster. I have successfully deployed external-dns so that my Kubernetes Couchbase service correctly returns the IP addresses of the pods. (I'm using VPC-CNI, so these IPs are reachable directly from outside the cluster.)

The PROBLEM is that I cannot retrieve data: when the client first connects to the cluster (using the correct IP it got from external-dns), the cluster returns the NODE MAP of the DATA pods, but the entries are NOT IP addresses. Instead they are DNS names of the form clustername-0000.svc.cluster.local. The client doesn't know what IP addresses go with those names, so the connection fails.

How can I get the node map returned by the server to contain IP addresses and NOT DNS names like clustername-0000.svc.cluster.local?

Here’s what’s happening:
The client (10.1.0.10) looks up clustername.myexternaldomain.com from my external DNS.

The DNS server for clustername.myexternaldomain.com returns 3 reachable pod addresses:
10.1.0.1, 10.1.0.2, 10.1.0.3

The client connects to 10.1.0.1 and gets the node (pod) map.

The node map includes clustername-0000.svc.cluster.local, clustername-0001.svc.cluster.local, and clustername-0002.svc.cluster.local (these names are NOT resolvable by the client).

The connection fails.
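The failure mode above can be sketched in a few lines. The names and addresses are the ones from this thread; the dict stands in for the client's actual DNS view (external-dns publishing to public DNS), so treat this as an illustration, not real resolver code:

```python
# Illustration of the failure mode: the resolver dict stands in for the
# client's real DNS view (what external-dns has published publicly).
external_dns = {
    "clustername.myexternaldomain.com": ["10.1.0.1", "10.1.0.2", "10.1.0.3"],
}

# What the cluster hands back in the node map after the first connection:
node_map = [
    "clustername-0000.svc.cluster.local",
    "clustername-0001.svc.cluster.local",
    "clustername-0002.svc.cluster.local",
]

def resolve(name):
    """Return the addresses the client's DNS knows for `name`, else None."""
    return external_dns.get(name)

# Bootstrap succeeds: the public name resolves to reachable pod IPs...
assert resolve("clustername.myexternaldomain.com") is not None

# ...but no node-map entry resolves, so subsequent requests fail.
unresolvable = [n for n in node_map if resolve(n) is None]
print(unresolvable)
```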

Hey David,

Does your connection string indicate that you need to use external addresses, and does each Couchbase pod have the external DNS alternate address applied to the pod?
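For reference on the first part of the question: Couchbase SDKs accept a `network=external` option in the connection string, which tells the SDK to use the advertised alternate (external) addresses rather than the defaults. A minimal sketch, using the hostname David gives later in the thread; it only builds the string and does not connect:

```python
# Build a connection string that asks the SDK to prefer the external
# alternate addresses; no connection is attempted here.
host = "cbinternal.e2.mydomain.com"  # hostname taken from this thread
connstr = f"couchbase://{host}?network=external"
print(connstr)
# With the Python SDK this string would then be passed to Cluster(connstr, ...).
```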

Thanks,

Justin Ashworth

Hi Justin:
I'm not quite sure what the answer to the first part of your question is. Because of VPC-CNI, each pod is directly reachable from outside the cluster. As for the second part, I'm guessing not; I'm not sure how to do that (or even that it was possible).

Regards,
Dave

Also, I just realized I mistyped: when I said "nodes" I meant "pods". My Couchbase pods all have IP addresses from a subnet that's reachable outside the cluster.

Hey David,

The Operator will add the alternate addresses as necessary when configured correctly. I would recommend reading through this tutorial.

Thanks,

-Justin

Thanks for your response, Justin; however, the link you sent is for public networking, which is not what I'm doing here.

I already read your documents, and they didn't answer my question. I wouldn't waste your time asking a question without reading the documentation. I'm hoping you can help.

First off, I'm using VPC-CNI, an EKS feature which assigns routable IP addresses directly to pods, so I do not need NodePorts. In fact, if I enable spec.networking.exposedFeatures[client], the Operator incorrectly publishes the NodePort service for the node with both the Kubernetes node IP and the pod IP. (This is incorrect, since the Kubernetes node IP can't be used, and you don't need it if you have the pod IP, courtesy of VPC-CNI.)

Anyway, to make my query simple: when I execute this:

./couchbase-cli server-list -c cbinternal.e2.mydomain.com -u someuser -p somepassword

(Yes, this works because I'm using external-dns on the service, which correctly publishes the pod IP addresses to my public DNS, Route 53; those addresses are reachable thanks to VPC-CNI.)

I get this:
ns_1@e2-0000.e2.default.svc e2-0000.e2.default.svc:8091 healthy active
ns_1@e2-0001.e2.default.svc e2-0001.e2.default.svc:8091 healthy active
ns_1@e2-0002.e2.default.svc e2-0002.e2.default.svc:8091 healthy active

When I need this:

ns_1@e2-0000.e2.default.svc [ip of pod e2-0000]:8091 healthy active
ns_1@e2-0001.e2.default.svc [ip of pod e2-0001]:8091 healthy active
ns_1@e2-0002.e2.default.svc [ip of pod e2-0002]:8091 healthy active

Or simply: I need the Couchbase node map to return IP addresses, not DNS names.
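To make the desired transformation concrete, here is an illustrative rewrite of the server-list output using a hypothetical hostname-to-pod-IP table (the IPs are placeholders; in reality this substitution has to happen on the cluster/SDK side via alternate addresses, not in a post-processing script):

```python
# Hypothetical pod IPs; real ones would come from `kubectl get pods -o wide`.
pod_ips = {
    "e2-0000.e2.default.svc": "10.1.0.1",
    "e2-0001.e2.default.svc": "10.1.0.2",
    "e2-0002.e2.default.svc": "10.1.0.3",
}

def substitute(line):
    """Replace the host part of the second column with its pod IP."""
    node, addr, *rest = line.split()
    host, port = addr.rsplit(":", 1)
    return " ".join([node, f"{pod_ips[host]}:{port}", *rest])

actual = "ns_1@e2-0000.e2.default.svc e2-0000.e2.default.svc:8091 healthy active"
print(substitute(actual))
# ns_1@e2-0000.e2.default.svc 10.1.0.1:8091 healthy active
```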

Thanks for your help,
David

After making the initial connection, the Couchbase SDKs will use the alternate addresses (if present; the external entries under nodesExt) to make requests.
SDK Doctor is useful for verifying and troubleshooting.
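The address selection described above can be approximated like this. The field names mirror the shape of the cluster-config JSON, the external hostname is hypothetical, and this is a sketch of SDK behavior rather than the actual implementation:

```python
def pick_hostname(node, network="external"):
    """Approximation: prefer the requested alternate address, else default."""
    alt = node.get("alternateAddresses", {}).get(network)
    if network != "default" and alt:
        return alt["hostname"]
    return node["hostname"]

node = {
    "hostname": "e2-0000.e2.default.svc",  # internal name from this thread
    # Hypothetical external alternate address applied by the Operator:
    "alternateAddresses": {"external": {"hostname": "e2-0000.e2.mydomain.com"}},
}

print(pick_hostname(node))                                    # alternate used
print(pick_hostname({"hostname": "e2-0000.e2.default.svc"}))  # fallback
```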

Thanks so much for this; extremely helpful. sdk-doctor correctly parsed the DNS without issue! Looks like the root cause is developers using an outdated Python connector.
