We have Couchbase Community Edition installed on two servers in two different AWS accounts. When we try to set up XDCR between those two servers, the remote cluster is added successfully, but the replication does not start. Below are the errors I’m seeing when trying to start the replication, even though we have opened all the required ports between the two servers.
“Pipeline did not start in a timely manner, possibly due to busy source or target. Will try again…”
“RuntimeCtx:Execution timed out”
Not sure what’s happening here; the same replication works fine when we run it inside the same VPC using private IPs. Please help me solve this issue.
From your description of the problem, I am assuming that you used the public IP address of the EC2 instance when you added the XDCR remote cluster. If you look in goxdcr.log (for example, /opt/couchbase/var/lib/couchbase/logs/goxdcr.log on Linux) on the source cluster, you’ll probably see WARN and ERRO messages for GOXDCR.RemClusterSvc saying that the target nodes are not accessible, and the private IPs will be listed (even though you used the public IP when you added the XDCR remote). The public IP is only used to bootstrap the connection to the remote cluster (to get info on all the nodes in that cluster), and the node list the source cluster got back contained the private IPs of the target cluster nodes.
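For example, a quick way to pull those messages out of the log on the source cluster (path assumes a default Linux install) is something like:

    # show XDCR remote-cluster warnings/errors from goxdcr.log on the source cluster
    grep -E "WARN|ERRO" /opt/couchbase/var/lib/couchbase/logs/goxdcr.log | grep RemClusterSvc

If the private IPs of the target nodes show up in those lines, that confirms the bootstrapping behaviour described above.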
If you are on a version of Couchbase Server that supports alternate addresses, you can set the public IP of each target cluster node as its alternate address; then, when you use the public IP for the XDCR remote, the alternate addresses will be used.
Alternate addresses docs links are below. Note that you do not need the --ports option for couchbase-cli (or the per-service port parameters for the REST API) if you just want the same port numbers mapped to the alternate address; this is the easiest option. If you choose to map to different port numbers on the alternate address, you’ll also need to configure port forwarding using OS commands. Before and after setting alternate addresses, you can check the node info using: curl -X GET -u Administrator:password http://<localhost_or_ip_or_hostname>:8091/pools/default/nodeServices. After setting the alternate address (using either couchbase-cli or the REST API) on a node, you’ll see the public IP listed as that node’s alternate address.
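As a rough sketch (credentials, IPs, and the install path are placeholders; check couchbase-cli setting-alternate-address --help on your version for the exact flags), setting and then verifying the alternate address on one target node could look like:

    # run on the target node: map its public IP as the alternate (external) address
    /opt/couchbase/bin/couchbase-cli setting-alternate-address -c localhost:8091 \
      -u Administrator -p password \
      --set --node <target_node_private_ip> --hostname <target_node_public_ip>

    # or the equivalent REST call on that node
    curl -X PUT -u Administrator:password \
      http://localhost:8091/node/controller/setupAlternateAddresses/external \
      -d hostname=<target_node_public_ip>

    # verify: the alternate address should now appear in the node services info
    curl -X GET -u Administrator:password http://localhost:8091/pools/default/nodeServices

Repeat for every node in the target cluster, then re-create the XDCR remote on the source cluster using the public IP.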
Another solution would be to set up VPC peering between the two private cloud networks so that they can communicate with each other: https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html
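If you go the peering route, a minimal AWS CLI sketch (VPC IDs, account ID, region, CIDRs, and route table IDs below are placeholders) would be along these lines:

    # from the requester account: request a peering connection to the other account's VPC
    aws ec2 create-vpc-peering-connection \
      --vpc-id vpc-1111aaaa \
      --peer-vpc-id vpc-2222bbbb \
      --peer-owner-id 123456789012 \
      --peer-region us-east-1

    # from the accepter account: accept the request
    aws ec2 accept-vpc-peering-connection \
      --vpc-peering-connection-id pcx-0abc123def456

    # in each VPC, route the other VPC's CIDR over the peering connection
    aws ec2 create-route --route-table-id rtb-aaaa1111 \
      --destination-cidr-block 10.1.0.0/16 \
      --vpc-peering-connection-id pcx-0abc123def456

You’d still need to open the Couchbase ports between the two VPCs in the security groups, and the two VPC CIDR ranges must not overlap. With peering in place, XDCR can use the private IPs directly and no alternate addresses are needed.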
I’m facing a problem here: XDCR with the public IP as the alternate address works great, but only if the source and target bucket names are different. If the source and target bucket names are the same, then I’m seeing errors in goxdcr.log: “Source nozzles have been closed” and “Pipelines stopped”.
Is there anything we can do about this issue? I want to use the same bucket name on both the source and the target.