I am using Helm/Kubernetes to deploy a Couchbase cluster (Operator v2.0.2).
Today I had an incident where I lost all nodes of the Couchbase cluster; the only thing I could recover was a snapshot of the PV in AWS. The cluster consists of 3 nodes (using the default Helm chart config).
Looking inside those volumes, I came to understand that Couchbase stores the data in .couch files.
I found a tool called cbtransfer that can consume couchstore files, but I am hitting an error.
Sorry to hear that you ran into a problem. The command provided looks correct; the error message is saying that there are no *.couch.* files in the XYZ folder. It would be best to double-check that the bucket name and path are correct (I suspect you have already done this).
To debug this, could you share the output of ls -l /mnt/cb_data/ and ls -l /mnt/cb_data/XYZ, please?
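For example, something along these lines would show whether any couchstore files exist under that mount at all (paths taken from your post; adjust as needed):

```sh
# List the data directory and the bucket folder the error refers to.
ls -l /mnt/cb_data/
ls -l /mnt/cb_data/XYZ/

# Search the whole mount for couchstore vBucket files, in case the
# bucket data ended up under a different sub-directory.
find /mnt/cb_data -name '*.couch.*' | head
```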
From cb1 and cb2 I got all of my data back, and I am very happy about it. I just need to figure out why Couchbase thinks each document is binary, because it is not; it is plain JSON.
I am using version 6.5.1.
Update: Well, I finally restored everything. With the help of couch_dbdump I was able to dump all the .couch files to JSON and simply re-insert them.
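In case it helps anyone else, here is a rough sketch of the approach I took (the couch_dbdump flags and output format may differ between versions, and the re-insert step can be done with whatever import tool or SDK you prefer):

```sh
# Dump every couchstore vBucket file for the bucket to JSON.
# couch_dbdump ships with Couchbase Server under /opt/couchbase/bin.
mkdir -p /tmp/xyz_dump
for f in /mnt/cb_data/XYZ/*.couch.*; do
    /opt/couchbase/bin/couch_dbdump --json "$f" > "/tmp/xyz_dump/$(basename "$f").json"
done

# After reshaping the dump into one JSON document per line, the documents
# can be re-inserted, for example with cbimport (the %id% key generator
# assumes each dumped line carries the original document id in an "id" field):
# /opt/couchbase/bin/cbimport json -c couchbase://localhost \
#     -u Administrator -p password -b XYZ \
#     -d file:///tmp/xyz_dump/all_docs.jsonl -f lines -g %id%
```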
Hi @Noteworthy, I had a look into the issue. I appreciate you have found a workaround, but just in case this happens again I can explain what happened and how to get around it.
The reason the documents are being transferred as binary is that the documents in the couchstore files were compressed. Due to a bug in that version of cbtransfer, it incorrectly restores compressed documents as binary. You can get around it by using the -x uncompress=1 option, so the command would look like:
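Something along these lines, using the data path and bucket name from your earlier posts (the cluster address and credentials are placeholders):

```sh
# Restore the on-disk couchstore files into a live cluster, telling
# cbtransfer to uncompress the snappy-compressed document bodies so they
# are stored as JSON rather than binary.
/opt/couchbase/bin/cbtransfer \
    couchstore-files:///mnt/cb_data \
    http://<cluster-host>:8091 \
    -b XYZ -B XYZ \
    -u <admin-user> -p <admin-password> \
    -x uncompress=1
```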