Couchbase Autonomous Operator backup restore does not restore bucket data

Hello,

I am running a Couchbase cluster (CB Server Enterprise Edition 7.1.3) on our Kubernetes. I am trying to get Couchbase backup restore working using the Autonomous Operator (v2.4.0), following the instructions in Configure Automated Backup and Restore | Couchbase Docs, but unfortunately the restore does not restore any of the data. We have 3 buckets with some amount of data.

Steps:

  1. I applied the CouchbaseCluster changes

Cluster:

apiVersion: couchbase.com/v2
kind: CouchbaseCluster
metadata:
  name: app-cb
  namespace: default
spec:
  backup:
    managed: true
    image: couchbase/operator-backup:1.3.2
    serviceAccountName: couchbase-backup
    ....
  2. I applied the CouchbaseBackup and could see 2 CronJobs (full and incremental) created, a PVC provisioned, and a backup pod created and completed

Backup:

apiVersion: couchbase.com/v2
kind: CouchbaseBackup
metadata:
  name: app-cb-backup
spec:
  strategy: full_incremental
  full:
    schedule: "18 10 * * 5"
  incremental:
    schedule: "00 16 * * 0,1,2,3,4,6"
  size: 256Gi
  storageClassName: managed-premium
  backoffLimit: 2
  backupRetention: "720h"
  autoScaling:
    thresholdPercent: 20
    incrementPercent: 20
    limit: 512Gi
  3. After that, I got the repo name using “kubectl describe CouchbaseBackup” (at this stage, I had one full backup)

  Repo:           app-cb-2023-06-02T10_18_24
  Running:        false
Events:
  Type    Reason           Age   From  Message
  ----    ------           ----  ----  -------
  Normal  BackupStarted    2m          Backup `app-cb-backup` started
  Normal  BackupCompleted  60s         Backup `app-cb-backup` completed
  4. I removed some documents from some buckets.

  5. Finally, I applied the following “CouchbaseBackupRestore”:

apiVersion: couchbase.com/v2
kind: CouchbaseBackupRestore
metadata:
  name: app-cb-restore
spec:
  backup: app-cb-backup
  repo: app-cb-2023-06-02T10_18_24
  start:
      int: 1
  end:
      int: 1

A restore pod was created and completed successfully.

However, when I checked the Couchbase buckets, none of the removed documents had been restored.

Could anyone please advise on how to resolve this problem?

Thank you,
Deiv

Hi @Deivarayan, I suspect this might be more of a side-effect of the way that backup/restore works, rather than a specific Kubernetes/Operator issue.

If you delete a document, a tombstone is left behind with a higher Revision ID than the copy of the document in the backup, and by default a restore job won’t overwrite “newer” data.

This can be forced by using force-updates from cbbackupmgr restore (couchbasebackuprestores.spec.forceUpdates in the Operator), but be aware that this might overwrite other data unexpectedly. If you know the affected documents are a small subset, you can also use filter-keys and filter-values to fine-tune what gets restored.
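As a sketch, a forced restore based on your existing resource might look like the one below (the resource name here is made up, and you should double-check the exact field names against the CouchbaseBackupRestore reference for your Operator version):

```yaml
# Illustrative sketch only: same backup/repo as the original restore,
# but with forceUpdates enabled so documents whose tombstones carry a
# higher revision ID are still overwritten from the backup.
apiVersion: couchbase.com/v2
kind: CouchbaseBackupRestore
metadata:
  name: app-cb-restore-forced   # hypothetical name
spec:
  backup: app-cb-backup
  repo: app-cb-2023-06-02T10_18_24
  start:
    int: 1
  end:
    int: 1
  # Overwrite cluster-side copies even when they appear newer than the
  # backup. Use with care: this can clobber legitimate updates made
  # after the backup was taken.
  forceUpdates: true
```

Deleting and re-applying the resource with a new name is usually simpler than editing the completed one, since a finished restore job won’t re-run.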

See cbbackupmgr restore and CouchbaseBackupRestore Resource for more details of all available options.