CouchbaseBackup doesn't create backup cronjob

Azure Kubernetes Service 1.19.11
Couchbase Operator 2.2.1
Couchbase Server 7.0.2

I have my cluster configured with this backup section:

  backup:
    managed: true
    serviceAccountName: couchbase-backup
    resources:
      requests:
        cpu: 100m
        memory: 100Mi 
    selector:
      matchLabels:
        cluster: cb

When I create the CouchbaseBackup resource:

apiVersion: couchbase.com/v2
kind: CouchbaseBackup
metadata:
  name: cb-backup
spec:
  strategy: full_incremental
  full:
    schedule: "0 3 * * 0" 
  incremental:
    schedule: "0 3 * * 1-6" 
  size: 200Gi

No backup CronJob is ever created.
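
This can be confirmed by listing CronJobs in the namespace (it returns nothing):

kubectl get cronjobs -n test-tdm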

Output of kubectl describe for the CouchbaseBackup:

Name:         cb-backup
Namespace:    test-tdm
Labels:       <none>
Annotations:  <none>
API Version:  couchbase.com/v2
Kind:         CouchbaseBackup
Metadata:
  Creation Timestamp:  2021-11-17T15:56:46Z
  Generation:          1
  Managed Fields:
    API Version:  couchbase.com/v2
    Fields Type:  FieldsV1
    fieldsV1:
      f:spec:
        .:
        f:backoffLimit:
        f:backupRetention:
        f:failedJobsHistoryLimit:
        f:full:
          .:
          f:schedule:
        f:incremental:
          .:
          f:schedule:
        f:logRetention:
        f:size:
        f:strategy:
        f:successfulJobsHistoryLimit:
        f:threads:
    Manager:         kubectl-create
    Operation:       Update
    Time:            2021-11-17T15:56:46Z
  Resource Version:  154627225
  Self Link:         /apis/couchbase.com/v2/namespaces/test-tdm/couchbasebackups/cb-backup
  UID:               f47b80de-a9fd-43dc-b891-893d576aef18
Spec:
  Backoff Limit:              2
  Backup Retention:           720h
  Failed Jobs History Limit:  3
  Full:
    Schedule:  0 3 * * 0
  Incremental:
    Schedule:                     0 3 * * 1-6
  Log Retention:                  168h
  Size:                           200Gi
  Strategy:                       full_incremental
  Successful Jobs History Limit:  3
  Threads:                        1
Events:                           <none>

Operator logs:

{"level":"info","ts":1637159487.8080425,"logger":"main","msg":"couchbase-operator","version":"2.2.1 (build 126)","revision":"b75530987818a959ec1f8984da92b5e2d3f615f7"}
{"level":"info","ts":1637159488.8587165,"msg":"Throttling request took 1.02714051s, request: GET:https://10.0.0.1:443/apis/cert-manager.io/v1beta1?timeout=32s\n"}
{"level":"info","ts":1637159488.9145124,"logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":"0.0.0.0:8383"}
{"level":"info","ts":1637159488.9155128,"msg":"attempting to acquire leader lease  test-tdm/couchbase-operator...\n"}
{"level":"info","ts":1637159488.915511,"logger":"controller-runtime.manager","msg":"starting metrics server","path":"/metrics"}
{"level":"info","ts":1637159506.341229,"msg":"successfully acquired lease test-tdm/couchbase-operator\n"}
{"level":"info","ts":1637159506.341445,"logger":"controller","msg":"Starting EventSource","controller":"couchbase-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1637159506.4423802,"logger":"controller","msg":"Starting Controller","controller":"couchbase-controller"}
{"level":"info","ts":1637159506.442424,"logger":"controller","msg":"Starting workers","controller":"couchbase-controller","worker count":4}
{"level":"info","ts":1637159506.4425676,"logger":"cluster","msg":"Watching new cluster","cluster":"test-tdm/cb"}
{"level":"info","ts":1637159523.465988,"logger":"cluster","msg":"Couchbase client starting","cluster":"test-tdm/cb"}
{"level":"info","ts":1637159523.4660935,"logger":"cluster","msg":"Janitor starting","cluster":"test-tdm/cb"}
{"level":"info","ts":1637159523.4663966,"logger":"cluster","msg":"Running","cluster":"test-tdm/cb"}

I am unable to exec into the operator pod to gather more logs/profiling:

kubectl exec -it  couchbase-operator-6cbbf7b55d-4b6s6 -n test-tdm -- sh
error: Internal error occurred: error executing command in container: failed to exec in container: failed to start exec "1acbbb37e2beecf543ddec7d1a1de93793c3052b2c2baa35c717ed015ed8309d": OCI runtime exec failed: exec failed: container_linux.go:380: starting container process caused: exec: "sh": executable file not found in $PATH: unknown
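
(As an aside, that exec error means the operator image ships without a shell, so exec will always fail. The logs can still be read with kubectl logs, and on clusters with ephemeral containers enabled kubectl debug also works — a sketch, assuming the operator's container is named couchbase-operator:

# read the operator logs without exec'ing into the pod
kubectl logs couchbase-operator-6cbbf7b55d-4b6s6 -n test-tdm

# attach an ephemeral debug container; on Kubernetes 1.19 this is
# "kubectl alpha debug" and requires the EphemeralContainers feature gate
kubectl debug -it couchbase-operator-6cbbf7b55d-4b6s6 -n test-tdm --image=busybox --target=couchbase-operator
)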

Could it be as simple as a configuration error? Your cluster's backup section sets a selector (matchLabels: cluster: cb), so the Operator only reconciles CouchbaseBackup resources that carry that label. But your describe output shows:

Labels:       <none>
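
A minimal fix, assuming the cluster selector shown above, is to add the matching label to the backup resource:

apiVersion: couchbase.com/v2
kind: CouchbaseBackup
metadata:
  name: cb-backup
  labels:
    cluster: cb  # must match spec.backup.selector.matchLabels on the CouchbaseCluster
spec:
  strategy: full_incremental
  full:
    schedule: "0 3 * * 0"
  incremental:
    schedule: "0 3 * * 1-6"
  size: 200Gi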

Also, I note you are using Operator 2.2 with Server 7.0; you'll probably want to pick up a backup image version that is compatible with Server 7.
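
For illustration, the backup image can be pinned in the cluster's backup section (the tag below is an assumption; check the Couchbase compatibility matrix for Operator 2.2 / Server 7.0):

  backup:
    managed: true
    # illustrative tag only; use the operator-backup version listed as
    # compatible with Server 7.0 in the Couchbase documentation
    image: couchbase/operator-backup:7.0.0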

Yes, that was it. Adding the label fixed it; all working now, and thanks for the backup compatibility tip as well.