I'm trying to do a backup to S3 using a CouchbaseBackup resource.
I'm on EKS.
I've added a Secret with my S3 credentials.
My backup object looks like:
apiVersion: couchbase.com/v2
kind: CouchbaseBackup
metadata:
  name: newdev
spec:
  strategy: full_incremental
  full:
    # schedule: "0 3 * * 0"
    schedule: "25 19 * * *"
  incremental:
    schedule: "0 3 * * 1-6"
  size: 20Gi
  objectStore:
    secret: flipt-couchbase-backup-qa
    uri: s3://mybucket
  storageClassName: ebs-sc # csi provider
The container log shows this:
File "/usr/local/lib/python3.8/dist-packages/kubernetes/client/models/v1_pod_condition.py", line 219, in type
    raise ValueError(
ValueError: Invalid value for type (PodReadyToStartContainers), must be one of ['ContainersReady', 'Initialized', 'PodScheduled', 'Ready']
Stream closed EOF for default/newdev-full-28788205-8qcng (cbbackupmgr-full)
So it seems that the container dies almost immediately after starting.
Has anyone seen anything like this?
Hey David,
Unfortunately, there's not a lot to go on here. Could you tell me which versions of the Operator and operator-backup you are using?
Thanks,
Justin Ashworth
Engineering Manager, Cloud Native Team
Hi Justin:
Operator: image: couchbase/operator:2.7.0
Backup: image: couchbase/operator-backup:1.3.2
I'm running EKS with vpc-cni and aws-csi, and have successfully created a 3-node cluster with 100Gi attached EBS disks. I set up an ingress to get to the console and can manually add records. What I'm trying to do is restore from an existing backup repo on another existing CB cluster in order to migrate to the operator-based cluster.
EKS version 1.30
Here's the cluster yaml - it seems to work fine - I have 4 nodes with 3 CB servers (one extra node for the backup, since anti-affinity is enabled):
apiVersion: couchbase.com/v2
kind: CouchbaseCluster
metadata:
  name: newdev
spec:
  image: couchbase/server:7.6.0
  antiAffinity: true
  security:
    adminSecret: cb-example-auth
  buckets:
    managed: true
  networking:
    exposeAdminConsole: true
    adminConsoleServices:
      - data
  servers:
    - size: 3
      name: whole_enchilada
      services:
        - data
        - index
        - query
        - search
      volumeMounts:
        default: couchbase
  enableOnlineVolumeExpansion: false
  onlineVolumeExpansionTimeoutInMins: 10
  volumeClaimTemplates:
    - metadata:
        name: couchbase
      spec:
        storageClassName: ebs-sc
        resources:
          requests:
            storage: 100Gi
  backup:
    managed: true
    image: couchbase/operator-backup:1.3.2
    tolerations:
      - effect: ""
        key: ""
        operator: Exists
    securityContext:
      fsGroup: 8453
I've posted the backup yaml above, and below is my restore yaml. The pods fail as soon as the containers start, with the error:
ValueError: Invalid value for type (PodReadyToStartContainers), must be one of ['ContainersReady', 'Initialized', 'PodScheduled', 'Ready']
restore yaml:
apiVersion: couchbase.com/v2
kind: CouchbaseBackupRestore
metadata:
  name: my-restore
spec:
  backup: my-backup
  backoffLimit: 1
  start:
    str: oldest
  end:
    str: latest
  objectStore:
    secret: flipt-couchbase-backup-qa
    uri: s3://flipt-couchbase-backup-qa
  stagingVolume:
    size: 20Gi
    storageClassName: ebs-sc
Hey David,
This looks like a Kubernetes client version issue with that version of Operator Backup. In Kubernetes 1.29, the "PodReadyToStartContainers" pod condition was added, and it appears that the Kubernetes client in 1.3.2 isn't compatible with that condition type.
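For reference, the error in your log comes from client-side validation in the generated Kubernetes Python client: the older client only accepts the four legacy pod condition types, so it fails while deserializing a pod status returned by a 1.29+ API server, before the backup logic even runs. A rough sketch of the failure, assuming an older kubernetes Python package such as the one bundled with operator-backup 1.3.2:

# Minimal reproduction with an older kubernetes Python client;
# newer client releases include PodReadyToStartContainers among the
# allowed condition types, so this same call then succeeds.
from kubernetes.client.models import V1PodCondition

# The old V1PodCondition.type setter validates against
# ['ContainersReady', 'Initialized', 'PodScheduled', 'Ready'],
# so the 1.29+ condition type raises the ValueError seen in the log.
V1PodCondition(type="PodReadyToStartContainers", status="True")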
Try updating to at least 1.3.7, which was the operator-backup version released with the 2.6.x release.
That was the ticket! Thanks so much for your assistance.