Couchbase on Kubernetes - persistent storage issue

I’m running a couchbase:enterprise-4.6.2 cluster on Kubernetes 1.7.9 using a StatefulSet. Everything works fine except that I can’t get persistent storage to work. Specifically, we use Rook (https://rook.io) as our storage service. Here is the part of the K8s YAML that defines the volume mounts:

    volumeMounts:
    - name: couchbase-data
      mountPath: /opt/couchbase/var/lib/couchbase/data

    volumeClaimTemplates:
    - metadata:
        name: couchbase-data
        labels:
          app: couchbase
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: rook-block-couchbase
        resources:
          requests:
            storage: 512Mi

NOTE: I’m mounting /opt/couchbase/var/lib/couchbase/data instead of /opt/couchbase/var, as the latter caused Couchbase to fail at startup when using volume mounts.
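For reference, this was the failing variant, differing only in the mount path:

    volumeMounts:
    - name: couchbase-data
      mountPath: /opt/couchbase/var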

During CB cluster bootstrap the persistent volumes (PVs) and persistent volume claims (PVCs) get created; for a 3-node cluster it creates three of each (kg is my alias for kubectl get):

    tomasv@ThinkPad[±|feature U:1 ✗]:~/git/couchbase $ kg pv; kg pvc

    NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS   CLAIM                                STORAGECLASS           REASON   AGE
    pvc-34d3b8b8-ebbd-11e7-ba11-005056a22d34   512Mi      RWO           Retain          Bound    default/couchbase-data-couchbase-0   rook-block-couchbase            32m
    pvc-8edcc228-ebbd-11e7-ba11-005056a22d34   512Mi      RWO           Retain          Bound    default/couchbase-data-couchbase-1   rook-block-couchbase            30m
    pvc-d40a3bfc-ebbd-11e7-ba11-005056a22d34   512Mi      RWO           Retain          Bound    default/couchbase-data-couchbase-2   rook-block-couchbase            28m

    NAME                         STATUS   VOLUME                                     CAPACITY   ACCESSMODES   STORAGECLASS           AGE
    couchbase-data-couchbase-0   Bound    pvc-34d3b8b8-ebbd-11e7-ba11-005056a22d34   512Mi      RWO           rook-block-couchbase   32m
    couchbase-data-couchbase-1   Bound    pvc-8edcc228-ebbd-11e7-ba11-005056a22d34   512Mi      RWO           rook-block-couchbase   30m
    couchbase-data-couchbase-2   Bound    pvc-d40a3bfc-ebbd-11e7-ba11-005056a22d34   512Mi      RWO           rook-block-couchbase   28m

The scenario is as follows:

  1. Create sample docs in my_bucket
  2. Delete the CB cluster - the PVs and PVCs persist - that is expected
  3. Re-create the CB cluster - during bootstrap I can see that each CB node/pod mounts the same volume as its predecessor node/pod (e.g. SuccessfulMountVolume MountVolume.SetUp succeeded for volume “pvc-34d3b8b8-ebbd-11e7-ba11-005056a22d34”). In other words, the volume mounts match the previous cluster bootstrap. I would now expect to see the previously created docs in my_bucket again, but that is not happening :frowning:

I tested whether data persists by creating dummy files on each volume (/opt/couchbase/var/lib/couchbase/data/tomas0 -> couchbase-data-couchbase-0; /opt/couchbase/var/lib/couchbase/data/tomas1 -> couchbase-data-couchbase-1; /opt/couchbase/var/lib/couchbase/data/tomas2 -> couchbase-data-couchbase-2) and checking whether the volumes are remounted to the correct CB nodes after the cluster rebuild. Both checks passed: the files survive and end up on the right nodes.
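Roughly the commands I used for that check (pod names follow the StatefulSet ordinal naming, so couchbase-0/1/2):

    # drop a marker file onto the first pod's data volume
    kubectl exec couchbase-0 -- touch /opt/couchbase/var/lib/couchbase/data/tomas0

    # ... delete and re-create the CB cluster ...

    # after the rebuild the marker is still there, on the same ordinal
    kubectl exec couchbase-0 -- ls /opt/couchbase/var/lib/couchbase/data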

Thus I don’t know why the CB cluster does not see the docs that persist in /opt/couchbase/var/lib/couchbase/data on the next cluster build. My only guess is that Couchbase keeps some node/cluster state elsewhere under /opt/couchbase/var (outside the mounted data directory), which would be lost between builds, but I haven’t confirmed that.

I’ve also tested this using local storage, with the same results.

The PersistentVolume definition for the local-storage test looks like this:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: couchbase-data-0
      labels:
        type: local
    spec:
      storageClassName: couchbase-data
      capacity:
        storage: 2Gi
      accessModes:
      - ReadWriteOnce
      hostPath:
        path: /tmp/data/pv-0
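For completeness, the claim template in the local-storage test differs only in the storage class (my reconstruction; everything else is as above):

    volumeClaimTemplates:
    - metadata:
        name: couchbase-data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: couchbase-data
        resources:
          requests:
            storage: 512Mi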

Does Couchbase perhaps have issues bootstrapping when each CB node/pod holds its own data folder? I wonder what the difference would be if all CB nodes shared one data volume - would that work? A sketch of what I mean follows.
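Hypothetically, the shared-volume variant would drop volumeClaimTemplates and reference a single pre-created PVC from the pod template, something like the sketch below (the claim name is made up, and it would need a ReadWriteMany-capable storage class - which a block provisioner like rook-block presumably isn't):

    volumes:
    - name: couchbase-data
      persistentVolumeClaim:
        claimName: couchbase-data-shared   # hypothetical pre-created RWX claim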