Couchbase Operator would not create CouchbaseCluster in version 1.2.0

Installed the 1.2.0 Couchbase Operator on Kubernetes 1.17 using Helm v3 as follows:
helm install --atomic --generate-name --namespace kube-system couchbase/couchbase-operator
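
To confirm the chart actually deployed something, I checked the operator pods and the CRD along these lines (a rough sketch, not exact commands or output):

# list the operator and admission controller pods installed by the chart
kubectl get pods -n kube-system | grep couchbase

# confirm the CouchbaseCluster CRD was registered
kubectl get crd | grep couchbase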

Then attempted to create a cluster using:

apiVersion: couchbase.com/v1
kind: CouchbaseCluster
metadata:
  name: development-default
  namespace: development
spec:
  baseImage: couchbase/server
  version: community-6.0.0
  paused: true
  antiAffinity: true
  authSecret: default-couchbase-credentials
  exposeAdminConsole: false
  exposedFeatures:
    - xdcr
    - admin
    - client
  softwareUpdateNotifications: true
  securityContext:
    runAsUser: 1000
    runAsNonRoot: true
    fsGroup: 1000
  disableBucketManagement: false
  logRetentionTime: 150h
  logRetentionCount: 20
  cluster:
    clusterName: "Default Development Couchbase Cluster on Sigma"
    dataServiceMemoryQuota: 1024
    indexServiceMemoryQuota: 512
    searchServiceMemoryQuota: 256
    eventingServiceMemoryQuota: 256
    analyticsServiceMemoryQuota: 1024
    indexStorageSetting: memory_optimized
    autoFailoverTimeout: 30
    autoFailoverMaxCount: 3
    autoFailoverOnDataDiskIssues: true
    autoFailoverOnDataDiskIssuesTimePeriod: 30
    autoFailoverServerGroup: false
  buckets:
    - name: default
      type: couchbase
      memoryQuota: 1024
      replicas: 0
      ioPriority: high
      evictionPolicy: fullEviction
      conflictResolution: seqno
      enableFlush: true
      enableIndexReplica: true
      compressionMode: passive
  servers:
    - size: 4
      name: all-services
      services:
        - data
        - index
        - query
        - search
        - eventing
        - analytics
      pod:
        resources:
          limits:
            cpu: 1
            memory: 2Gi
        automountServiceAccountToken: true

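I applied the manifest into the development namespace and then looked for any status or events on the resource, roughly like this (the file name is just what I saved the spec as):

kubectl apply -f development-default.yaml
kubectl get couchbaseclusters -n development
kubectl describe couchbaseclusters development-default -n development
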
Admission logs:

I0305 17:09:13.434724 1 admission.go:300] couchbase-operator-admission 1.2.0 (release)
I0305 17:33:38.774828 1 admission.go:176] mutating couchbasecluster
I0305 17:34:01.110171 1 admission.go:176] mutating couchbasecluster
I0305 17:34:10.647532 1 admission.go:176] mutating couchbasecluster
I0305 17:34:31.492119 1 admission.go:176] mutating couchbasecluster
I0305 17:34:44.521982 1 admission.go:176] mutating couchbasecluster
I0305 17:34:50.597039 1 admission.go:176] mutating couchbasecluster
I0305 17:34:50.599308 1 admission.go:118] validating couchbasecluster
I0305 17:40:31.058709 1 admission.go:176] mutating couchbasecluster
I0305 17:40:31.061416 1 admission.go:118] validating couchbasecluster

Operator logs:

time="2020-03-05T17:09:14Z" level=info msg="couchbase-operator v1.2.0 (release)" module=main
time="2020-03-05T17:09:14Z" level=info msg="Obtaining resource lock" module=main
time="2020-03-05T17:09:14Z" level=info msg="Starting event recorder" module=main
time="2020-03-05T17:09:14Z" level=info msg="Attempting to be elected the couchbase-operator leader" module=main
time="2020-03-05T17:09:14Z" level=info msg="I'm the leader, attempt to start the operator" module=main
time="2020-03-05T17:09:14Z" level=info msg="Creating the couchbase-operator controller" module=main
time="2020-03-05T17:09:14Z" level=info msg="Event(v1.ObjectReference{Kind:\"Endpoints\", Namespace:\"kube-system\", Name:\"couchbase-operator\", UID:\"ebb76e65-f614-49f5-89ea-e00ca36dc0cd\", APIVersion:\"v1\", ResourceVersion:\"7567927\", FieldPath:\"\"}): type: 'Normal' reason: 'LeaderElection' couchbase-operator-1583428104-couchbase-operator-57667fdb4pcnkk became leader" module=event_recorder
time="2020-03-05T17:09:24Z" level=info msg="CRD initialized, listening for events..." module=controller

I may be doing something wrong, but the Admission Controller or the Operator should at least give me a hint here. :slight_smile:

  1. It operates solely on the namespace it was deployed into, and I found no configuration whatsoever for watching other namespaces (see the install sketch after this list).
  2. It does not work with Community Edition, and this fact is not clearly stated; it only appears in the Overview, which nobody reads or finds. :smiley:
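
For the first point, the fix appears to be deploying the Operator into the same namespace as the cluster, something along these lines (chart defaults assumed; this still leaves the EE requirement from the second point):

helm install --atomic --generate-name --namespace development couchbase/couchbase-operator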

Indeed, most people wouldn't use something that had access to every secret on the cluster. Also consider a bug during an upgrade while the Operator controls the whole cluster: you could potentially destroy every database running there. So it's by design, for your security and safety.
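
To illustrate the point (a minimal sketch, not the chart's actual RBAC manifests, and the resource list is illustrative): a namespace-scoped Operator only needs a Role limited to its own namespace, rather than a ClusterRole that can read secrets everywhere:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: couchbase-operator          # illustrative name
  namespace: development
rules:
  - apiGroups: [""]
    resources: ["secrets", "pods", "services", "endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
  - apiGroups: ["couchbase.com"]
    resources: ["couchbaseclusters"]
    verbs: ["get", "list", "watch", "update"]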

The EE requirement is actually more in your face in the upcoming 2.0 documentation, so we're already well ahead of you there. I'll also check whether we state the scoping requirements; they're probably not in the right place.

Sure, I can imagine a bug in the Operator that would simply destroy all clusters, but I think it would be fairly easy to wall that off. A feature to watch specific namespaces would still be useful; most operators provide such a configuration parameter, as sketched below.
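
For example, many operators expose the namespaces to watch through an environment variable on their Deployment. This is just the common pattern, not an option the Couchbase Operator currently offers, and all names here are made up:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-operator            # generic example, not the Couchbase chart
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-operator
  template:
    metadata:
      labels:
        app: example-operator
    spec:
      containers:
        - name: operator
          image: example/operator:1.0.0
          env:
            - name: WATCH_NAMESPACE      # comma-separated namespaces to reconcile
              value: "development,staging"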

About the documentation: indeed, technical people usually check the URL for anything mentioning "enterprise" or a version number. I think it should be emphasized in many places that it does not work with CE.

Anyway, we may provision clusters automatically using Flux with HelmReleases, so an operator is a "would like to have" for us (roughly as sketched below).
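
Roughly what that would look like with the Flux Helm Operator (a sketch; the chart repository URL and the empty values block are assumptions on my side):

apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: couchbase-operator
  namespace: development
spec:
  releaseName: couchbase-operator
  chart:
    repository: https://couchbase-partners.github.io/helm-charts/   # assumed chart repo
    name: couchbase-operator
    version: 1.2.0
  values: {}   # chart overrides would go here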

I know we can watch all namespaces, so it may end up being a supported option later on. I shall add a note to the work item to provide filtering; that's a good idea :smiley:

I guess we should make a note in the Helm docs that it's EE only, given that's your preferred deployment option. Thanks for the feedback.