By default, all resources in a Kubernetes cluster are created in a default namespace. A pod will run with unbounded CPU and memory requests/limits. A Kubernetes namespace allows you to partition created resources into a logically named group. Each namespace provides:
- a unique scope for resources to avoid name collisions
- policies to ensure appropriate authority to trusted users
- ability to specify constraints for resource consumption
This allows a Kubernetes cluster's resources to be shared by multiple groups while providing different levels of QoS to each group. Resources created in one namespace are hidden from other namespaces. Multiple namespaces can be created, each potentially with different constraints.
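For example, the same resource name can be reused in different namespaces without collision. The following is a hypothetical sketch; the `team-a` and `team-b` namespaces and the `web` controller are only for illustration and are not part of this walkthrough:

```
# Assumes the team-a and team-b namespaces already exist.
# Each namespace is its own scope, so the name "web" does not collide.
./kubernetes/cluster/kubectl.sh run web --image=nginx --namespace=team-a
./kubernetes/cluster/kubectl.sh run web --image=nginx --namespace=team-b
```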
Default Kubernetes Namespace
By default, each resource created by a user in a Kubernetes cluster runs in a default namespace, called `default`.
```
./kubernetes/cluster/kubectl.sh get namespace
NAME          LABELS    STATUS    AGE
default                 Active    1m
kube-system             Active    1m
```
Any pod, service, or replication controller will be created in this namespace. The `kube-system` namespace is reserved for resources created by the Kubernetes cluster itself. More details about the `default` namespace can be seen with:
```
./kubernetes/cluster/kubectl.sh describe namespaces default
Name:    default
Labels:
Status:  Active

No resource quota.

Resource Limits
 Type        Resource   Min   Max   Request   Limit   Limit/Request
 ----        --------   ---   ---   -------   -----   -------------
 Container   cpu        -     -     100m      -       -
```
This description shows the resource quota (if present), as well as resource limit ranges. Now let's create a Couchbase replication controller:
```
./kubernetes/cluster/kubectl.sh run couchbase --image=arungupta/couchbase
```
Check the existing replication controller:
```
./kubernetes/cluster/kubectl.sh get rc
CONTROLLER   CONTAINER(S)   IMAGE(S)              SELECTOR        REPLICAS   AGE
couchbase    couchbase      arungupta/couchbase   run=couchbase   1          5m
```
By default, only resources in the current namespace are shown. Resources in all namespaces can be shown using the `--all-namespaces` option:
```
./kubernetes/cluster/kubectl.sh get rc --all-namespaces
NAMESPACE     CONTROLLER                       CONTAINER(S)           IMAGE(S)                                                SELECTOR                           REPLICAS   AGE
default       couchbase                        couchbase              arungupta/couchbase                                     run=couchbase                      1          5m
kube-system   heapster-v11                     heapster               gcr.io/google_containers/heapster:v0.18.4               k8s-app=heapster,version=v11       1          6m
kube-system   kube-dns-v9                      etcd                   gcr.io/google_containers/etcd:2.0.9                     k8s-app=kube-dns,version=v9        1          6m
                                               kube2sky               gcr.io/google_containers/kube2sky:1.11
                                               skydns                 gcr.io/google_containers/skydns:2015-10-13-8c72f8c
                                               healthz                gcr.io/google_containers/exechealthz:1.0
kube-system   kube-ui-v4                       kube-ui                gcr.io/google_containers/kube-ui:v4                     k8s-app=kube-ui,version=v4         1          6m
kube-system   l7-lb-controller-v0.5.2          default-http-backend   gcr.io/google_containers/defaultbackend:1.0             k8s-app=glbc,version=v0.5.2        1          6m
                                               l7-lb-controller       gcr.io/google_containers/glbc:0.5.2
kube-system   monitoring-influxdb-grafana-v2   influxdb               gcr.io/google_containers/heapster_influxdb:v0.4         k8s-app=influxGrafana,version=v2   1          6m
                                               grafana                beta.gcr.io/google_containers/heapster_grafana:v2.1.1
```
As you can see, the `arungupta/couchbase` image runs in the `default` namespace. All other resources run in the `kube-system` namespace. Let's check the kubectl context that was used to create this replication controller:
```
./kubernetes/cluster/kubectl.sh config view couchbase
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://104.197.10.200
  name: couchbase-on-kubernetes_kubernetes
contexts:
- context:
    cluster: couchbase-on-kubernetes_kubernetes
    user: couchbase-on-kubernetes_kubernetes
  name: couchbase-on-kubernetes_kubernetes
current-context: couchbase-on-kubernetes_kubernetes
kind: Config
preferences: {}
users:
- name: couchbase-on-kubernetes_kubernetes
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
    token: 1RUrsvA5RDwwRNf0eOvz86elmniOK0oj
- name: couchbase-on-kubernetes_kubernetes-basic-auth
  user:
    password: cZ9fZSuzIqq5kdnj
    username: admin
```
Look for the `contexts.context.name` attribute to see the existing context. This will be manipulated later.
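A quicker way to print just the active context (the same value as `current-context` in the output above) is:

```
# Prints only the name of the context that kubectl is currently using
./kubernetes/cluster/kubectl.sh config current-context
```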
Create a Resource in a New Kubernetes Namespace
Let's create a new namespace first. This can be done using the following configuration file:
```
apiVersion: v1
kind: Namespace
metadata:
  name: development
  labels:
    name: development
```
The namespace is created as:
```
./kubernetes/cluster/kubectl.sh create -f myns.yaml
namespace "development" created
```
Then querying for all the namespaces gives:
```
./kubernetes/cluster/kubectl.sh get namespace
NAME          LABELS             STATUS    AGE
default                          Active    9m
development   name=development   Active    13s
kube-system                      Active    8m
```
A new replication controller can be created in this new namespace by using the `--namespace` option:
```
./kubernetes/cluster/kubectl.sh --namespace=development run couchbase --image=arungupta/couchbase
replicationcontroller "couchbase" created
```
The list of resources in all namespaces now looks like:
```
./kubernetes/cluster/kubectl.sh get rc --all-namespaces
NAMESPACE     CONTROLLER     CONTAINER(S)   IMAGE(S)                                    SELECTOR                       REPLICAS   AGE
default       couchbase      couchbase      arungupta/couchbase                         run=couchbase                  1          4m
development   couchbase      couchbase      arungupta/couchbase                         run=couchbase                  1          2m
kube-system   heapster-v11   heapster       gcr.io/google_containers/heapster:v0.18.4   k8s-app=heapster,version=v11   1          31m
. . .
```
As seen, there are two replication controllers with the `arungupta/couchbase` image: one in the `default` namespace and another in the `development` namespace.
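To look at a single namespace explicitly, the namespace can be passed on the command line; for example, the pods backing the `development` controller can be listed as:

```
# Lists pods only in the development namespace
./kubernetes/cluster/kubectl.sh get pods --namespace=development
```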
Set Kubernetes Namespace For an Existing Resource
Instead of passing `--namespace` with every command, a namespace can be attached to a kubectl context so that subsequent commands are directed at that namespace. On the previously created configuration, a new context with the `development` namespace can be set:
```
./kubernetes/cluster/kubectl.sh config set-context dev --namespace=development --cluster=couchbase-on-kubernetes_kubernetes --user=couchbase-on-kubernetes_kubernetes
context "dev" set.
```
Viewing the context now shows:
```
./kubernetes/cluster/kubectl.sh config view couchbase
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://104.197.10.200
  name: couchbase-on-kubernetes_kubernetes
contexts:
- context:
    cluster: couchbase-on-kubernetes_kubernetes
    user: couchbase-on-kubernetes_kubernetes
  name: couchbase-on-kubernetes_kubernetes
- context:
    cluster: couchbase-on-kubernetes_kubernetes
    namespace: development
    user: couchbase-on-kubernetes_kubernetes
  name: dev
current-context: couchbase-on-kubernetes_kubernetes
kind: Config
preferences: {}
users:
- name: couchbase-on-kubernetes_kubernetes
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
    token: 1RUrsvA5RDwwRNf0eOvz86elmniOK0oj
- name: couchbase-on-kubernetes_kubernetes-basic-auth
  user:
    password: cZ9fZSuzIqq5kdnj
    username: admin
```
The second entry in the `contexts` array shows that a new context, `dev`, has been created. It also shows that the current context is still `couchbase-on-kubernetes_kubernetes`. Since no namespace is specified in that context, it belongs to the `default` namespace. Change the context:
```
./kubernetes/cluster/kubectl.sh config use-context dev
switched to context "dev".
```
See the list of replication controllers:
```
./kubernetes/cluster/kubectl.sh get rc
CONTROLLER   CONTAINER(S)   IMAGE(S)   SELECTOR   REPLICAS   AGE
```
Obviously, no replication controllers are running in this context. Let's create a new replication controller in this new namespace:
```
./kubernetes/cluster/kubectl.sh run couchbase --image=arungupta/couchbase
replicationcontroller "couchbase" created
```
And see the list of replication controllers in all namespaces:
```
./kubernetes/cluster/kubectl.sh get rc --all-namespaces
NAMESPACE     CONTROLLER     CONTAINER(S)   IMAGE(S)                                    SELECTOR                       REPLICAS   AGE
default       couchbase      couchbase      arungupta/couchbase                         run=couchbase                  1          16m
development   couchbase      couchbase      arungupta/couchbase                         run=couchbase                  1          4s
kube-system   heapster-v11   heapster       gcr.io/google_containers/heapster:v0.18.4   k8s-app=heapster,version=v11   1          17m
. . .
```
Now you can see two `arungupta/couchbase` replication controllers running in two different namespaces.
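If needed, you can switch back to the original context at any time. This is only a sketch and is not required for the next section, which passes `--namespace` explicitly:

```
# Re-select the original context, which has no namespace and therefore targets "default"
./kubernetes/cluster/kubectl.sh config use-context couchbase-on-kubernetes_kubernetes
```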
Delete a Kubernetes Resource in Namespace
A resource can be deleted by qualifying it with its namespace:
```
./kubernetes/cluster/kubectl.sh --namespace=default delete rc couchbase
replicationcontroller "couchbase" deleted
```
Similarly, the other replication controller can be deleted as:
```
./kubernetes/cluster/kubectl.sh --namespace=development delete rc couchbase
replicationcontroller "couchbase" deleted
```
Finally, see the list of all replication controllers in all namespaces:
```
./kubernetes/cluster/kubectl.sh get rc --all-namespaces
NAMESPACE     CONTROLLER     CONTAINER(S)   IMAGE(S)                                    SELECTOR                       REPLICAS   AGE
kube-system   heapster-v11   heapster       gcr.io/google_containers/heapster:v0.18.4   k8s-app=heapster,version=v11   1          3h
kube-system   kube-dns-v9    etcd           gcr.io/google_containers/etcd:2.0.9         k8s-app=kube-dns,version=v9    1          3h
. . .
```
This confirms that all user-created replication controllers have been deleted.
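As an aside, deleting a namespace removes everything inside it. For example, the `development` namespace could be removed entirely with the command below (not run here, since the namespace is reused in the next section):

```
# Deletes the namespace and all resources it contains
./kubernetes/cluster/kubectl.sh delete namespace development
```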
Resource Quota and Limit using Kubernetes Namespace
Each namespace can be assigned a resource quota. By default, a pod will run with unbounded CPU and memory requests/limits. Specifying a quota restricts how much of the cluster's resources can be consumed across all pods in a namespace. A resource quota can be specified using a configuration file:
```
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota
spec:
  hard:
    cpu: "20"
    memory: 1Gi
    pods: "10"
    replicationcontrollers: "20"
    resourcequotas: "1"
    services: "5"
```
The following resources are supported by the quota system:
Resource | Description
---|---
`cpu` | Total requested cpu usage
`memory` | Total requested memory usage
`pods` | Total number of active pods where phase is pending or active
`services` | Total number of services
`replicationcontrollers` | Total number of replication controllers
`resourcequotas` | Total number of resource quotas
`secrets` | Total number of secrets
`persistentvolumeclaims` | Total number of persistent volume claims
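For example, a quota could also cap the number of secrets and persistent volume claims in a namespace. The following is a hypothetical sketch using resource names from the table above; it is not used in this walkthrough:

```
apiVersion: v1
kind: ResourceQuota
metadata:
  # hypothetical name, for illustration only
  name: object-counts
spec:
  hard:
    secrets: "10"
    persistentvolumeclaims: "4"
```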
The resource quota defined earlier in quota.yaml can be created in a namespace:
```
./kubernetes/cluster/kubectl.sh --namespace=development create -f quota.yaml
resourcequota "quota" created
```
The created quota can be seen as:
```
./kubernetes/cluster/kubectl.sh --namespace=development describe quota
Name:                    quota
Namespace:               development
Resource                 Used   Hard
--------                 ----   ----
cpu                      0      20
memory                   0      1Gi
pods                     0      10
replicationcontrollers   0      20
resourcequotas           1      1
services                 0      5
```
Now, if you try to create the replication controller, the command itself appears to work:
```
./kubernetes/cluster/kubectl.sh --namespace=development run couchbase --image=arungupta/couchbase
replicationcontroller "couchbase" created
```
But describing the quota again shows:
```
./kubernetes/cluster/kubectl.sh --namespace=development describe quota
Name:                    quota
Namespace:               development
Resource                 Used   Hard
--------                 ----   ----
cpu                      0      20
memory                   0      1Gi
pods                     0      10
replicationcontrollers   1      20
resourcequotas           1      1
services                 0      5
```
We expected a new pod to be created as part of this replication controller, but the pod count is still 0. So let's describe our replication controller:
```
./kubernetes/cluster/kubectl.sh --namespace=development describe rc
Name:         couchbase
Namespace:    development
Image(s):     arungupta/couchbase
Selector:     run=couchbase
Labels:       run=couchbase
Replicas:     0 current / 1 desired
Pods Status:  0 Running / 0 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
  FirstSeen   LastSeen   Count   From                        SubobjectPath   Reason         Message
  ─────────   ────────   ─────   ────                        ─────────────   ──────         ───────
  1m          24s        4       {replication-controller }                   FailedCreate   Error creating: Pod "couchbase-" is forbidden: must make a non-zero request for memory since it is tracked by quota.
```
By default, a pod can consume all the CPU and memory available. With a resource quota applied, an explicit request must be specified for each pod. Alternatively, default values for pods can be specified using the following configuration file:
```
apiVersion: v1
kind: LimitRange
metadata:
  name: limits
spec:
  limits:
  - default:
      cpu: 200m
      memory: 512Mi
    defaultRequest:
      cpu: 100m
      memory: 256Mi
    type: Container
```
This provides default requests and limits, restricting the CPU and memory that can be consumed by a pod that does not specify its own values. Let's apply these limits as:
```
./kubernetes/cluster/kubectl.sh --namespace=development create -f limits.yaml
limitrange "limits" created
```
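The applied defaults can be verified by describing the limit range (named `limits` in `limits.yaml` above); this is a convenience check and not part of the original flow:

```
# Shows the default request/limit values that will be injected into new containers
./kubernetes/cluster/kubectl.sh --namespace=development describe limits limits
```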
Now when you describe the replication controller again, it shows:
```
./kubernetes/cluster/kubectl.sh --namespace=development describe rc
Name:         couchbase
Namespace:    development
Image(s):     arungupta/couchbase
Selector:     run=couchbase
Labels:       run=couchbase
Replicas:     1 current / 1 desired
Pods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
  FirstSeen   LastSeen   Count   From                        SubobjectPath   Reason             Message
  ─────────   ────────   ─────   ────                        ─────────────   ──────             ───────
  8m          2m         14      {replication-controller }                   FailedCreate       Error creating: Pod "couchbase-" is forbidden: must make a non-zero request for memory since it is tracked by quota.
  2m          2m         1       {replication-controller }                   SuccessfulCreate   Created pod: couchbase-gzk0l
```
This shows successful creation of the pod. And now when you describe the quota, it shows the updated usage as well:
```
./kubernetes/cluster/kubectl.sh --namespace=development describe quota
Name:                    quota
Namespace:               development
Resource                 Used        Hard
--------                 ----        ----
cpu                      100m        20
memory                   268435456   1Gi
pods                     1           10
replicationcontrollers   1           20
resourcequotas           1           1
services                 0           5
```
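You can also confirm that the defaults were injected into the pod itself by inspecting its spec. The pod name `couchbase-gzk0l` comes from the events above; yours will differ:

```
# Prints the full pod definition, including the defaulted resources section
./kubernetes/cluster/kubectl.sh --namespace=development get pod couchbase-gzk0l -o yaml
```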
The Kubernetes Resource Quota documentation provides more details about how to set and update these values. Creating another quota gives the following error:
```
./kubernetes/cluster/kubectl.sh --namespace=development create -f quota.yaml
Error from server: error when creating "quota.yaml": ResourceQuota "quota" is forbidden: limited to 1 resourcequotas
```
Specifying Limits During Pod Creation
Limits can be specified during pod creation as well. Since the quota caps the total memory in the namespace at 1Gi, a valid pod definition would be:
```
apiVersion: v1
kind: Pod
metadata:
  name: couchbase-pod
spec:
  containers:
  - name: couchbase
    image: couchbase
    ports:
    - containerPort: 8091
    resources:
      limits:
        cpu: "1"
        memory: 512Mi
```
This is valid because the pod requests only 512Mi of memory, which fits within the quota. An invalid pod definition would be:
```
apiVersion: v1
kind: Pod
metadata:
  name: couchbase-pod
spec:
  containers:
  - name: couchbase
    image: couchbase
    ports:
    - containerPort: 8091
    resources:
      limits:
        cpu: "1"
        memory: 2G
```
This is invalid because the pod requests 2G of memory, which would exceed the 1Gi quota. Creating such a pod gives the following error:
```
./kubernetes/cluster/kubectl.sh --namespace=development create -f couchbase-pod.yaml
Error from server: error when creating "couchbase-pod.yaml": Pod "couchbase-pod" is forbidden: unable to admit pod without exceeding quota for resource memory: limited to 1Gi but require 2805306368 to succeed
```
Hopefully this shows how namespaces, resource quotas, and limits can be applied to share your clusters across different environments.