21 Popular Kubernetes Interview Questions and Answers

The article focuses on answering the frequently asked Kubernetes interview questions. These questions will help you prepare for your interview and revise the concepts of Kubernetes.

Q. What is Kubernetes?

Kubernetes, also known as K8s, is an open-source orchestration platform that handles the deployment, scaling, management, and monitoring of containerized applications. It eases the management and deployment of applications in an automated manner, running them on a cluster of servers that can be either in the cloud or on-premises.

Q. What are the components of Kubernetes?

Kubernetes Components

The Kubernetes cluster generally involves a control plane and a set of worker nodes. A control plane manages the worker nodes and the pods running on these nodes. It also takes care of the scheduling, monitoring, and deployment of the pods in the worker nodes.

The components involved in the Kubernetes cluster are as follows:

  • API server: It exposes the Kubernetes API and acts as the front end of the cluster. It communicates with the worker nodes and all the other components involved in the Kubernetes cluster; all communication goes through the API server.
  • Scheduler: The scheduler assigns the newly created pods to different nodes. It checks the utilization of the nodes in the cluster and assigns the pods to the optimal node. A few elements considered while allocating the pods are resource requirements of the pods, affinity and anti-affinity specifications, hardware or software constraints, etc.
  • Controller Manager: It involves multiple controller processes compiled into a single binary. Below are some of the controller processes:
  1. Node controller - Responsible for checking the health of the nodes.
  2. Job controller - Watches Job objects and creates pods to run those one-off tasks to completion.
  3. Endpoints controller - Populates the Endpoints objects, i.e., joins services with the appropriate pods.
  4. Service account and token controller - Creates default service accounts and API access tokens for new namespaces.
  • Etcd: It is a distributed key-value store that stores all the data related to Kubernetes. It stores information such as the state of the cluster, metadata, etc.
  • Cloud-Controller Manager: It connects the Kubernetes cluster with the cloud provider’s API. The processes running in the cloud-controller manager are specific to the cloud provider. This component is set up when we are using Kubernetes in a cloud environment. On-premises Kubernetes clusters do not have a cloud-controller manager.
  • Kubelet: Kubelet is an agent that runs on each of the worker nodes in the Kubernetes cluster. It is responsible for running Pods on these nodes and ensuring the state of the pods is healthy. It observes the specs of the pods through the API server and maintains the pod lifecycle.
  • Kube-proxy: A network proxy that runs on each node and implements part of the Kubernetes Service concept. It maintains network rules on the nodes so that applications running inside pods can be exposed through services and reached from inside or outside of the cluster.
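
In many clusters (for example, those bootstrapped with kubeadm), most of these control plane components run as pods in the kube-system namespace, so they can be listed with the command below; the exact set of pods depends on how the cluster was set up.

kubectl get pods -n kube-system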

Q. What are pods?

Pods are the smallest deployable objects in a Kubernetes cluster. A pod can contain one or more containers, and the application runs inside these containers. The containers in a pod share the resources allotted to the pod, such as storage and network.

Below is an example of creating a pod using the busybox image.

apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
  - image: busybox
    command:
      - sleep
      - "1800"
    imagePullPolicy: IfNotPresent
    name: busybox
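
Assuming the manifest above is saved as busybox-pod.yaml (the filename is just for illustration), the pod can be created and checked with:

kubectl apply -f busybox-pod.yaml
kubectl get pod busybox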

You can run the same code on the Kubernetes playground provided by KodeKloud. The playground gives you direct access to a Kubernetes cluster so you can get hands-on with Kubernetes. KodeKloud provides many DevOps playgrounds for getting hands-on with different tools, and subscribers of the KodeKloud platform can access all of them.

Q. How do we see logs of a pod?

The logs of the pods can be viewed using the following command:

kubectl logs -n <namespace> <pod_name>

To tail and follow the logs, we can make use of the -f and --tail flags.

kubectl logs --tail 100 -f -n <namespace> <pod_name>

Q. How do we troubleshoot a deployment that is not running as expected?

If a deployment is not running as expected, there are multiple ways to troubleshoot it:

1. Check if the deployment has been successfully rolled out. To check the rollout status, run the below command:

kubectl rollout status deployment/<deployment_name>

2. Check the events related to the deployment. Run the following command to check the latest events of the namespace:

kubectl get events -n <namespace> --sort-by='.lastTimestamp'

We can also get events related to a certain pod that is not running.

kubectl get events -n <namespace> | grep <pod_name>

3. Check the logs of the pods that are not running as expected. If a pod is in the CrashLoopBackOff state, its logs usually show why it keeps restarting.

kubectl logs -n <namespace> <pod_name>

Q. What are services in Kubernetes?

Services are used to expose applications that are running on a set of pods. Each newly created pod gets its own IP address, but pod IPs change as pods are recreated, so a service provides a stable endpoint in front of them. Services allow pods to interact with other pods and expose applications so they can be reached from inside or outside of the cluster. A service targets pods by using the selector field, which matches pods based on their labels.
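
As a minimal sketch, the service below (the name, labels, and ports are illustrative) exposes pods labelled app: my-app on port 80 within the cluster and forwards traffic to port 8080 of the matching pods:

apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  # select pods carrying the label app: my-app
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 80        # port the service listens on
      targetPort: 8080  # port on the pods that receives the traffic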

Q. What are the different service types in Kubernetes?

There are four different service types in Kubernetes, namely:

  1. ClusterIP - ClusterIP exposes the service on an internal IP that is reachable only from within the Kubernetes cluster. This is the default service type.
  2. NodePort - NodePort exposes the service on a static port of each node in the cluster. The port can be set using the nodePort field while creating the service; otherwise Kubernetes picks one from the node-port range (30000-32767 by default). This type of service allows connections from outside the Kubernetes cluster as well. Traffic coming to the node port is routed to a ClusterIP service, which is automatically created along with the NodePort service (see the sketch after this list).
  3. LoadBalancer - LoadBalancer exposes the service externally using the cloud provider's load balancer. The load balancer routes the traffic to the NodePort and ClusterIP services that are automatically created when the LoadBalancer service is defined.
  4. ExternalName - ExternalName maps the service to an external DNS name defined in the externalName field of the configuration file, returning a CNAME record for that name instead of routing traffic through a proxy.
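
For example, a NodePort service could look like the sketch below (the name, labels, and port numbers are assumptions); it makes the application reachable on port 30080 of every node while also creating an internal ClusterIP:

apiVersion: v1
kind: Service
metadata:
  name: my-app-nodeport
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80          # ClusterIP port inside the cluster
      targetPort: 8080  # container port on the pods
      nodePort: 30080   # port opened on every node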

Q. How can we enable monitoring of the pods?

To monitor the pods running on the cluster, we can use the following command to check their utilization:

kubectl top pods -n <namespace>

Pod-level metrics can also be viewed in a Grafana dashboard once Prometheus and Grafana are integrated with the metrics exposed by Kubernetes. Kubernetes exposes several metrics related to overall cluster health as well as pod-level metrics. The kubectl top command relies on the metrics server, which collects resource usage from the kubelets and exposes it through the Metrics API. Kubernetes components also expose Prometheus-format metrics on a /metrics endpoint; for example, the API server's metrics can be fetched with:

kubectl get --raw /metrics

Q. What are namespaces in Kubernetes?

Namespaces are used to segregate resources within a Kubernetes cluster. When we run multiple deployments in a cluster, it becomes difficult to manage and monitor all the pods that are running; namespaces let us organize these resources for better clarity by grouping them per use case, team, or application. Namespaces apply only to namespaced objects like pods, services, and deployments. Cluster-scoped objects like StorageClass, PersistentVolumes, and nodes cannot be namespaced.

We can also check the resources that can be namespaced using the below command:

kubectl api-resources --namespaced=true
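
As an illustration (the namespace name and manifest file are hypothetical), a namespace can be created and targeted like this:

kubectl create namespace team-a
kubectl apply -f deployment.yaml -n team-a
kubectl get pods -n team-a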

Q. How can we persist the data of applications running in pods?

To persist application-related data, we need to create a PVC and use it as a mount point while creating a pod. Below is an example of creating a PVC.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-application
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

After creating the PVC, its name needs to be referenced in the pod's definition file.

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod-pvc
spec:
  volumes:
    - name: pod-volume-pvc
      persistentVolumeClaim:
        claimName: pvc-application
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
      volumeMounts:
        - mountPath: "/data/nginx/"
          name: pod-volume-pvc
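
Assuming the two manifests above are saved as pvc.yaml and pod.yaml (filenames chosen for illustration), they can be applied and verified with the commands below. The PVC shows a status of Bound once a matching PersistentVolume is available or a default StorageClass provisions the storage.

kubectl apply -f pvc.yaml
kubectl apply -f pod.yaml
kubectl get pvc pvc-application
kubectl get pod nginx-pod-pvc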

Q. What are PV and PVC?

PV stands for PersistentVolume and PVC stands for PersistentVolumeClaim. A PersistentVolume is a piece of storage provisioned in the cluster from which users can claim a specific amount of storage. PVs can be backed by different volume types such as local, NFS, CephFS, GCEPersistentDisk, AWSElasticBlockStore, and AzureFile.

After the PV is created, the users can request storage from the created PV. PersistentVolumeClaim allows the users to claim storage resources from PV. Just like pods can request resources like CPU and memory, PVCs can ask for storage resources from PV with certain access modes.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv
spec:
  storageClassName: local
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: postgres
  name: postgres-pv-claim
spec:
  storageClassName: local
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
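
After applying the manifests above, the claim should bind to the volume, which can be verified with:

kubectl get pv postgres-pv
kubectl get pvc postgres-pv-claim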

Q. How can we regulate access to the Kubernetes cluster?

Cluster access can be regulated using role-based access control (RBAC) authorization, which lets us grant users access only to specific resources of the cluster. The RBAC API group declares four kinds of objects: Role, ClusterRole, RoleBinding, and ClusterRoleBinding.

Q. What is the difference between Role and ClusterRole?

Role and ClusterRole are objects that grant access to specific resources; a set of permissions is defined while creating them. A Role grants access within a single namespace, so the namespace has to be specified while creating a Role object. A ClusterRole is a non-namespaced resource that grants permissions across the cluster. Because ClusterRoles are cluster-wide, they can grant the same permissions as a Role, but in every namespace, as well as on cluster-scoped resources such as nodes. A minimal Role sketch is shown below.
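
A minimal sketch of a Role that allows reading pods in a single namespace (the namespace and role name are assumptions) could look like this:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: pod-reader
rules:
  - apiGroups: [""]          # "" refers to the core API group
    resources: ["pods"]
    verbs: ["get", "watch", "list"]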

Q. What is the difference between RoleBinding and ClusterRoleBinding?

The RoleBinding and ClusterRoleBinding objects bind a Role or ClusterRole to a set of users or groups. A RoleBinding grants the permissions defined in a Role (or ClusterRole) to the subjects, such as users, groups, or service accounts, listed in it; the reference to the role is provided while creating the RoleBinding. A RoleBinding grants permissions only within a specific namespace, whereas a ClusterRoleBinding grants permissions across the whole cluster. A matching RoleBinding sketch follows.
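
Continuing the sketch above, a RoleBinding that grants the pod-reader Role to a hypothetical user jane in the dev namespace could look like this:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: dev
subjects:
  - kind: User
    name: jane               # hypothetical user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader           # the Role defined above
  apiGroup: rbac.authorization.k8s.io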

Q. How do we interact with the Kubernetes cluster?

We can interact with the Kubernetes cluster using the kubectl tool. We can use kubectl to deploy applications, manage the cluster, and monitor the pods.

We can also use the Kubernetes Dashboard to interact with the cluster. The dashboard allows the users to create deployments, view logs of pods, monitor the usage of the nodes, etc.
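
For instance, a few commonly used kubectl commands (the resource names are placeholders) look like this:

kubectl get nodes
kubectl get pods -A
kubectl apply -f manifest.yaml
kubectl describe deployment <deployment_name> -n <namespace>
kubectl delete pod <pod_name> -n <namespace>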

Q. How can we use Kubernetes on a single machine?

To set up Kubernetes on a local machine, we can use a tool such as minikube or kind. Both are open-source tools that create a local Kubernetes environment on your machine.
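
For example, once the respective binary is installed, a local cluster can typically be started with one of the following commands:

minikube start
kind create cluster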

Q. How can we check the utilization of the nodes?

We can check the utilization of the nodes in the cluster using the command below:

kubectl top nodes

If available, we can also use the Kubernetes dashboard to monitor the utilization of the nodes. We can also integrate the cluster with the Prometheus-Grafana stack for monitoring the cluster and alerting.

Q. What are the access modes used while creating PVCs?

The access modes in PersistentVolumeClaim (PVC) are as follows:

  1. ReadWriteOnce (RWO) - The volume can be mounted as read-write by a single node. Multiple pods running on that same node can still access the volume.
  2. ReadOnlyMany (ROX) - Multiple nodes can mount the volume as read-only.
  3. ReadWriteMany (RWX) - Multiple nodes can mount the volume as read-write.
  4. ReadWriteOncePod (RWOP) - The volume can be mounted as read-write by a single pod. It is used when we want to ensure only one pod can read or write on the mounted volume.

Q. How do we regulate the resource usage of the pods?

To regulate the resources for pods, we use the resources field while creating them. We can set requests and limits for the resources a pod's containers may consume, which lets us control resources such as CPU, memory, and ephemeral storage. Below is an example showing how to control the resource usage of a pod.

apiVersion: v1
kind: Pod
metadata:
  name: resource-control-example
spec:
  containers:
    - name: container-of-pod
      image: nginx
      resources:
        requests:
          ephemeral-storage: "1Gi"
          memory: "500Mi"
          cpu: "50m"
        limits:
          ephemeral-storage: "2Gi"
          memory: "1Gi"
          cpu: "100m"

Q. What are the different types of container patterns?

  1. Sidecar container pattern
  2. Init container pattern
  3. Ambassador pattern
  4. Adapter pattern
  5. Work Queue pattern
  6. Leader Election pattern
  7. Scatter/Gather pattern
  8. Single container pattern
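
As an example of the first pattern above, a rough sketch of a sidecar (the image choices and shared volume name are assumptions) runs a helper container alongside the main application container in the same pod, sharing a volume:

apiVersion: v1
kind: Pod
metadata:
  name: sidecar-example
spec:
  volumes:
    - name: shared-logs
      emptyDir: {}          # volume shared by both containers
  containers:
    - name: app             # main application container
      image: nginx
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/nginx
    - name: log-sidecar     # helper container reading the shared logs
      image: busybox
      command: ["sh", "-c", "tail -F /var/log/nginx/access.log"]
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/nginx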

Q. How can we find out the details about the pods?

We can get more details regarding the pod using the below command:

kubectl describe pod <pod_name> -n <namespace>

This command provides details such as the node where the pod has been scheduled, the IP of the pod, the status of the pod, the images used for its containers, the volumes mounted to the pod, the pod's resource requests and limits, and recent events.

Want to prepare yourself for DevOps interviews? Check out our DevOps Interview Preparation Course. It will make you feel more confident while attending your interviews and help you tackle the DevOps questions asked in an interview.