
One story for effortlessly passing Kubernetes interview questions in 2023

Welcome to my Kubernetes blogs. These blogs aim to provide you with practical Kubernetes knowledge and tools that increase your efficiency while reducing the stress and time needed to deliver high-quality solutions. Click the follow button to be notified when a new story is released.


Let’s get into it…

Today, Kubernetes is one of the most widely used tools in tech companies' technology stacks. Since its release, Kubernetes has seen massive adoption, growing both its ecosystem and its user base. In 2021, the CNCF (Cloud Native Computing Foundation) ran a survey which found that 96% of the responding organizations were using or evaluating Kubernetes in their tech stack.

It goes without saying that with Kubernetes adoption on the rise, the demand for skilled personnel is higher than ever. In this blog, we are going to take a look at the most common questions asked by interviewers and the answers to them.


So let’s take a look at the most common Kubernetes interview questions:

  • What are the Kubernetes control plane components and their purpose?

  • What are the worker node components and their purpose?

  • What is the difference between an init container and a sidecar container?

  • What is the difference between a deployment and a statefulset?

  • List out different service types and what they are used for

What are the Kubernetes control plane components and their purpose?

The Kubernetes control plane nodes are the “brain” behind the Kubernetes cluster operations. The control plane nodes manage the pods in the cluster and the worker nodes that take part in the cluster.


The Kubernetes control plane consists of four components on an on-prem Kubernetes cluster and five on cloud/hybrid clusters. As cluster admins, we want at least three control plane nodes in a production environment for high availability (HA) reasons.

  • Kube-api-server — The Kubernetes API server validates and configures data for the API objects, including pods, services, replication controllers, and others. The API Server serves REST operations and provides the frontend to the cluster’s shared state through which all other components interact.

  • Kube-controller-manager — The Kubernetes controller manager is a daemon that embeds the core control loops shipped with Kubernetes. In applications of robotics and automation, a control loop is a non-terminating loop that regulates the state of the system. Examples of controllers that ship with Kubernetes today are the replication controller, endpoints controller, namespace controller, and serviceaccounts controller.

  • Kube-scheduler — The Kubernetes scheduler is a control plane process that assigns pods to nodes. The scheduler determines which nodes are valid placements for each Pod in the scheduling queue according to constraints and available resources. The scheduler then ranks each valid node and binds the pod to a suitable node. There can be more than one scheduler.

  • Etcd — An open-source, distributed, consistent key-value store for shared configuration, service discovery, and coordination of distributed systems or clusters of machines. In the Kubernetes control plane, etcd is used to store and replicate all of the Kubernetes cluster state.

  • Cloud-controller-manager (Used on cloud providers) — The cloud-controller-manager provides the interface between a Kubernetes cluster and cloud service APIs. The cloud-controller-manager allows a Kubernetes cluster to provision, monitor, and remove cloud resources necessary for the operation of the cluster.

In a scenario where most of the control plane nodes are down, the cluster will not be able to serve API requests and will effectively be unavailable. However, as long as the worker nodes are healthy, the existing pods will keep running, but they cannot be rescheduled.
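To make this concrete, on a kubeadm-based (self-managed) cluster these components run as static pods in the kube-system namespace, so you can inspect them with kubectl; on managed offerings such as EKS, GKE, or AKS the control plane is hidden from you. A minimal sketch, assuming you have a working kubeconfig:

# List the control plane pods (kube-apiserver, kube-controller-manager,
# kube-scheduler, etcd) that kubeadm runs as static pods
kubectl get pods -n kube-system -o wide

# Show the nodes and their roles; control plane nodes are marked accordingly
kubectl get nodes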


What are the worker node components and their purpose?

The worker nodes are responsible for hosting the application pods in the cluster. While it’s possible to host application pods on the control plane nodes, the best practice is to schedule pods on the worker nodes for security reasons. The worker nodes hold components that allow them to act on requests from the control plane.

  • Kube-proxy — kube-proxy is responsible for maintaining network rules on your nodes. The network rules allow network communication to your pods from inside and outside of your cluster.

  • Kubelet — The kubelet is an agent that runs on each node. It is responsible for creating pods from the provided YAML specs, reporting the health status of the pods to the API server, and providing status information about the node, such as network, disk space, and more.
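As a quick check (assuming kubectl access to the cluster; <node-name> below is a placeholder), you can see what the kubelet reports about each worker node:

# Show node status, internal/external IPs, OS image, and container runtime
kubectl get nodes -o wide

# Inspect a single node's capacity, conditions, and the pods scheduled on it
kubectl describe node <node-name>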

What is the difference between an init container and a sidecar container?

The simplest design of a pod is a single container pod that serves the pod’s main functionality, but what if you want to extend the existing functionality without changing or adding complexity to your main container?


For that reason, pods can wrap one or more containers. There are several container design patterns that are useful in different scenarios, but their building blocks are the init container and the sidecar container.

  • Init container — Init containers always run before the sidecar and main application containers. An init container has to run to successful completion before the rest of the containers can start. Init containers are used for a variety of purposes; for example, checking that application dependencies are available, setting up the environment for the main or sidecar container, and more.

  • Sidecar container — A sidecar container runs in parallel to the main application container and can serve several purposes. For example, in Istio the sidecar container is used as a proxy that manages the traffic of the main container; sidecars are also commonly used for logging, monitoring, and more.

Below is a simplified example of Istio-style injected containers: an init container that initializes the environment and a sidecar container that intercepts the pod’s traffic.

apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  # init container section where you can set up your init containers;
  # they must complete successfully before the other containers start
  initContainers:
  - name: istio-init
    image: istio/proxyv2:1.11.2
  containers:
  # the main application container
  - name: hello
    image: alpine
    command: ["sleep", "3600"]   # keep the demo container running
  # our sidecar container, which will intercept the pod network traffic
  - name: istio-proxy
    image: istio/proxyv2:1.11.2
    volumeMounts:
    - mountPath: /etc/certs
      name: certs
  volumes:
  - name: certs
    secret:
      secretName: istio-certs

What is the difference between a deployment and a statefulset?

This is one of the most common questions we will face in an interview. To answer it, we need to cover each resource and understand their differences.

  • Deployment — A Deployment is the easiest and most commonly used resource for deploying applications to a Kubernetes cluster. Deployments are usually used for stateless applications, meaning the data residing in a pod is deleted together with the pod. If we use persistent storage with a Deployment, there will be one PersistentVolumeClaim shared by all of the pods that take part in the Deployment. A Deployment wraps a ReplicaSet resource, which allows it to roll back between versions easily. The naming convention of the pods is <deployment-name>-<replicaset-id>-<pod-id>.

  • Statefulset — A resource that became stable in Kubernetes 1.9, after the community requested the ability to host stateful applications on Kubernetes clusters. A StatefulSet doesn’t use a ReplicaSet as a secondary controller; it manages the pods itself. The naming convention of the pods is <Statefulset-name>-0, <Statefulset-name>-1, and so on. This naming convention is used for network identity and upgrade control. A StatefulSet requires a headless service, which enables network identification and DNS resolution for the pods participating in the StatefulSet. Every replica in a StatefulSet gets its own PersistentVolumeClaim, so each pod has its own state.

To conclude, the rule of thumb is that stateless applications should be deployed with Deployments, which wrap another controller called ReplicaSet for easier upgrades and rollbacks. StatefulSets were created out of a community need and are usually used for stateful applications such as databases, where identification of the other replicas in the cluster is crucial and upgrades should be done gracefully.
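To make the difference concrete, here is a minimal StatefulSet sketch. The names, image, and storage size are illustrative assumptions; the two fields to notice, which a Deployment does not have, are serviceName (pointing at a headless service) and volumeClaimTemplates (one PersistentVolumeClaim per replica):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: example-db
spec:
  serviceName: example-db-headless   # must reference a headless service
  replicas: 3                        # pods: example-db-0, example-db-1, example-db-2
  selector:
    matchLabels:
      app: example-db
  template:
    metadata:
      labels:
        app: example-db
    spec:
      containers:
      - name: app
        image: nginx                 # illustrative image; a real database would go here
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:              # each replica gets its own PersistentVolumeClaim
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi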

List out different service types and what they are used for

A Kubernetes service is a logical abstraction over a group of pods selected by a selector. The service is used to set up a policy by which the underlying pods are accessed.


In your interview, you will probably be asked about the four most common service types, which are as follows.

  • ClusterIP — The ClusterIP type is the default and most common service type in the Kubernetes ecosystem. A ClusterIP service gets an internal cluster IP and is only reachable from within the cluster.

  • NodePort — The NodePort service type is usually used when you want to expose the service to traffic from outside the cluster and the LoadBalancer service type is not available. With the NodePort service type, you choose a port (within the default range of 30000 to 32767) that each node in the cluster will expose to receive traffic and forward to the service. The NodePort service type also has a ClusterIP, so the service remains reachable from within the cluster.

  • LoadBalancer — The LoadBalancer service type is used for external traffic access by allocating a load balancer. To use this service type, you need a supporting platform that can allocate a load balancer. The load balancer is created asynchronously, and its details are published on the service once it has been provisioned and assigned. The LoadBalancer service type also has a ClusterIP and allocates a NodePort to access the service.

  • Headless — The headless service type is used when direct communication with a specific pod is needed. For example, in StatefulSet applications such as databases, the secondary pods need to communicate directly with the primary pod to replicate data between the replicas. A headless service allows DNS to resolve the underlying pods so they can be accessed by pod name; it is simply a service without a ClusterIP. An example of accessing a pod via a headless service looks as follows: <pod-name>.<service-name>.<namespace>.svc.cluster.local (two example manifests are sketched right after this list).
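As a rough sketch (the names, labels, and ports are illustrative assumptions), here are a NodePort service and the kind of headless service the StatefulSet example above would rely on:

# NodePort service: reachable on every node's IP at the chosen nodePort
apiVersion: v1
kind: Service
metadata:
  name: example-nodeport
spec:
  type: NodePort
  selector:
    app: example
  ports:
  - port: 80          # ClusterIP port inside the cluster
    targetPort: 8080  # container port on the pods
    nodePort: 30080   # must fall within the default 30000-32767 range
---
# Headless service: clusterIP is None, so DNS resolves directly to the pod IPs
apiVersion: v1
kind: Service
metadata:
  name: example-db-headless
spec:
  clusterIP: None
  selector:
    app: example-db
  ports:
  - port: 80          # illustrative port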

Conclusion

Kubernetes is one of the most widely used technologies in the industry today, and as such, organizations are looking for talented personnel who are educated and experienced in the topic. The subjects reviewed in this story are some of the most frequent questions asked in interviews. Although this story answered five major questions, the information here will be helpful in other question scenarios as well.

Thank you! If you have any questions or need any help, you can reach me on LinkedIn. Let me know in the comments below or via direct message if you want an in-depth review of any of these subjects.
