If you've googled for "What Is Kubernetes?" you probably got the usual:
- Container orchestrator
- Automating deployments
- Scaling containerized applications
- Microservices
- Black magic, Voodoo, bla bla bla, and a looooooong stream of tech words that is hard to understand for a beginner in this field.
Well, you're about to read something different. Finally, a blog that will explain "What is Kubernetes?" in very simple words; an explanation for normal human beings.
But Kubernetes is a solution to a problem. And to understand the solution, you first have to look at the problem it solves.
Why Do We Need Kubernetes?
Containers, containers, containers… in the voice of Steve Ballmer.
You probably heard about Docker. But even if you didn't, it's no big deal to understand it.
Docker is a set of tools that allow you to create, edit, and run containers. And a container is just an application in a small little box.
What's so cool about containers? Well, one thing is that the application has everything it needs (the so-called dependencies) inside that little box. So you can take that container and run it on Windows, macOS, Linux, whatever. It won't complain that stuff is missing.
You can move it from one server running Ubuntu, to another running Debian. You can move it from Ubuntu 22.04 to Ubuntu 24.04.
It will just run, anywhere — as long as a container-friendly environment is set up on that computer, which is not hard to do.
Maybe you've even run a container, or two. But what if you have to run thousands of containers? Well, that's when you need Kubernetes.
It's Hard to Run a Large Number of Containers
Just imagine if you had to start 50,000 containers. Too many to host on a single server. So you'd have to spread these around to many different servers.
Imagine logging in to 500 servers, and starting 100 containers on each one. That's a lot of commands and manual actions.
And what if the containers need to "talk" to each other? Then you have to set up network connections between them.
What if a container breaks? With 50,000 of them running, something will surely malfunction, somewhere. Imagine if every day you had to hunt down the containers that broke, and manually restart them. Not a fun life.
Long story short, when you have so many containers, you get a million problems. And a million things to do.
That's what Kubernetes was created for.
What Is Kubernetes Used For?
Kubernetes is an autopilot for a million containers. You give it a plan, and it will start to execute it, on its own. It will automate everything.
Automation
- It will start all the containers.
- It will start them up on different servers, at the same time.
- It will set up networks between them, if required.
- It will monitor all of the containers, and ensure they function properly.
- It will fix / restart whatever breaks.
- It will launch more containers (scale up) if there's a need for them. For example, a news article suddenly sends more traffic to your business. To handle the additional traffic, you need more containers. And Kubernetes can add those automatically.
- It will scale down, and close containers that are not required anymore.
- It will send traffic to servers and containers that are able to deal with it (load balancing). This server is super busy? It stops sending traffic to it. This other server is free, not doing much yet? It sends more traffic to it.
And these are just a few things that Kubernetes can do.
But can you spot a different problem here? Sure, Kubernetes can do a million things on its own. But don't you have to tell it what to do? Isn't that like building an application: writing code, inserting a million instructions?
Doing a Lot with Very Few Instructions
Well, Kubernetes solved that problem: How to do a lot, with very few instructions. For this, it uses what is called a declarative approach.
To understand this better, imagine you're in a restaurant, ordering some food. You just say what you want, with very few words. Because the focus is on what, not on how.
You tell the waiter what you want, and the staff in the kitchen will figure out how to do it. It's easy to say you want a cheesecake. It's much harder to explain how to prepare it.
Kubernetes is similar. You don't have to tell it how to do something. You just say what you want, and it figures out how to get there. That's the declarative approach.
You say "I want 1000 containers, distributed evenly on my servers, able to communicate with each other." And then Kubernetes knows what to do. How to start them up, what number of containers to put on each server, how to set up networking, and so on.
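To make this concrete, here's a sketch of what such a "plan" looks like when written down. This is a Deployment, the Kubernetes object you typically use to ask for many copies of a container. The names and the image are made up, and a real setup would also need a few extra pieces (like a Service for networking), but the core idea fits in a few lines:

```yaml
# A "plan" asking Kubernetes for 1000 copies of a container.
# All names and the image are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1000            # what you want: 1000 running copies
  selector:
    matchLabels:
      app: my-app
  template:                 # the "recipe" for each copy
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:1.0   # a container image you provide
```

Notice what's not in there: no list of servers, no start-up order, no restart logic. That's the "what, not how" in action.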
Sticking to the Plan You Provide
Another benefit of this declarative approach is that Kubernetes sticks to the plan. Once you describe the situation you want, it will do everything in its power to maintain that state.
So it doesn't just start 1000 containers. It constantly checks: "Are they still running correctly?" If it notices that only 997 are good, it takes measures to fix that. It starts up 3 extra containers to bring the total number back to the 1000 you wanted. Even if an entire server goes down, it can fix that too; and many other things.
It constantly compares:
- What you wanted.
- With what the current situation is.
If the current situation is not what you wanted, it takes corrective measures (auto-heals). And brings the current situation back to the plan you provided; back to what you wanted.
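You can even see this comparison on the objects themselves. Every Kubernetes object carries a `spec` (what you wanted) and a `status` (what the current situation is). Here's a trimmed-down sketch of how that might look on a Deployment during such a hiccup:

```yaml
spec:
  replicas: 1000   # what you wanted
status:
  replicas: 997    # what Kubernetes currently observes;
                   # 3 short, so it starts 3 more to close the gap
```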
That's why I described it as an "autopilot for a million containers." It can self-correct when things go off-course. Just like an airplane self-corrects its direction, even if the wind constantly pushes it around.
Kubernetes is smart like that, with an eye on the big picture. Maybe that's one reason why it's called a container orchestrator.
It is similar to a conductor in an orchestra. The containers are doing the main work, just like the instrument players in an orchestra. But the conductor is there to gently guide every person, so that the ensemble functions well as a whole.
Kubernetes also orchestrates: it guides every container into its place, and makes sure that all the pieces fit well together.
What Is a Kubernetes Cluster?
We established that Kubernetes is a sort of "smart manager" of containers. But what happens if this "manager" is lost? Now 1000 containers remain, scattered around, with no "manager" to tell them what to do, restart them when they malfunction, and so on.
At the end of the day, Kubernetes is an application, right? And applications can crash. Also, Kubernetes needs to run on a server, and a server can also crash. Well… not quite.
- Kubernetes is not an application, it's a collection of applications.
- And Kubernetes does not run on a server, it runs on multiple servers.
This collection of servers, running all the Kubernetes applications and containers, is called a Kubernetes cluster. And a Kubernetes cluster is split into two parts:
- The master nodes.
- And the worker nodes.
The worker nodes are the ones actually running the containers.
And the master nodes are the "brain" of Kubernetes, where decisions are made. That's what gives orders to the worker nodes; telling them what containers to run, how to run them, and so on.
This is a great setup because, now, if any of these servers / nodes goes down, it's not a problem. One worker node down? No problem, the rest are working fine. Containers lost on the dead worker node will be brought back to life on the healthy worker nodes.
One master node goes down? Again, no problem if the other master nodes are still working. The "brain" of Kubernetes will still function if at least a few of the other master nodes are still alive.
This way, Kubernetes is not only self-healing, as it can restart containers that fail, or self-correct if something does not go according to plan. Kubernetes is also resilient / fault-resistant. Or, as it's usually called, highly available. Because some parts can go down, fail, and Kubernetes will still work correctly.
Each one of these nodes runs a collection of Kubernetes applications. For example, one application is responsible for accepting commands from you. Another application is responsible for network-related stuff between containers. Another is responsible for starting and stopping containers.
This set of applications runs on each node. And they all coordinate with each other.
So, again, if one application fails on some server, the others running on the other servers will be fine. And Kubernetes as a whole will be able to self-correct.
All of this, the multiple applications belonging to Kubernetes, the multiple servers running as master nodes and worker nodes, the multiple containers even, they all form the Kubernetes cluster.
Or, if you want the simplified version: All the worker nodes, and master nodes, that work together to launch and manage containers, form a Kubernetes cluster.
You can see that Kubernetes is designed from top to bottom to be able to withstand:
- Container failures.
- Server failures.
- Application failures.
It's a pretty cool design that, in my opinion, resembles some of nature's design. For example, the human body is also fault-resistant. Millions of cells die, but new ones replace them. Just like in a Kubernetes cluster many containers will eventually fail, and be replaced by new ones.
What Is a Pod in Kubernetes?
Another thing that "the Internet" does not explain in simple terms is the Kubernetes Pod. Many Google results tell us that it is the "smallest unit of deployment," and other generic, abstract stuff like that.
But in human-friendly terms, what is a Pod, actually?
Well, the simplest explanation is that a Pod is like a "container" for your containers. It's the "mini-house" that containers have to live in (when they run in a Kubernetes cluster).
Usually, each pod will have just one container inside. So, 10 containers to run? Then Kubernetes will need 10 pods, and each pod will run one container.
But the cool thing about a pod is that it can also run multiple containers inside, if required.
You can picture pods as little houses with multiple containers living inside. Just remember that it's quite usual for one pod to have just one container inside, not many.
Although a certain setup is growing in popularity, where there are two containers in a single pod:
- The main container.
- And the so-called sidecar container.
What is a sidecar container? It's a secondary container that helps the first container do something extra. Something that the main container cannot do on its own.
For example:
- The main container generates an image.
- And the sidecar container encrypts that image before sending it to the user, for added privacy.
Containers can work together inside a pod; help each other out.
Why put them in the same pod, though? Because they will be "closer" to each other, able to talk with each other faster, easier. They "share a room," basically.
So, in our example, it's easier for that sidecar container to encrypt the image. Because it's so close to the main container, communication between them will be super-fast.
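As a rough sketch, a pod with a main container and a sidecar could look something like this. The names and images are made up for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: image-service
spec:
  containers:
  - name: main                 # generates the image
    image: image-generator:1.0
  - name: sidecar              # encrypts the image before it leaves
    image: image-encryptor:1.0
```

Both containers share the pod's network, so they can talk to each other over localhost. That's the "shared room."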
When you think about pods, just think about them as the mandatory "room" that a container needs. Because you can't tell Kubernetes to run a container directly, without being placed in a pod first.
From the Kubernetes documentation, here's an example of how you'd run an Nginx container, inside the mandatory pod that it needs:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80
```

Note how you first declare that this is a pod, with `kind: Pod`. And only after that do you declare that this pod will have a container, running the container image of Nginx version `1.14.2`.
Also, note how easy it is to tell Kubernetes what to do, with the declarative kind of approach explained earlier in this blog. You don't tell it how to do things. You just tell it what you want, what state you want: this container, this image, this port open.
Kubernetes vs. Docker
A question that comes up often is "How does Kubernetes compare to Docker?" They both deal with containers, so it's a natural question.
You can think of Docker as a Swiss Army Knife for containers.
The comparison really fits. Because just like the Swiss Army Knife, Docker is (seemingly) a single tool. But open it up, and you see that multiple tools are packed inside. So you can do many different things with what seems like a single application, the `docker` command.
Behind the scenes, the command makes use of different programs to do its job. But as far as the user experience goes, it seems like you have a single command that can do everything.
Here are just a few examples of what you can do with the `docker` command:
- You can tell it to download a container image from the Internet.
- You can tell it to start a container, and run some kind of service on your computer, or server. For example, you can download the Nginx container image, and start an Nginx container. And you get a website hosted on your server, in seconds.
- You can tell it to stop, or delete containers.
- You can even create your own container images with Docker.
- You can modify existing container images.
- You can "step inside" containers, and run commands in that isolated environment.
- You can upload a container image to a public server, or your own private server. Then other people (or your company) can start using it.
- You can create networks between your containers, so they can talk to each other, exchange data.
All in all, Docker is a multi-purpose tool for all your container needs. Well, almost all… Because, as mentioned before, what if you need to run thousands of containers?
You can start a few containers with Docker. But when you have to deal with a huge number of containers, that's when you need Kubernetes.
- Think of Docker as a tool you use to manually deal with a small number of containers. A tool you use to build, download, upload, or modify containers.
- And think of Kubernetes as a manager of thousands of containers.
The new question that might pop up:
So this means that Docker is a tool that deals with a small number of containers? And Kubernetes deals with a large number of containers? That's the difference?
Yes, that's one difference. But, no, it's not the only difference.
- The similarity between Docker and Kubernetes:
- They can both run containers.
- The differences between Docker and Kubernetes:
- Docker can run a small number of containers.
- Kubernetes can run a huge number of containers.
- One extra thing Docker can do is "deep dive" into containers + container images, and customize them. Kubernetes cannot do that.
- And the extra thing Kubernetes can do is be a "smart manager" for a huge number of containers.
- It can execute complex plans.
- Fix things that go wrong.
- Run containers on thousands of servers.
- It can survive the loss of a few servers, and still work well.
- It's built to manage a mind-blowing number of containers, auto-healing when something goes wrong.
In other words, Docker can also be an "editor" of containers. It can create container images from scratch. It can modify existing ones. But it's a "manual" tool, where you deal with containers one at a time.
But Kubernetes just takes what is readily available. It does not build container images. It just runs the container images you feed to it. Which you might have created with Docker, or some other tool. But it's built for automation, and large scale. You feed plans to it, and it then takes millions of actions, on its own. It ensures hundreds of thousands of containers run smoothly.
If you want an even simpler version of Docker vs. Kubernetes:
- Docker = Create, run, or edit a few containers.
- Kubernetes = Take container images created by other tools, like Docker. Run hundreds of thousands of these containers on thousands of servers.
Which means that you can even use Docker and Kubernetes together. With Docker you can create the container images you need. And with Kubernetes you can then launch thousands of containers based on the images you created with Docker.
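For example, say you built an image with Docker and pushed it to a registry. To run it in Kubernetes, you'd just point a pod at that image. The registry and image names below are made up:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: registry.example.com/my-app:1.0   # built and pushed with Docker
```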
Learn More about Kubernetes with a Beginner-Friendly Course
If you enjoyed this blog, you might also enjoy this course:

It's built specifically for beginners. And people on the Internet, and in Udemy reviews, call it "the best course for Kubernetes." Maybe they're right 😄.
See you in the next blog!