Intro to Kubernetes: a rundown of how it works + what you need to know to get started

Tim Dorr
CTO and Founder

Kubernetes adoption has doubled since 2017, and it now runs in half of the container environments used by businesses. In short, it has become the de facto approach for managing containerized workloads and services, and it shows no signs of slowing down.

If you want to get in on the conversation, here’s an intro to Kubernetes to catch you up on why it’s become so widely adopted and how exactly it works.

What’s so great about Kubernetes?

There are many reasons Kubernetes has become so popular, but one of the biggest is the way it operates, which is a huge game changer for operations teams.

Kubernetes offers declarative programming, meaning that users simply declare what they want to happen and Kubernetes does the work to get to that state. Traditionally, operators had to not only determine the end state but also write commands giving the system step-by-step instructions on how to achieve that goal. Kubernetes eliminates this work by letting operators tell the system what they want to achieve, not how it needs to be done.
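To make that concrete, here's a minimal sketch of what a declarative request looks like in practice. (The names, labels and nginx image are just placeholders, and the object type used here, a Deployment, is covered in more detail later in this article.)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                   # a placeholder name for this example
spec:
  replicas: 3                 # the "what": keep three copies of this workload running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25   # the "what": the image those copies should run

Nothing in this file says how to start, stop or replace containers. You describe the state you want, and Kubernetes continuously works to match it, even as containers crash or servers disappear.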

How does Kubernetes actually work?

The idea of Kubernetes doing all the work to determine how to achieve a desired end state certainly sounds ideal, but how does it actually work? Here’s what you need to know:

Everything in Kubernetes is an object

Every single thing in Kubernetes is defined as an object, including the applications and the servers you run, the networking configuration and so on. Each of these objects has a lifecycle that allows it to be created, updated or deleted over time. Together, these objects represent the desired end state. Kubernetes will then work to achieve that state.

You can manage objects throughout their lifecycle

The most common format for writing objects is the YAML language. Once you have these objects, you can use the kubectl command line tool to add, edit or delete any object in the system to which you have access.
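For example, here's about the smallest object you can write, along with the kubectl commands (shown as comments) that carry it through its lifecycle. The file and object names are just placeholders, and Namespaces themselves are covered a little further down.

# demo-namespace.yaml: a minimal Kubernetes object kept in a local file
apiVersion: v1
kind: Namespace
metadata:
  name: demo
# kubectl apply -f demo-namespace.yaml    creates the object (or updates it if it already exists)
# kubectl get namespace demo -o yaml      reads it back from the Kubernetes API
# kubectl delete -f demo-namespace.yaml   removes it from the cluster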

It’s also very common to keep objects in files on your local machine or in a Git repository, so you can browse them without reading from the Kubernetes API and can prepare changes before sending them to Kubernetes. Keeping these objects in a repository also gives you a backup that makes it easy to restore your system to a working state if anything is ever lost. Equally important, it lets you see changes over time, including who requested each change, when they requested it and what they wanted to change.

Various types of objects exist in Kubernetes

Some of the most important objects to know about include:

Nodes

A Kubernetes cluster starts with a collection of Nodes. These are the servers on which everything runs. They run the “kubelet” Kubernetes agent software, speak the Kubernetes API and have some type of container runtime (such as Docker or containerd).

Namespaces

Most other objects in the system define workloads or configuration, which all exist in Namespaces. Namespaces let you group objects together to keep them scoped to particular teams or projects running on the cluster. Kubernetes itself defines built-in Namespaces for its own objects, but you can create any number of Namespaces to organize, isolate or group your objects however you want. For example, you might have one Namespace for production and another for staging. In general, there's no right or wrong way to set up Namespaces; it simply depends on your team's needs.
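As a sketch, the production/staging split mentioned above is nothing more than two small objects (the names are whatever your team prefers):

apiVersion: v1
kind: Namespace
metadata:
  name: production
---
apiVersion: v1
kind: Namespace
metadata:
  name: staging
# Namespaced objects opt into one of these by setting metadata.namespace,
# and kubectl can be scoped the same way, e.g.:
# kubectl get pods --namespace staging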

Pods

The most basic object in any Kubernetes cluster is the Pod, a collection of one or more containers running together in the cluster. A Pod's containers are co-located and share resources from whichever Node they run on. You will usually run a single container in each Pod, as this simplifies management and makes Pods easier to reason about as you expand your usage of Kubernetes and need to evaluate overall system health.
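A minimal single-container Pod might look like the following sketch (the name, label and image are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: hello
  labels:
    app: hello              # labels let other objects find and group this Pod
spec:
  containers:
    - name: hello
      image: nginx:1.25     # any image your cluster's container runtime can pull
      ports:
        - containerPort: 80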

Usually you will not manage Pods directly. Instead, you will generally use other objects that manage Pods and their lifecycle in a way that better matches your typical operational workflows. Using other objects in this way allows you to run a set of Pods together, which helps increase capacity and create redundancy, so that even if one Pod or Node in the cluster goes away, you still have healthy parts running your system.

Deployments

The most common managing object for Pods is the Deployment object, which defines a set of Pods that can change over time as you add new container images, scale up resources or change their configuration. Because Kubernetes objects define the desired state of the system, you only need to make simple changes to the configuration of the Deployment object and it will manage getting the cluster to that state.

For example, if you change the container image on the Deployment, it will manage a gradual transition to that new container image by carefully scaling up Pods with that new image and scaling down Pods with the old image – all without any downtime. It will also attempt to recover any unhealthy Pods or ones that do not start up correctly. Importantly, the fact that this process happens gradually not only eliminates downtime, but it also gives you time to react and respond if anything goes wrong.
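Returning to the Deployment sketched near the top of this article, a rolling update is nothing more than editing the image and applying the file again; everything else stays the same. (The names, versions and file name are still placeholders.)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.26   # the only line that changed; Kubernetes handles the rollout
# kubectl apply -f web-deployment.yaml    submits the new desired state
# kubectl rollout status deployment/web   watches the gradual transition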

ConfigMaps and Secrets

ConfigMaps and Secrets are two objects for managing configuration in the system. They can be provided to any Pod as environment variables, individual files or whole directories inside the Pod's containers.

A ConfigMap is generally readable by anyone, while a Secret offers some basic obfuscation. However, you should not consider Secrets a secure storage method by any means. Secret contents are base64-encoded, so someone glancing at a screen won't be able to read them, but anyone with access and a computer nearby can decode them trivially, so you should still treat them as easily accessible.
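As a sketch, here's a ConfigMap and a Secret side by side (the names and values are made up). Notice that the Secret's value is only base64-encoded:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: debug                # stored as plain, readable text
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
type: Opaque
data:
  API_TOKEN: c3VwZXItc2VjcmV0     # base64 of "super-secret"; encoded, not encrypted

Anyone who can read the object can decode that value in a single command, which is why Secrets are best thought of as access-controlled rather than truly secret.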

Services and Ingresses

Finally, you can use two objects to gain network access to your Pods: Services and Ingresses.

Services define low-level network primitives like network ports and IP addresses, and can represent either a single Pod or a collection of Pods. They are most often used for communicating internally within the system, but they can also be used for service discovery. For instance, if you want to talk to another service in the system, Kubernetes will route you directly to that service, rather than you first having to work out where it is located based on its IP address. You can also isolate these communications by Namespace, such as when you want to talk to a service in your staging Namespace but not your production Namespace.
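Here's a sketch of a Service that fronts the Pods from the earlier Deployment example (the name and the app: web label are carried over from that sketch):

apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web            # any Pod carrying this label is behind the Service
  ports:
    - port: 80          # the port other workloads connect to
      targetPort: 80    # the port the containers actually listen on

Other Pods in the same Namespace can then reach those Pods at the stable name web, courtesy of Kubernetes' built-in DNS, instead of chasing the ever-changing IP addresses of individual Pods.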

When you want to bring in traffic from the outside world, you would use an Ingress. Kubernetes will manage an Ingress Controller (which is usually just a load balancer such as NGINX) and will configure the controller to route external requests to wherever the Ingress objects specify.
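A matching Ingress might route traffic for a public hostname to that Service. This is a sketch; the hostname is a placeholder, and the exact behavior depends on which Ingress Controller your cluster runs.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  rules:
    - host: app.example.com        # a placeholder public hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web          # the Service sketched above
                port:
                  number: 80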

Getting started with Kubernetes

Once you understand the basics of Kubernetes objects, including how they work and how to manage them, you can build all kinds of things – whether that’s custom objects specific to your cluster, objects for managing outside infrastructure or anything else.

Overall, the base structure of Kubernetes, declarative programming built around these objects, is broadly applicable across different types of systems, operations and workflows. This model of defining end goals without having to think about how to get there makes operations significantly easier and is useful in a variety of contexts.

Interested in learning more? Contact Spaceship today to discover how we can help give your operations and delivery workflows a boost.
