Getting started with Kubernetes
Kubernetes is an open-source platform for running and orchestrating container workloads. Originally developed at Google and open-sourced in 2014, it has become one of the most popular platforms among development teams, thanks in large part to its rapid pace of innovation and its strong, fast-growing community.
For all the promise Kubernetes offers, though, what’s the best way to get started? Specifically, if your team is considering Kubernetes or has already decided to adopt it, what resources can you use to test, learn and ultimately set up an environment that works for you?
Assuming you have a basic knowledge of Docker, containers and how Kubernetes operates at a high level, here’s everything you need to know to get started.
Option 1: run Kubernetes online
Running Kubernetes online eliminates the need to install anything locally, making it a great option for testing it out and learning more about how it works. When it comes to running Kubernetes online, one service stands out.
Katacoda offers several real-world, in-browser tutorials for development tools, including an entire course dedicated to Kubernetes that is officially supported by the Kubernetes team. This course is a great way to become familiar with Kubernetes in a hands-on fashion if you’ve never used it before. It drops you right into the basics of getting started and includes a variety of example use cases, like deploying containers using YAML. It also provides an open-ended playground environment in which you can experiment with a real Kubernetes cluster. Importantly, Katacoda is very low risk since it doesn’t require any hardware or software setup – you simply click a link to get started and can close the browser tab to end the session at any time. Plus, it’s free.
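To give a taste of the kind of exercise those tutorials walk through, here is a minimal Deployment manifest of the sort you would apply to a cluster. This is a sketch, not part of the course itself; the names and image are placeholders.

```yaml
# deployment.yaml – a minimal Deployment that runs three nginx replicas
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
      - name: web
        image: nginx:1.25
        ports:
        - containerPort: 80
```

Applying it with `kubectl apply -f deployment.yaml` asks Kubernetes to keep three copies of the container running at all times.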
Option 2: run Kubernetes locally
If you have some familiarity with Kubernetes but still need to experiment with it, a good option is to run it locally on your computer. This avoids the cost of renting multiple cloud servers for Kubernetes to run on (which is how it is intended to be deployed).
While running Kubernetes locally is a great way to get going, it’s important to recognize that this approach will differ slightly from how you would ultimately run Kubernetes in production. Most of the common use cases will work, but elements like direct cloud integrations won’t be available locally. That said, even once you do launch Kubernetes in the cloud, you may want to keep a local installation as a fast, low-cost environment for development and testing.
Four primary options exist for running Kubernetes locally:
Minikube
Minikube spins up a small version of the Kubernetes control plane – the set of components that manage every Kubernetes cluster – on your own machine. You can use Minikube to create and run a Kubernetes cluster locally, perform operations on that cluster, create resources and more. Essentially, it’s a single command that runs everything inside a local virtual machine or container. You can configure it in a variety of ways, but getting started is simply a matter of typing one command.
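In practice, that one command plus a couple of follow-ups looks roughly like this (a sketch assuming Minikube and kubectl are installed; the driver is auto-detected):

```shell
# Start a single-node local cluster (add --driver=docker to force the Docker driver)
minikube start

# Verify the cluster is up and the node is Ready
kubectl get nodes

# Open the bundled web dashboard in a browser
minikube dashboard

# Tear everything down when you're finished
minikube delete
```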
Kind
Kind (short for Kubernetes in Docker) runs a Kubernetes cluster locally inside Docker. It simulates each server you might otherwise run on a separate machine as its own Docker container, taking advantage of the isolation that container architecture provides. In other words, Docker processes stand in for separate servers. Overall, Kind offers a great way to use other tools and see everything happening inside of Kubernetes.
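A minimal Kind session looks like this (assuming Docker is running and the `kind` binary is installed; the cluster name is a placeholder):

```shell
# Create a local cluster whose "nodes" are Docker containers
kind create cluster --name demo

# See the Docker processes standing in for separate servers
docker ps --filter name=demo

# Delete the cluster and its containers
kind delete cluster --name demo
```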
Docker for Mac/Windows
With Docker for Mac/Windows, enabling Kubernetes is simply a matter of checking a box in the settings. From there, a set of Docker containers runs automatically to establish a Kubernetes cluster. While this setup doesn’t offer every Kubernetes feature, it is very easy to get started and offers a highly productive way to work with Kubernetes without installing any new tools. It’s important to note that if you want to use Kind as described above, you’ll first need Docker running on your device in this way.
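Once the box is checked and Kubernetes has started, pointing kubectl at the built-in cluster is one command:

```shell
# Switch kubectl to the cluster Docker Desktop created
kubectl config use-context docker-desktop

# Confirm the single built-in node is reachable
kubectl get nodes
```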
K3s
K3s is a miniature version of Kubernetes intended for places where you need to run Kubernetes but don’t have much hardware available. It is a fully certified Kubernetes distribution, but it’s architected differently to keep it lightweight. The primary use case for K3s is a resource-constrained environment, such as an IoT device, but that same quality also makes it great for local use. Local development is a less common, less supported path for K3s, but it’s a great option for running Kubernetes locally without bogging down resources (think running it on a low-powered laptop without slowing down your computer).
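Installing K3s on a Linux machine is a single command (this is the project’s official install script; it sets K3s up as a system service):

```shell
# Install K3s and start it as a service
curl -sfL https://get.k3s.io | sh -

# K3s bundles its own kubectl
sudo k3s kubectl get nodes
```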
Option 3: run Kubernetes in the cloud
Google, Amazon and Microsoft all offer Kubernetes services that are deeply integrated into their clouds. The best part of these services is that they remove the operational challenges of installing Kubernetes and provide ongoing management of the Kubernetes servers. This means your team doesn’t have to worry about maintaining master servers, upgrades, availability, networking, disk storage or anything else. The providers charge a management fee for this reliability and convenience; beyond that, you pay only the cost of running the servers.
Additionally, each of these providers runs standard, certified Kubernetes rather than a specialized fork, which makes it easier to move between clouds. Creating a cluster is just a matter of clicking a few buttons in the web interface to get a fully working environment.
These options include:
- Google Kubernetes Engine (GKE)
- Amazon Elastic Kubernetes Service (EKS)
- Azure Kubernetes Service (AKS)
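Each service can also be driven from its provider’s CLI instead of the web interface. Roughly equivalent cluster-creation commands look like this (cluster, resource-group and node-count values are placeholders):

```shell
# GKE: create a three-node cluster
gcloud container clusters create my-cluster --num-nodes=3

# EKS: create a cluster via the eksctl helper tool
eksctl create cluster --name my-cluster --nodes 3

# AKS: create a cluster in an existing resource group
az aks create --resource-group my-rg --name my-cluster \
  --node-count 3 --generate-ssh-keys
```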
Option 4: run Kubernetes in a self-hosted environment
Finally, you can run Kubernetes in a self-hosted environment if you have on-premises servers or work with a cloud provider that doesn’t offer Kubernetes as a one-click option. This approach can also be a good choice if you want more direct control over the configuration and operation of your clusters, since the cloud-hosted options from Google, Amazon and Microsoft don’t let you control the master servers.
Running Kubernetes in a self-hosted way presents three tool options, all of which are interrelated:
Kubeadm installs Kubernetes on servers you already have running, with the base operating system installed and networked. The tool itself is deliberately narrow in scope – its focus is purely getting Kubernetes running – but that makes it flexible for use inside other tools and a good building block for managing a Kubernetes configuration.
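A bare-bones kubeadm bootstrap looks like this (a sketch assuming kubeadm, a container runtime and networking are already in place on each server; the pod CIDR shown suits the Flannel network plugin):

```shell
# On the machine that will become the control plane
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# kubeadm prints a join command containing a fresh token;
# run that printed command on each worker, in the form:
#   sudo kubeadm join <control-plane-ip>:6443 --token <token> \
#     --discovery-token-ca-cert-hash sha256:<hash>
```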
Kubespray is built on top of Kubeadm and uses Ansible to let you define and manage the servers your cluster runs on. Notably, Kubespray handles the setup for you, so you don’t have to worry about the specifics of doing it all by hand. If you already use Ansible, Kubespray will fit very naturally into your existing environment.
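With Kubespray, installing a whole cluster is one Ansible run (assuming a clone of the Kubespray repository and an inventory file describing your hosts; the inventory path is a placeholder):

```shell
# From the root of the kubespray repository
ansible-playbook -i inventory/mycluster/hosts.yaml \
  --become --become-user=root cluster.yml
```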
kOps is similar to Kubespray but runs entirely on its own. kOps can also generate output for other tools – for example, emitting Terraform configuration so the two can build the infrastructure together. Specifically, you might use Terraform to establish the network and a place to create resources, then bring in kOps to create a cluster of servers, install Kubernetes and bring those servers online. The advantage of kOps is that it works with many tools you might already have in place and can handle both the infrastructure and the maintenance pieces of running Kubernetes. It also lets you create a Kubernetes cluster with a single command, which makes it easy to create and destroy clusters over time – generally a good practice when working with Kubernetes.
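That create/destroy cycle looks like this (a sketch assuming AWS credentials are configured; the S3 bucket, cluster name and zone are placeholders):

```shell
# kOps keeps cluster state in an object store
export KOPS_STATE_STORE=s3://my-kops-state

# Create the servers, install Kubernetes and bring the cluster online
kops create cluster --name=demo.example.com --zones=us-east-1a --yes

# Tear the whole cluster down again when it is no longer needed
kops delete cluster --name=demo.example.com --yes
```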
Ready to get started with Kubernetes?
Whether you’re interested in testing out Kubernetes or have already made the decision to go all-in, there are a variety of options for getting started based on your needs and existing setup. The options presented here provide different entry points depending on where you are in your journey with Kubernetes and the unique needs of your team.
Interested in learning more about getting started with Kubernetes and weaving it into your development processes? Contact Spaceship today to discover how we can help.