Your guide to Kubernetes: Everything you need to know to get started with service discovery
Kubernetes promises to deliver significant benefits to development and operations teams. And while it might simplify a lot of day-to-day work, it’s actually quite a complex system. As a result, there’s a lot you need to consider when setting up and maintaining Kubernetes for your team.
As you do so, one of the most important elements to account for is service discovery, which will enable services within your system to talk to one another. Fortunately, there are plenty of options for service discovery in Kubernetes. Here’s what you need to know to get started.
Why you need strong service discovery in Kubernetes
Plain and simple, if you use microservices or a service-oriented architecture (SOA), you need a way for those services to find and talk to each other.
Consider the following real-life example: You have three different elements in your application, one that handles authentication, one that does billing and one that sends emails. They need to work together to log a user in, bill their account and then send an email confirming the payment went through.
Importantly, this is only one example of many, as it’s extremely rare to have code running without any external help or interaction (and if that is the case, you’re just running small islands of code and you don’t really need the horsepower of Kubernetes).
How service discovery works in Kubernetes
Traditionally, setting up service discovery has been a very manual process: You would write a configuration to indicate where a certain service lives and how to talk to it and then check regularly to make sure it was working and stable.
In Kubernetes, automation takes over a lot of that work, particularly the configuration elements of the process. Diving deeper, two options exist for setting up service discovery in Kubernetes.
Option 1: Use Kubernetes’ built-in service discovery system
The first option is to use Kubernetes’ built-in service discovery system, which offers basic capabilities to find and communicate with services.
Specifically, Kubernetes’ built-in functionality uses Service objects and DNS or environment variables to find other services, both internal and external (e.g. databases). One of the benefits of this functionality is that it allows you to name services and then address them by that name, meaning if you set up a service called “Authentication” or “Billing,” you can reach that service just by looking up that name. You can also use environment variables as a backup configuration option. Overall, Kubernetes’ native option offers a strong, stable configuration to find services without having to build anything special.
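As a minimal sketch, here is what such a Service object might look like (the name “billing” and the ports are illustrative). Once applied, other pods in the same namespace can reach the service at the DNS name `billing` (or `billing.default.svc.cluster.local` from elsewhere in the cluster):

```yaml
# Hypothetical Service exposing a set of billing pods by name.
apiVersion: v1
kind: Service
metadata:
  name: billing        # other services can resolve "billing" via cluster DNS
spec:
  selector:
    app: billing       # routes traffic to pods labeled app=billing
  ports:
    - port: 80         # port the Service listens on
      targetPort: 8080 # port the billing containers actually serve
```

For the environment-variable fallback mentioned above, pods started after this Service exists also receive variables such as `BILLING_SERVICE_HOST` and `BILLING_SERVICE_PORT` automatically.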
While it does that well, this service discovery option is pretty bare bones. It covers only the discovery of services and doesn’t offer anything in the way of traffic control, authentication, authorization or performance monitoring. Basically, it finds the designated service and makes the connection but doesn’t manage that connection in any way. And because the services talk to each other directly with this setup, your team has very little control over how that happens. This can make the system less reliable, especially during outages or new deployments.
As a result, the built-in service discovery is a fine option for getting started, but it’s usually something you’ll outgrow quickly as your system scales. The most common issues that signal it’s time for an upgrade center on traffic control and the need for a more reliable way to manage connections as you roll out new versions of code (e.g. so that a service in the middle of a conversation doesn’t go down abruptly).
Option 2: Introduce a third-party service mesh
If you’re ready to graduate from the basic built-in service discovery, the next step is to look into a third-party service mesh. These systems piggyback on the service discovery system in Kubernetes to improve reliability, performance, security and observability around service connections.
Among the many service mesh systems available, three in particular are the most popular for Kubernetes. They all operate very similarly, using a proxy server to manage and observe connections between services. This lets them control network connections more thoroughly and inject additional functionality without having to modify the service itself. In turn, this leads to greater reliability, more stable performance, better monitoring and increased security.
These systems include:
Istio: Istio is by far the most popular option, but also one of the most complex. It’s a heavyweight offering that will do everything you might need, which is something of a double-edged sword. On the one hand, it’s very complex to set up; on the other, so many people have gone through that process that there are all kinds of use cases available for you to study. Additionally, if you run into any sort of problem, it’s likely already been solved by someone else.
Istio also has many additional features built in, including fault testing (which puts faults into the system to improve the user experience, similar to the concept of chaos engineering from Netflix) and rate limiting (which prevents services from being overwhelmed in failure scenarios). These are just two examples of many bonus features that are natively built into Istio. Notably, while they may start out as “bonus” features, they do become critical as you continue to scale, which makes Istio particularly popular among larger enterprises.
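As a rough sketch of the fault testing mentioned above, Istio configures fault injection through a VirtualService; the host name and delay values below are illustrative, not a recommendation:

```yaml
# Hypothetical VirtualService that delays half of all requests to the
# billing service by 5 seconds, to test how callers cope with a slow
# dependency (without touching the billing service's own code).
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: billing-fault-test
spec:
  hosts:
    - billing
  http:
    - fault:
        delay:
          percentage:
            value: 50.0   # inject the delay into 50% of requests
          fixedDelay: 5s  # how long each affected request is held
      route:
        - destination:
            host: billing # traffic still ultimately reaches billing
```

Because the proxy applies the delay, the billing service itself needs no changes; removing the manifest removes the fault.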
Linkerd: Linkerd is a very lightweight service mesh that’s simple to use and easy to get started with. It doesn’t have all the bonus features that Istio has, but it does have the full set of core features required for a service mesh.
This makes Linkerd a great option if you’re just getting started with a service mesh because it allows you to get to the performance, security and monitoring benefits quickly without wasting time configuring it. Long term, if you do find you need those additional features as your application scales, you’ll already have the foundation in place to make a switch. It is a fair amount of work to swap service meshes, but once you’ve built your system to support one of them, your code, configuration and operations translate readily to the others, so you won’t have to start from scratch or completely re-architect your system to make that switch.
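Getting started with Linkerd typically amounts to annotating a namespace (or an individual workload) so Linkerd injects its proxy automatically; the namespace name here is a placeholder:

```yaml
# Hypothetical namespace with Linkerd auto-injection enabled: any pod
# created in it gets a linkerd-proxy sidecar that transparently handles
# mutual TLS, retries and per-request metrics.
apiVersion: v1
kind: Namespace
metadata:
  name: payments
  annotations:
    linkerd.io/inject: enabled
```

This is part of what makes the switching cost manageable: the mesh attaches via annotations and sidecars rather than changes to your application code.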
Consul: Consul is also a simpler option compared to Istio. Created by HashiCorp, Consul is built for more than just Kubernetes and fits into the company’s full suite of tools (Nomad, Terraform, etc.). It operates well within Kubernetes, but it does work somewhat differently than typical Kubernetes-native tools.
The fact that Consul is not built specifically for Kubernetes can actually be a benefit if you’re looking for a transitional tool to help you bring your systems into the Kubernetes world slowly or if you have an existing set of services running elsewhere. That said, all of these systems are usable outside of Kubernetes (for example, at Spaceship we use service discovery to find our external database servers and then map them within Kubernetes). Finally, Consul also integrates very well with the other HashiCorp tools, making it a good option if you already use any of those.
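Mapping an external server into Kubernetes’ naming, as described above, can be as simple as an ExternalName Service; the service and database hostnames below are placeholders:

```yaml
# Hypothetical Service that resolves "orders-db" inside the cluster to
# an external database's DNS name, so application code can address the
# database by a cluster-local name and stay unchanged if it moves.
apiVersion: v1
kind: Service
metadata:
  name: orders-db
spec:
  type: ExternalName
  externalName: db.example.internal  # placeholder external hostname
```

An ExternalName Service is a pure DNS alias (a CNAME); no proxying or load balancing happens, which keeps it a lightweight bridge between worlds.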
Getting started with service discovery in Kubernetes
At any stage, service discovery is an important part of getting set up in Kubernetes. As your system grows, you’ll very likely find you need to graduate from the built-in service discovery functionality to a third-party service mesh to ensure communications between services are appropriately scalable and observable. Fortunately, plenty of options exist to make that happen.
Interested in learning more? Contact Spaceship today to discover how we can help give your operations and delivery workflows a boost.