The ghosts of software deployments past: how new solutions + processes simplify software deployments

Patrick Wiseman
CEO and Founder

Historically, software deployments were tough. They were risky and expensive and happened in a fairly ad hoc manner, making it difficult to replicate success over time.

Fortunately, we’ve come a long way. Over the past few years, in particular, a lot has happened to help standardize software deployments and make them less risky and less expensive. In turn, these improvements have made constant releases through a Continuous Delivery approach far easier to manage.

Given the progress we’ve made in the past few years and the ongoing shift to Continuous Delivery, we can expect this trend of improvements to continue. With that in mind, let’s take a look at where we’ve come from to better understand where we’re heading. These eras don’t represent a linear history at every company but rather industry-wide trends over much longer periods of time, much as adjacent geologic periods share many of the same plants and animals.

Era 1: Physical servers offer complex, manual deployments with no resource scaling

The first era of software deployments is characterized by the use of physical servers. Physical servers were built and then set up with the necessary operating system and system-level dependencies for an application. Operations teams would often set up the servers and throw them over the wall to the development team, which would install its own software dependencies for its preferred language or web framework. Nothing about the process was standardized or repeatable. The servers were pets that had to be cared for over their entire lifetime.

Beyond a lack of standardization, the entire process was expensive. These manually built machines had no ability to elastically scale resources in a timely manner, so development teams had to over-provision for peak demand since there was no efficient way to add resources as demand increased. Most of those resources went largely unused.

Finally, delivering the next version of the software was just as complicated, with release plans written down and executed manually across teams.

Key characteristics:

- Manually built and maintained physical servers, treated as pets
- Little standardization or repeatability in setup and deployment
- Over-provisioned resources with no ability to scale elastically
- Handwritten release plans executed manually across teams

Era 2: Virtual machines (VMs) reduce maintenance and improve resource scaling

The constraints of the physical server era gave way to a second era built on virtual machines (VMs) and machine images, centered on better resource utilization and on-demand pricing through cloud computing.

The rise of cloud platform providers like Amazon Web Services (AWS), Google Cloud, and Microsoft Azure eliminated the need to build and operate servers and made developing web apps accessible to far more people. Additionally, the ability to replicate environments and run more than one at a time made it possible to scale servers up and down based on demand and pay for those resources based on usage, keeping costs under better control.

This era also offered significant improvements in terms of software infrastructure and release plans. The advent of VMs allowed developers to package an operating system and system libraries on top of which they could then build code, offering a faster way to get started and a more standardized way to do the same thing multiple times. And while release plans were still written down and executed, they were sometimes run as scripts across multiple servers to alleviate some of the manual work.

Despite all this progress, the VM era still involved a lot of manual configuration with little-to-no documentation. While cloud providers had automated much of the provisioning process and helped reduce costs, many companies failed to automate their own internal processes, so their VM instances were still treated like pets, requiring regular hands-on maintenance throughout their lifetimes.

The best practitioners of this era used machine images and autoscaling to build applications that could rapidly make use of newly provisioned resources. But many more teams were simply operating pet servers in the cloud, failing to reap the benefits of cloud native technologies.

Key characteristics:

- Cloud-based VMs and machine images with on-demand pricing
- Replicable environments that could scale up and down with demand
- Release plans sometimes scripted across multiple servers
- Heavy manual configuration, with many servers still treated as pets

Era 3: Containerization standardizes deployment environments

While the VM era offered improvements over the physical server era, it didn’t offer much in the way of standardization, and neglected software often harbored serious security vulnerabilities.

The challenges of the VM era led to the container era. Containers offered a lighter-weight abstraction than VMs and reduced the security issues that plagued the previous era. They also standardized deployments by allowing user accounts, permissions, working directories, software dependencies, and other common application concerns to be written as code. Expressing these steps as code let teams execute them uniformly and recreate previous builds of the software, and it recorded a history of changes, and when they were made, in source control rather than in a separate process maintained outside the software itself.

This change not only standardized development and deployments, it also provided visibility into revisions, progress, and issues over time. In doing so, it professionalized the ability to run infrastructure as code by making it more replicable and standardized. Unfortunately, many cloud providers were trying to solve this problem at once, and while containers were portable from provider to provider, the process to deploy the software often was not.
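
To make this concrete, here is a minimal sketch of a Dockerfile that captures these concerns as code. The base image, packages, paths, and user name are illustrative assumptions, not details from this article:

```dockerfile
# Environment-as-code: every step below is versioned in source control
# and produces the same result on every build.
FROM python:3.12-slim

# System-level dependencies, pinned in code rather than set up by hand
RUN apt-get update && apt-get install -y --no-install-recommends curl \
    && rm -rf /var/lib/apt/lists/*

# Application dependencies, reproducible from the repository
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Run as a non-root user with explicit permissions
RUN useradd --create-home appuser
USER appuser
COPY --chown=appuser:appuser . .

CMD ["python", "main.py"]
```

Because this file lives alongside the application, rebuilding an old release is as simple as checking out the corresponding commit and building again.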

Additionally, the container era made it possible to deliver operational changes alongside feature releases by standardizing the runtime environment. More often than not, container image builds and production deliveries ran as custom scripts in a company’s continuous integration tool, eliminating still more manual work. Once in production, many companies still scaled resources by hand, but scaling itself was relatively easy.
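
Those custom scripts took many shapes, but the core build-and-push step usually looked something like the following GitHub Actions-style sketch. The registry, image name, and trigger are assumptions for illustration, and a real pipeline would also authenticate to the registry before pushing:

```yaml
# Hypothetical CI job: build the container image and push it to a registry.
name: build-and-push
on:
  push:
    branches: [main]
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build container image
        run: docker build -t registry.example.com/myapp:${{ github.sha }} .
      - name: Push image to registry
        run: docker push registry.example.com/myapp:${{ github.sha }}
```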

Key characteristics:

Era 4: Container orchestration allows for full automation and lowers risks

The current era of software deployments is characterized by container orchestration. This era eliminates years of concerns about how to run containers, scale them up and down based on demand, link related applications, aggregate logs, monitor application health and maintain security by standardizing all of these activities through code.

This era also allows developers to automate operations through the use of Kubernetes. For example, where releases were previously scripted as a series of imperative steps, Kubernetes’ declarative APIs let developers define the desired state for a release and have Kubernetes automatically reconcile the differences between the current state and that desired state.
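
As a sketch of what that desired state looks like, the following minimal Kubernetes Deployment manifest (names and image are hypothetical) declares that three replicas of an application should be running; the cluster’s controllers handle the rest:

```yaml
# Declarative desired state: Kubernetes keeps three replicas running,
# replacing failed pods and rolling out new images automatically.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:1.0.0
          ports:
            - containerPort: 8080
```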

Together, these advancements have made for less risky blue-green deployments, in which teams stand up two production stacks for new releases so that they can roll back any changes instantaneously if an issue occurs. This lowered risk makes it far easier to constantly release software under a Continuous Delivery model.
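
One common way to express this in Kubernetes (a sketch, with all names assumed) is to run “blue” and “green” Deployments side by side and point a Service’s selector at whichever version is live:

```yaml
# Traffic flows to whichever Deployment matches this selector.
# Changing version from "blue" to "green" promotes the new release;
# changing it back is the instant rollback.
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
    version: blue   # flip to "green" to cut traffic over
  ports:
    - port: 80
      targetPort: 8080
```

The switch itself can be a one-liner, for example `kubectl patch service myapp -p '{"spec":{"selector":{"version":"green"}}}'`.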

Key characteristics:

- Orchestrated containers with scaling, networking, log aggregation, health monitoring, and security standardized through code
- Declarative APIs that reconcile current state with desired state
- Automated operations through Kubernetes
- Low-risk blue-green deployments that enable Continuous Delivery

What’s next for software deployments?

Each era of software deployments has improved on the challenges of the previous one, leading to less costly, less risky and more standardized deployments. In turn, these improvements have allowed teams to better respond to customer needs by releasing new software faster. And as teams continue to embrace Continuous Delivery to offer more regular and more efficient releases, we can rest assured that this trend of innovation will only continue.

We hope that Spaceship can make the entire delivery process portable between cloud providers.

Need help staying up to date with the latest trends in software deployments? Contact us today to learn how Spaceship can help.

Want to be first in line to get early access to Spaceship?

Be the first to know when we launch! Sign up for our waiting list below: