
From Physical To Virtual To Containers

I have spent a long time talking about VMware technology and, as a result, virtual machines (VMs). VMs have been around for a while now and until recently were the de-facto form factor, especially in private cloud deployments. In this post, I would like to cover the progression from physical servers to virtual machines to containers. Read on to learn more!

Physical Servers

Oh, the good old days of buying a piece of hardware, installing an OS on it, and running one or more applications on top. Sounds like a simple, repeatable process, right? It was! Unfortunately, there were a few issues with this model:

  • Slow – buying a piece of hardware could take weeks or even months depending on the process. Beyond getting budget approval, you had to work with hardware vendors on specs and quotes, ensure you had sufficient datacenter space, wait weeks for the hardware to arrive, then unbox, rack, cable, and configure it
  • Inefficient – while having your own dedicated box meant you controlled it, the end result was often an underutilized server. A single underutilized server may be OK, but a company would have hundreds if not thousands of servers, each costing tens of thousands of dollars. Waste was expensive

In the physical server model, you typically ran your servers in your own datacenter (later called “private cloud”), at a co-location facility (i.e. leased space), or leased the hardware from a managed hosting provider. Except in the managed hosting model, you were responsible for managing the hardware. A typical company had a variety of roles and responsibilities built around this hardware, including:

  • Datacenter Operations – these were your rack, stack and cable folks. They were responsible for ensuring the physical servers were working as expected
  • IT Administrators – these people provided services on top of a subset of the hardware that people at the company could leverage (e.g. authentication, email, etc)
  • Developers – these were the people writing the code that ran on a large subset of the hardware (typically not responsible for running the code in production)
  • Operations – these were the people responsible for ensuring the code Developers wrote stayed online in production (commonly broken into separate compute, network and storage teams)

Sometimes IT Administrators and Operations were the same team.

Virtual Machines

To address some of the limitations of physical servers, virtual machines were introduced. VMs run on top of a hypervisor, which runs on top of a physical server (known as “bare-metal”). The idea was simple: if you ran multiple VMs on the same physical server, you could better utilize the hardware and reduce waste. In addition, assuming you had spare capacity (much more likely when running VMs), you could provision new workloads quickly (typically in hours) by deploying more VMs.

The hypervisor abstracted the physical hardware and presented it to the running VMs. Each VM had its own operating system and could run one or more applications. The hypervisor also handled coordination of the VMs. While the hypervisor initially introduced significant overhead, over time it became extremely efficient (thanks to both software improvements and hardware virtualization support). All these factors combined resulted in a high adoption rate for VMs.
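
As an aside, you can actually observe this abstraction from inside a guest: on x86, the hypervisor sets a CPUID bit that the Linux kernel surfaces as a “hypervisor” flag in /proc/cpuinfo. Here is a minimal Go sketch of that check — to be clear, this is my own illustration, not vendor tooling; it assumes an x86 Linux guest, and utilities like systemd-detect-virt handle this far more robustly:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// /proc/cpuinfo lists CPU feature flags; on x86 guests the kernel adds
	// a synthetic "hypervisor" flag when the hypervisor CPUID bit is set.
	data, err := os.ReadFile("/proc/cpuinfo")
	if err != nil {
		panic(err)
	}
	if strings.Contains(string(data), "hypervisor") {
		fmt.Println("hypervisor flag present: running inside a VM")
	} else {
		fmt.Println("no hypervisor flag: likely running on bare-metal")
	}
}
```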

As VMs became popular, another shift in the industry started to form: the move from private cloud to public cloud (i.e. having a provider worry about the hardware so you can focus on your software). While this shift was occurring, new requirements emerged, including:

  • Speed – going from weeks to hours was a huge improvement in provisioning time, but there was a desire to move even faster
  • Portability – while VMs provided some portability, there was a desire for even more of it, delivered faster

In terms of roles and responsibilities for VMs, the primary change from physical servers was the introduction of a VI Administrator. The VI Administrator was typically part of either the IT Administrators or Operations groups.

Containers

As the switch from private cloud to public cloud was underway, another industry shift began: the move from a monolithic architecture (a single system running an entire application) to a microservices architecture (each service of an application running independently). This was in part to address some of the limitations seen in the VM era. To assist these shifts, containers became popular. Containers were similar to VMs, but typically much smaller in size. The size reduction came down to two primary reasons:

  1. A container was typically single purpose (VMs were often multi-purpose)
  2. A container did not include a copy of the OS; the kernel was shared amongst all the containers running on a host (see the sketch below)
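
To make reason #2 concrete: on Linux, a container is essentially a normal process isolated by kernel namespaces (real runtimes add cgroups, an isolated filesystem, and more). Below is a minimal Go sketch; again, this is my own illustration rather than code from any actual container runtime, and it is Linux-only and needs root:

```go
package main

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	// Start a shell in new UTS (hostname) and PID namespaces. Inside, the
	// shell sees itself as PID 1 and can change the hostname without
	// affecting the host, yet it still shares the host's kernel.
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID,
	}
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```

Run it as root and the resulting shell reports PID 1 and can set its own hostname, all while sharing the host kernel. That shared kernel is exactly why a container does not need its own OS copy.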

Sharing an underlying OS initially raised security concerns. As containers matured, so did the security model around them. Today, most people trust containers as much as they trust VMs.

In the microservices, container-based era, roles and responsibilities really began to change. With the push to public cloud, there was no need for datacenter operations, as that was handled by the cloud provider. The Operations team dissolved and the R&D team became responsible for running the code it produced (which became known as DevOps). The former Datacenter Operations and Operations teams became known as SRE in the DevOps world and are now starting to converge into the Platform team.

Summary

Over the last couple of decades we have seen the switch from physical servers to virtual machines and from virtual machines to containers. Each of these shifts was driven by the need to be more agile and to deliver value faster while maintaining acceptable cost. As the deployment model changed, so did the location (private cloud to public cloud), the development model (monolithic to microservices), and the team structure (from separate Development and Operations to R&D owning quality and embracing DevOps). In future posts, I will cover these other changes in more depth.

© 2018, Steve Flanders. All rights reserved.

