Choosing a web hosting solution isn't always easy. The good news is that you have many choices to pick from, including hypervisor-based and container-based virtualisation. Not sure which is which? Let's discuss each, then provide a more technical overview of containers, which is where the technology trend is moving (at a very rapid pace!).
In the Beginning…
Before "cloud" became a marketing buzzword, and even before virtualisation was a common technology, a website would be housed on a physical server. This was generally not an optimal use of the server's power, as most servers sat idle most of the time. As communication links became faster and web traffic increased, server utilisation began to climb. As each generation of CPUs delivered still more power, virtualisation technologies from numerous vendors emerged to seriously exploit it, and so the hypervisor was born, from vendors such as VMware and later Microsoft with Hyper-V.
Virtualisation allowed a hosted operating system to use hardware resources as if it owned them outright. In reality, the hypervised instance was given access to the underlying CPU, memory, network and disk storage in an emulated, secure and controlled manner by the hypervisor, which had direct control of the underlying hardware. This model alone revolutionised the computing world at the server level (though not so much at the desktop level), and has done so for over 10 years now.
The Rise of Containers
With the hypervisor market cementing itself as rock-solid, reliable technology, an open source movement began to develop a "hypervisor-like" environment using Linux as the hosting platform, with isolation and resource management built in. This enabled multiple users to host applications in isolation from each other; those applications could be anything, but hosting web servers was the primary use. The hosted environment looked like a complete working operating system: it had its own file system, network address, memory space, users and applications, just like a hypervised environment, but without the overhead of a separate hypervisor. Removing an entire software layer from the equation could yield an additional 10% to 20% in performance. So the "container" environment was born.
Container Based Hosting
Organisations like Google, which once ran a single operating system on a hardware node, now had the technology to run several instances of their operating systems in what would soon be known as "containers". This dramatically increased their available compute resources. The open source group leading the container concept, "OpenVZ", went down a slightly different path and was eventually snapped up by the company now known as Parallels. At Conetix we use Parallels' container-based virtualisation software: on Windows it's known as "Virtuozzo", and on Linux it's Parallels Cloud Server, so many of the OpenVZ tools are similar. Parallels has given much of its container virtualisation code back to the Linux and OpenVZ communities over the years, partly in good faith but also to ensure it's in the kernel in every release, rather than a patch set that needs to be applied later.
Like most open source projects, different container offerings became available. Linux Containers, also known as "LXC", is a different variant that builds on the container virtualisation support Parallels gave back, as well as "cgroups" from Google and Linux namespaces. There are no large-scale commercial deployments of LXC yet, but its support is growing with the advent of application containers developed by "Docker"; more on this later.
Inside a Container
From the end user's perspective, hosting a website inside a container is no different to hosting it inside a hypervised Linux environment or on a physical server.
From a logged-in user's perspective, a container looks just like any other Linux environment, and on a Parallels Virtuozzo for Windows system your RDP session looks just like a fully-fledged Windows environment (yes, you can get containers for Windows right up to Windows Server 2012!). In a Linux container all the same tools are present: you can list processes, start every application the same way, you have daemons, and the standard file system mount points look identical, so there is a /etc, /dev, /usr, /var and so forth.
The "init" process is running to start and supervise your applications, and each container has its own "init", which from the hardware node's perspective is just another running process.
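On any Linux system you can see this for yourself. The sketch below (plain shell, assuming a Linux /proc file system) prints the name of PID 1 and confirms the standard mount points exist; inside a container, PID 1 is the container's own init, while on the hardware node that same process shows up as an ordinary one.

```shell
# Print the name of PID 1 (init, systemd, etc. depending on the system).
cat /proc/1/comm

# The standard top-level directories are present, just as on a physical server.
ls -d /etc /usr /var
```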
But under the hood, the container is doing some interesting things. You and the dozens of other containers are most likely running the same programs, but you don't each have an individual copy, even though the "ls" command shows your /bin directory full of programs. Instead, you are sharing a "template": a neat design where all the applications that ship with the operating system, and many of the most common extras like the LAMP stack, are packaged together as groups of files hosted by the hardware node's operating system and virtually symlinked into each container. This includes configuration files as well. If you modify one, the operating system copies the template file (this is called copy-on-write), removes the virtual symlink and puts your modified file in your file system. The space saving is enormous, with only locally created or modified files existing in your file system.
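The template-and-copy-on-write idea can be sketched with ordinary hard links in a throwaway directory. This is an analogy only: Virtuozzo uses its own virtual links and template mechanism, not plain hard links, but the space-saving principle is the same.

```shell
tmp=$(mktemp -d)
mkdir "$tmp/template" "$tmp/container"

# The "template" holds the shared file; the container just links to it.
echo "default config" > "$tmp/template/app.conf"
ln "$tmp/template/app.conf" "$tmp/container/app.conf"   # shared, no extra space used

# "Copy on write": give the container its own private copy before modifying it.
cp "$tmp/container/app.conf" "$tmp/container/app.conf.tmp"
mv "$tmp/container/app.conf.tmp" "$tmp/container/app.conf"
echo "customised" >> "$tmp/container/app.conf"

cat "$tmp/template/app.conf"    # template is untouched: "default config"
cat "$tmp/container/app.conf"   # container's copy has the extra line
rm -rf "$tmp"
```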
At a disk level, your container looks like a file, just like a VMDK file in VMware or the equivalent disk image in Hyper-V, but in most container offerings the file system is thin-provisioned. So if you need to scale your disk space up (or down), that can be done on the fly. From a virus-checking point of view, your container's file system is mounted under a special mount point on the hardware node, so system tools at the hardware node level can safely and securely check every file if needed.
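Thin provisioning is easy to demonstrate with a sparse file, which is essentially what a thin-provisioned container image is: the apparent size is set up front, but disk blocks are only consumed as data is actually written. A small sketch, assuming GNU coreutils:

```shell
tmp=$(mktemp -d)

# Create a "1 GiB" disk image without writing any data blocks.
truncate -s 1G "$tmp/disk.img"

# Apparent (logical) size versus the blocks actually consumed on disk.
stat -c "apparent size: %s bytes" "$tmp/disk.img"
du -k "$tmp/disk.img" | awk '{print "blocks actually used: " $1 " KiB"}'

rm -rf "$tmp"
```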
From a cloud vendor's perspective, a container-based infrastructure means the density of containers can be scaled up dramatically compared to a hypervisor-based environment. At Conetix we scale our cloud servers to keep the load optimally balanced, so that memory and CPU are available if containers need to burst their usage. We can also dynamically migrate a container between compatible nodes when needed, without scheduling downtime.
In a container environment, the amount of disk, inodes, CPU and RAM can be provisioned dynamically without a container restart; that's a big plus. A new container's start-up time is also measured in seconds, since the hardware node has already booted: it's just a matter of starting the init process and the two dozen daemons that typically exist in a minimal Linux environment. For the Windows Virtuozzo nodes there's slightly more involved in starting a new environment, but it's usually much faster than a typical VM boot.
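To illustrate the dynamic provisioning, here is a hedged sketch of what resizing a running container looks like with the OpenVZ/Parallels `vzctl` tool. The container ID 101 is made up, the values are examples only, and these commands must be run on an OpenVZ/Parallels hardware node; each one takes effect immediately, with no container restart.

```shell
# Grow the disk quota to a 20 GiB soft / 22 GiB hard limit, on the fly.
vzctl set 101 --diskspace 20G:22G --save

# Raise the inode quota (number of files the container may create).
vzctl set 101 --diskinodes 400000:440000 --save

# Give the running container more RAM and swap.
vzctl set 101 --ram 2G --swap 512M --save

# Allow it to use four CPU cores.
vzctl set 101 --cpus 4 --save
```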
At Conetix, we are gearing up for the next wave of container hosting, and it's going to be both exciting and a revolution! Application containerisation is the latest technology push, spearheaded by "Docker". Rather than buying a VPS (VM) or a hosted container, you could end up buying just a container for your application. Need a web server and a database? Just buy the applications and join them up to build a working system.
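As a taste of what "joining them up" looks like with Docker tooling, here is a hedged sketch: the names demo-net, demo-db and demo-web are invented for the example, and a running Docker daemon is assumed.

```shell
# Create a private network so the application containers can find each other.
docker network create demo-net

# "Buy" a database: run MySQL as its own container on that network.
docker run -d --name demo-db --network demo-net \
    -e MYSQL_ROOT_PASSWORD=example mysql

# "Buy" a web server: run nginx as another container on the same network,
# published on port 8080 of the host.
docker run -d --name demo-web --network demo-net -p 8080:80 nginx

# The two containers together form the working system.
docker ps --filter "network=demo-net"
```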