User Mode Linux (UML) was kind of popular many years ago, especially with the cheap virtual hosting crowd, but interest seems to have died off a while back; what appears to be the semi-official User Mode Linux page hasn’t been updated since the Fedora Core 5 days, around 2006.
Red Hat apparently just released RHEL 6.2, and among the features is something that looks remarkably similar to UML:
Linux Containers
• Linux containers provide a flexible approach to application runtime containment on bare metal without the need to fully virtualize the workload. This release provides application-level containers to separate and control the application resource usage policies via cgroups and namespaces. This release introduces basic management of container life-cycle by allowing for creation, editing and deletion of containers via the libvirt API and the virt-manager GUI.
• Linux Containers provides a means to run applications in a container, a deployment model familiar to UNIX administrators. It also provides container life-cycle management for these containerized applications through a graphical user interface (GUI) and user space utility (libvirt).
• Linux Containers is in Technology Preview at this time.
This looks like basically an attempt at a clone of Solaris containers. It seems like a strange approach for Red Hat to take given their investment in KVM; I struggle to think of a good use case for Linux containers over KVM.
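For the curious, here is roughly what driving one of these containers through libvirt might look like, sketched with the Python bindings (the XML and names here are my own illustration, not something lifted from Red Hat’s docs):

```python
# A rough sketch using the libvirt Python bindings to define and start an
# LXC container; details are illustrative and may differ from what
# virt-manager generates on RHEL 6.2.
import libvirt

DOMAIN_XML = """
<domain type='lxc'>
  <name>demo-container</name>
  <memory>524288</memory>   <!-- KiB -->
  <vcpu>1</vcpu>
  <os>
    <type>exe</type>        <!-- container type: run a process, not a kernel -->
    <init>/bin/sh</init>    <!-- the process that acts as the container's init -->
  </os>
  <devices>
    <console type='pty'/>
  </devices>
</domain>
"""

conn = libvirt.open("lxc:///")    # connect to the LXC driver, not the QEMU/KVM one
dom = conn.defineXML(DOMAIN_XML)  # persist the container definition
dom.create()                      # "boot" it: namespaces + cgroups, no hypervisor
print(dom.name(), "active:", dom.isActive() == 1)
conn.close()
```

There is no hypervisor involved at all here; libvirt just starts the process in its own namespaces and confines it with a cgroup, which is exactly why the release notes pitch it as containment without fully virtualizing the workload.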
Red Hat has also enhanced KVM quite a bit; this update sort of caught my eye:
Virtual CPU timeslice sharing for multiprocessor guests is a new feature in Red Hat Enterprise Linux 6.2. Scheduler changes within the kernel now allow for virtual CPUs inside a guest to make more efficient use of the timeslice allocated to the guest, before processor time is yielded back to the host. This change is especially beneficial to large SMP systems that have traditionally experienced guest performance lag due to inherent lock holder preemption issues. In summary, this new feature eliminates resource consuming system overhead so that a guest can use more of the CPU resources assigned to them much more efficiently.
No information on specifics as far as what constitutes a “large” system or how many virtual CPUs were provisioned per physical CPU, etc. But it’s interesting to see; it’s one of those technical details in hypervisors that you just can’t get a feel for from a spec sheet or a manual. Such things are rarely talked about in presentations either. I remember being at a VMware presentation a few years ago where they mentioned they could have enabled 8-way SMP on ESX 3.x (it was apparently an undocumented feature), but chose not to because the scheduler overhead didn’t make it worthwhile.
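To make the lock holder preemption issue concrete, here is a toy model of the problem (a made-up sketch, not actual KVM code): one vCPU takes a spinlock and is then descheduled by the host, so any sibling vCPU that wants the lock burns its entire timeslice spinning.

```python
# Hypothetical toy model of lock holder preemption; not actual KVM code.
TIMESLICE = 10  # host scheduler timeslice, in arbitrary ticks

def wasted_ticks(num_vcpus, total_ticks, lock_holder=0):
    """Count ticks the guest burns spinning while the lock holder is preempted."""
    wasted = 0
    for tick in range(total_ticks):
        running = (tick // TIMESLICE) % num_vcpus  # naive round-robin host scheduler
        if running != lock_holder:
            # The running vCPU spins on the lock, but the holder is descheduled,
            # so the lock can't be released until the holder runs again.
            wasted += 1
    return wasted

print(wasted_ticks(num_vcpus=4, total_ticks=100))  # 75: three of four timeslices burned
```

Presumably the new feature lets a spinning vCPU hand its remaining timeslice to a sibling vCPU (ideally the lock holder) instead of burning it, which would be where the “more efficient use of the timeslice” comes from.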
Red Hat also integrated the beta of their RHEV 3 platform. I’m hopeful this new platform develops into something that can better compete with vSphere, though their website is really devoid of information at this point, which is unfortunate.
They also make an erroneous claim that RHEV 3 crushes the competition by running more VMs than anyone else, and cite a SPECvirt benchmark as the proof. While the results are impressive, they aren’t really up front about the fact that the hardware, more than anything else, drove the performance: 80 x 2.4GHz CPU cores, 2TB of memory, and more than 500 spindles. If you look at the results on a more level playing field, the performance of RHEV 3 and vSphere is more in line; RHEV still wins, but not by a crushing amount. I really wish these VM benchmarks gave some indication of how much disk I/O was going on. It is interesting to see all the tuning measures that are disclosed, though; they give some good pointers on settings to investigate, which may have broader applications than synthetic benchmarking.
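A crude way to level the playing field yourself is to normalize the published score by the host’s core count. The numbers below are made up purely for illustration (the real submissions live on spec.org):

```python
# Made-up numbers purely for illustration; real submissions are on spec.org.
submissions = {
    "big-iron box":   {"score": 4000, "cores": 80},
    "two-socket box": {"score": 1100, "cores": 16},
}

for name, s in submissions.items():
    # Points per core is a blunt instrument: it ignores clock speed, memory,
    # and spindle count, which also differ wildly between submissions.
    print(f"{name}: {s['score'] / s['cores']:.0f} points/core")
```

It’s blunt, but it at least separates “our hypervisor scales well” from “we bought a bigger box.”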
Of course, performance is only part of what is needed in a hypervisor; hopefully RHEV 3 will be as functional as it is fast.
There is an Enterprise Hypervisor Comparison released recently by VMGuru.nl which does a pretty good job of comparing the major hypervisors, though it does not include KVM. I’d like to see more of these comparisons from other angles; if you know of more guides, let me know.
One thing that stands out a lot is OS support; it’s strange to me how VMware can support so many operating systems when other hypervisors don’t. Is this simply a matter of choice? Or is VMware’s virtualization technology so much better that it allows them to support a broader set of guest operating systems with little or no effort on their part? Or both? I mean, Hyper-V not supporting Windows NT? How hard can it be to support that old thing? Nobody other than VMware supporting Solaris?
I’ve talked off and on about KVM as I watch and wait for it to mature. I haven’t used KVM myself yet; I will target RHEV 3, when it is released, to try it and see where it stands.
I’m kind of excited. Only kind of, because breaking up with VMware after 12 years is not going to be easy for me 🙂