Advanced Topics

A single book chapter isn’t the right place to go into great detail on all the features packed into Ubuntu Server. There isn’t enough space, and many of the features are quite specialized. But that doesn’t stop us from taking you on a whirlwind tour. Our goal here is to give just enough information to let you know what’s there and interest you in finding out more about those features that may be relevant to how you use Ubuntu.

Virtualization

If there’s been one buzzword dominating the server space for the past couple of years, it’s virtualization. In August 2007, a virtualization company called VMware raised about a billion U.S. dollars in its initial public offering, and the term virtualization finally went supernova, spilling from the technology realm into the financial mainstream and onward to CIOs and technology managers everywhere.

Fundamentally, virtualization is a way to turn one computer into many. (Erudite readers will note this is precisely the opposite of the Latin motto on the Seal of the United States, “E Pluribus Unum,” which means “out of many, one.” Some technologies match that description, too, like Single System Image, or SSI, grids. But if we talked about virtualization in Latin, it would be “Ex Uno Plura.”) Why is it useful to turn one computer into many?

Back in the 1960s, servers were huge and extremely expensive, and no one wanted to buy more of them than absolutely necessary. It soon became clear that a single server capable of running different operating systems at once would allow the same hardware to be used by different people with different needs, which meant fewer hardware purchases, which meant happier customers with less devastated budgets. IBM was the first to offer this as a selling point, pioneering virtualization on research systems built around its 7044 mainframe and later supporting it in the hardware of its System/360 Model 67. Since then, the industry largely moved away from mainframes and toward small, cheap rack servers, and the need to virtualize mostly went away: If you needed to run separate operating systems in parallel, you just bought two servers. But eventually Moore’s law caught up with us, and even small rack machines became so powerful that organizations found many of them underutilized, while buying more servers (though cheap in itself) meant sizable auxiliary costs for cooling and electricity. This set the stage for virtualization to come back into vogue. Maybe you want to run different Linux distributions on the same machine. Maybe you need a Linux server side by side with Windows. Virtualization delivers.

There are four key types of virtualization. From the lowest level to the highest, they are hardware emulation, full virtualization, paravirtualization, and OS virtualization. Hardware emulation means running different operating systems by emulating, for each, all of a computer’s hardware in software. The approach is very powerful but painfully slow. Full virtualization instead uses a privileged piece of software called a hypervisor as a broker between operating systems and the underlying hardware; it offers good performance but requires special processor support on instruction sets like the ubiquitous x86. Paravirtualization also uses a hypervisor but runs only operating systems that have been specially modified to cooperate with it, offering high performance in return. Finally, OS virtualization is more accurately termed “containerization” or “zoning” and refers to operating systems that support multiple user spaces sharing a single running kernel. Containerization provides near-native performance but isn’t really comparable to the other virtualization approaches, because its focus isn’t running multiple operating systems in parallel but carving one up into isolated pieces.

The most widely used hardware emulators on Linux are QEMU and Bochs, available in Ubuntu as the qemu and bochs packages, respectively. The big players in full virtualization on Linux are the commercial offerings from VMware, IBM’s z/VM, and most recently, a technology called KVM that has become part of the Linux kernel. In paravirtualization, the key contender is Xen; the Linux OS virtualization space is dominated by the OpenVZ and Linux-VServer projects, though many of the interfaces needed for OS virtualization have gradually made their way into the Linux kernel proper.

Now that we’ve laid the groundwork, let’s point you in the right direction depending on what you’re looking for. If you’re a desktop Ubuntu user and want a way to safely run one or more other Linux distributions (including different versions of Ubuntu!) or operating systems (BSD, Windows, Solaris, and so forth) for testing or development, all packaged in a nice interface, the top recommendation is an open source project out of Sun Microsystems called VirtualBox. It’s available in Ubuntu as the package virtualbox-ose, and its home page is www.virtualbox.org.
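Most desktop users will drive VirtualBox through its graphical interface, but it also ships a command-line front end, VBoxManage, which is handy for scripting. As a rough sketch (the VM name, memory size, and OS type here are arbitrary choices for illustration):

```
$ VBoxManage createvm --name "test-vm" --register
$ VBoxManage modifyvm "test-vm" --memory 512 --ostype Ubuntu
$ VBoxManage startvm "test-vm"
```

You would still need to attach a virtual disk and installation media before the new machine can boot anything useful; the VirtualBox user manual documents the full set of VBoxManage subcommands.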

If you want to virtualize your server, the preferred solution in Ubuntu is KVM, a fast full virtualizer that turns the running kernel into a hypervisor. Due to peculiarities of the x86 instruction set, however, full virtualizers can work only with a little help from the processor, and KVM is no exception. To test whether your processor has the right support, try:

$ egrep '(vmx|svm)' /proc/cpuinfo

If that command prints any output (Intel processors expose the vmx flag, AMD processors svm), you’re golden. Head on over to https://help.ubuntu.com/community/KVM for instructions on installing and configuring KVM and its guest operating systems.
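If you script your server setup, the same check can be wrapped in a small shell function. This is a minimal sketch; the optional file argument (handy for testing against a saved copy of /proc/cpuinfo) is our own convention, not a standard tool:

```shell
#!/bin/sh
# Succeed if the CPU advertises hardware virtualization support:
# Intel exposes the "vmx" flag, AMD exposes "svm".
has_vt_support() {
    # Default to the live /proc/cpuinfo, but accept a file argument
    # so the check can be run against a saved sample.
    egrep -q '(vmx|svm)' "${1:-/proc/cpuinfo}"
}

if has_vt_support; then
    echo "CPU supports hardware virtualization; KVM will work"
else
    echo "No vmx/svm flag found; KVM needs one of them"
fi
```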

Disk Replication

We’ve discussed the role of RAID in protecting data integrity in the case of disk failures, but we didn’t answer the follow-up question: What happens when a whole machine fails? The answer depends entirely on your use case, and giving a general prescription doesn’t make sense. If you’re Google, for instance, you have automated cluster management tools that notice a machine going down and don’t distribute work to it until a technician has been dispatched to fix the machine. But that’s because Google’s infrastructure makes sure that (except in pathological cases) no machine holds data that isn’t replicated elsewhere, so the failure of any one machine is ultimately irrelevant.

If you don’t have Google’s untold thousands of servers on a deeply redundant infrastructure, you may consider a simpler approach: Replicate an entire hard drive to another computer, propagating changes in real time, just like RAID1 but over the network.

This functionality is provided by DRBD, the Distributed Replicated Block Device, and it isn’t limited to hard drives: It can replicate any block device you like. Ubuntu 9.04 and newer ship with DRBD, and the user space utilities you need are in the drbd8-utils package. For the full documentation, see the DRBD Web site at www.drbd.org.
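To give you a flavor of the setup, here is a minimal sketch of a two-node resource in DRBD 8 configuration syntax, as it might appear in /etc/drbd.conf. The hostnames (alpha, beta), IP addresses, and backing partition /dev/sdb1 are illustrative assumptions; consult the DRBD documentation for the full set of options:

```
# One replicated resource, mirrored between two hosts.
resource r0 {
  protocol C;              # synchronous replication: a write completes only
                           # after it reaches both nodes, like local RAID1

  on alpha {               # must match `uname -n` on the first host (assumed name)
    device    /dev/drbd0;  # the replicated device applications actually use
    disk      /dev/sdb1;   # local backing storage (assumed partition)
    address   10.0.0.1:7788;
    meta-disk internal;    # keep DRBD metadata on the backing disk itself
  }

  on beta {                # the second host (assumed name)
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}
```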

Cloud Computing

Cloud computing builds on the most interesting aspect of virtualization: you can easily create, pause, and tear down multiple virtual computers, all running on one real bare-metal machine. To the user, those virtual computers behave just like real computers.

With cloud computing, computing power becomes a commodity. You don’t need to plan, budget for, and install new computing power months in advance. Instead, ask the cloud for a machine and it’s there within seconds. When you’re done, throw it away.

It’s like the difference between digging a well and turning on a tap. If you have running water from a tap, water remains a precious resource, but it’s no longer something you need to invest time in to obtain. If you have cloud computing, computing power may remain limited, but it’s much easier to get hold of, share, and repurpose.

If you want cloud computing power, there are a couple of ways to get it: build your own private cloud or buy computing power as and when you need it from one of the big cloud providers, such as Amazon or Rackspace.

Actually, with Ubuntu you can mix and match between using your own private cloud and buying additional resources when you need them.

Ubuntu is at the heart of the cloud computing revolution: it serves both as the base operating system running cloud compute clusters (the machines that provide the computing power to the cloud) and as the OS that people choose to run inside the virtual machines they launch in the cloud.

Because it’s so new, cloud computing is a fast-moving field, but two dominant approaches to running a cloud have already emerged: Amazon’s EC2 and the open source OpenStack project. Ubuntu is at the heart of both. That’s great news if you want to experiment, because Ubuntu gives you a range of tools to set up and manage your own cloud.

The two main tools you’ll come across are MAAS, otherwise known as Metal As A Service, and Juju. MAAS helps you set up and manage a cluster of servers, and Juju makes it easy to get services running either directly on bare-metal servers or in the cloud.

Using MAAS, you can treat a group of bare-metal servers like a cloud: instead of a bunch of individual servers, you have a computing resource to which you can deploy different services. Using MAAS’s Web UI, you can quickly get an overview of how your computing cluster is being used and what’s available. Even with just a handful of servers, MAAS is a great way to start treating real machines in a similar way to a cloud.

You can think of Juju as apt-get for services that run on servers. Juju uses scripts, called charms, that do all the setup necessary to get a particular service running. Say you want to run a WordPress server: deploy the WordPress charm, and Juju sets everything up for you.
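As a sketch, bringing up WordPress together with its database might look like this (wordpress and mysql are the names of charms in the store):

```
$ juju bootstrap
$ juju deploy wordpress
$ juju deploy mysql
$ juju add-relation wordpress mysql
$ juju expose wordpress
```

The add-relation command wires the blog to its database, and expose opens the service to outside traffic.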

By combining MAAS and Juju, you can relatively easily set up your own private cloud. MAAS helps you manage the real machines, and Juju helps you set up the services you need to run that cloud.

Let’s say you want to deploy a specific service to the cloud using Juju; we’ll use Apache ZooKeeper in our example. Deploying a service normally begins like this:

$ juju bootstrap
$ juju deploy zookeeper
$ juju expose zookeeper

At this point, Juju creates a virtual machine instance, installs ZooKeeper, and configures it. Use the following to check the status of the instance:

$ juju status

Once the status output shows that your instance is up and running, it will also include the instance’s IP address, and you can start using it. The whole process is amazingly quick and easy!

One useful tool that is brand new for 12.04 is the Juju charm store. When Juju started, it was much more difficult to find and deploy services using charms, but with the charm store as a central repository, the process has become much more elegant. Just as APT and related software repositories have made finding and installing software easy, the Juju store is designed to make launching and configuring servers in the cloud easy. There is an interesting difference between how APT and Juju work.

In an APT repository, the software is generally frozen: it will not receive updates within a release cycle except for security fixes and minor bug fixes, and even those often live in a separate repository (e.g., precise-updates or precise-security). With Juju, charms can be written by anyone and can be updated in the charm store at any point within a release cycle; the store isn’t frozen the way APT repositories are. This matters especially for those who (wisely) run an LTS release of Ubuntu on their servers and who would otherwise have to wait years for a new version of desirable software, upgrade to the next LTS, or hunt down a PPA that carries it.

So, let’s say you are working on a development version of Zookeeper and want to quickly create a cloud server instance with the development version deployed to it instead of the stable version. You could simply use the following series of commands instead of the previous set:

$ juju bootstrap
$ juju deploy zookeeper
$ juju set zookeeper source=dev
$ juju expose zookeeper

By adding only one line, you are deploying the development version! The ability to easily set the source from which to deploy is useful.

Charm authors can write these details into their charms: specifics like versions, sources, and so on are all in there. If you want to install Node.js, there is no need to search for someone’s install script in a blog post or a pastebin snippet; you can use that person’s Juju charm. Simple. The charm includes information such as which PPA to use and the necessary install script.

Allowing anyone to give you a charm to install software could be dangerous, so some safety and security mechanisms are built in. This is the second reason for the store (after convenience for end users). Charms included in the Juju store go through a community peer review process and an automated build process to test for failures. Anyone can write charms for their own use or to share directly with people, but it takes a little more effort and scrutiny before a charm is included in the store.

There is much more to be said about Juju, MAAS, and charms than is within the scope of this book. If you are interested, see juju.ubuntu.com/Charms or The Official Ubuntu Server Book, Third Edition, also from Prentice Hall.
