The Open Source Desktop Myth
Date: Dec 22, 2006
Article is provided courtesy of Prentice Hall Professional.
According to an old saying, the military always fights the last war. In World War I, both sides relied on tactics that had been made obsolete by the invention of the machine gun. In World War II, the French didn’t factor in the development of the bomber. In Vietnam, the American military tried to apply blitzkrieg-style tactics to a guerrilla war.
The Free Software community has a similar problem. Recent years have all begun with people asking, "Is this the year of the open source desktop?" (Or sometimes the Linux desktop if the journalist in question doesn’t actually know what Linux is.) The correct answer to this question is "who cares?"
Microsoft Won. Get Over It.
The war for the desktop PC is over. It was over in the ’90s. Microsoft won. Apple made a better desktop OS, and Microsoft gained the bigger market share. Currently, you can build a Free Software desktop PC that is better than Microsoft’s offering and has a smaller market share than Apple’s.
Producing a desktop PC that’s better than Apple’s is a difficult task for the Free Software community, but far from an impossible one. Producing one with a greater market share than Microsoft’s is much harder and probably not worth the effort.
The good news is that it doesn’t matter.
What Next?
Microsoft got their monopoly with DOS, and later with Windows, by anticipating the market better than their competitors. They kept it by providing vendor lock-in.
The biggest threat to Microsoft in the ’90s was when Netscape announced that they were turning their browser into a platform for running client-server applications. Soon after the announcement, Microsoft did everything in their power to kill Netscape.
Sun posed a similar threat with Java, although only more recently have fast CPUs and improvements to the Java runtime made it possible to run Java applications at a reasonable speed. Again, Microsoft tried to kill the technology.
So where are we headed? The desktop PC is dying. We have already hit the peak of the desktop PC era. Companies such as Apple are selling more laptops than desktop PCs already, and the rest of the industry is not far behind.
Laptops, however, aren’t that different from desktop PCs in terms of software. Tablets might require better handwriting recognition and a user interface designed for single-button pointing devices (ever tried right-clicking with a pen?), but their software requirements are quite similar.
Laptops are being heavily outsold by even smaller machines, however. A lot of people who don’t even own a desktop PC or laptop are buying mobile phones and are upgrading them more often.
My current mobile, which is almost a year old now, has a 220MHz ARM9 CPU, 32MB of RAM, and 1GB of Flash. A decade ago, my main machine was a 133MHz Pentium with 32MB of RAM and a 1GB hard disk. That was fast enough to run Windows NT 4.0 and a suite of applications.
Within a few years, a pocket device will have enough processing power and storage space for the average user’s computing needs. It will have enough bandwidth to delegate storage of very large files and complex computations to remote devices.
The main differences between desktop and mobile applications are these:
- The size of the UI
- The mobility of data
The size of the UI is somewhat misleading, because it is easy to add things such as Bluetooth keyboards and external displays. The defining feature is therefore not a small UI but a variable-sized one. A desktop application can assume a relatively constant display size; a mobile application has to be usable on anything from a one-inch screen to a wall-sized display.
The other feature is perhaps more important. I want to be able to get at my data anywhere, but I don’t want a passing thief to be able to walk off with it. For backup purposes, I want my data stored somewhere secure, but for convenience I want it stored close to me. These conditions provide some interesting challenges for people designing the next generation of operating systems.
The Virtual Machine
I carry a laptop around with me because it lets me have the same computing environment wherever I am. I don’t have to worry about whether the software I want is installed or whether someone else has set up unusual shortcuts.
When I get to work, however, I plug in an external keyboard, mouse, and monitor. Effectively, I’m just bringing a hard drive, CPU, and RAM with me. If the files were stored on a fileserver, I wouldn’t need the disk, and a CPU and RAM are easy to come by.
What is it that I’m really carrying around? The answer is state. I am carrying around an operating environment’s state wrapped up in a lump of metal and plastic. What do I really need for that?
Sun has one potential solution in the form of their Sun Ray systems. The only thing you carry around with you is a smartcard, which identifies you. When you insert it into a card reader, it connects to a server that sends your desktop to the machine.
IBM has a potentially more interesting suggestion: a Xen virtual machine on a USB flash drive, containing an entire OS install and applications. At login, it mounts a remote fileserver, which it unmounts whenever the VM is suspended. The drive can be plugged into any machine running Xen to bring up the user’s desktop immediately, even without a network connection.
With live migration of VMs, it would be possible to keep an environment running on a mobile phone and then migrate it to a desk PC at work, an entertainment center at home, and so on.
What Is the Free Software Advantage?
With software migrating all over the place, it becomes very difficult to keep track of licenses. The Vista license ties it to a particular physical computer, and for some editions even prohibits running it in a virtual machine.
In contrast, a Free operating system can be installed in a VM, migrated around a network or to pocket machines, cloned for testing, and so on, without any licensing issues. Cloning is a big point. If you are collaborating with someone on a project, you can make their life easy by cloning the virtual machine you are working in and giving them a copy, with the same set of tools, the same layout, and everything.
I expect the ubiquitous computing (UbiComp, to any newspeak speakers) world will see the rise of the virtual appliance: an entirely self-contained application that runs in its own virtual machine. Appliances will be migrated to where they are needed and cloned when more are required.
Interestingly, this is close to the conclusion of Alan Kay’s vision when he coined the term object-oriented. Anyone who has grown up with cargo-cult object-oriented languages, such as C++ and Java, may be laboring under the delusion that OO is about classes and methods and inheritance. In fact, the idea was very simple: computers are complicated, and we can make them appear simpler by splitting them into simple (virtual) computers that communicate by message-passing.
In Alan Kay’s object oriented world, everything is composed of simplified computers. Even the messages that the simple computers exchange are simple computers themselves. This idea, again, goes right back to the foundations of computer science, when Alan Turing talked about Turing Machines that operated on other Turing Machines.
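Kay’s idea can be sketched in a few lines. The following Python sketch (the `Actor` and `Counter` names are illustrative, not from any particular library) models each "simple computer" as an object with private state and a mailbox; the only way to interact with it is to send a message, and even replies travel as messages through another mailbox.

```python
import queue
import threading

class Actor:
    """A 'simple computer' in Kay's sense: private state, reachable
    only through messages placed in its mailbox."""
    def __init__(self):
        self.mailbox = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, message):
        self.mailbox.put(message)

    def _run(self):
        # Process messages one at a time, in arrival order.
        while True:
            message = self.mailbox.get()
            if message is None:          # sentinel: shut the actor down
                break
            self.receive(message)

class Counter(Actor):
    """Keeps a private count; no caller can touch it directly."""
    def __init__(self):
        self.count = 0
        super().__init__()

    def receive(self, message):
        kind, reply_to = message
        if kind == "increment":
            self.count += 1
        elif kind == "read":
            reply_to.put(self.count)     # replies are messages too

counter = Counter()
counter.send(("increment", None))
counter.send(("increment", None))
reply = queue.Queue()
counter.send(("read", reply))
print(reply.get())                       # prints 2
counter.send(None)
```

Nothing outside the actor can read or corrupt `count` except by sending a message, which is exactly the simplification Kay was after.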
We are currently seeing the first steps toward a fully object-oriented system with things such as Xen. The simplified computers that Xen supports are not very simplified. They still have a lot of the cruft of the x86 instruction set, but they do begin to provide some simplification, including an informal message-passing mechanism: Xen itself implements only shared memory directly, but Xen drivers typically build a ring buffer on top of it for passing messages.
The Web
The Web keeps trying to be the platform of the future. Netscape tried it. Sun tried it. Microsoft sort of tried it in their own typical half-hearted way.
The latest buzzword to grip the Web is AJAX, a buzzword so significant that it justifies an entire new major version of the Web. Of course, those of us whose mothers told us to avoid .0 releases of anything like the plague are still somewhat wary of Web 2.0; we are waiting for at least Web 2.1 or possibly Web 2.2b before we join in the bandwagon jumping.
The basic idea behind Web 2.0 is that you can write your GUI in a combination of HTML and JavaScript, write your back-end code in whatever language you like, and have the two communicate over the network. Anyone paying attention in the ’80s will find this very familiar; Display PostScript and NeWS both did the same thing, although they used PostScript instead of HTML and JavaScript. Oh, and they both failed; it turned out that no one liked having to write views in a different language from the models and controllers.
The biggest problem with AJAX (apart from trying to write views that look the same in different browsers, and the speed penalty of doing so much in JavaScript) is that HTTP is really not designed for it. HTTP is a stateless protocol, whereas applications want a stateful connection between their front and back halves.
If only there were a stateful protocol we could use for sending XML. An XML protocol for messaging. If we’re using it for messaging, perhaps we could also use it for sending presence information. Let’s call it the Extensible Messaging and Presence Protocol, or XMPP for short.
Once again, the magic protocol pixies have delivered, and we find that XMPP not only exists but is also an IETF-ratified standard. Rather than shoe-horning XML queries and responses into HTTP, it would be far more sensible to keep an XMPP connection open between the server and the browser and use XMPP info-query stanzas for this messaging.
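As a concrete sketch of what such messaging might look like, here is a minimal Python snippet that builds an info-query ("iq") stanza using only the standard library. The `example:app:query` namespace and the `make_iq_query` helper are hypothetical; a real deployment would use a registered XMPP extension namespace.

```python
import xml.etree.ElementTree as ET

# Hypothetical application namespace; real XMPP extensions register
# their own (e.g. jabber:iq:roster for roster queries).
APP_NS = "example:app:query"

def make_iq_query(stanza_id, to, payload_tag):
    """Build an XMPP iq stanza of type 'get'. An iq is a
    request/response pair: the server answers with a matching stanza
    of type 'result' (or 'error') carrying the same id, over the
    long-lived connection that gives the exchange its statefulness."""
    iq = ET.Element("iq", {"type": "get", "id": stanza_id, "to": to})
    ET.SubElement(iq, payload_tag, {"xmlns": APP_NS})
    return ET.tostring(iq, encoding="unicode")

# Emits an <iq type="get" ...> element wrapping the query payload.
print(make_iq_query("q1", "server.example.com", "query"))
```

A browser holding an open XMPP stream could send such a stanza and correlate the reply by its `id`, instead of opening a fresh HTTP request for every query.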
Will this happen? Maybe. At the very least, AJAX is introducing a new generation of developers to programming asynchronous message-passing systems—a skill that will be essential in the UbiComp world.
What Would MS Do?
Microsoft is trying desperately hard to find a business model that fits with the post-desktop PC era. Their entire strategy depends on the idea that software is something that you buy and install on one machine.
Microsoft’s forays into the mobile arena so far have been uninspiring. They have pushed Windows CE, which repeats the UI mistakes of desktop Windows (and adds a few new ones), onto smaller devices. The additional license restrictions in Vista make it clear that they still haven’t woken up to the idea that a static desktop operating environment has only a very limited shelf life.
If the Free Software community continues to fight the last war and build a desktop environment, Microsoft might wake up in time and produce something for the post-desktop PC world first. (Singularity looks promising in this regard, if Microsoft can find some workable licensing terms.) Microsoft has adapted to new environments before: they started by selling a BASIC interpreter, then moved on to operating systems, GUIs, and even office suites.
Currently, however, the Free Software community has a huge advantage. The open licensing model is a lot more attractive in the post-desktop PC era than increasingly restrictive proprietary EULAs, provided the software is of the required quality.