For years, there has been a sentiment throughout the industry that the desktop is where products should be targeted and that the home system is the center of personal computing. There is some truth in this: Great advances have been made in getting technology into the home sector, but it is foolish to think that it should stop with a single (or even multiple) personal machines in the house. In coming years, more services will be automated and centrally controlled via independent yet cooperating systems, all operating within the context of a larger system, whether that is the house, a corporate office, or a remote facility.
It is of utmost importance that the groundwork of these systems is built upon open systems. Step back for a minute and consider the alternative: Your house might one day be controlled through any number of proprietary systems that are incompatible with each other, with the only exceptions being those devices from manufacturers that have formed alliances long enough to make their systems cooperate. Left to these designs, we will all be surrounded by systems out of our control, with our only recourse being to keep paying for service from the corporations that have brought us to this point.
Failure to keep up with the coming onslaught of subscription services will leave many out in the cold, locked out of the flow of information that is quickly becoming a staple of our society. The so-called digital divide, which has been slowly narrowing up to this point, will snap back into place. At the same time, the free market for digital devices that work within this twisted web of proprietary systems will disappear. As with any other system, standards are useful only if they are developed and used in an open fashion. This idea has been presented repeatedly through the years, and it will only become more important as we move forward.
But enough about this horrible dystopian future. I'm not here to discuss it or to wait for it to happen. Let's look at what can be done to avoid it, while at the same time building a solid infrastructure for the coming intelligent devices. As you might have guessed already, the primary focus of this article is the use of Linux for embedded and real-time systems. When it comes to open development and standards-based computing, Linux (and, of course, the Open Source/Free Software movements behind it) is unparalleled. In the server room, Linux has shown itself to be standards-compliant and flexible, and while this has been important so far, it will be even more so in the field of embedded systems.
When you're building an embedded system of any kind, you are most likely focused on making sure that the system is flexible, low in cost, and capable of interacting with the rest of the world easily. Linux is well known for its flexibility and its ability to scale from the smallest to the largest of machines with relative ease. As an OS, it is very modular, both in the kernel and in the rest of the needed system components, allowing you to mix and match pieces as needed. In terms of cost, most embedded systems have a very small margin for cost and software overhead; in many situations, adding the cost of a full OS license along with the computational overhead of a full OS causes the overall cost of the system to skyrocket. Getting an operating system at little or no cost per unit can determine the feasibility of the project.
The final point is the most important in these days of interoperability: Nearly everything must be capable of effectively dealing with existing protocols to exchange data with other machines in the environment. A completely standalone machine is relatively unheard of. By using Linux on your embedded system, you get all the features of a UNIX-like environment in a very small amount of space. All (or most) of the capabilities that you had in the server room are present on the system, meaning that you don't have to reinvent the wheel: the same interfaces, programming environment, and tools are available to you as before.
As these embedded systems become more popular, Linux programming environments are appearing on more architectures. Toolkits, along with support, are commonly available. Even though you might not realize it, some of the appliances that you use on a daily basis already are running some form of Linux. Among the better-known examples are TiVo and some of the handheld organizers. Commonplace items such as home theater equipment are becoming more sophisticated, with some components now containing embedded operating systems (such as Linux) to help coordinate configuration and management. As time moves on, this will only become more commonplace.
Let's take a minute to look at the different kinds of offerings that are presently available. There are a variety of toolkits and environments for embedded systems, and there is likely to be one that fits your needs nicely. But there are other considerations: whether your system's needs are hard real-time or soft real-time, or whether they can fit within the context of a normal operating environment. Most embedded systems run normal kernels, possibly with some patches for support on their architecture. These can use a fairly normal distribution in many cases, although it is usually stripped down to fit within the hardware limits.
Real-time systems, however, come in several flavors. Soft real-time refers to systems that can tolerate some latency: an occasional missed deadline, such as a processing delay while interacting with a disk, degrades quality but is acceptable. Hard real-time systems are those in which a missed deadline means losing important data or, worse, suffering complete failure.
There are a few different ways to handle these problems. Some offerings take the approach of patching the kernel to reduce latencies or to allow pre-emption within the kernel, so that high-priority tasks can interrupt kernel work and be serviced in a deterministic fashion. Others, such as RTAI and RTLinux, take a dual-kernel approach, running a second kernel to handle real-time tasks. In this model, Linux runs as a thread of the real-time kernel, and real-time tasks run as simple kernel-level modules scheduled by the lightweight real-time component. Non-real-time components run as normal user processes within the Linux system, with several means of communication between the real-time kernel modules and the userspace processes. While hard real-time has traditionally been the expensive domain of robotics and industrial control systems, the barrier to entry for developing this kind of system has essentially been removed. In some situations, hard real-time is so easily attainable that it doesn't make sense to avoid it.
Soft real-time is easier to deal with, but if you are developing something like a video-processing system, it is probably worth moving to hard real-time, if possible. The system might be capable of tolerating light latencies that cause the occasional loss of frame data, but if that can be easily avoided, why not use a hard real-time solution?
This sums up a relatively light introduction to embedded systems and their importance. In the next article, I will look more in depth at the technologies involved in Linux-based real-time systems and how they might be used in various situations.