This section describes how some common p2p-capable implementations, especially those built into operating systems, have applied one or another of the general architectural models. We examine how closely each implementation adheres to a given model, and note in particular how each has tried to solve issues of user convenience, reliability, and scale.
These examples are intended to illustrate concepts, not to provide exhaustive analysis; they mainly serve as a general backdrop to the detailed analysis of more recent technologies in Part II. Although capable of creating legitimate and often perfectly adequate p2p solutions in the LAN context, they aren't in themselves necessarily practical solutions for deploying peer networks today. Modern p2p solutions are based on open source applications that create virtual networks and can run on any network-aware system, which gives numerous advantages over proprietary solutions that depend on particular operating systems.
Before looking at the p2p solutions that can build on any operating system, the following section takes up native networking abilities in the main operating systems encountered by individual users today. I discuss "OS-bundled" networking in this limited way, because it falls somewhat outside the intended scope of this book.
By "native networking" we understand the inherent ability to connect to a network (of peers) using only those components already present in the respective operating system, possibly with further installation of some non-default ones.
Such networking ability enables "instant" peer networking on the machine level, at least for messaging and resource sharing across the network. In today's systems, this native capability almost always means Internet connectivity, in addition to transport support for local networks. Native networking is distinguished from the application-level p2p networking that is the main focus of this book, and it is often neglected in discussions of p2p technologies.
Application-level solutions communicate on top of the established machine-level networking, but can be independent of the latter's addressing and peer or non-peer ability, and are therefore seen as complementary enhancements.
Unix to Unix
As mentioned in Chapter 1, the first computer networking was between mainframes. It quickly evolved to communication between Unix machines, which early on had a basic peer protocol called UUCP (Unix-to-Unix Copy Protocol).
Because UUCP is a standard copy process between all Unix machines that can be applied to any content, it was also used to transport messages. Many newsgroup servers still rely on UUCP to transport messages to and from other systems. Although ancient and not especially efficient, its main merit is that it's always available, whatever the Unix system. Early chat, e-mail, and the newsgroups (on Usenet) were built on top of this protocol. Unix defined the interoperable standards for e-mail support, mailbox format, and applications, and these standards were inherited by Linux.
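The store-and-forward idea behind UUCP can be sketched in a few lines of Python (a hypothetical illustration, not actual UUCP code): jobs bound for a remote machine queue up in a local spool and are transferred only when a connection window opens, which is why the protocol suited intermittent dial-up links between Unix systems.

```python
class Spool:
    """Toy store-and-forward spool in the spirit of UUCP. Jobs (arbitrary
    payloads bound for a remote host) queue up locally and are only
    transferred when a connection window opens. Real UUCP spools job
    files on disk (e.g. under /var/spool/uucp); this only models the idea."""

    def __init__(self):
        self.jobs = []          # queued (host, payload) pairs

    def queue(self, host, payload):
        # News articles, mail, or any file content can be queued alike.
        self.jobs.append((host, payload))

    def flush(self, host, transfer):
        """Send all jobs for `host` through `transfer`, a callable that
        performs the actual copy; return the number delivered."""
        remaining, sent = [], 0
        for h, payload in self.jobs:
            if h == host:
                transfer(payload)
                sent += 1
            else:
                remaining.append((h, payload))
        self.jobs = remaining
        return sent
```

In this model, a nightly dial-up to a neighboring system simply calls `flush` for that host, while jobs for unreachable hosts remain queued.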
Reference to Unix characteristics common to a variety of implementations of either Unix or derivatives like Linux is commonly denoted by writing "*nix".
Peer Networking in MS Windows
One of the better design decisions in 32-bit MS Windows was the integration of generic networking components. Once the hardware, protocol, and workgroup configurations are properly set up, basic peer networking is essentially plug-and-play, whatever the mix of Windows platforms: 95, 98, ME, NT, 2000, or XP.
Network components, however, are not installed by default in Windows (prior to XP) but are added and configured whenever hardware or software that requires them is added to the system. Installing, for example, a network interface card or a modem not only requests the appropriate device driver, but also sets up the corresponding client and protocol layers to handle network abstraction. Figure 2.5 illustrates the model used, with reference to the OSI protocol layers.
FIGURE 2.5 Microsoft Networking models the network as three layers and two bindings, here compared to the OSI model. The user must configure one or more appropriate binding paths (Client-Protocol-NIC) for each application and network used.
The supported network protocols include not only NetBEUI, Microsoft's enhanced version of the network BIOS protocol used for Microsoft Networking, but also those for Novell NetWare (IPX/SPX) and Internet-compatible networking (TCP/IP). Others can be added. Network support is largely transparent to the user.
The default configuration prior to Windows XP relies on the proprietary NetBEUI protocol, although it is relatively painless to reconfigure it to TCP/IP from the Network Properties dialogs; XP defaults to TCP/IP. The advantage of NetBEUI in the small local network is that it doesn't require any setup apart from uniquely naming each machine and assigning it to a common workgroup. Once connected, machines will "see" each other using the integrated network browser, which makes access to remote files and resources transparent and, to the user, as easy as accessing local ones.
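The workgroup behavior can be modeled as follows. This is a toy Python sketch of the visible behavior only, since real NetBEUI browsing relies on broadcast frames and an elected browse master, and all machine and workgroup names here are invented:

```python
class Workgroup:
    """Toy model of Windows workgroup browsing: each machine announces a
    unique name plus its shared resources, and any member can then
    'browse' the resulting list, much as Network Neighborhood displays it."""

    def __init__(self, name):
        self.name = name
        self.members = {}       # machine name -> list of shared resources

    def announce(self, machine, shares=()):
        # Uniquely naming each machine is the one piece of required setup.
        if machine in self.members:
            raise ValueError(f"name {machine!r} already taken in {self.name}")
        self.members[machine] = list(shares)

    def browse(self):
        # Every member and its declared shares, in sorted display order.
        return {m: list(s) for m, s in sorted(self.members.items())}
```

The model also shows why administration gets harder with scale: every name must stay unique, and every share list is maintained per machine with no central authority.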
As each machine's resources (such as hard disk partitions or printers) are locally declared shared, they can be accessed by other machines in the workgroup. There is no built-in central administration. Resources and access are controlled locally for each machine by the respective user (in Windows NT, by the local Administrator account). In the corporate or home LAN case, all the machines tend in practice to be administered from external notes and lists maintained by one person.
It's straightforward to use machines running some flavor of 32-bit Windows as p2p nodes in a NetBEUI-type LAN with up to perhaps 10 or 15 PC workstations. Beyond that scale, the complexities of consistently administering names, shares, and permissions easily get unmanageable.
Microsoft later implemented a scalable server-centric model for Windows Networking based on NT Server, where resource and access control is handled by a designated Primary Domain Controller (PDC) in the LAN. Users must then first log in to the PDC using their local client before gaining access to the network. Using Microsoft domains adds considerable complexity, but also the kind of power and centralized control that larger corporate networks usually need. This kind of domain-centric network scales tolerably well to thousands of nodes.
The default NetBEUI protocol is furthermore constrained to a small physical LAN, because it only handles network data frames with explicit hardware addressing. The price NetBEUI pays for its simplicity is the inability to cross network boundaries. To route across virtual networks, you need support for software addressing, such as in TCP/IP's packet addressing. Hence, for maximum flexibility, Windows systems should be configured for TCP/IP. Given the continued Internet focus of Microsoft, TCP/IP might become the only networking option in future versions of Windows.
With TCP/IP as the protocol, it's possible and often desirable to install support for the Point-to-Point Tunneling Protocol (PPTP), which provides an encryption-protected virtual private network (VPN) connection between NT servers, or between a Windows client and a server. This protocol is primarily intended to provide secure access to corporate networks for external, dial-up users. However, it could also be used to construct a virtual, distributed, and private p2p LAN of up to 256 connections per node.
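The core mechanism of any such tunnel is encapsulation: each LAN frame is wrapped, protected, and carried as opaque payload across the public network. The Python sketch below illustrates only that framing idea. The XOR "cipher" is a stand-in for PPTP's real encryption and is not secure, and the key handling is invented for illustration:

```python
import itertools
import struct

KEY = b"shared-secret"   # placeholder; a real tunnel negotiates its keys

def _xor(data, key):
    # Toy obfuscation standing in for the tunnel's real encryption;
    # XOR with a repeating key is NOT secure and is used here only
    # to show that the carried frame is opaque in transit.
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

def encapsulate(frame):
    """Wrap a LAN frame for transport through the tunnel:
    a length prefix followed by the protected payload."""
    body = _xor(frame, KEY)
    return struct.pack("!I", len(body)) + body

def decapsulate(packet):
    """Reverse the wrapping at the far end of the tunnel."""
    (length,) = struct.unpack("!I", packet[:4])
    body = packet[4:4 + length]
    return _xor(body, KEY)
```

Because the original frame emerges intact at the far end, the machines at both ends behave as if they shared one private LAN, which is exactly what makes a tunneled virtual p2p LAN possible.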
Home LANs have become more common in recent years: several machines in a p2p network sharing an Internet connection, possibly through a cable router. A better understanding of p2p principles, even in Windows, can greatly enhance the utility of such home clusters by allowing different approaches to how data and resources are deployed, and perhaps shared from outside the home as well.
For the purposes of this book, it's assumed that most readers have Internet connectivity with either some flavor of Windows or Linux, or a Mac, on which system they intend to install application-level p2p solutions.
Peer Networking in Apple Macintosh
Apple's Macintosh included peer networking capability in its operating system early on. Native support for ubiquitous 10/100 Base-T Ethernet makes physical network connection easy.
The Mac supports either proprietary AppleTalk or open TCP/IP protocols, and can natively build peer networks. Stringing together some Macs with AppleTalk is the easiest route, but easy comes at the price of the power and sophistication that more complex protocols give. AppleTalk is in addition known as a "chatty" broadcast protocol that doesn't scale very well to larger networks.
The early dominance of Mac systems in corporate and educational environments has waned over the years, although they are still fairly common in the latter. The proprietary architecture, solutions, and protocols have always been an impediment to broad interoperability with other platforms, networked or not.
A number of solutions exist to interconnect Macs and PCs to the respective proprietary protocol networks, but TCP/IP is usually the protocol of choice. As with Windows, Mac p2p applications install on top of the current transport. The newer OS X is a Unix derivative, so it supports many *nix tools and applications, with broader support for p2p than older Macs.
OS/2 Peer Networks
IBM's OS/2 is no longer a current operating system for the average user, although not so many years ago, OS/2 Warp was billed as the next dominant desktop OS. It might have taken a significant share of the market too, if IBM hadn't so abruptly dropped support for it and instead begun to bundle Windows.
OS/2 is worth mentioning because it lives on in some corporate networks and among a core group of enthusiasts. Even today, some still maintain its suitability for office use, with words much like this:
If you want a consistent, friendly interface that has the power to run the office, run your old DOS/Windows programs, and connect to the outside world (all simultaneously), then OS/2 Warp Connect is worth a look.
The last OS/2 Warp versions, Connect and v4, were true 32-bit, multitasking and network-aware operating systems roughly comparable to Windows NT or Linux. Warp consumes fewer resources than NT, more like Windows 95, and can run most Windows programs intended for the early 32-bit extensions. For our purposes, the question is how well OS/2 supports peer networking.
Network software setup for OS/2 Warp is similar to the Mac in its ease of use. Finding a working NIC driver can sometimes be problematic, given the lack of vendor OS/2 support for newer hardware, but the rest is straightforward. Other platforms may match Warp in any one area, yet IBM covered its bases much better overall.
TABLE 2.2 Main OS/2 Peer Networking components
Sharing and Connecting
A program that enables you to connect to the resources of other users, and to declare which of your resources are available to other users, and to what degree.

Network Messaging
OS/2 Peer's internal e-mail system.

Peer Workstation Logon / Logoff
Logon and logoff service for network access.
IBM's proprietary networking protocol, OS/2 Peer, is limited to sharing resources among machines running Warp Connect. However, TCP/IP is also supported, albeit an older version that's less easy to configure. Protocols are session specific, so you can log on to OS/2 Peer, IBM LAN Server, Novell NetWare, Microsoft LAN Manager, Windows for Workgroups, and TCP/IP networks simultaneously.
The OS/2 desktop has three network-related folders: OS/2 Peer, Network, and UPM Services (the latter for user and password maintenance). The OS/2 Peer folder contains all of the good stuff, the most important being Sharing and Connecting, Network Messaging, Clipboard and DDE, Information, and Peer Workstation Logon/Logoff, as explained in Table 2.2.
Peer Networking in Linux
A Linux installation is inherently a full-featured server in the Internet networking model: it natively supports TCP/IP and also includes all the associated client software. Linux is based on Unix, which is, to all intents and purposes, the native environment of the Internet.
Linux exists in a variety of branches and distributions, all similar and generally interoperable, but with different configurations and purposes. Full-scale Linux installations are admittedly not easy to master, but they do have all the power and options for networking you could possibly want. A network of machines running Linux can therefore easily function as both client-server and p2p node using the full array of software developed for the Internet.
This kind of system, partly due to the more "experimentally involved" attitude of the typical Linux user, readily participates in many p2p contexts, locally and over the Internet. One common context is to return e-mail to the p2p model: Linux machines can, and often do, each run the server software for sending and receiving e-mail, for messaging, and for sharing files or other content in a variety of protocols. In that way, Linux users easily turn their machines into p2p endpoints for a broad range of services. Similar functionality is available in Windows and other systems by adding comparable third-party software, but Linux support is native.
QNX
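For illustration, here is roughly what that native endpoint role looks like from Python on such a machine: compose a standards-conformant message, then (in a real run) hand it directly to the recipient host's mail server rather than relaying through a provider. The addresses and host names are hypothetical, and the actual network delivery is left commented out:

```python
from email.message import EmailMessage

# Build the message exactly as a local mail transfer agent would accept it.
msg = EmailMessage()
msg["From"] = "alice@peer-one.example"      # illustrative addresses only
msg["To"] = "bob@peer-two.example"
msg["Subject"] = "Peer-to-peer mail"
msg.set_content("Delivered host to host, with no intermediary mailbox provider.")

# In a real run, the sending machine would look up the mail exchanger for
# peer-two.example and hand the message straight to that host:
#
#   import smtplib
#   with smtplib.SMTP("peer-two.example") as smtp:
#       smtp.send_message(msg)
```

Both endpoints here act as servers toward each other, which is the p2p e-mail model the text describes: each machine can originate, receive, and store mail on its own authority.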
QNX is a mature, distributed, *nix-like operating system, generally found in, but by no means restricted to, embedded real-time systems. For several decades, QNX development has roughly paralleled that of Linux, and on a PC it can generally emulate or run much Linux software.
QNX has native networking at several levels, including support for distributed processes and modules, and it of course supports the ubiquitous TCP/IP. However, it's rare to find QNX on a desktop PC outside of special developer contexts.
Other Application Groups
This chapter ends with a brief tour of application-level solutions that might or might not be strictly p2p, but are related in concept at some level.
As a kind of catch-all, the term "peer servers" can be used to designate various forms of Internet or LAN servers that maintain p2p connectivity with each other, while serving a host of clients in a traditional client-server role.
Most of the discussions about p2p networking are equally applicable to the server-to-server p2p role, even though little is said about this role in this book. This is in part because the main focus is on the end-user perspective, and not so much on software such as traditional servers that are not administered by the user.
Nevertheless, it's also true that the node application in p2p technologies is quite clearly a "peer server" because all the nodes participate in this role. The server role becomes more explicit in cases like Mojo Nation and Freenet (see Chapters 8 and 9, respectively), where the node software does have a clear client-server role towards separate client software (at the user endpoint) running on the same machine.
Internet Relay Chat gets a brief mention here only because its relay servers function p2p with each other. The IRC client-to-client chat transactions almost exclusively go through the servers, so these relationships are not p2p. Nevertheless, IRC one-to-one chat and many-to-many (or chatroom) discussions can be a method to discover potential peers for other, direct p2p connectivity. IRC support is therefore a common extra component in many atomistic p2p technologies. A multi-transport chat client such as Jabber (see Chapter 6) is especially useful in such contexts because it supports several other p2p messaging and file transfer protocols in addition to IRC chat and its own open client and services protocol.
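Peer discovery over IRC ultimately reduces to parsing the server's message stream for the nicknames and host addresses of chat partners. A minimal Python sketch of that parsing step, assuming standard PRIVMSG line syntax (the sample line in the usage note is invented):

```python
def parse_privmsg(line):
    """Parse an IRC PRIVMSG line of the form
        :nick!user@host PRIVMSG target :text
    into (nick, host, target, text), or return None for anything else.
    Enough to harvest the host addresses of chat partners as candidate
    peers for direct connections."""
    if not line.startswith(":"):
        return None
    prefix, _, rest = line[1:].partition(" ")
    command, _, params = rest.partition(" ")
    if command != "PRIVMSG":
        return None
    nick, _, userhost = prefix.partition("!")
    host = userhost.partition("@")[2]
    target, _, text = params.partition(" :")
    return nick, host, target, text
```

For example, the (invented) line `:joe!~joe@198.51.100.7 PRIVMSG #p2p :anyone sharing?` yields nick `joe` and host `198.51.100.7`, which a p2p client could then try to contact directly.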
At the time of writing, Microsoft's new and controversial p2p entry, dubbed HailStorm, is barely past prototype status. Although details will surely change over time, HailStorm and its associated services appear clear enough in principle that this implementation deserves mention.
Launched as the first real .NET (pronounced and sometimes written as "dot-NET") initiative in March 2001, it is described as a set of user-centric Web services that will "turn the Web inside out". The concept assumes some aspects of the traditional client-server relationship. HailStorm defines a basic network framework around which third-party developers are invited to write applications that rely on user identification. The approach has been described by Microsoft in this way:
Instead of having an application be your gateway to the data, in HailStorm, the user is the gateway to the data.
Unusually for Microsoft, the framework rests on a set of open standards, XML and Simple Object Access Protocol (SOAP), rather than proprietary protocols. It remains to be seen, however, whether these assimilated open standards will in the future be extended in proprietary ways. HailStorm's security protocol, based on Kerberos, has already been extended by Microsoft, with unclear consequences for its continued openness and interoperability.
While officially described as an open p2p system, closer inspection shows that HailStorm sits in an uneasy balance between the centralized closed-server and the open p2p models. Depending on how the final implementation designs play out, HailStorm could turn out to be the largest client-server architecture ever devised, with rather minimal peer focus overall.
The core concept depends on a user-centric, or strictly speaking an authentication-centric, server model. This has the audacious goal of centrally validating any and all Internet user identities in the world! It would (by way of the Passport service) mediate and authorize valid user access not only to all distributed Web services, but also to locally installed software. The thought is that software registration management will be yet another service sold by Microsoft.
It should be noted that the concept of "personal identity" that HailStorm deals with is not just a simple "who am I", but is at minimum a three-tier structure that uniquely specifies the individual, the application that the individual is running, and the location where that software is running. This information is encrypted into Kerberos application requests sent to a Passport server for authentication checks.
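Schematically, the three-tier identity might be represented as structured data like the following Python sketch. The field names and plain-JSON encoding are illustrative assumptions only; the real request is encrypted into a Kerberos ticket, not sent in the clear:

```python
import json

def identity_triple(user, application, location):
    """Illustrative three-tier identity structure: who the individual is,
    which application they are running, and where that software runs.
    In the actual system this payload would be encrypted into a Kerberos
    request for the authentication server, never sent as plain JSON."""
    return json.dumps(
        {"user": user, "application": application, "location": location},
        sort_keys=True,
    )
```

The point of the triple is that the same person running a different program, or the same program from a different machine, presents a distinct identity for authorization purposes.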
Passport defines identity, security and description models common to all services. As currently deployed, Passport identity is keyed to the user's e-mail address, whether existing or created on the Passport server just for authentication purposes. The official motivation for this massive central control is that all the proposed .NET services, especially commercial and banking, are defined as tied to a unique user identity that has to be administered globally. Microsoft is promoting Passport as a one-stop service for identifying people at online outlets.
An example of large-scale, public use of the Passport service for user authentication is the multi-user game Asheron's Call, a Microsoft Zone gaming site that in December 2001 began using the new identity verification system. Users that log in are shunted to a Passport server to verify their identity before being allowed into the game. It's not clear whether continued participation depends on Passport tracking user presence, but that feature is mentioned in other Passport contexts. Windows Messenger (WM) relies on presence tracking with Passport authentication.
Crucial to the Passport concept, as its name implies, is that the distributed services and software honor the identification protocol. Significant is the list of standard functions, such as myAddress (electronic and geographic address), myProfile (personal information), myBuddies (contacts roster), myInbox (e-mail), myCalendar (agenda), and myWallet (e-cash), to name the first offering.
Needless to say, not everyone is comfortable with the idea of one company (with a less-than-reassuring track record for online reliability and interoperability) totally in control of individual and corporate public identity at this global level. One worry is that all identity credential transactions, and hence by extension most commercial transactions, would require participation of central Passport server(s).
Early criticism of the system can be summed up in the sentence:
HailStorm is the business idea of getting you to give up your identity to Microsoft, who will then rent it back to you for a small monthly fee.
Undeniably, there is at least one obvious ulterior motive for implementation of Passport: central identity validation for the new pay-by-use, personal-rental model for software-as-service that the company is adopting to replace the previous user licensing model of software-as-product. Deploying the infrastructure for a single global identity for each individual makes it much easier to manage registration and payment.
Significantly, the HailStorm infrastructure moves the revenue model from selling or licensing proprietary products on a proprietary platform, to pay-per-use fees culled from anyone running anything on any Internet-connected platform. HailStorm is a very "egalitarian" commercial venture in this respect: it asks both developers and users to pay for access, though the nature and size of these fees are far from worked out. Assume some form of periodic subscription or pay-by-use.
Privacy groups and others have meanwhile complained that the service lacks adequate safeguards for securing sensitive consumer information. Microsoft denied these charges, despite the discovery in October 2001 of a security flaw in the Wallet part of the Passport system that could have exposed confidential user financial data to intruders. Glitches in the December 2001 transition of the Gaming Zone site reawakened public skepticism.
Nevertheless, momentum is growing for Passport as more companies sign on and switch to a Passport-mediated log-in, use the MS bCentral portal service, or build new applications that lean on Passport services. Needless to say, Microsoft's applications for Internet communication, for example, Exchange Server or the IM client Windows Messenger, all tie into the Passport authentication scheme, sometimes as an option, sometimes as a necessary component.
The distributed aspect of HailStorm is described as "open access", meaning that in principle any minimally connected device that is compliant with the XML/SOAP Web services framework can access applicable HailStorm services. No Microsoft runtime or tool is required to call them.
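Such a compliant request is just XML over the wire. The following Python sketch builds a minimal SOAP 1.1 envelope using only the standard library; the service namespace and method name are invented examples, not actual HailStorm interfaces:

```python
import xml.etree.ElementTree as ET

# SOAP 1.1 envelope namespace, as defined by the SOAP specification.
SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def soap_request(method, params, service_ns="urn:example:myCalendar"):
    """Build a minimal SOAP 1.1 envelope calling `method` with `params`.
    The default service namespace and any method names passed in are
    illustrative placeholders only."""
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    call = ET.SubElement(body, f"{{{service_ns}}}{method}")
    for name, value in params.items():
        ET.SubElement(call, f"{{{service_ns}}}{name}").text = str(value)
    return ET.tostring(envelope, encoding="unicode")

# A hypothetical calendar query, as any XML-capable device could issue it.
request = soap_request("getFreeBusy", {"date": "2001-12-01"})
```

Because the envelope is plain XML, any device or language that can emit and parse it can participate; that is the substance of the "no Microsoft runtime required" claim.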
This concept seems clear enough at the client level, but less so at the server level. There, Microsoft has stated only vaguely that servers running on non-Microsoft operating systems such as Linux or Solaris will be able to "participate" in HailStorm; the degree of actual integration has not been specified further. In September 2001, Microsoft opened the gates by announcing that .NET will allow third-party identity providers to compete with Passport. This move is promising, albeit surprising at this early stage of deployment, because the company's detailed strategy can only be the subject of speculation. It does, however, strengthen the utility of HailStorm (which, incidentally, has since been renamed ".NET MyServices").
A good place to start if looking for more overview material on the many aspects of .NET and HailStorm is a Belgian site at I.T. Works (www.itworks.be/webservices/info.html). Another, more technical resource is DevX (www.devx.com).
In the next chapter, we leave the overview for more practical matters.