- Ubiquitous Computing
- Web Services
- The Semantic Web
- Spaces Computing
- Peer-to-Peer Computing
- Collaborative Computing
- Dependable Systems
- Pervasive Computing
- Cluster Concepts
- Distributed Agents
- Distributed Algorithms
- Distributed Databases
- Distributed Filesystems
- Distributed Media
- Distributed Storage
- Grid Computing
- Massively Parallel Systems
- Mobile and Wireless Computing
- Network Protocols
- Operating Systems
- Real-Time and Embedded Systems
"The notion of an algorithm is basic to all of computer programming, so we should begin with a careful analysis of this concept." (Donald Knuth, The Art of Computer Programming)
A distributed system is a collection of individual computing components that can communicate. This basic definition covers processing organizations ranging from a VLSI chip to the broadest set of cooperative ensembles that might one day be available as a result of resource convergence over the widest of global public networks. The heart and soul of all such systems is the abstraction expressed by the algorithm.
Algorithms are step-by-step definitions of computational and communications flow. From a research perspective, algorithms are the domain of theoretical computer science; from a practical one, they are working references for real-world application development. The study of algorithms has proven a fruitful endeavor for solitary computing nodes, providing a basis for understanding problems of practical importance as well as a framework for articulating intrinsic limitations (such as computability).
But NDC is inherently different from local computing. Indeed, Deutsch's Eight Fallacies teach that not only are computing models never uniform across the network, but communication ambiguities also introduce indeterminate failure modes that cannot easily be masked. NDC brings complexity measures, rooted in time and space variables, that are simply not visible to local computing models. And just as greater complexity brings greater challenges, the plethora of approaches to NDC further complicates the NDC fitscape, making real-world architectures even more difficult to manage.
Three main architectural models, generally differing in degree of synchrony and decoupling, are algorithmically identifiable in NDC today:
- A synchronous message-passing model. RPC-like approaches include RMI (Jini network technology), JAX-RPC, and earlier synchronous message-passing systems.
- An asynchronous message-passing model. The Java Message Service (JMS), which provides a basis for Java platform implementations of Web Services, is representative of this approach.
- An asynchronous shared-memory model. The more tightly coupled approaches to NDC, such as grid computing, cluster concepts, and spaces computing, can all benefit from distributed-algorithmic advances gleaned from this model.
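The three models above can be reduced to their core interaction patterns. The sketch below is purely illustrative (the interface and class names are this example's own, not any real NDC API): a blocking call stands in for the synchronous model, a fire-and-forget queue for asynchronous message passing, and a shared key-value store for asynchronous shared memory.

```java
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical sketch of the three NDC architectural models,
// reduced to their essential interaction patterns.
public class ModelsSketch {

    // 1. Synchronous message passing: the caller blocks until a
    //    reply arrives, as in RPC/RMI-style invocation.
    interface SyncService {
        String call(String request); // caller waits for the result
    }

    // 2. Asynchronous message passing: the sender enqueues and
    //    returns at once; delivery happens at some later, unbounded
    //    time, as in JMS-style messaging.
    static class AsyncChannel {
        private final Queue<String> mailbox = new ConcurrentLinkedQueue<>();
        void send(String msg) { mailbox.add(msg); }      // non-blocking
        String receive()      { return mailbox.poll(); } // null if empty
    }

    // 3. Asynchronous shared memory: processes coordinate through a
    //    common store rather than by exchanging messages, as in
    //    tuple-space or grid-style designs.
    static class SharedStore {
        private final Map<String, String> space = new ConcurrentHashMap<>();
        void write(String key, String value) { space.put(key, value); }
        String read(String key)              { return space.get(key); }
    }

    public static void main(String[] args) {
        SyncService echo = request -> "echo:" + request; // blocking call
        System.out.println(echo.call("hi"));

        AsyncChannel ch = new AsyncChannel();
        ch.send("ping");                                 // returns at once
        System.out.println(ch.receive());

        SharedStore store = new SharedStore();
        store.write("task", "done");
        System.out.println(store.read("task"));
    }
}
```

The point of the contrast is coupling: the synchronous caller's progress depends on the callee, the asynchronous sender's does not, and the shared-memory parties need never address each other at all.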
Algorithmically, a system can be considered asynchronous if there is no fixed upper bound on how long a message may take to be delivered or on the time that elapses between processing steps. Email, for example, typically arrives within a few seconds but may take several days, depending on network temperament. Asynchronous models therefore cannot, by definition, provide reliable temporal guarantees. On the other hand, the reliability (also understood as uncertainty reduction) of asynchronous models may be enhanced with other benefits, such as message persistence. Goff's axiom applies here.
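The absence of an upper bound on delivery has a practical consequence for receivers: they can bound only their own waiting, never delivery itself. A minimal sketch using java.util.concurrent (the method name and fallback convention are this example's own assumptions, not part of any messaging standard):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Illustrative sketch: in an asynchronous model the receiver cannot
// bound delivery time, only its own waiting. A timed poll returns
// null when nothing has arrived yet, and the application must decide
// how to proceed -- absence of a message is not an error, since the
// message may simply still be in transit.
public class AsyncReceive {
    public static String receiveOrDefault(BlockingQueue<String> inbox,
                                          long timeoutMillis,
                                          String fallback)
            throws InterruptedException {
        String msg = inbox.poll(timeoutMillis, TimeUnit.MILLISECONDS);
        return (msg != null) ? msg : fallback;
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> inbox = new LinkedBlockingQueue<>();
        // Nothing delivered yet: the timed wait expires.
        System.out.println(receiveOrDefault(inbox, 10, "no message yet"));
        inbox.put("hello"); // the message finally arrives
        System.out.println(receiveOrDefault(inbox, 10, "no message yet"));
    }
}
```

Message persistence fits the same picture: a durable mailbox lets the timed-out receiver try again later without the sender ever re-sending.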
In addition to matters of synchrony, network topology is also a domain of distributed algorithms; determining the shapes and boundaries of dynamic networks, so as to navigate the growing rivers of datacom effectively, is a matter of much interest. The commercial popularity of the Internet has given rise to a substantial increase in research and development in distributed algorithms, as it has in so many other aspects of NDC.
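As one concrete flavor of the topology problem, a node's eccentricity, the hop count to the farthest node reachable from it, characterizes the shape of a network from that vantage point. The sketch below computes it centrally with a breadth-first search over an adjacency map; a genuinely distributed algorithm would derive the same quantity through message exchange among the nodes themselves. The class name and sample network are illustrative only.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Centralized stand-in for a distributed topology computation:
// breadth-first search over an adjacency-list view of the network.
public class Topology {
    // Eccentricity of `start`: hops to the farthest reachable node.
    public static int eccentricity(Map<String, List<String>> adj, String start) {
        Map<String, Integer> dist = new HashMap<>();
        Deque<String> frontier = new ArrayDeque<>();
        dist.put(start, 0);
        frontier.add(start);
        int max = 0;
        while (!frontier.isEmpty()) {
            String node = frontier.poll();
            for (String next : adj.getOrDefault(node, List.of())) {
                if (!dist.containsKey(next)) {       // first visit = shortest path
                    dist.put(next, dist.get(node) + 1);
                    max = Math.max(max, dist.get(next));
                    frontier.add(next);
                }
            }
        }
        return max;
    }

    public static void main(String[] args) {
        // A small line network: A - B - C - D
        Map<String, List<String>> net = Map.of(
            "A", List.of("B"),
            "B", List.of("A", "C"),
            "C", List.of("B", "D"),
            "D", List.of("C"));
        System.out.println(eccentricity(net, "A")); // prints 3
        System.out.println(eccentricity(net, "B")); // prints 2
    }
}
```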