
1.3 Design Forces

This section surveys design concerns that arise in concurrent software development, but play at best minor roles in sequential programming. Most presentations of constructions and design patterns later in this book include descriptions of how they resolve applicable forces discussed here (as well as others that are less directly tied to concurrency, such as accuracy, testability, and so on).

One can take two complementary views of any OO system, object-centric and activity-centric:

Figure 1-14

Under an object-centric view, a system is a collection of interconnected objects. But it is a structured collection, not a random object soup. Objects cluster together in groups, for example the group of objects comprising a ParticleApplet, thus forming larger components and subsystems.

Under an activity-centric view, a system is a collection of possibly concurrent activities. At the most fine-grained level, these are just individual message sends (normally, method invocations). They in turn organize themselves into sets of call-chains, event sequences, tasks, sessions, transactions, and threads. One logical activity (such as running the ParticleApplet) may involve many threads. At a higher level, some of these activities represent system-wide use cases.

Neither view alone provides a complete picture of a system, since a given object may be involved in multiple activities, and conversely a given activity may span multiple objects. However, these two views give rise to two complementary sets of correctness concerns, one object-centric and the other activity-centric:

    Safety. Nothing bad ever happens to an object.

    Liveness. Something eventually happens within an activity.

Safety failures lead to unintended behavior at run time — things just start going wrong. Liveness failures lead to no behavior — things just stop running. Sadly enough, some of the easiest things you can do to improve liveness properties can destroy safety properties, and vice versa. Getting them both right can be a challenge.

You have to balance the relative effects of different kinds of failure in your own programs. But it is a standard engineering (not just software engineering) practice to place primary design emphasis on safety. The more your code actually matters, the better it is to ensure that a program does nothing at all rather than something that leads to random, even dangerous behavior.

On the other hand, most of the time spent tuning concurrent designs in practice usually surrounds liveness and liveness-related efficiency issues. And there are sometimes good, conscientious reasons for selectively sacrificing safety for liveness. For example, it may be acceptable for visual displays to transiently show utter nonsense due to uncoordinated concurrent execution — drawing stray pixels, incorrect progress indicators, or images that bear no relation to their intended forms — if you are confident that this state of affairs will soon be corrected.

Safety and liveness issues may be further extended to encompass two categories of quality concerns, one mainly object-centric and the other mainly activity-centric, that are also sometimes in direct opposition:

    Reusability. The utility of objects and classes across multiple contexts.

    Performance. The extent to which activities execute soon and quickly.

The remainder of this section looks more closely at safety, liveness, performance, and reusability in concurrent programs. It presents basic terms and definitions, along with brief introductions to core issues and tactics that are revisited and amplified throughout the course of this book.

1.3.1 Safety

Safe concurrent programming practices are generalizations of safe and secure sequential programming practices. Safety in concurrent designs adds a temporal dimension to common notions of type safety. A type-checked program might not be correct, but at least it doesn't do dangerous things like misinterpret the bits representing a float as if they were an object reference. Similarly, a safe concurrent design might not have the intended effect, but at least it never encounters errors due to corruption of representations by contending threads.

Figure 1-15

One practical difference between type safety and multithreaded safety is that most type-safety matters can be checked automatically by compilers. A program that fails to pass compile-time checks cannot even be run. Most multithreaded safety matters, however, cannot be checked automatically, and so must rely on programmer discipline. Methods for proving designs to be safe fall outside the scope of this book (see the Further Readings). The techniques for ensuring safety described here rely on careful engineering practices (including several with roots in formalisms) rather than formal methods themselves.

Multithreaded safety also adds a temporal dimension to design and programming techniques surrounding security. Secure programming practices disable access to certain operations on objects and resources from certain callers, applications, or principals. Concurrency control introduces transient disabling of access based on consideration of the actions currently being performed by other threads.

The main goal in safety preservation is ensuring that all objects in a system maintain consistent states: states in which all fields, and all fields of other objects on which they depend, possess legal, meaningful values. It sometimes takes hard work to nail down exactly what "legal" and "meaningful" mean in a particular class. One path is first to establish conceptual-level invariants, for example the rule that water tank volumes must always be between zero and their capacities. These can usually be recast in terms of relationships among field values in the associated concrete classes.

An object is consistent if all fields obey their invariants. Every public method in every class should lead an object from one consistent state to another. Safe objects may occasionally enter transiently inconsistent states in the midst of methods, but they never attempt to initiate new actions when they are in inconsistent states. If every object is designed to perform actions only when it is logically able to do so, and if all the mechanics are properly implemented, then you can be sure that an application using these objects will not encounter any errors due to object inconsistency.
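As an illustration of methods that lead an object from one consistent state to another, consider the water tank invariant mentioned above. The following is a minimal sketch, not the book's own WaterTank class; the method names and the choice to clip rather than reject out-of-range requests are assumptions made here for brevity.

```java
// Hypothetical sketch: a tank whose invariant
// (0 <= volume <= capacity) holds on exit from every public method.
public class WaterTank {
    private final double capacity;
    private double volume; // invariant: 0 <= volume <= capacity

    public WaterTank(double capacity) {
        this.capacity = capacity;
        this.volume = 0.0;
    }

    // Adds water, returning the amount actually accepted.
    // Exclusion via synchronized prevents interference while the
    // object is transiently mid-update.
    public synchronized double addWater(double amount) {
        double accepted = Math.min(amount, capacity - volume);
        volume += accepted; // invariant re-established before return
        return accepted;
    }

    // Removes water, returning the amount actually drained.
    public synchronized double removeWater(double amount) {
        double drained = Math.min(amount, volume);
        volume -= drained;
        return drained;
    }

    public synchronized double getVolume() { return volume; }
}
```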

One reason for being more careful about invariants in concurrent programs is that it is much easier to break them inadvertently than in most sequential programs. The need for protection against the effects of inconsistency arises even in sequential contexts, for example when processing exceptions and callbacks, and when making self-calls from one method in a class to another. However, these issues become much more central in concurrent programs. As discussed in 2.2, the most common ways of ensuring consistency employ exclusion techniques to guarantee the atomicity of public actions — that each action runs to completion without interference from others. Without such protection, inconsistencies in concurrent programs may stem from race conditions producing storage conflicts at the level of raw memory cells:

    Read/Write conflicts. One thread reads a value of a field while another writes to it. The value seen by the reading thread is difficult to predict — it depends on which thread won the "race" to access the field first. As discussed in 2.2, the value read need not even be a value that was ever written by any thread.

    Write/Write conflicts. Two threads both try to write to the same field. The value seen upon the next read is again difficult or impossible to predict.
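Both kinds of conflict can arise from something as small as an unguarded increment. The sketch below is a hypothetical Counter, not a class from this book; it shows the vulnerable form alongside the exclusion-based fix.

```java
// Hypothetical sketch of the storage conflicts described above.
public class Counter {
    private long count = 0;

    // Unsafe under concurrency: count = count + 1 is a read
    // followed by a write. Two threads interleaving here can lose
    // an update (write/write conflict), and a concurrent reader
    // may observe either the old or the new value unpredictably
    // (read/write conflict).
    public void unsafeIncrement() { count = count + 1; }

    // Safe: exclusion makes the read-modify-write atomic with
    // respect to other synchronized methods on this object.
    public synchronized void increment() { count = count + 1; }

    public synchronized long get() { return count; }
}
```

In a single thread both methods behave identically; the difference appears only when multiple threads race on the same Counter.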

It is equally impossible to predict the consequences of actions that are attempted when objects are in inconsistent states. Examples include:

  • A graphical representation (for example of a Particle) is displayed at a location that the object never actually occupied.

  • A bank account balance is incorrect after an attempt to withdraw money in the midst of an automatic transfer.

  • Following the next pointer of a linked list leads to a node that is not even in the list.

  • Two concurrent sensor updates cause a real-time controller to perform an incorrect effector action.

1.3.1.1 Attributes and constraints

Safe programming techniques rely on clear understanding of required properties and constraints surrounding object representations. Developers who are not aware of these properties rarely do a very good job at preserving them. Many formalisms are available for precisely stating predicates describing requirements (as discussed in most of the texts on concurrent design methods listed in the Further Readings). These can be very useful, but here we will maintain sufficient precision without introducing formalisms.

Consistency requirements sometimes stem from definitions of high-level conceptual attributes made during the initial design of classes. These constraints typically hold regardless of how the attributes are concretely represented and accessed via fields and methods. This was seen for example in the development of the WaterTank and Particle classes earlier in this chapter. Here are some other examples, most of which are revisited in more detail in the course of this book:

  • A BankAccount has a balance that is equal to the sum of all deposits and interest minus withdrawals and service charges.

  • A Packet has a destination that must be a legal IP address.

  • A Counter has a nonnegative integral count value.

  • An Invoice has a paymentDue that reflects the rules of a payment system.

  • A Thermostat has a temperature equal to the most recent sensor reading.

  • A Shape has a location, dimension, and color that all obey a set of stylistic guidelines for a given GUI toolkit.

  • A BoundedBuffer has an elementCount that is always between zero and a capacity.

  • A Stack has a size and, when not empty, a top element.

  • A Window has a propertySet maintaining current mappings of fonts, background color, etc.

  • An Interval has a startDate that is no later than its endDate.

While such attributes essentially always somehow map to object fields, the correspondences need not be direct. For example, the top of a Stack is typically not held in a variable, but instead in an array element or linked list node. Also, some attributes can be computed ("derived") via others; for example, the boolean attribute overdrawn of a BankAccount might be computed by comparing the balance to zero.
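The derived-attribute case can be sketched concretely. The following BankAccount is a hypothetical illustration (the method names and cents-based representation are assumptions, not this book's class): overdrawn has no field of its own, so it can never fall out of step with balance.

```java
// Hypothetical sketch: overdrawn is a derived attribute,
// computed from balance on demand rather than stored.
public class BankAccount {
    private long balance; // in cents; may go negative

    public synchronized void deposit(long cents)  { balance += cents; }
    public synchronized void withdraw(long cents) { balance -= cents; }

    public synchronized long getBalance() { return balance; }

    // Derived: recomputed each time, so no update step can forget
    // to keep it consistent with the balance field.
    public synchronized boolean isOverdrawn() { return balance < 0; }
}
```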

1.3.1.2 Representational constraints

Further constraints and invariants typically emerge as additional implementation decisions are made for a given class. Fields declared for the sake of maintaining a particular data structure, for improving performance, or for other internal bookkeeping purposes often need to respect sets of invariants. Broad categories of fields and constraints include the following:

    Direct value representations. Fields needed to implement concrete attributes. For example, a Buffer might have a putIndex field holding the array index position to use when inserting the next added element.

    Cached value representations. Fields used to eliminate or minimize the need for computations or method invocations. For example, rather than computing the value of overdrawn every time it is needed, a BankAccount might maintain an overdrawn field that is true if and only if the current balance is less than zero.

    Logical state representations. Reflections of logical control state. For example, a BankCardReader might have a card field representing the card currently being read, and a validPIN field recording whether the PIN access code was verified. The BankCardReader's validPIN field may be used to track the point in a protocol in which the card has been successfully read in and validated. Some state representations take the form of role variables, controlling responses to all of a related set of methods (sometimes those declared in a single interface). For example, a game-playing object may alternate between active and passive roles depending on the value of a whoseTurn field.

    Execution state variables. Fields recording the fine-grained dynamic state of an object, for example, the fact that a certain operation is in progress. Execution state variables can represent the fact that a given message has been received, that the corresponding action has been initiated, that the action has terminated, and that a reply to the message has been issued. An execution state variable is often an enumerated type with values having names ending in -ing; for example, CONNECTING, UPDATING, WAITING. Another common kind of execution state variable is a counter that records the number of entries or exits of some method. As discussed in 3.2, objects in concurrent programs tend to require more such variables than do those in sequential contexts, to help track and manage the progress of methods that proceed asynchronously.

    History variables. Representations of the history or past states of an object. The most extensive representation is a history log, recording all messages ever received and sent, along with all corresponding internal actions and state changes that have been initiated and completed. Less extensive subsets are much more common. For example, a BankAccount class could maintain a lastSavedBalance field that holds the last checkpointed value and is used when reverting cancelled transactions.

    Version tracking variables. An integer, time-stamp, object reference, signature code, or other representation indicating the time, ordering, or nature of the last state change made by an object. For example, a Thermostat may increment a readingNumber or record the lastReadingTime when updating its temperature.

    References to acquaintances. Fields pointing to other objects that the host interacts with, but that do not themselves comprise the host's logical state: For example, a callback target of an EventDispatcher, or a requestHandler delegated to by a WebServer.

    References to representation objects. Attributes that are conceptually held by a host object but are actually managed by other helper objects. Reference fields may point to other objects that assist in representing the state of the host object. So, the logical state of any object may include the states of objects that it holds references to. Additionally, the reference fields themselves form part of the concrete state of the host object (see 2.3.3). Any attempts to ensure safety must take these relationships into account. For example:

    • A Stack might have a headOfLinkedList field recording the first node of a list representing the stack.

    • A Person object might maintain a homePageURL field held as a java.net.URL object.

    • The balance of a BankAccount might be maintained in a central repository, in which case the BankAccount would instead maintain a field referring to the repository (in order to ask it about the current balance). In this case, some of the logical state of the BankAccount is actually managed by the repository.

    • An object might know of its attributes only via access to property lists maintained by other objects.
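Several of these categories often appear together in one class. The sketch below is hypothetical (the class name, methods, and the decision to bump a version counter on every change are assumptions made here): it combines a direct value representation, a cached value representation whose invariant must be re-established after every state change, and a version tracking variable.

```java
// Hypothetical sketch combining a cached value representation
// (overdrawn) with a version tracking variable (version).
public class CachedAccount {
    private long balance;      // direct value representation, in cents
    private boolean overdrawn; // cache invariant: overdrawn == (balance < 0)
    private long version;      // incremented on every state change

    // Called after every mutation to re-establish the cache
    // invariant and record that the state changed.
    private void changed() {
        overdrawn = balance < 0;
        version++;
    }

    public synchronized void deposit(long cents) {
        balance += cents;
        changed();
    }

    public synchronized void withdraw(long cents) {
        balance -= cents;
        changed();
    }

    public synchronized boolean isOverdrawn() { return overdrawn; }
    public synchronized long getVersion()     { return version; }
}
```

Note the safety obligation the cache introduces: any new mutator that forgets to call changed() silently breaks the invariant, which is exactly the kind of representational constraint this section asks designers to identify and document.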

Figure 1-16

1.3.2 Liveness

One way to build a guaranteed safe system is to arrange that no objects ever execute any methods, and thus can never encounter any conflicts. But this is not a very productive form of programming. Safety concerns must be balanced by liveness concerns.

Figure 1-17

In live systems, every activity eventually progresses toward completion; every invoked method eventually executes. But an activity may (perhaps only transiently) fail to make progress for any of several interrelated reasons:

    Locking. A synchronized method blocks one thread because another thread holds the lock.

    Waiting. A method blocks (via Object.wait or its derivatives) waiting for an event, message, or condition that has yet to be produced within another thread.

    Input. An IO-based method waits for input that has not yet arrived from another process or device.

    CPU contention. A thread fails to run even though it is in a runnable state because other threads, or even completely separate programs running on the same computer, are occupying CPU or other computational resources.

    Failure. A method running in a thread encounters a premature exception, error, or fault.

Momentary blockages in thread progress are usually acceptable. In fact, frequent short-lived blocking is intrinsic to many styles of concurrent programming. The lifecycle of a typical thread may include a number of transient blockages and reschedulings:

Figure 1-18

However, permanent or unbounded lack of progress is usually a serious problem. Examples of potentially permanent liveness failures described in more depth elsewhere in this book include:

    Deadlock. Circular dependencies among locks. In the most common case, thread A holds a lock for object X and then tries to acquire the lock for object Y. Simultaneously, thread B already holds the lock for object Y and tries to acquire the lock for object X. Neither thread can ever make further progress (see 2.2.5).

    Missed signals. A thread remains dormant because it started waiting after a notification to wake it up was produced (see 3.2.2).

    Nested monitor lockouts. A waiting thread holds a lock that would be needed by any other thread attempting to wake it up (see 3.3.4).

    Livelock. A continuously retried action continuously fails (see 2.4.4.2).

    Starvation. The JVM/OS fails ever to allocate CPU time to a thread. This may be due to scheduling policies or even hostile denial-of-service attacks on the host computer (see 1.1.2.3 and 3.4.1.5).

    Resource exhaustion. A group of threads together hold all of a finite number of resources. One of them needs additional resources, but no other thread will give one up (see 4.5.1).

    Distributed failure. A remote machine connected by a socket serving as an InputStream crashes or becomes inaccessible (see 3.1).
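The deadlock scenario above (thread A holds X and wants Y while thread B holds Y and wants X) can be prevented by ensuring that every thread acquires locks in one globally consistent order. The following is a minimal sketch under stated assumptions: the Cell class, its id field, and the swap operation are hypothetical, and ids are assumed distinct.

```java
// Hypothetical sketch: resource ordering prevents the circular
// wait needed for lock-based deadlock.
public class Cell {
    private long value;
    private final long id; // used only to order lock acquisition;
                           // assumed unique across cells

    public Cell(long id, long value) {
        this.id = id;
        this.value = value;
    }

    public long get() {
        synchronized (this) { return value; }
    }

    // Swaps values with another cell. Because both locks are always
    // taken in ascending id order, no two threads can hold them in
    // opposite orders, so a lock cycle cannot form.
    public void swapWith(Cell other) {
        Cell first  = (id < other.id) ? this : other;
        Cell second = (id < other.id) ? other : this;
        synchronized (first) {
            synchronized (second) {
                long tmp = this.value;
                this.value = other.value;
                other.value = tmp;
            }
        }
    }
}
```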

1.3.3 Performance

Performance-based forces extend liveness concerns. In addition to demanding that every invoked method eventually execute, performance goals require them to execute soon and quickly. While we do not consider in this book hard real-time systems in which failure to execute within a given time interval can lead to catastrophic system errors, nearly all concurrent programs have implicit or explicit performance goals.

Meaningful performance requirements are stated in terms of measurable qualities, including the following metrics. Goals may be expressed for central tendencies (e.g., mean, median) of measurements, as well as their variability (e.g., range, standard deviation).

    Throughput. The number of operations performed per unit time. The operations of interest may range from individual methods to entire program runs. Most often, throughput is reported not as a rate, but instead as the time taken to perform one operation.

    Latency. The time elapsed between issuing a message (via for example a mouse click, method invocation, or incoming socket connection) and servicing it. In contexts where operations are uniform, single-threaded, and "continuously" requested, latency is just the inverse of throughput. But more typically, the latencies of interest reflect response times — the delays until something happens, not necessarily full completion of a method or service.

    Capacity. The number of simultaneous activities that can be supported for a given target minimum throughput or maximum latency. Especially in networking applications, this can serve as a useful indicator of overall availability, since it reflects the number of clients that can be serviced without dropping connections due to time-outs or network queue overflows.

    Efficiency. Throughput divided by the amount of computational resources (for example CPUs, memory, and IO devices) needed to obtain this throughput.

    Scalability. The rate at which latency or throughput improves when resources (again, usually CPUs, memory, or devices) are added to a system. Related measures include utilization — the percentage of available resources that are applied to a task of interest.

    Degradation. The rate at which latency or throughput worsens as more clients, activities, or operations are added without adding resources.
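The arithmetic relationships among the first few metrics can be made concrete. The figures below are invented for illustration (2000 uniform, continuously requested operations completing in 4 seconds on 2 CPUs); only under those uniformity assumptions is latency the simple inverse of throughput, as noted above.

```java
// Hypothetical arithmetic for the metrics defined above, using
// made-up numbers: 2000 uniform operations in 4 seconds on 2 CPUs.
public class Metrics {
    // Operations per unit time.
    public static double throughput(long ops, double seconds) {
        return ops / seconds;
    }

    // Seconds per operation; the inverse of throughput only for
    // uniform, single-threaded, continuously requested operations.
    public static double latency(long ops, double seconds) {
        return seconds / ops;
    }

    // Throughput divided by the computational resources used.
    public static double efficiency(long ops, double seconds, int cpus) {
        return throughput(ops, seconds) / cpus;
    }
}
```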

Most multithreaded designs implicitly accept a small trade-off of poorer computational efficiency to obtain better latency and scalability. Concurrency support introduces the following kinds of overhead and contention that can slow down programs:

    Locks. A synchronized method typically requires greater call overhead than an unsynchronized method. Also, methods that frequently block waiting for locks (or for any other reason) proceed more slowly than those that do not.

    Monitors. Object.wait, Object.notify, Object.notifyAll, and the methods derived from them (such as Thread.join) can be more expensive than other basic JVM run-time support operations.

    Threads. Creating and starting a Thread is typically more expensive than creating an ordinary object and invoking a method on it.

    Context-switching. The mapping of threads to CPUs encounters context-switch overhead when a JVM/OS saves the CPU state associated with one thread, selects another thread to run, and loads the associated CPU state.

    Scheduling. Computations and underlying policies that select which eligible thread to run add overhead. These may further interact with other system chores such as processing asynchronous events and garbage collection.

    Locality. On multiprocessors, when multiple threads running on different CPUs share access to the same objects, cache consistency hardware and low-level system software must communicate the associated values across processors.

    Algorithmics. Some efficient sequential algorithms do not apply in concurrent settings. For example, some data structures that rely on caching work only if it is known that exactly one thread performs all operations. However, there are also efficient alternative concurrent algorithms for many problems, including those that open up the possibility of further speedups via parallelism.

The overheads associated with concurrency constructs steadily decrease as JVMs improve. For example, as of this writing, the overhead cost of a single uncontended synchronized method call with a no-op body on recent JVMs is on the order of a few unsynchronized no-op calls. (Since different kinds of calls, for example of static versus instance methods, can take different times and interact with other optimizations, it is not worth making this more precise.)

However, these overheads tend to degrade nonlinearly. For example, using one lock that is frequently contended by ten threads is likely to lead to much poorer overall performance than having each thread pass through ten uncontended locks. Also, because concurrency support entails underlying system resource management that is often optimized for given target loads, performance can dramatically degrade when too many locks, monitor operations, or threads are used.

Subsequent chapters include discussions of minimizing use of the associated constructs when necessary. However, bear in mind that performance problems of any kind can be remedied only after they are measured and isolated. Without empirical evidence, most guesses at the nature and source of performance problems are wrong. The most useful measurements are comparative, showing differences or trends under different designs, loads, or configurations.

1.3.4 Reusability

A class or object is reusable to the extent that it can be readily employed across different contexts, either as a black-box component or as the basis of white-box extension via subclassing and related techniques.

The interplay between safety and liveness concerns can significantly impact reusability. It is usually possible to design components to be safe across all possible contexts. For example, a synchronized method that refuses to commence until it possesses the synchronization lock will do this no matter how it is used. But in some of these contexts, programs using this safe component might encounter liveness failures (for example, deadlock). Conversely, the functionality surrounding a component using only unsynchronized methods will always be live (at least with respect to locking), but may encounter safety violations when multiple concurrent executions are allowed to occur.

The dualities of safety and liveness are reflected in some extreme views of design methodology. Some top-down design strategies take a pure safety-first approach: Ensure that each class and object is safe, and then later try to improve liveness as an optimization measure. An opposite, bottom-up approach is sometimes adopted in multithreaded systems programming: Ensure that code is live, and then try to layer on safety features, for example by adding locks. Neither extreme is especially successful in practice. It is too easy for top-down approaches to result in slow, deadlock-prone systems, and for bottom-up approaches to result in buggy code with unanticipated safety violations.

It is usually more productive to proceed with the understanding that some very useful and efficient components are not, and need not be, absolutely safe, and that useful services supported by some components are not absolutely live. Instead, they operate correctly only within certain restricted usage contexts. Therefore, establishing, documenting, advertising, and exploiting these contexts become central issues in concurrent software design.

There are two general approaches (and a range of intermediate choices) for dealing with context dependence: (1) Minimize uncertainty by closing off parts of systems, and (2) Establish policies and protocols that enable components to become or remain open. Many practical design efforts involve some of each.

1.3.4.1 Closed subsystems

An ideally closed system is one for which you have perfect static (design time) knowledge about all possible behaviors. This is typically both unattainable and undesirable. However, it is often still possible to close off parts of systems, in units ranging from individual classes to product-level components, by employing possibly extreme versions of OO encapsulation techniques:

Figure 1-19

    Restricted external communication. All interactions, both inward and outward, occur through a narrow interface. In the most tractable case, the subsystem is communication-closed, never internally invoking methods on objects outside the subsystem.

    Deterministic internal structure. The concrete nature (and ideally, number) of all objects and threads comprising the subsystem are statically known. The final and private keywords can be used to help enforce this.

In at least some such systems, you can in principle prove — informally, formally, or even mechanically — that no internal safety or liveness violations are possible within a closed component. Or, if they are possible, you can continue to refine designs and implementations until a component is provably correct. In the best cases, you can then apply this knowledge compositionally to analyze other parts of a system that rely on this component.

Perfect static information about objects, threads and interactions tells you not only what can happen, but also what cannot happen. For example, it may be the case that, even though two synchronized methods in two objects contain calls to each other, they can never be accessed simultaneously by different threads within the subsystem, so deadlock will never occur.

Closure may also provide further opportunities for manual or compiler-driven optimization; for example removing synchronization from methods that would ordinarily require it, or employing clever special-purpose algorithms that can be made to apply only by eliminating the possibility of unwanted interaction. Embedded systems are often composed as collections of closed modules, in part to improve predictability, schedulability, and related performance analyses.

While closed subsystems are tractable, they can also be brittle. When the constraints and assumptions governing their internal structure change, these components are often thrown away and redeveloped from scratch.

1.3.4.2 Open systems

An ideal open system is infinitely extensible, across several dimensions. It may load unknown classes dynamically, allow subclasses to override just about any method, employ callbacks across objects within different subsystems, share common resources across threads, use reflection to discover and invoke methods on otherwise unknown objects, and so on. Unbounded openness is usually as unattainable and undesirable as complete closedness: If everything can change, then you cannot program anything. But most systems require at least some of this flexibility.

Full static analysis of open systems is not even possible since their nature and structure evolve across time. Instead, open systems must rely on documented policies and protocols that every component adheres to.

The Internet is among the best examples of an open system. It continually evolves, for example by adding new hosts, web pages, and services, requiring only that all participants obey a few network policies and protocols. As with other open systems, adherence to Internet policies and protocols is sometimes difficult to enforce. However, JVMs themselves arrange that non-conforming components cannot catastrophically damage system integrity.

Policy-driven design can work well at the much smaller level of typical concurrent systems, where policies and protocols often take the form of design rules. Examples of policy domains explored in more depth in subsequent chapters include:

    Flow. For example, a rule of the form: Components of type A send messages to those of type B, but never vice versa.

    Blocking. For example, a rule of the form: Methods of type A always immediately throw exceptions if resource R is not available, rather than blocking until it is available.

    Notifications. For example, a rule of the form: Objects of type A always send change notifications to their listeners whenever updated.
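The blocking rule above can be realized by methods that immediately fail rather than wait, a style sometimes called balking. The sketch below is hypothetical (the PrinterManager class, its method names, and the choice of IllegalStateException are assumptions made here, not a design from this book).

```java
// Hypothetical sketch of the blocking policy rule: the method
// throws at once if the resource is unavailable instead of
// blocking until it becomes available.
public class PrinterManager {
    private boolean printerAvailable = true;

    // Balks rather than waits: callers must be prepared to retry
    // or give up, per the design rule.
    public synchronized void acquirePrinter() {
        if (!printerAvailable)
            throw new IllegalStateException("printer busy; try again later");
        printerAvailable = false; // caller now holds the printer
    }

    public synchronized void releasePrinter() {
        printerAvailable = true;
    }
}
```

Adopting such a rule uniformly means every caller in the system can be written against one failure convention, rather than guessing per-method whether a call may stall.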

Adoption of a relatively small number of policies simplifies design by minimizing the possibility of inconsistent case-by-case decisions. Component authors, perhaps with the help of code reviews and tools, need only check that they are obeying the relevant design rules, and can otherwise focus attention on the tasks at hand. Developers can think locally while still acting globally.

However, policy-driven design can become unmanageable when the number of policies grows large and the programming obligations they induce overwhelm developers. When even simple methods such as updating an account balance or printing "Hello, world" require dozens of lines of awkward, error-prone code to conform to design policies, it is time to take some kind of remedial action: Simplify or reduce the number of policies; or create tools that help automate code generation and/or check for conformance; or create domain-specific languages that enforce a given discipline; or create frameworks and utility libraries that reduce the need for so much support code to be written inside each method.

Policy choices need not be in any sense "optimal" to be effective, but they must be conformed to and believed in, the more fervently the better. Such policy choices form the basis of several frameworks and design patterns described throughout this book. It is likely that some of them will be inapplicable to your software projects, and may even strike you as wrong-headed ("I'd never do that!") because the underlying policies clash with others you have adopted.

While inducing greater closedness allows you to optimize for performance, inducing greater openness allows you to optimize for future change. These two kinds of tunings and refactorings are often equally challenging to carry out, but have opposite effects. Optimizing for performance usually entails exploiting special cases by hard-wiring design decisions. Optimizing for extensibility entails removing hard-wired decisions and instead allowing them to vary, for example by encapsulating them as overridable methods, supporting callback hooks, or abstracting functionality via interfaces that can be re-implemented in completely different ways by dynamically loaded components.
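For illustration of the last point (the Notifier and Counter names are invented here), a hard-wired decision can be opened up by abstracting it behind an interface supplied as a pluggable callback:

```java
// Sketch: instead of hard-wiring what happens on each update, the
// decision is deferred to a Notifier that can be re-implemented freely.
interface Notifier {
    void changed(Object source);
}

class Counter {
    private long count = 0;
    private final Notifier notifier;   // callback hook, not a fixed call

    Counter(Notifier n) { notifier = n; }

    synchronized void increment() {
        ++count;
        notifier.changed(this);        // varies with the supplied component
    }

    synchronized long value() { return count; }
}
```

A closed variant would simply inline one particular notification, which is faster but must be rewritten whenever the policy changes.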

Because concurrent programs tend to include more in-the-small policy decisions than sequential ones, and because they tend to rely more heavily on invariants surrounding particular representation choices, classes involving concurrency constructs often turn out to require special attention in order to be readily extensible. This phenomenon is widespread enough to have been given a name, the inheritance anomaly, and is described in more detail in 3.3.3.3.

However, some other programming techniques needlessly restrict extensibility for the sake of performance. These tactics become more questionable as compilers and JVMs improve. For example, dynamic compilation allows many extensible components to be treated as if they are closed at class-loading time, leading to optimizations and specializations that exploit particular run-time contexts more effectively than any programmer could.

1.3.4.3 Documentation

When compositionality is context-dependent, it is vital for intended usage contexts and restrictions surrounding components to be well understood and well documented. When this information is not provided, use, reuse, maintenance, testing, configuration management, system evolution, and related software-engineering concerns are made much more difficult.

Documentation may be used to improve understandability by any of several audiences — other developers using a class as a black-box component, subclass authors, developers who later maintain, modify, or repair code, testers and code reviewers, and system users. Across these audiences, the first goal is to eliminate the need for extensive documentation by minimizing the unexpected, and thus reducing conceptual complexity via:

    Standardization. Using common policies, protocols, and interfaces. For example:

    • Adopting standard design patterns, and referencing books, web pages, or design documents that describe them more fully.

    • Employing standard utility libraries and frameworks.

    • Using standard coding idioms and naming conventions.

    • Reviewing code against standard checklists that enumerate common errors.

    Clarity. Using the simplest, most self-evident code expressions. For example:

    • Using exceptions to advertise checked conditions.

    • Expressing internal restrictions via access qualifiers (such as private).

    • Adopting common default naming and signature conventions, for example that, unless specified otherwise, methods that can block declare that they throw InterruptedException.

    Auxiliary code. Supplying code that demonstrates intended usages. For example:

    • Including sample or recommended usage examples.

    • Providing code snippets that achieve non-obvious effects.

    • Including methods designed to serve as self-tests.
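A small class combining several of these conventions might look as follows (the class is hypothetical; only the conventions themselves come from the text): a method that can block declares InterruptedException, internal restrictions are expressed via private, and a self-test demonstrates intended usage.

```java
// Sketch illustrating the clarity and auxiliary-code conventions above.
class BoundedCounter {
    private long count = 0;            // restriction expressed via private
    private final long max;

    BoundedCounter(long max) { this.max = max; }

    // Convention: a method that can block declares InterruptedException.
    synchronized void awaitIncrement() throws InterruptedException {
        while (count >= max) wait();   // blocks until below the bound
        ++count;
        notifyAll();
    }

    synchronized long value() { return count; }

    // Auxiliary code: a method designed to serve as a self-test.
    static boolean selfTest() throws InterruptedException {
        BoundedCounter c = new BoundedCounter(2);
        c.awaitIncrement();
        return c.value() == 1;
    }
}
```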

After eliminating the need to explain the obvious via documentation, more useful forms of documentation can be used to clarify design decisions. The most critical details can be expressed in a systematic fashion, using semiformal annotations of the forms listed in the following table, which are used and further explained as needed throughout this book.

    PRE     Precondition (not necessarily checked). /** PRE: Caller holds synch lock ...

    WHEN    Guard condition (always checked). /** WHEN: not empty return oldest ...

    POST    Postcondition (normally unchecked). /** POST: Resource r is released...

    OUT     Guaranteed message send (for example a callback). /** OUT: c.process(buff) called after read...

    RELY    Required (normally unchecked) property of other objects or methods. /** RELY: Must be awakened by x.signal()...

    INV     An object constraint true at the start and end of every public method. /** INV: x,y are valid screen coordinates...

    INIT    An object constraint that must hold upon construction. /** INIT: bufferCapacity greater than zero...
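Applied to a hypothetical bounded buffer (the class itself is invented for illustration), several of these annotations appear as doc comments on the members they constrain:

```java
// Sketch: WHEN, POST, and INIT annotations on a simple bounded buffer.
class Buffer {
    private final Object[] items;
    private int count = 0, takeIndex = 0, putIndex = 0;

    /** INIT: capacity greater than zero */
    Buffer(int capacity) {
        if (capacity <= 0) throw new IllegalArgumentException();
        items = new Object[capacity];
    }

    /** WHEN: not full, insert x */
    synchronized void put(Object x) throws InterruptedException {
        while (count == items.length) wait();   // guard is always checked
        items[putIndex] = x;
        putIndex = (putIndex + 1) % items.length;
        ++count;
        notifyAll();
    }

    /** WHEN: not empty, return oldest
        POST: count decreased by one */
    synchronized Object take() throws InterruptedException {
        while (count == 0) wait();
        Object x = items[takeIndex];
        items[takeIndex] = null;
        takeIndex = (takeIndex + 1) % items.length;
        --count;
        notifyAll();
        return x;
    }
}
```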


Additional, less structured documentation can be used to explain non-obvious constraints, contextual limitations, assumptions, and design decisions that impact use in a concurrent environment. It is impossible to provide a complete listing of constructions requiring this kind of documentation, but typical cases include:

  • High-level design information about state and method constraints.

  • Known safety limitations due to lack of locking in situations that would require it.

  • The fact that a method may indefinitely block waiting for a condition, event, or resource.

  • Methods designed to be called only from other methods, perhaps those in other classes.

This book, like most others, cannot serve as an especially good model for such documentation practices since most of these matters are discussed in the text rather than as sample code documentation.

1.3.5 Further Readings

Accounts of high-level object-oriented software analysis and design that cover at least some concurrency issues include:

    Atkinson, Colin. Object-Oriented Reuse, Concurrency and Distribution, Addison-Wesley, 1991.

    Booch, Grady. Object-Oriented Analysis and Design with Applications, Benjamin Cummings, 1994.

    Buhr, Ray J. A., and Ronald Casselman. Use Case Maps for Object-Oriented Systems, Prentice Hall, 1996. Buhr and Casselman generalize timethread diagrams similar to those used in this book to Use Case Maps.

    Cook, Steve, and John Daniels. Designing Object Systems: Object-Oriented Modelling With Syntropy, Prentice Hall, 1994.

    de Champeaux, Dennis, Doug Lea, and Penelope Faure. Object-Oriented System Development, Addison-Wesley, 1993.

    D'Souza, Desmond, and Alan Wills. Objects, Components, and Frameworks with UML, Addison-Wesley, 1999.

    Reenskaug, Trygve. Working with Objects, Prentice Hall, 1995.

    Rumbaugh, James, Michael Blaha, William Premerlani, Frederick Eddy, and William Lorensen. Object-Oriented Modeling and Design, Prentice Hall, 1991.

Accounts of concurrent software specification, analysis, design, and verification include:

    Apt, Krzysztof, and Ernst-Rüdiger Olderog. Verification of Sequential and Concurrent Programs, Springer-Verlag, 1997.

    Carriero, Nicholas, and David Gelernter. How to Write Parallel Programs, MIT Press, 1990.

    Chandy, K. Mani, and Jayadev Misra. Parallel Program Design, Addison-Wesley, 1989.

    Jackson, Michael. Principles of Program Design, Academic Press, 1975.

    Jensen, Kurt, and Grzegorz Rozenberg (eds.). High-level Petri Nets: Theory and Application, Springer-Verlag, 1991.

    Lamport, Leslie. The Temporal Logic of Actions, SRC Research Report 79, Digital Equipment Corp, 1991.

    Leveson, Nancy. Safeware: System Safety and Computers, Addison-Wesley, 1995.

    Manna, Zohar, and Amir Pnueli. The Temporal Logic of Reactive and Concurrent Systems, Springer-Verlag, 1991.

Several specialized fields of software development rely heavily on concurrency. For example, many simulation systems, telecommunications systems, and multimedia systems are highly multithreaded. While basic concurrency techniques form much of the basis for the design of such systems, this book stops short of describing large-scale software architectures or specialized programming techniques associated with particular concurrent applications. See, for example:

    Fishwick, Paul. Simulation Model Design and Execution, Prentice Hall, 1995.

    Gibbs, Simon, and Dennis Tsichritzis. Multimedia Programming, Addison-Wesley, 1994.

    Watkins, Kevin. Discrete Event Simulation in C, McGraw-Hill, 1993.

Technical issues are only one aspect of concurrent software development, which also entails testing, organization, management, human factors, maintenance, tools, and engineering discipline. For an introduction to basic engineering methods that can be applied to both everyday programming and larger efforts, see:

    Humphrey, Watts. A Discipline for Software Engineering, Addison-Wesley, 1995.

For a completely different perspective, see:

    Beck, Kent. Extreme Programming Explained: Embrace Change, Addison-Wesley, 1999.

For more information about integrating performance concerns into software engineering efforts, see for example:

    Jain, Raj. The Art of Computer Systems Performance Analysis, Wiley, 1991.

Further distinctions between open and closed systems are discussed in:

    Wegner, Peter. "Why Interaction Is More Powerful Than Algorithms", Communications of the ACM, May 1997.
