
Object Orientation

Much has been written about the object paradigm in the past decade. It has in fact become clear that object orientation has emerged as the dominant paradigm of software engineering. With its roots in the world of program construction, it has more recently become pervasive throughout the software process. We now have object-oriented analysis techniques; design methods; techniques for specification, formal or otherwise; verification and validation; metrics; and much more. There are now even full software process definitions and frameworks such as OPEN (Graham et al., 1997) and RUP (Jacobson et al., 1999) that take an object-oriented perspective. But what is object orientation and why is it important?

Understanding the Object Paradigm

When asked to describe the essence of object orientation, many would present concepts such as classification, inheritance, polymorphic behavior, and reuse. Interesting and important as these concepts might be, they are secondary by-products of object orientation. The essence of object orientation lies elsewhere and relates to how we perceive and model systems, whether they are of a problem, a solution, or an implementation of a solution.

Creating models is therefore central to the construction of object-oriented systems. Most simply stated, a model is a mapping from one domain into another (Ross & Wright, 1992). This means that we cognitively translate what we see as relevant from one environment into another, given a particular purpose and abstraction level. For example, if our purpose is to identify how many chairs there are in a room, then a chair observed in the room may be mapped into the domain of positive integer numbers. If our purpose is to know the colors of furniture in this room then the mapping may be into a constructed set of colors (Allen & Yen, 1979).
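The chapter's own room example can be sketched directly. This is a minimal illustration, with the room contents invented for the purpose: the same situation yields two different models depending on the question we intend the model to answer.

```python
# Two models of the same room, distinguished only by purpose
# (the room contents below are invented for illustration):
room = [{"kind": "chair", "color": "red"},
        {"kind": "chair", "color": "blue"},
        {"kind": "table", "color": "oak"}]

# Purpose 1: how many chairs? Map the room into the positive integers.
chair_count = sum(1 for item in room if item["kind"] == "chair")

# Purpose 2: what colors of furniture? Map the room into a set of colors.
colors = {item["color"] for item in room}

print(chair_count)                         # 2
print(colors == {"red", "blue", "oak"})    # True
```

Neither mapping is more "correct"; each retains only what is relevant to its purpose and abstraction level.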

Speaking from a cognitive psychology standpoint, a model is what is captured and communicated from our understanding of the situation presented in one domain (often the real world) into another, using a particular language of communication, or a modeling language (Eysenck & Keane, 1990). On this basis, most models (those that attempt to map what is externally perceived through our energy-activated sensory organs) are described as at least a double mapping. The first mapping is from the environment (the real world) into our cognition; that is, forming a mental model (this is also called perception). The second mapping involves using our cognition as a means of creating another model of what it has perceived, this time for the purpose of communicating our perception to others (e.g., a blueprint or an object diagram when used as a medium between the designer and the implementer) or to ourselves at a later time (e.g., a note in our scratch book, or a diagram to remind us of what transpired at the client interview; Eysenck & Keane, 1990; Martin & Odell, 1995). Sometimes, however, the first of these mappings might be absent (e.g., when we attempt to create a model of a Gorgon).

Modeling is therefore closely related to the activities of perception and communication. Consistent with this, and a large number of other definitions (Bruner, 1957; Checkland, 1981; Gibson, 1966; Gregory, 1980; Neisser, 1976; Preece et al., 1995), modeling may be defined as the acquisition, retention, and communication of those views of the situation that are, at the time, relevant to the observer.

In relation to the various views of "reality" previously discussed, it has also been observed throughout the centuries (Aristotle, 1924; Descartes, 1911; Plato, 1974, 1989) that knowledge of structures, of processes, and of the sequences in which events occur is necessary for comprehending the real world, which is in turn a prerequisite for modeling any aspect of it. This is of particular relevance when there is an absence of a priori knowledge of the future utility of the model to be constructed. An example may clarify the intention here.

Assume that a model of a system is being created in such a way that we would use it, in the future, to answer questions of this type: Of what is X composed? Given this a priori knowledge of the future use of the model, it is probably sufficient to provide a model that concentrates principally on things that exist in the system (objects) and their interrelationship. These are usually called structural models or object models. If we are interested in how Y is accomplished in this system, then a model that concentrates on inputs, processes, and outputs might be the one of choice. These models are known as process models or more correctly as transformational models. An a priori interest in knowing when or in what sequence we do things would likely lead to a model of states, events, and sequences, which is usually called a dynamic or sequential model.

However, in the absence of any a priori knowledge of the intended utility of a model, the modeler has no choice but to provide all three models and clear evidence of their mutual interrelation. This is what we do when we become observers of some situation. Our cognition registers not only objects, but also changes and sequences.

Traditional modeling languages and paradigms of software engineering usually concentrated on one such aspect and were, as such, limited in their ability to yield general models. For example, a data model principally captures the structure of the situation and lacks transformational or causal detail. A flowchart captures the transformational details but lacks important structural and causal detail, and a state diagram or state machine captures sequential information but lacks structure and transformation details.

Object technology combines these three perspectives. In fact, this ability to provide a rich model that does not depend on any single modeling view is one of the greatest strengths of object orientation. By capturing and putting together the relevant details of the situation from all three perspectives, object orientation "encapsulates" reality, as we perceive it, into modules. In other words, the object-oriented paradigm allows us to view the system in terms of the elements our perception is most comfortable with, because our perception has also absorbed, encapsulated, and modeled the situation using the same abstractions. This in turn means that the proximity between an object-oriented model and how we actually perceive the world (the system) is maximized. This is the concept of encapsulation, which allows us to perceive objects in the world from the perspective of a given relevance and thus identify all those objects that share the same characteristics. This leads us to the concept of types: those encapsulations that, from a particular point of relevance, have the same or similar characteristics can be perceived to belong to the same type. Each member of a type is called an object.
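As a minimal sketch of how one encapsulated type can carry all three perspectives at once (the class and its details are invented for illustration, not taken from the text):

```python
class Door:
    """One type: an encapsulation combining all three modeling perspectives."""

    def __init__(self, material):
        # Structural view: what the object is, what it is made of.
        self.material = material
        # Dynamic/sequential view: which state the object is currently in.
        self.state = "closed"

    def open(self):
        # Transformational view: what the object does; the guard encodes
        # the sequential view (opening is only meaningful when closed).
        if self.state == "closed":
            self.state = "open"

# Each member of the type is an object.
front_door = Door("oak")
back_door = Door("steel")
front_door.open()
print(front_door.state)   # open
print(back_door.state)    # closed
```

Structure (attributes), transformation (methods), and sequence (states and guarded transitions) live together behind one object boundary, which is the encapsulation the text describes.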

For a system (the world) to operate, its objects have to establish relationships with each other and communicate. This is also the case in an object-oriented model of any reality. This means that as part of the causal model of a situation we identify and decide on those interactions of relevance that give the system of our intent a particular meaning. That is, we extract from the infinite number of interactions possible between objects only those that we perceive as relevant and model those, as we did with the objects themselves. This means that a given object would react to a request by another object that calls on its service. This is the message-passing model, and along with encapsulation, it is the very essence of object orientation.

In fact, we can perceive object orientation as the implementation of the client–server model at the micro (individual object) level. This means that Object A, which has as part of it a service x, might be called by Object B, which needs the service x of Object A. Object B is the client and Object A is the server.
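The Object A / Object B arrangement above can be rendered as a short sketch (class and method names are illustrative only):

```python
class ServerA:
    """Object A: offers service x as part of its interface."""
    def x(self):
        return "result of service x"

class ClientB:
    """Object B: needs the service x of Object A."""
    def __init__(self, server):
        self._server = server

    def work(self):
        # Message passing: B sends the message "x" to A and uses the reply.
        return self._server.x().upper()

b = ClientB(ServerA())
print(b.work())   # RESULT OF SERVICE X
```

The client knows only the server's interface, not its internals; this is the client–server relationship at the micro (individual object) level.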

A powerful paradigm, object orientation brings with it both opportunities and challenges for the software engineering practitioner. The following is a discussion of some of these opportunities.

Comprehensive Model Generation

A well-developed object-oriented model is multidimensional. It depicts the encapsulated and combined structural, transformational, and causal–sequential aspects of the situation being modeled. Given this multidimensionality, it can be argued that an object-oriented model of a particular situation contains more information than one developed along only one of the three dimensions previously mentioned. It stands to reason, therefore, that given the same level of granularity of view and accuracy, the object-oriented model is capable of answering more questions about the situation modeled than an Entity-Relationship (ER) diagram, a Data Flow Diagram (DFD), a flowchart, or a state machine representation individually. In general, and particularly in terms of the objectives of this book, this quality represents a great advantage: if system functionality is to be maximally modeled, then omission of any one of the three dimensions detracts from the information content of the model. We will see later that such an omission can be classified as a defect and is thus of interest to defect management.

This, however, does not mean that object-oriented products are necessarily and invariably of higher quality than their traditionally developed counterparts just because we can be more comprehensive through the object-oriented process. It does mean that taking an object-oriented approach makes more information available up front and eases the communication of that information. This in turn tends to promote clarity and observability in the project, which can contribute positively to product quality.

Seamlessness

Another great advantage of object orientation is seamlessness. In developing software systems, as mentioned earlier, it is necessary to go through three interrelated activities. The observer or analyst must develop an understanding of the problem situation that is in need of improvement; such an understanding, when acquired, must then be recorded and communicated. A model of the problem situation therefore must be constructed. Once the relevant professionals in the software development process are made familiar with at least a sufficiently large portion of the problem situation (often using the model obtained), it is possible to propose solutions to the problem. The mental images arrived at as a workable solution must then be modeled so they can be communicated to those who have to build the product. This is called a design model. Ultimately, this design has to be constructed in program code. This is a third model, a model of the solution that is executable. Applying the object-oriented paradigm to the development of software systems, we can use the same modeling elements to develop all three of these models. In other words, we use the same abstractions, the same vocabulary, and the same grammar to tell each of the three stories. This is the language of objects, classes, inheritance, aggregation, states, transitions, message passing, and the like. Given the comprehensive nature and multidimensionality of the paradigm, as already discussed, the same language can model all three situations. After all, a model is a model.

Permitting More Complex Systems to Be Constructed

The power of abstraction and the multidimensionality inherent in object orientation have allowed us to tackle more complex and convoluted problem situations than had been possible before. In the absence of object technology, many of the software applications utilized today, including graphical user interfaces and the World Wide Web, would be out of reach in terms of our ability to provide adequate design models to realize them.

Promoting Distribution

In the object paradigm we model the world in terms of communicating objects. This, as mentioned before, implies the implementation of the client–server model at the very low level of the individual object. This capability, in conjunction with the concept of encapsulation, allows for construction of modules at multiple levels of abstraction (object and class, packages, packages of packages, subsystems, etc.) while still maintaining the message-passing paradigm between these encapsulated abstractions. It is easy to see how such organization would naturally support distribution. In fact, given such an architecture, placing these elements in different memory spaces (e.g., different machines) becomes a matter of providing a mechanism by which client objects can locate server objects in remote memory spaces. In a nonencapsulated system, providing for distribution is far more difficult.
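One hypothetical way to sketch such a location mechanism is a client-side proxy that looks up the server by name and forwards every message to it. The registry and all class names here are invented for illustration; a real distributed system would route the forwarded call across a process or machine boundary.

```python
class Registry:
    """Hypothetical name service: maps service names to server objects.
    In a distributed system, lookup could return a network stub instead."""
    def __init__(self):
        self._servers = {}

    def register(self, name, server):
        self._servers[name] = server

    def lookup(self, name):
        return self._servers[name]

class Proxy:
    """Client-side stand-in: forwards every message to the located server."""
    def __init__(self, registry, name):
        self._registry = registry
        self._name = name

    def __getattr__(self, message):
        server = self._registry.lookup(self._name)
        # In a real system this is where the call would cross the network.
        return getattr(server, message)

class PriceServer:
    """An ordinary server object, unaware it may be accessed remotely."""
    def price(self, item):
        return {"widget": 10}.get(item, 0)

registry = Registry()
registry.register("prices", PriceServer())
client_view = Proxy(registry, "prices")
print(client_view.price("widget"))   # 10
```

Because the client holds only the proxy, the server's actual location is an implementation detail; encapsulation is what makes this substitution invisible.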

Promoting Reuse

Much has been written about the potential afforded by object orientation in terms of reuse. Understanding the potential for reuse at the code level is easy. Component objects are one form of such reuse. Components are specific, often already compiled individual objects (instances of classes) that are designed and implemented to deliver a very specific service as described by their contracts. They are the software equivalent of the integrated circuit (IC) chip. They may be put together, possibly with some additional new objects, to compose an application.

Abstract, often generic, extendible classes arranged in libraries are tools or building blocks to construct an application. They are usually not application oriented (i.e., interrelated to compose an application), but instead are utility oriented (i.e., interrelated, usually hierarchically, to be used across a number of application domains to address a specific type of need). Class libraries are usually passive; they are just a set of classes interrelated through inheritance without any control flow or message passing between them. They do not do anything, but they can be used to build things that do things. Java foundation libraries, graphical user interface (GUI) libraries, and so on, are good examples.
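A passive class library of the kind described might look like the following sketch (the hierarchy is invented for illustration): the classes are related only through inheritance, with no control flow or message passing among them until an application composes them.

```python
class Collection:
    """Abstract root of a tiny, passive utility hierarchy."""
    def __init__(self):
        self._items = []

    def add(self, item):
        self._items.append(item)

    def size(self):
        return len(self._items)

class Stack(Collection):
    """Related to the root only through inheritance; the library itself
    'does not do anything' until an application drives it."""
    def pop(self):
        return self._items.pop()

class Queue(Collection):
    def dequeue(self):
        return self._items.pop(0)

# An application supplies the control flow the library lacks:
s = Stack()
s.add("a"); s.add("b")
print(s.pop())        # b

q = Queue()
q.add(1); q.add(2)
print(q.dequeue())    # 1
```

The hierarchy is utility oriented rather than application oriented: `Stack` and `Queue` serve many application domains, and each reuses `Collection` without ever sending it a message at library level.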

It is possible in object orientation to extend the potential of reuse beyond just that of code. Analysis and design patterns and application and design frameworks are all arguably examples of non-code reuse opportunities afforded by the adoption of this paradigm.

It is important to note that reuse, particularly code reuse, has implications for defect management. Components and class libraries are written by others, often in software process situations vastly different from the one used in your organization. Defect management standards other than yours may have been applied in their construction. In short, they may not be at the same level of quality as your product. If their quality exceeds yours, you are, in general, winning; if it lags yours, you must be careful. Additionally, how you interface such products with your home-written software becomes important. For software to be reliably interfaced with the host system it has to have certain characteristics (e.g., a clearly and correctly defined contract). Such characteristics must be present if our testing of the adequacy, robustness, and correctness of the interface is to be effective.

Finally, reusable software obtained from a third party might often be used by many organizations other than yours. This affords the opportunity for you to become aware of some of the product's problems and defects without having to discover them yourself. This would have a positive impact on your defect management effort.

On the other hand, some of the challenges that come with the adoption of this paradigm are covered in the following sections.

A Change in Process Is Required

Although the activities to be completed to produce a software system are largely the same irrespective of the paradigm used, how we conduct them is remarkably different. This means that the tasks and techniques utilized are often not those used when producing software in the traditional (nonobject-oriented) fashion. The realization that different tasks and techniques are required is important, but the issue goes deeper than this. There is also a corresponding shift in the proportion of time and effort expended on the different activities. There will be changes on all three dimensions of methodology, technology, and context, and the process will be remarkably different.

A Change in Attitude Is Necessary

With any change in paradigm comes a corresponding necessity to change certain attitudes. The contextual changes necessary in the process are particularly remarkable. There must be an understanding, for instance, that there is likely to be proportionally more time spent on analysis and design and less on coding than in a traditional development environment, particularly initially. There must be a change in attitude in terms of how software engineering staff productivity is evaluated and how they are rewarded. It is important to realize that Rome was not built in a day; it will be some time before the new paradigm is well established and yields positive results of the magnitude expected. New roles are needed (e.g., reuse manager) and some roles might be eliminated.

Resource Issues

An important challenge in migrating to the new object paradigm is that of human and technological resources. Obtaining skilled personnel, appropriate tools and technologies, and even training and consulting in the area can be a major challenge.

Defect Management

Abstraction, modularity, and message passing, which are the basic tenets of object orientation, create both opportunities and challenges in defect management of object-oriented systems. We have already alluded to some of the opportunities in preceding sections. In this section I deal with some of the more significant challenges.

The implication of many studies, books, and other works on object technology has been that use of the paradigm during analysis and design assists with comprehensiveness and completeness and thus drastically reduces the potential for defects of omission in the specification and design. Defects of commission also seem to be positively impacted. Thus, it appears that as far as defect management is concerned, the object paradigm has a positive impact on the defect situation during the analysis and design phases and on the defect content of analysis and design models. However, research based on data collected by Jones (1997) from several hundred projects suggests that object-oriented analysis and design have a higher defect potential, and that it is harder to identify and remove defects from object-oriented analysis and design models than from traditionally composed design artifacts and models. This finding is counterintuitive and may be related to differences in how we identify a defect in an object-oriented model as opposed to a non-object-oriented one. It may also reflect differences in the overall maturity of the extant techniques of design defect identification between the two paradigms, and possibly the level of experience of the people who completed or evaluated the artifacts in the study. Nevertheless, if independently confirmed using robust empirical research techniques, this finding could somewhat alter the way we look at object technology.

Specifically, abstraction and modularity assist with clarity and visibility. Encapsulation, the way we arrive at abstraction and modularity in object-oriented systems, thus helps avoid global declarations, amorphous programming, and so on. At the same time, however, it introduces challenges of its own. For example, as a consequence of encapsulation, it becomes difficult to examine the current state of an object, a task that must be easy to perform during testing. Additionally, because control is distributed in an object-oriented system, even if the state of an individual object can be obtained and examined, the overall system state, being distributed, is hard to examine. Inheritance, a useful by-product of encapsulation, and particularly multiple inheritance, also creates opportunities for defects to escape detection and for the introduction of defects that are hard to test out. Polymorphic behavior, dynamic binding, and the proliferation of interfaces can all contribute to problems in testing object-oriented code.
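The state-visibility problem can be seen in a small sketch (the class is invented for illustration): when state is hidden and no accessor exists, a tester cannot read it directly and must fall back on asserting observable behavior.

```python
class Account:
    """State is deliberately hidden; there is no getter for the balance."""
    def __init__(self):
        self.__balance = 0   # name-mangled by Python to _Account__balance

    def deposit(self, amount):
        self.__balance += amount

    def withdraw(self, amount):
        if amount <= self.__balance:
            self.__balance -= amount
            return True
        return False

acct = Account()
acct.deposit(50)

# The tester cannot simply read the encapsulated state...
print(hasattr(acct, "__balance"))   # False
# ...so tests must probe the object through its observable behavior:
print(acct.withdraw(60))            # False (insufficient funds)
print(acct.withdraw(40))            # True
# (Reaching in via the mangled name, acct._Account__balance, is possible
#  in Python but is brittle and defeats the encapsulation under test.)
```

This is one concrete form of the tension the text describes: the same boundary that protects the object from misuse also hides the state a tester would like to examine, unless a deliberate inspection hook is designed in.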

The first three of these challenges are management issues; the last is both technical and managerial in nature. It is this aspect that is the focus of our attention in this book.
