Technical Innovation

Prior to the OO paradigm all was not chaos, even in the Dark Ages of programming. The academicians were busily adapting mathematics to practical use in the computing environment. During their cogitations they provided a mathematical lingua franca for the underpinnings of software development: a collection of theories and models that were particularly well suited to the computing environment. Today this mathematics is the basis of almost everything we do in software development.

This book takes an engineering approach to software development, so it is not going to trot out a lot of theorems. It is, however, worth a couple of subsections to briefly describe at the megathinker level the major relevant mathematical elements and how they influence software development. That's because those ideas still underlie the OO paradigm, albeit in a very well-disguised manner.

The Turing Machine

In the 1930s Alan Turing developed a detailed mathematical model of a theoretical calculating machine. The model was elegantly simple. It assumed all calculating processes were divided into fundamental, individual steps. Those steps could be expressed as a small set of atomic instructions to his machine. Once so divided any calculating process could be described in the following terms:

  • Sequence: A sequence of one or more instructions located consecutively in the program
  • Branch: A shift in the execution of the program from a sequence in one program location to another sequence at a different location in the program based upon the truth of some boolean condition
  • Iteration: A combination of sequence and branch that allowed a sequence to be repeated indefinitely until some boolean condition became true

The mathematics became heavily involved with precise definitions of things like instruction and condition, and then proving that a Turing machine with a finite instruction set could, indeed, successfully perform any arbitrarily complex calculation. To do so Turing had to examine various strategies for combining and nesting the basic operations.
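
As a rough illustration, here is a minimal sketch in Python (not from the book) of a Turing-style machine: a tape, a head, and a finite table of atomic rules. The bit-flipping rule table is purely illustrative; the loop shows sequence, branch, and iteration at work.

# A minimal sketch of a Turing-style machine: a tape, a head, and a finite table
# of atomic instructions. The loop shows the three constructs named above:
# sequence (consecutive steps), branch (the rule chosen depends on the symbol
# read), and iteration (repeat until HALT).

def run(tape, rules, state="START", head=0, max_steps=1000):
    tape = dict(enumerate(tape))                    # sparse tape, default blank
    for _ in range(max_steps):                      # iteration
        if state == "HALT":
            return [tape.get(i, "_") for i in range(min(tape), max(tape) + 1)]
        symbol = tape.get(head, "_")
        write, move, state = rules[(state, symbol)] # branch on (state, symbol)
        tape[head] = write                          # sequence of atomic steps
        head += {"L": -1, "R": 1}[move]
    raise RuntimeError("did not halt")

# Illustrative rule table: flip every 1 to 0 and vice versa, halt at the first blank.
rules = {
    ("START", "0"): ("1", "R", "START"),
    ("START", "1"): ("0", "R", "START"),
    ("START", "_"): ("_", "R", "HALT"),
}
print(run(list("0110"), rules))                     # ['1', '0', '0', '1', '_']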

The Turing machine is the most famous, but there has been a succession of models of computation developed over the years. These models belong to the general class of hardware computational models. It is important to realize that such models describe how a computer works, not how software is designed. In reality, almost the entire computing space is based upon various hardware computational models. Even exotic mathematical constructs like 3GL type systems have their roots in hardware computational models.

The OO paradigm represents the first major break away from the hardware computational models in creating an approach to software design that is, at least in part, independent of the hardware computational models.13 That is not to say, though, that the OO paradigm ignores those models. Ultimately the problem solution has to be executed on Turing hardware, so the OO paradigm necessarily has to provide an unambiguous mapping into the hardware computational models.

Languages and Methodologies

As suggested, programs are written in a variety of languages. Initially, programs were encoded as binary instructions written directly into the hardware: plug boards connected power rails to form logic 1 and logic 0, one bit at a time. Instructions to the Arithmetic Logic Unit (ALU) consisted of binary words of a fixed size written directly into hardware registers. This was complicated by the fact that the hardware commonly subdivided those words into bit subsets called fields.

The first major advance in software development was the introduction of BAL (Basic Assembly Language).14 As bigger problems were solved, software programs became larger, but people don't think well in binary. BAL enabled hardware operations to be described symbolically with mnemonics like ADD, MOV, and JMP. BAL also enabled ALU registers to be represented symbolically as R1, R2, . . ., Rn. Finally, BAL enabled the developer to assign mnemonic names to memory addresses so that data could be named in a meaningful way. All these things were a boon to software development because they allowed the developer to think about the solution in more abstract terms than endless strings of 1s and 0s.

However, for BAL to work, all those mnemonics had to be translated into the binary view that the hardware understood. Thus was introduced the powerful notion of a program, called a compiler, to process other programs by converting them from one representation to another. (Primitive operating systems were already around to manage the housekeeping around running programs through the ALU, but they didn't significantly modify those programs.)
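
To make that idea concrete, here is a minimal sketch in Python of the kind of translation such a program performs; the opcodes, registers, and field layout are invented for illustration and do not correspond to any real ALU.

# A minimal sketch of translating symbolic instructions into binary words.
# Mnemonics and symbolic registers become fixed-size words subdivided into
# fields (opcode, destination, source). All encodings here are hypothetical.

OPCODES = {"ADD": 0b0001, "MOV": 0b0010, "JMP": 0b0011}
REGISTERS = {"R1": 0b001, "R2": 0b010, "R3": 0b011}

def assemble(line):
    """Translate one symbolic instruction, e.g. 'ADD R1, R2', into a 10-bit word."""
    mnemonic, *operands = line.replace(",", " ").split()
    word = OPCODES[mnemonic] << 6                   # opcode field: bits 9-6
    field_offsets = [3, 0]                          # destination and source fields
    for operand, shift in zip(operands, field_offsets):
        word |= REGISTERS[operand] << shift
    return word

for line in ["MOV R1, R2", "ADD R1, R3", "JMP R2"]:
    print(f"{line:<12} -> {assemble(line):010b}")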

While all this made life much, much easier for the early software developers, there were still some problems. BAL was necessarily tied to the specific ALU instruction set, and by the 1950s hardware vendors were churning out a wide variety of computers with different instruction sets. What was needed was a more abstract description of the solution that was independent of a particular instruction set. This led to the introduction of FORTRAN (FORmula TRANslator) and COBOL (COmmon Business-Oriented Language) in the late 1950s. Just as BAL raised the level of abstraction of software programs above binary expression, this next generation of languages raised the level of abstraction above the ALU itself.

These 3GLs introduced a number of new abstract concepts for computing. The primary ones that characterized that generation of languages are listed below; a short sketch follows the list.

  • Procedures. Procedures were blocks of instructions that could be invoked from anywhere in the program. When the block completed, control would return to the point in the program from which the procedure was invoked. This enabled a suite of instructions to be reused in different contexts, which was quite common in large applications.
  • Procedural message passing. However, the data that was modified for each invocation context of a procedure needed to be different. Procedure parameters enabled a particular context to provide data that was unique to that context. Then such parametric data became a unique message sent to the procedure by an individual invocation context. This went a long way toward decoupling what the procedure did with the data from the source context of the data.
  • Types. To provide that decoupling, an abstract mechanism was needed for describing generally what the data is without the detailed semantics of the source context. Types allowed the data to be defined for the procedure very generically (e.g., as an integer value). Thus a procedure to compute statistics on a bunch of samples could do so without knowing what the samples actually were; it only needed to know they were integers to compute things like means and t-statistics. Thus, types at the 3GL level initially defined the implementation of data rather than the problem semantics of the data.
  • Stack-based scope. Since procedures were called from other procedures, there was an inherent hierarchical structure for sequences of operations. Procedures needed to know how to get back to where they came from, and the notion of a call stack was born. The call stack provided a way to keep track of where you came from so that nested procedure calls could be unwound as they completed their tasks.
  • Procedural block structuring. Procedures provided very useful building blocks for isolating functionality. Such modularity enabled a divide-and-conquer approach to managing complexity. The 3GLs extended this idea to provide modularity within procedures. This supported hard-won lessons like programs being more robust when groups of instructions had single entry and exit points.
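
The sketch below uses modern Python stand-ins, purely for illustration, to show how these ideas fit together: a reusable, typed procedure; parametric data passed from different invocation contexts; and the call stack that unwinds nested calls.

# A minimal sketch of the 3GL ideas above, using Python stand-ins.

from statistics import mean

def sample_mean(samples: list[int]) -> float:
    """A typed procedure: it needs to know only that the samples are integers,
    not what they measure (temperatures, order counts, ...)."""
    return mean(samples)                 # single exit point

def weekly_report() -> float:
    # One invocation context passes its own data as the "message".
    return sample_mean([42, 40, 45, 39])

def sensor_check() -> float:
    # A different context reuses the same procedure with different data.
    return sample_mean([17, 18, 17])

# Each call pushes a frame on the call stack; returning pops it, so control
# unwinds back to the caller that invoked the procedure.
print(weekly_report(), sensor_check())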

Thus the 1960s was a time of profound revolution as software development moved from BAL to 3GLs. New concepts like procedural message passing and stack-based scope required a different view of construction than the only slightly abstracted Turing view of computing in BAL. Into the breach stepped the methodologists. As soon as 3GLs became widely accepted, the need for training developers in using them properly arose.15 By the early 1970s it was clear that the larger programs enabled by 3GLs needed to be properly designed. That gave birth to a host of construction approaches under the general umbrella of Structured Development, as discussed previously.

Basically, those approaches recognized that solving large problems requires something more than just Assembly or 3GL coding skills. Somehow all those abstract constructs needed to play together properly to solve the customer's overall problem. Not the least of the problems was the recognition that programs needed to be maintainable. In addition, large programs needed a structural skeleton and, once in place, that skeleton tended to be very difficult to modify because of the sheer number of disparate constructs hanging off it. So the methodologies quickly evolved to a more abstract level than 3GL coding, and the developer started looking at the Big Picture as a separate design activity from 3GL coding.

Programming languages are not the only representations of executables. There have been a large number of pictorial notations used to describe programs. It is no accident that every design notation from early HIPO charts16 to today's Unified Modeling Language (UML) is often referred to as a graphical notation. Such notations can express the abstract concepts needed for design very efficiently.

Most of the readers of this book will have some software development experience, so the previous discussion of languages and methodologies will probably be quite familiar. However, it was worth the review because there are several important things the rest of the book builds upon.

  • A problem solution expressed in any language above binary machine language is a more abstract representation of the machine language that will actually execute on the Turing Machine. Another word for abstract representations of something more concrete is model. Thus a program in C is a model of the problem solution that will ultimately execute on the computer in the same sense as a UML OOA model is.
  • Software development is continuously evolving towards more abstract ways of expressing problem solutions.
  • Because of the ubiquitous presence of mathematics in the computing environment, the computing space is deterministic. For example, compilers, linkers, and loaders automated a lot of drudge work for early BAL and 3GL developers because mapping things like symbolic names to binary addresses is deterministic, even for complex multitasking environments.

Sets and Graphs

It is beyond the scope of this book to describe how ubiquitous various branches of mathematics, especially set and graph theory, are in the computing space. However, a few implications of that ubiquity are very important to the OO paradigm and MBD in particular.

  • The computing space is deterministic. That's because some kind of rigorous mathematics underlies everything in it, however well disguised that might be. This has enormous implications for automation.
  • The computing space is disciplined. Everywhere we turn we are constrained by rules like those for language syntax, data structures, identity, GUI design, and so forth. In reality there is very little freedom in implementing problem solutions once we have defined the specific computing environment. Creativity in software development lies in designing solutions, not implementing them.
  • The computing space is ultimately based on hardware computational models. However abstract our solution constructs are, the underlying mathematics ensures that they will all map unambiguously into hardware computation.
  • The first methodological step in any software construction is expressing the ill-defined, imprecise, ambiguous notions of non-Turing customer spaces in notations that are complete, precise, and unambiguous with respect to the computing space. The mathematics underlying those notations that enable rigorous expression must necessarily be the same mathematics that underlies the computing space.

The methodologies of SD were not simply encyclopedias of a potpourri of hard-won lessons learned in the development trenches prior to 1970. Instead, they became cohesive philosophies of software construction that were based on solid mathematics. Every software construction methodology that has stood the test of time sits on that sort of mathematical platform, however informal its exposition might be and however simple its notation might seem. This was a two-way street; computing has been the driving force in developing entirely new branches of mathematics, from relational theory through type theory to exotic topological theories for non-Turing computation.

Normal Form (NF)

One of the most useful tools to evolve concurrently with SD was a particular branch of general set theory that underlies data storage and the static description of programs. That is, it formalizes and constrains the way data is defined, modified, and related to other data. E. F. Codd first defined it in the late '60s to describe relational databases.17

Codd's Relational Data Model (RDM) has since been incorporated into both the OO paradigm and extensions of SD. For example, the descriptor attribute associated with knowledge in OO classes is a term Codd introduced in 1970. A suitably arcane branch of mathematics with much broader scope has since been developed, called relational theory. The OO paradigm is based on that broader view.

We can avoid the mathematics of the set theory that forms the basis of NF here and provide the executive summary view because all one needs to know to develop good applications are the implications of the theory. Though NF is more broadly applied, it is most easily understood in terms of tables. The table itself represents an n-ary relation of table entities (the rows) that each share the same n attributes (the columns). That is, the table describes a set of entities that share characterization in terms of the attributes. Each attribute identifies a particular semantic domain of possible values. Each table entity (a row or tuple) has a unique identity and a set of specific values (the table cell values) for the attributes (see Figure 1-3).

Figure 1-3 Mapping of problem space entities to an n-ary relation. Each identifiable entity becomes a tuple in the relation. Each relation captures a set of unique values of characteristics that are common to each entity.

In more practical terms, a column attribute represents a particular data semantic that is shared by every row in the table; it describes what the data in the cells of that column is. The cells then represent specific values for that semantic. Then each row represents a set of attribute values that is uniquely associated with the entity. The uniqueness is tied to identity; each row must have a unique identity that distinguishes it from other rows, and the attribute values must depend on that identity. In Figure 1-3 that identity is the name of a person.

In the more restrictive relational data model that applies to databases, some subset of one or more embedded attributes comprises a unique identifier for the table row, and no other row may have exactly the same values for all the identifier attributes. The identifier attributes are commonly known collectively as a key. Other attributes that are not part of the key may have duplicated values across the rows, either singly or in combination. But the specific values in a given row depend explicitly on the row key and only on the row key.
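
Here is a minimal sketch in Python of such a relation, using the person-name identity of Figure 1-3; the attributes and values are invented for illustration.

# A minimal sketch of an n-ary relation as rows of shared attributes.
# The key (here, name) must be unique, and every other attribute value
# depends on that key and only on that key.

from typing import NamedTuple

class Person(NamedTuple):          # the shared attributes (columns)
    name: str                      # identity / key
    birth_date: str
    height_cm: int

# The relation (table): one tuple (row) per identifiable entity.
people = [
    Person("Alice", "1990-03-14", 168),
    Person("Bjorn", "1985-11-02", 181),
]

# The key must be unique across rows; non-key values may repeat freely.
assert len({p.name for p in people}) == len(people)

# Every non-key value is looked up through the key.
by_name = {p.name: p for p in people}
print(by_name["Alice"].height_cm)  # 168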

Relational theory is about the ways particular entities in different n-ary relations (tables) can be related to each other. For that theory to be practically useful, there need to be certain rules about how attributes and entities can be defined within an n-ary relation. That collection of rules is called normal form, and it is critical to almost all modern software development.

The kernel of NF has three rules for compliance:

  1. Attributes must be members of simple domains. What this essentially means is that attributes cannot be subdivided further (i.e., their values are scalars). That is, if we have a table House with attributes {Builder, Model}, we cannot also subdivide Model into {Style, Price}. The problem is that the Style/Price combinations would be ambiguous in a single-valued attribute cell for Model in a row of House. So the House table must be composed of {Builder, Style, Price} instead.
  2. All non-key attributes, compliant with 1st NF, are fully dependent on the key. Suppose a House table is defined with attributes {Subdivision, Lot, Builder, Model} where Subdivision and Lot are the key that uniquely identifies a particular House. Also suppose only one builder works on a subdivision. Now Builder is really only dependent upon Subdivision and is only indirectly dependent upon Lot through Subdivision. To be compliant we really need two n-ary relations or tables: House {Subdivision, Lot, Model} and Contractor {Subdivision, Builder}. (The RDM then provides a formalism that links the two tables through the Subdivision attribute that can be implemented to ensure that we can unambiguously determine both the Model and the Builder values for any House. That notion is the subject of a later chapter.)
  3. No non-key attributes, compliant with 2nd NF, are transitively dependent upon another key. Basically, this means that none of the non-key attribute values depend upon any other key. That is, for House {Subdivision, Lot, Model}, if we have a Subdivision = 8, Lot = 4, and Model = flimsy, the Model will remain "flimsy" for Subdivision = 8 and Lot = 4 regardless of what other rows we add or delete from the table.

Generally, most developers depend on the following mnemonic mantra to deal with 3rd NF: An attribute value depends on the key, the whole key, and nothing but the key.
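
Here is a minimal sketch in Python of the House example above, showing the non-compliant table and its normalized replacement; the values are illustrative.

# A minimal sketch of normalizing the House example. The first table violates
# 2nd NF because Builder depends only on Subdivision (part of the
# {Subdivision, Lot} key); splitting it into two relations restores compliance.

# Non-compliant: Builder repeats for every lot in a subdivision.
house_bad = {
    (8, 1): {"Builder": "Acme", "Model": "flimsy"},
    (8, 2): {"Builder": "Acme", "Model": "sturdy"},
}

# Compliant: House depends on the whole key; Contractor depends on Subdivision alone.
house = {
    (8, 1): {"Model": "flimsy"},   # key: (Subdivision, Lot)
    (8, 2): {"Model": "sturdy"},
}
contractor = {
    8: {"Builder": "Acme"},        # key: Subdivision
}

def builder_for(subdivision, lot):
    """Join the two relations through the shared Subdivision attribute."""
    return contractor[subdivision]["Builder"] if (subdivision, lot) in house else None

print(builder_for(8, 1))           # Acme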

Other NF rules have been added over the years to deal with specialized problems, especially in the area of OO databases. We won't go into them here because they are not terribly relevant to the way we will build application models and produce code. However, the value of the three normal forms above is very general and is particularly of interest when defining abstractions and their relationships.

Data Flows

Data was a second-class citizen until the late 1960s.18 Most of the early theoretical work in computer science was done in universities where the computer systems were solving the highly algorithmic problems of science and engineering. However, by the late 1960s there was an epiphany when people became aware that most of the software in the world was actually solving business problems, primarily in mundane accounting systems.

In particular, business problems were not characterized by algorithms but by USER/CRUD19 processing of data.20 For one thing, core business problems were not algorithmic rocket science—doing a long division for an allocation was pretty cutting edge.21 On the other hand, those very simple calculations were done repeatedly on vast piles of data. What the business community needed was a way to handle data the same way the scientific community handled operations. This led to various flavors of data flow analysis. The basic idea was that processes were small, atomic activities that transformed data, and the processes were connected together by flows of data, as illustrated in Figure 1-4.

Figure 1-4 Example of data flow diagram for computing whether a retirement gift should be given to an employee. Data is provided as flows (e.g., Employee ID and age) or is acquired from data stores (e.g., Date of birth and Today's date).

As it happens, when we make a data flow graph, it represents a breadth-first flow of control. The processes are self-contained and atomic rather than decomposed. So the problem solution is defined by the direct connections between those processes. The connections are direct and depend only on what needs to be done to the data next. If we want to solve a different problem, we change the connections and move the data through the same atomic processes differently. Data flows represented a precursor of the peer-to-peer collaboration that is central to the OO paradigm and will be discussed in detail later.
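
Here is a minimal sketch in Python of the retirement-gift flow of Figure 1-4; the employee data and the 65-year rule are invented for illustration.

# A minimal sketch of the retirement-gift data flow. Each process is a small,
# atomic transformation; the "design" is just how their outputs are wired to
# the next process's inputs.

from datetime import date

birth_dates = {"E042": date(1960, 5, 1)}           # data store: Date of birth

def get_birth_date(employee_id):                   # atomic process
    return birth_dates[employee_id]

def compute_age(birth_date, today):                # atomic process
    had_birthday = (today.month, today.day) >= (birth_date.month, birth_date.day)
    return today.year - birth_date.year - (0 if had_birthday else 1)

def decide_gift(age):                              # atomic process
    return age >= 65

# The flows: each output is handed directly to the next process.
employee_id = "E042"
age = compute_age(get_birth_date(employee_id), date.today())
print(decide_gift(age))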

Alas, this view of the world did not mesh well with the depth-first, algorithm-centric view of functional decomposition. Trying to unify these views in a single software design paradigm led to some rather amusing false starts in the '70s. Eventually people gave up trying and lived with a fundamental dichotomy in approaches. For data-rich environments with volatile requirements, they focused on the OO paradigm as the dominant paradigm. Meanwhile, functional programming evolved to deal with the pure algorithmic processing where persistent state data was not needed and requirements were stable.

In many respects the purest forms of data flow analysis were the harbingers of the modern OO paradigm. As we shall see later, the idea of small, self-contained, cohesive processes has been largely canonized in the idea of object behavior responsibilities. Ideas like encapsulation and peer-to-peer collaboration have driven a breadth-first flow-of-control paradigm. But in two crucial respects they are quite different. The OO paradigm encapsulates both the behavior and the data in the same object container, and the data flows have been replaced with message flows. The important point to take away from this subtopic is that where SD was based on functional decomposition, the OO paradigm is more closely aligned with data flows.
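
For contrast, here is a minimal sketch (not from the book) of the same retirement decision in OO terms: the data and the behavior live in the same object, and a peer collaborates by sending a message rather than by routing data through external processes.

# A minimal sketch of the OO counterpart to the data flow example above.

from datetime import date

class Employee:
    def __init__(self, employee_id, birth_date):
        self._id = employee_id                     # data hidden inside the object
        self._birth_date = birth_date

    def is_retirement_eligible(self, today=None):  # a behavior responsibility
        today = today or date.today()
        birthday_passed = (today.month, today.day) >= (
            self._birth_date.month, self._birth_date.day)
        age = today.year - self._birth_date.year - (0 if birthday_passed else 1)
        return age >= 65                           # the 65-year rule is illustrative

# A peer collaborates by sending a message; it never sees the birth date itself.
print(Employee("E042", date(1960, 5, 1)).is_retirement_eligible())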

State Machines

Meanwhile, there was another branch of software development that was in even deeper trouble than the business people. Those doing real-time and/or embedded (R-T/E) hardware control systems were experiencing a different sort of problem than maintainability. They just couldn't get their stuff to work.

Basically they had three problems. First, such control systems were often used in environments that were extremely resource limited. They didn't have the luxury of stuffing an IBM 360 into a washing machine or railroad signal. Second, the supporting tools that other developers used weren't available in their environment. This was especially true for debugging programs when they might not even have an I/O port for print statements, much less a source-level debugger.

The third problem was that they were dealing with a different view of time. Events didn't happen in synchronization with predefined program flow of control; they occurred randomly in synchronization with the external hardware, and that external hardware had never heard of Turing unless it happened to be a computer. This asynchronous processing completely trashed Turing's ordered world of preordained sequence, branch, and iteration. The tools have gotten better over the years, but the R-T/E people still have the toughest job in the software industry.

The tool that made life bearable for the R-T/E people was the Finite State Machine (FSM).22 We have no idea who first wrote a program using an FSM, but whoever it was deserves to be in the Software Hall of Fame along with Ada Lovelace and crew. It basically enabled asynchronous processing in a serial Turing world. It also made marvelously compact programs possible because much of the flow of control logic was implicit in the FSM rules for processing events and static sequencing constraints represented by transitions, as shown in Figure 1-5.

Figure 1-5 Example of a finite state machine for a simple traffic light. The traffic light migrates through states in a manner that is constrained by allowed transitions. Transitions from one state to the next are triggered by external events.
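
Here is a minimal sketch in Python of the traffic-light machine of Figure 1-5; the state names and the single timer event are illustrative.

# A minimal sketch of a finite state machine: states, the transitions allowed
# between them, and external events that trigger those transitions.

TRANSITIONS = {                     # (current state, event) -> next state
    ("RED",    "timer_expired"): "GREEN",
    ("GREEN",  "timer_expired"): "YELLOW",
    ("YELLOW", "timer_expired"): "RED",
}

class TrafficLight:
    def __init__(self):
        self.state = "RED"

    def dispatch(self, event):
        """Events arrive asynchronously; the FSM rules decide what happens next.
        Transitions not in the table are simply rejected."""
        next_state = TRANSITIONS.get((self.state, event))
        if next_state is None:
            raise ValueError(f"event {event!r} not allowed in state {self.state}")
        self.state = next_state

light = TrafficLight()
for _ in range(4):
    light.dispatch("timer_expired")
    print(light.state)              # GREEN, YELLOW, RED, GREEN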

However, the really interesting thing about FSMs is that they happen to address a number of the same issues that are the focus of OOA/D. In particular, they provide an extraordinary capability for reducing the degree of coupling and truly encapsulate functionality. They also enforce basic good practices of OO design. If we were given the task of coming up with a single construct to capture and enforce core elements of the OO paradigm, the FSM would be that construct. But we will defer the discussion of how that is so until Part III.
