1.8 Modeling in Generative Analysis

In this section, we will outline the Generative Analysis approach to modeling. As you have probably noticed by now, we take a very broad view of what constitutes a model, and we consider anything that accurately abstracts key business rules and requirements to be a kind of model of the system under consideration. This means UML models are models, code is a model, and precise narratives that can generate UML and code are models.

Unless we are teaching UML, we are not really concerned with how formally “correct” a model is; rather, we are concerned with how useful it is within the project. We take it as axiomatic that the degree of usefulness of any model depends on how well it maps onto the “real world,” the objective level, which in this case is the problem domain. Our guiding principle is “The truth is out there,” and the best model of a thing is the model that is closest to the thing itself.

In the next subsection, we set out a more precise notion of this using a principle that we call “software sanity.” This will give us useful guidelines for creating and assessing models. We will also explain why Convergent Engineering is the key to building good models.

1.8.1 Software sanity and Convergent Engineering

We pragmatically define the sanity of a cognitive map or a model as the degree to which it is useful in a particular context. We can apply this principle to software systems, and it turns out that this is a very useful thing to do. We can categorize both the interface and the implementation of a software system as Sane or Un-sane according to the criteria in Table 1-1.

Table 1-1 Interface and Implementation—Sane and Un-sane

From the perspective of the end user, the category Interface Sane is all that matters. End users generally don’t care how software systems work; they just want to know what the systems can do for them, and they want them to be easy and intuitive to use. For them, Implementation Sane is not recognized as important, even though it will likely impact them in terms of the correctness and ease of maintainability of the software.

From the perspective of software engineers, we have traditionally needed systems to be Implementation Sane so that we can build and maintain them with relative ease. Implementation Sane software allows us to leverage the expertise of domain experts to help us build and understand the system. This is just a restatement of the principle of convergence that we will look at shortly. As an example, consider Figure 1-20. Each of the models shown is isomorphic to the others, but only one of them is Implementation Sane.

Figure 1-20 A well-designed class, a poorly designed class, and some variables and functions

The advent of Generative AI has demonstrated that powerful systems can be Interface Sane but Implementation Un-sane. They interact well with humans, and we assume that they must have some internal isomorphisms to the problem domains in which they operate, but we don’t know what these are and we might not understand them even if we did. In the future, as hardware gets more and more efficient, more and more systems will not have any conventional software at all. Rather, they will be composed of an AI that has been instructed to behave in a specific way. We already see the beginnings of this with decision-making systems such as recommendation engines. Such AI systems are also generally Interface Sane and Implementation Un-sane—they satisfy users, but we don’t know how they work. Should we be concerned about this? Most definitely. However, the problem isn’t so much that we don’t know how they work; it is that we don’t understand how they arrive at the outputs they do. This is crucially important because their outputs can have technical, social, moral, health, and legal implications. It is a recognized problem that many people are working on.

As we move toward an increasingly software-free future, perhaps we have two new categories, as shown in Table 1-2. AI Sane explains the rules it applies to generate its output, but AI Un-sane does not. Obviously, for many use cases AI Un-sane is not acceptable.

| Table 1-1 | Sane | Un-sane |
| --- | --- | --- |
| Interface (how the software presents to the users) | It does what you need it to do. It is intuitive, rational, and easy to understand. It “feels right” because it seems to be using the same map of the world as you are. Interface sanity is a necessary condition for software to interact successfully with human beings. | It may or may not do what you need it to do. It is unintuitive, irrational, and hard to understand. It “feels wrong” because it seems to be working to an entirely different map from your own. Interface un-sanity ensures software interacts poorly with human beings. |
| Implementation (how the software presents to the developers) | It has an internal structure that is obviously isomorphic to the structure of the problem domain. Its internal structure is understandable by domain experts. We understand how it outputs what it does. Implementation sanity is not a necessary condition for software to interact successfully with human beings. | It has an internal structure that may be isomorphic to the structure of the problem domain, but any isomorphism is complex and obscure. Its internal structure is not understandable by domain experts. It is difficult or impossible to understand how it outputs what it does. Implementation un-sanity may make the software unfit for purpose, or at least fragile and resistant to change. |

Table 1-2 New Categories for a Software-free Future

| Table 1-2 | Sane | Un-sane |
| --- | --- | --- |
| AI | The rules for making decisions are explicit, and we understand why it arrives at its outputs. There is an audit trail for each decision. | The rules for making decisions are implicit, and we do not understand why it arrives at its outputs. There is no audit trail for each decision. |
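As a toy illustration of the AI Sane column (a hypothetical Python sketch, not an example from the book), a decision rule can be made explicit and can leave an audit trail for every output:

```python
# Toy illustration of "AI Sane": explicit, inspectable decision rules plus
# an audit trail recording which rule produced each output.
# (A hypothetical sketch; the rule and threshold are invented for illustration.)

def decide_loan(income: float, debt: float, audit: list) -> bool:
    """Approve or reject a loan using an explicit rule, logging the reason."""
    if debt > income * 0.5:
        audit.append("REJECT: debt exceeds 50% of income")
        return False
    audit.append("APPROVE: debt within 50% of income")
    return True
```

Every decision leaves a human-readable line in the audit list explaining which rule fired, which is exactly what an AI Un-sane system cannot provide.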

1.8.2 Convergent Engineering and software sanity

Generative Analysis owes much to Convergent Engineering [Hubert 1]. The key idea, according to Taylor, is very simple: The structure of the software system should match the structure of the business.

Convergent Engineering is a perennial philosophy for building software systems that appears in many variations at many points in time. Recently, it has gained a certain amount of traction as Domain-Driven Design, and it has always been at the heart of Generative Analysis; it was evident even in our initial Literate Modeling paper [Arlow 3].

A convergent system is Interface Sane (it does what it needs to do in an intuitive and user-friendly way) and Implementation Sane (its ontology and structure mirror those of the problem domain). Our notion of software sanity is just a restatement, in terms of NLP and General Semantics, of the principle of Convergent Engineering. Convergence is the guiding force for all the models we create. We have found time and again that the more convergent our models are, the saner they are and the more useful and effective they prove to be. In fact, over the years, convergence has become our primary concern because it works so well.

We observe that systems that are Implementation Sane naturally tend to be Interface Sane. Likewise, systems that are Implementation Un-sane tend to be Interface Un-sane. This appears to be because the external behavior of a software system is often predicated in some way on its internal structure—the saner that internal structure is, the likelier it is that the system will present in an externally sane way. There is also a human factor: Software engineers who know about, care about, and focus on Implementation Sane software will tend to apply the same criteria to the Interface aspects of the system, leading to a good result.

1.8.3 Principles of software sanity

Now that we have a working definition of software sanity, we can set down its key principles, which are rather obvious.

  • The key abstractions in the problem domain are specified in models and realized in software (the lowest-level model). For example, in a banking system, the classes BankAccount, Bank, Party, and so on appear in the problem domain, in the models, and in the software. As we will see later, Literate Modeling requires us to highlight these abstractions in the text using a different font.

  • The internal structure of the key abstractions in the problem domain is specified in models and realized in software. For example, in a banking system, the BankAccount abstraction might have the attributes accountNumber, accountName, and balance.

  • The relationships between the key abstractions in the problem domain are specified in models and realized in software; for example, a Bank contains many BankAccounts.

  • The behavior of the key abstractions in the problem domain is specified in models and realized in the software; for example, the BankAccount class may provide a withdraw(…) method.
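These four principles can be sketched together in code. The following is a minimal illustration, assuming a Python realization of the banking examples above:

```python
# A minimal sketch realizing the four principles of software sanity,
# using the banking examples above.

class BankAccount:
    """Key abstraction from the problem domain (principle 1)."""

    def __init__(self, account_number: str, account_name: str):
        # Internal structure mirrors the domain (principle 2).
        self.account_number = account_number
        self.account_name = account_name
        self.balance = 0.0

    # Behavior mirrors the business functions (principle 4).
    def deposit(self, amount: float) -> None:
        self.balance += amount

    def withdraw(self, amount: float) -> None:
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount


class Bank:
    """A Bank contains many BankAccounts (principle 3)."""

    def __init__(self, name: str):
        self.name = name
        self.accounts: list = []

    def open_account(self, number: str, name: str) -> BankAccount:
        account = BankAccount(number, name)
        self.accounts.append(account)
        return account
```

Note how the class names, attributes, containment relationship, and operations all come straight from the vocabulary of banking, so a domain expert can check the structure directly.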

When we talk about the behavior of a key abstraction, this is shorthand for the business functions that the abstraction supports. Obviously, the idea of a “bank account” is an abstract notion (you can’t kick it) and so doesn’t have any real-world behavior as such. In a manually operated business, this abstraction is realized as information recorded in some way and as rules about how that information may be manipulated, and this constitutes its implicit behavior.

However, when this abstraction is realized in a software system, it must be assigned explicit behavior to support all the business functions in which it participates. For example, let us suppose that the “bank account” abstraction must support the business functions of

  • Maintaining an account balance

  • Withdrawing money from the account

  • Depositing money into the account

These business functions imply structural and behavioral features of the abstraction that we must include when we realize it in a model as a BankAccount class. Thus, it needs an attribute to hold the balance, and operations to deposit(…) and withdraw(…) money from that balance.

The structural features are often quite easy to find: You analyze the key concept, and you find that a “bank account” has parts: a balance, owner, name, and so on. The behavioral features are not quite as obvious. One way to arrive at them is to imagine that you are the abstraction. What services do you need to provide to support the required business functions? Anthropomorphizing abstractions helps you to realize them in OO models. We’ll look at many more specific ways to find the structural and behavioral features of abstractions later in this book.

Generative AI is quite good at this sort of role-playing. For example, we can get Copilot to pretend to be a bank account, as shown in Figure 1-21.

Figure 1-21 Copilot pretends to be a bank account.

1.8.4 Extending the abstraction

One of the often-overlooked aspects of object-oriented analysis is that sometimes we extend an abstraction to help us create software that is intuitive and Interface Sane. For example, suppose we were creating a graphics system, and we thought that the notion of a pen might be a useful abstraction. Let’s get Copilot to role-play that abstraction and see what we get (Figure 1-22).

Figure 1-22 Copilot pretends to be a pen.

The first bit of the answer about the parts of the pen is grounded in objective reality. We can find some or all of these parts on any pen we care to examine. Many people would probably agree with the second part about the services, but it is a fiction. The only service a pen offers is to “dispense ink in a uniform way on demand.” It is the user who does all these other things using the pen, not the pen itself. As Copilot only reflects its training data, we conclude that most people really think about pens in this way when asked what services they offer. Extending the abstraction in this way seems to be natural and intuitive.

In fact, extending an abstraction to offer services it doesn’t objectively have is an accepted part of object-oriented software development that many novices find confusing. For example, let’s get Copilot to use the idea of “pen” as an abstraction in a graphics system (Figure 1-23).

Figure 1-23 What services would a pen offer in a graphics system?

Most software engineers would agree with this analysis, and you can find many examples of graphics software that has a similar concept of a “pen” with similar attributes and functions. The only questionable service is “Saving or exporting the pen drawings.” That would often be done by a Canvas class on which the pen writes. We note that real-world canvases don’t have this facility, but it somehow seems reasonable to extend an abstraction of a canvas in this way.
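The kind of extended pen abstraction discussed above might be sketched as follows (a hypothetical illustration, not Copilot’s actual output from Figure 1-23):

```python
# A hypothetical sketch of an extended "pen" abstraction in a graphics
# system. Real pens only dispense ink; the positional and drawing services
# below are extensions that nonetheless feel natural and intuitive.

class Pen:
    def __init__(self, color: str = "black", width: float = 1.0):
        self.color = color          # grounded: real pens have an ink color
        self.width = width          # grounded: real pens have a nib width
        self.x, self.y = 0.0, 0.0   # extension: a position on the canvas
        self.lines: list = []       # extension: a record of what was drawn

    def move_to(self, x: float, y: float) -> None:
        """Extension: the user moves a real pen; here the pen moves itself."""
        self.x, self.y = x, y

    def line_to(self, x: float, y: float) -> None:
        """Extension: draw a line from the current position to (x, y)."""
        self.lines.append((self.x, self.y, x, y))
        self.x, self.y = x, y
```

The attributes are grounded in the real-world object, while the operations extend it, which is exactly the pattern described above.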

The lesson from this is that in Convergent Architecture, Generative Analysis, and object-oriented software engineering in general, it is acceptable to extend the capabilities of an abstraction to embrace behavior it does not have in the real world, provided it seems natural and intuitive to do so.

1.8.5 Test your models against reality

Every model is a theory about the world. The thing about theories (as opposed to hypotheses) is that they are testable. According to Korzybski:

  • “Theories are the rational means for a rational being to be as rational as he possibly can” [Korzybski 1].

This brings us to a key guiding principle for all types of modeling:

  • Test your theories against reality at the earliest possible opportunity.

A key test of a model is to apply the principle of convergence. How well does the structure of your model map onto the structure of the problem domain? Does the map (model) match the territory (domain)? Here are some simple techniques you can use to find out.

  • Try presenting your model to domain experts and talking them through it. If the model is convergent, they should be able to understand it because you will be using the same terms.

  • Write a Literate Model for parts of your model. As we explain in Chapter 7, a Literate Model is part of your model embedded in an explanatory narrative that uses the terms defined in the model (e.g., “Every BankAccount has a balance.”). Does the narrative make sense? Does it read well? Can domain experts and other stakeholders understand it?

  • Create a project glossary for the key abstractions in the problem domain. Can you find these abstractions in your models? If not, why not? Do the abstractions have the same meaning in the project glossary and in your models?
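The glossary check in particular is easy to mechanize. Here is a hypothetical sketch that reports key abstractions present in the project glossary but missing from a model:

```python
# A hypothetical sketch of the glossary check: given the key abstractions
# in a project glossary and the class names in a model, report which
# abstractions are missing from the model (a possible convergence gap).

def missing_abstractions(glossary: set, model_classes: set) -> set:
    return glossary - model_classes

# Example inputs (invented for illustration).
glossary = {"Bank", "BankAccount", "Party"}
model = {"Bank", "BankAccount", "AccountMgrHelper"}
```

A non-empty result is a prompt to ask why an abstraction the domain experts use has no counterpart in the model; names in the model that do not appear in the glossary (such as AccountMgrHelper here) are worth questioning too.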

Of course, the ultimate way to test any model is to execute it, which is what we do with our mental models of the world. We act as though they are true, execute them in the real world, and get feedback that tells us how useful they are. If we have the behavioral flexibility to change our models according to the feedback we get, then we are much more likely to achieve our goals. Creating a model is a bit like driving a car or riding a bike—you continually make small course corrections based on feedback until you get to your desired destination. You can only know what corrections to make by knowing where you are at any point in time (rather than where you think you are), and where you are trying to get to. So, the sooner you can test a model, the sooner you can take corrective action if that is necessary.

With Generative AI there is the option to create a model as a precise narrative and get the AI to simulate it and answer questions about it. This is a great way to test a model early in its lifecycle, and we will present a full example of this later in the book. Another way to test a model is to get Generative AI to create a behavioral prototype. This is an executable prototype specifically designed to demonstrate the behavior of a key part of the system. It is mainly used to get feedback, but it can sometimes be refined into delivered software. We will later see an example of this using the XAMPP web stack.

1.8.6 Defending the indefensible

Perhaps the worst thing you can do is to defend a model that just isn’t working very well. This is a bit like driving a car, going off the route, and yet continuing because you are convinced your mental map is right despite what the world is telling you. We’ve encountered this unfortunate habit quite a lot over our years in software engineering. It’s completely understandable from a human perspective—someone might have put a lot of time and effort into creating a model and thereby has become emotionally invested in it.

We find that the tendency to defend a broken model is common in circumstances in which there are no established criteria for assessing models. This is one of the reasons we have spent some time in previous sections defining the concepts of software sanity and convergence. If you use these criteria, then you will always have a benchmark against which to assess any model.

Always remember that a model is only a model, and it has no intrinsic value outside of the purpose for which it was created. It is either fit for that purpose or not, and there is no need to take any of that personally. The best modelers have developed the behavioral flexibility to change or abandon a broken model straight away if that’s what the world is telling them to do.
