1.3 Communication and neuro-linguistic programming (NLP)

  • Software engineering is, first and foremost, about communication.

The model of communication that we have found most useful for Generative Analysis comes from the field of neuro-linguistic programming (NLP). Although NLP is nowadays mostly seen as some sort of self-help system, the field was invented in the early 1970s by Dr. Richard Bandler, a mathematician, and Professor John Grinder, a linguist, at the University of California at Santa Cruz. They were working in collaboration with the social anthropologist, semiotician, cybernetician, and general polymath Gregory Bateson, so early NLP has solid academic credentials. The later self-help aspects of NLP have no impact on our work here and won’t be covered.

  • Neuro-linguistic programming explores how the mind (neuro) is influenced (programming) by specific linguistic patterns (linguistic).

In The Structure of Magic [Bandler 1], Bandler and Grinder analyze patterns of therapeutic language and present that analysis as the NLP Meta Model, an analytic model of human communication. In later work with the hypnotist Milton H. Erickson, Bandler presented the Milton Model, a model of hypnotic communication [Grinder 1]. Both modes of communication are important for Generative Analysis, and you will encounter hypnotic modes of communication much more often than you might think, so it is good to be able to recognize this mode.

From our perspective as Generative Analysts, NLP is exceedingly useful because the NLP Meta Model provides a very rich set of communication strategies that enable us to acquire accurate, high-quality information in a predictable manner. In Generative Analysis, we extend the NLP Meta Model into M++, a variant adapted for analysis. M++ gives you specific, learnable techniques for improving any communication, but it is particularly suitable for analytic purposes. M++ is one of the key tools that you will learn to use to create precise inputs for Generative AIs, and we devote a whole chapter to it later in the book. The Milton Model is also useful, and we will present the Generative Analysis version, Milton++, at the end of the book.

Today there are many different approaches to NLP, and ours is based on the initial work by Bandler and Grinder, as mentioned above. Bandler regards trance as the underlying mechanism of NLP, and so do we. When we say “trance,” relax…. For our purposes here, all we mean by “trance” is narrowing your attention and focusing on a restricted set of usually internal events. Being able to recognize this phenomenon in yourself and others is an important part of effective communication: when a party is in trance, they are not necessarily well connected to reality, and this greatly affects what they can and can’t communicate and how accurate those communications are. It amuses us to think that the tendency of AIs to hallucinate and generate false responses is somewhat like human trance, in which a definite disconnect from reality occurs and fictions blossom. Whether the underlying mechanisms are analogous remains to be seen.

1.3.1 Modeling—the map is not the territory

It’s funny how many programmers seem to think that they are not creating models. A program is as much a model of a pertinent aspect of the problem domain as is a UML model, as is a precise linguistic description, as is a business analysis document. To model is to create a representation of something that captures certain features and ignores others. It is a process of abstraction. We are all modelers by virtue of how the human brain functions. We just can’t help it.

What happens is that we receive a limited amount of sensory data and construct some internal representation of the world according to these data. This internal representation is shaped not just by the sensory inputs, but also by purely internally generated factors such as our presuppositions about the world that arise from past experiences. We generally act and react to this internal representation as though it is the world even though it is not—it really is just a very personalized representation.

In NLP and Generative Analysis, we call our internally constructed representation of the world the “map” and the world itself the “territory.” This gives rise to one of the first principles of NLP and Generative Analysis:

  • The map is not the territory.

This is a surprisingly important principle that originally came from General Semantics [Korzybski 1], in which Alfred Korzybski stated four premises about maps.

  1. The map is not the territory.

  2. No map is a complete representation of its territory.

  3. Maps can be mapped (meta-models).

  4. Every map at the very least says something about the mapmaker.

In Generative Analysis, our analysis models—be they UML models, Literate Models, mental representations, or something else—are all considered to be maps, and the four Korzybski principles apply.

Interestingly, these principles apply just as much to Generative AIs. The neural network of the AI has been trained on vast data sets to obtain some internal representation (the map, which is currently not understood). However, this is a map of the data sets that are themselves (we hope) maps of some aspect of the real world. While humans have some sort of direct connection to the real world via the sense fields, AIs are trained on data abstracted from the real world in the training data sets and are working on meta-maps (maps of maps). They therefore have no way to check their maps directly against reality at this point in time, although many people are working on that.

According to Korzybski principle 2, no map is a complete representation of its territory. This is because useful maps must be abstractions—information not pertinent to the purpose of the map is discarded to make the map useful. There is a wonderful short story by Jorge Luis Borges titled “On Exactitude in Science” in which cartographers produce a 1-to-1 scale map of a whole Empire that covers the Empire exactly. Of course, it is useless and just gets in everybody’s way.

Korzybski principles 1 through 3 are somewhat obvious, but principle 4 is often overlooked. Nevertheless, it is very important for the Generative Analyst. Whenever you are talking to an individual in an analytic way, you are exploring their map of reality, and very often this map says more about the individual than about the reality. For example, it is quite common, when talking to a stakeholder, to come away with the impression that whatever it is they do is the core business activity and that everything would fall apart without them!

Abstraction often creates error. For example, the wonderful map of the London Underground created by Harry Beck in 1931 (please find it online, because copyright does not allow us to reproduce it here) is as fine an example of abstraction as you can get. Stations are represented by colored nodes, and tube lines are represented by colored edges. It tells you exactly how to get around London by tube, and its level of abstraction and style have been copied around the world because it works so well and is completely fit for purpose.

However, should you try to use this very abstract but very useful map for a purpose for which it is not intended—say, to take a walking tour of London—you will immediately have problems. The River Thames is in the wrong place and runs the wrong course. Distances between stations are compressed or expanded to fit on the map, and stations are in the wrong place. Every year a surprising number of tourists try to use this map for walking tours and are disappointed.

There are a couple of important principles we can extract from the Beck example.

  1. A map needs to be at the right level of abstraction to make it fit for its purpose.

  2. Do not use a map until you know its purpose and level of abstraction.

A very pragmatic way to view mental and other maps is as models of the world that are by their nature neither completely correct nor completely incorrect, but are designed to be fit for a specific purpose. There is no universally accepted decision procedure to determine the correctness of a cognitive map, and it is doubtful there ever will be. If such a procedure existed, it would, by definition, give us access to absolute truth—the holy grail of philosophy.

1.3.2 Distortion, deletion, and generalization

We want to introduce you to distortion, deletion, and generalization now because these ideas are central to Generative Analysis. We will have much more to say about them in Chapter 6.

According to nlp, mental maps are constructed by a filtering process comprising the following.

  • Distortion: The map is not accurate; it has hallucinatory elements.

  • Deletion: The map is not complete; information is missing.

  • Generalization: Specific details have been removed from the map and replaced by rules and beliefs.

Distortion, deletion, and generalization are, somewhat paradoxically, the way mental maps are made efficient—are made just good enough to get the job done with minimal resources.

Distortion is the process of hallucinating things that don’t exist. A good example is that every time you move your eyes, your vision blanks out completely, but somehow your brain “fills in the gaps” and hallucinates a continuous visual field. Another obvious example is the blind spot on the retina that we all have but are usually not aware of.

Deletion is always going on simply because we do not have the resources to process the flood of sensory data. For example, as you read this, you are probably not aware of the top of your head until we mention it. The process of attention itself relies on deletion because deletion allows us to filter out things that we are not attending to.

Generalization is a “mental shortcut” and is a great time-saver, because instead of having to attend to every specific detail, we can often get by just by applying rules and beliefs about things. For example, I don’t know where my cat is right now, but as a rule, at this time of night she is sleeping on the sofa.

Distortion, deletion, and generalization appear to be human givens, and so they find their way into natural language. Generative AIs are based on Large Language Models, so they are subject to these processes.

In terms of using Generative AI in software engineering, distortion is perhaps our biggest concern because it is well known that Generative AI can often simply invent answers to questions. Deletions are also quite common. For example, you might specify a rule such as “Each department has one or more employees,” ask a Generative AI to generate some code based on it (see later), and find that the business rule “one or more” has been reduced to “zero or more” in the generated code. Generalization is also a significant problem if you let the AI suggest prompts. An example we will see shortly is an AI that generates code to calculate tax and generalizes the tax rate into a fixed, hard-coded constant.
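To make the deletion and generalization failure modes concrete, here is a hypothetical sketch. The class and function names (`Department`, `calculate_tax`, and so on) are our own illustrations, not output from any real AI; the point is only to contrast code that silently weakens a rule with code that preserves it.

```python
# DELETION: the stated rule "each department has ONE OR MORE employees"
# has been silently weakened to "zero or more" -- the lower bound is lost.
class Department:
    def __init__(self, name, employees=None):
        self.name = name
        self.employees = employees or []   # an empty department is allowed: rule deleted

# A version that preserves the business rule enforces the lower bound explicitly.
class DepartmentWithRule:
    def __init__(self, name, employees):
        if not employees:
            raise ValueError("A department must have one or more employees")
        self.name = name
        self.employees = list(employees)

# GENERALIZATION: the tax rate, which should be an input, has been
# generalized into a fixed, hard-coded constant.
def calculate_tax(amount):
    return amount * 0.20                   # rate frozen at 20%: over-generalized

# A version that keeps the rate as a parameter, as the analysis intended.
def calculate_tax_parameterized(amount, rate):
    return amount * rate
```

Notice that both defects compile and run without complaint; they only surface when the generated code is checked back against the stated business rules, which is exactly the kind of critical evaluation M++ is designed to support.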

We don’t really know what, if anything, corresponds to a mental map in an AI system, but we suspect that it is not efficient in the way human mental maps are. Rather, it seems to be exhaustive. All input information is encoded somehow, and it appears not to be regulated by the mechanism of forgetting.

As a Generative Analyst, how you deal with distortion, deletion, and generalization defines how well you can do your job. You need to be aware of the operation of these processes in every communication:

  • Your own team’s internal communications

  • Communications conducted directly with stakeholders

  • Written, audio, or visual communications

  • Communications with Generative AIs

Later in the book, we will introduce a specific metalanguage called M++ that allows you to identify distortions, deletions, and generalizations and work constructively with them to generate the precise, high-quality information you need. M++ will also help you formulate precise inputs to Generative AIs and critically evaluate their outputs.
