Selecting technology for your project could occupy several books. This chapter covers four topics: how you make your selections; programming language; tools, including CASE tools; and the cutting edge of technology.
The Selection Process
Your selection of technology and tools profoundly affects your chances of survival, so observe how your team makes technology decisions. Usually, a decision is made because one person is persuasive or stubborn; the team already knows a similar technology; it is considered a safe, popular, or standard choice; or because a group of people decide it is the rational choice. I wish I could say that one of these is the best way to proceed. I have watched projects get in and out of trouble with all of them, so it is worth spending some time going over their strengths and weaknesses. Try to switch to a different method if you run into trouble with a weakness of your usual method.
One Person Is Persuasive or Stubborn
Many organizations have one person who is particularly persuasive or stubborn. It is not clear whether this person is wrong for being stubborn or right for being persuasive, or vice versa. Here are two short stories to illustrate. As usual, I have chosen to keep the project participants anonymous.
Project Amber was the initial, small, and successful project conducted prior to moving a department of 50 to 60 mainframe COBOL programmers to objects as part of a corporate initiative. Only two people had done any workstation development, and that was in C. One of them had just received the new C++ compiler. Not surprisingly, he advocated that the pilot project be done in C++. Because he was the most advanced workstation programmer in the organization, the others, of course, listened to his advice. A consultant was asked to check the project setup, and recommended Smalltalk over C++, based on basic considerations, as in this book. He stressed the difficulty of moving COBOL programmers to C++. The decision took several weeks to make, since the two initial programmers were faced with learning an entirely new language. They finally decided to run the initial project in Smalltalk. They got training, mentoring, and so on, as would be hoped. A few months later, they were happy with the decision. The initial project was successful and the organization continued to use Smalltalk.
Project Amber illustrates a case of the key person being open to discussion. This probably also helped the group to accept the advice that made their project eventually successful. One is not always so fortunate.
Deedee was a very small company, with only three programmers. Management added a fourth person, who refused to program in any language but Forth even though he was the only person there who knew it. He did the company's next projects in Forth. When he eventually left the company, it suddenly had legacy programs in a language no one knew.
Sometimes the person being stubborn is right, but sometimes he or she is wrong and is just being stubborn. It is hard to tell whether someone is wrong or is really two steps ahead of everyone else. In such a dilemma, what can you do? Get additional advice and take a rational decision-making approach. If the person in question still differs from the rest of the group, decide whether you can run the project without that person. Some systems are shipped with obvious design flaws, because without that stubborn person, the system simply would not have shipped. The managers evidently decided that a slightly flawed, shipped system is better than one that does not ship at all.
The Team Knows a Similar Technology
I used to believe people who said: "We have to use C++ because we already know C." Or, "We have to use this CASE tool because we already have it in house." Eventually, I learned, and I hope you learn sooner rather than later, that retraining in object-think swamps all other costs.
I have learned that it is faster to teach C programmers Smalltalk than it is to teach C programmers C++. Thinking in objects is easier in Smalltalk, the language is smaller, and the environment is more forgiving (refer to Project Amber, just described). You can, of course, build systems in either language. Each has its strengths, and projects have succeeded and failed in both languages. There are reasons and times to select C++, but they do not include "We already know C." More on C++ later in this chapter.
The same is true of CASE tools. Many projects have tried to take advantage of their organizations' investment in nonobject-oriented, upper-CASE tools, but they consistently find that they cannot express the OO functions they really want to. The tool is reduced to an expensive drawing package (see also Upper-CASE Tools later in this chapter).
If the team wishes to choose a technology because they know something similar, they should check to find out to what extent that similarity actually transfers to good advantage.
The Technology Is Safe, Popular, or Standard
Here is the story of a company I'll call Franco.
Franco's research department developed a sophisticated system in CLOS. When it came time to market the system, someone said it should not be shipped in that language because CLOS was not standard or popular, and that the system should be reprogrammed in C++ since the production department "already knew C and would be moving to C++; they couldn't be expected to maintain the system in CLOS." The problem, of course, was that the researchers did not know C++, and had little interest in rewriting their system in a lower-level language. Franco's efforts to bring the system to market came to a standstill.
What might Franco have done? Because the system was ready in CLOS, my recommendation would have been to market it as it was. Users usually do not care what language a system is written in, and the CLOS version could have established the market for the sophisticated system.
I have seen organizations select technology on the basis of the number of copies of a book sold, or a company's sales volumes. Nothing in object technology is yet so standard that this argument should be the driver. Instead, you should select the technology that gets you to market.
Minimize the constraints you put on your decision-making process so that you can allow each alternative to show how it will benefit or hurt your organization in reaching its goal. Where performance is an issue, "fastest possible" is not an acceptable goal. Figure out how fast it must be and measure against that threshold.
The Technology Is the Rational Choice
Much of the time, a rational selection process works well. One rational process is to discuss openly and then get a consensus. Consensus means that even people who would prefer another choice agree to be satisfied with, and even defend, the agreed-on choice.
A second rational process is to create a selection matrix in the following way:
List the characteristics you want to consider down the left side of a piece of paper, and the choices along the top.
Mark each cell with a tick mark if that choice has that characteristic.
Evaluate the matrix to decide the winning choice.
Some people put a weighting factor on each characteristic, then evaluate by multiplying each mark by the weighting factors and adding up the numbers in the columns. The highest score wins. There are two problems with using weights. One is that if the weights are close in value, a set of secondary factors might combine to outweigh the primary factors.
A second difficulty with weighting schemes is that the weights really are the personal preferences of the people putting the matrix together. I have found it easy to predict the outcome of a weighted selection process simply by discovering the preferences of the dominant people in the exercise.
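The mechanics of a weighted selection matrix can be sketched in a few lines of code. Everything here is hypothetical, invented for illustration: the characteristics, the weights, and the two options.

```python
# Weighted selection matrix: characteristics down the side, options across
# the top. All names and weights are hypothetical, for illustration only.
weights = {"performance": 5, "ease of learning": 3, "tool support": 2}

# 1 = the option has the characteristic, 0 = it does not.
options = {
    "Language A": {"performance": 1, "ease of learning": 0, "tool support": 1},
    "Language B": {"performance": 0, "ease of learning": 1, "tool support": 1},
}

def score(marks):
    # Multiply each mark by its weighting factor and add up the column.
    return sum(weights[c] * marks[c] for c in weights)

scores = {name: score(marks) for name, marks in options.items()}
winner = max(scores, key=scores.get)   # the highest score wins
```

Note how the outcome is entirely determined by the weights: nudge "ease of learning" up to 6 and the winner flips, which is exactly why the weights tend to encode the preferences of whoever fills in the matrix.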
The primary goal in working with the matrix is to discover the dominant factor in making your selection. Often, there is just one dominant factor that overshadows all others. On rare occasions there may be two. To illustrate, I offer the following conversation I had with my wife, which brought the point home to me.
"We can buy any car you like, as long as it is as reliable as a Toyota."
"This one has more trunk space, more legroom in the backseat, and you told me you liked the ride in the passenger seat."
"That is fine. As long as it is as reliable as a Toyota."
"This other one has faster pickup and better gas mileage."
"That is fine. As long as it is as reliable as a Toyota."
You can guess which car we bought. Once reliability was settled as the deciding factor, nothing else mattered. At first I thought the car situation was an anomaly, but I have seen similar scenes repeated as organizations select technology.
Portability was the factor that drove organizations to consistently select C++, until Java. In its early days, C++ used a preprocessor that produced ANSI standard C, which could be run on workstations, servers, and mainframes. In contrast, there was one Smalltalk for UNIX, another for Windows, a third for OS/2, and none for mainframes. No combination of other factors could replace portability. For AT&T, backward compatibility with millions of lines of installed C-code was a dominant factor. No number of subtle advantages of Smalltalk, CLOS, Eiffel, Object Pascal, and other languages could outweigh that one consideration.
Ease of learning and maintainability lead to the selection of Smalltalk. These factors tend to show up in IS organizations, so Smalltalk is often recommended for IS applications. Ease of learning and maintainability do not show up as the dominant factors in scientific and engineering organizations.
Programming power sometimes leads to the selection of CLOS. That is why Project Franco researchers used CLOS originally. The Franco management team incorrectly decided that language popularity was the dominant factor for their product.
Use a selection matrix to help discover the dominant factor(s). Go through the exercise of attaching weights and filling in the matrix. As you evaluate the matrix, watch carefully to see whether minor characteristics add together to give you the "wrong" answer. You will know the wrong answer because it will not have some characteristic you simply must have. Strengthen that characteristic so that the minor ones cannot overpower it. Repeat the process. After a few passes, you will have isolated one or two characteristics that overshadow the rest. At this point, think carefully about your values and needs, and make a final choice.
Choose a technology that gets you to your goal, not one that is popular.
Selecting a Language

Every programming language brings with it a set of hazards. The hazards often come from preconceptions about the language, which may obscure the experience you will have. In this section, I discuss three languages, Smalltalk, C++, and OO COBOL, in some detail, and Java more peripherally due to the shortage of completed Java projects from which to draw conclusions. From these discussions, you may be able to anticipate hazards for other languages.
When preparing to make your language decision, assume that knowing your current language will not reduce the training time. Ask, "What damage might this language cause on the project?" and assess the answers to that question.
Managing Smalltalk

THE MISCONCEPTION: "SMALLTALK IS SMALL". It sounds small. It even says small in its name! Yes, the language definition itself is small, but anyone learning the class library will tell you it is not a small system. Smalltalk comes with a library of several hundred classes, with several thousand methods; it has been evolving continually for over 20 years, and includes a vast array of useful components.
The large class library is actually an asset; the alternative would be for your developers to design some of those classes on your project's time (see Chapter 5). Still, it is disconcerting to be faced with the size of the system if you think it will be small.
When you use Smalltalk, the primary hazard to deal with is performance. Some people worry about the absence of compile-time type-checking in Smalltalk, but I have not seen a project impacted by that absence. On the other hand, I have seen projects damaged by poor performance and lack of design (dealt with in Chapter 4).
PAY ATTENTION TO PERFORMANCE BUDGETS. Smalltalk is often criticized for being too slow for production systems. Object Technology International demonstrated that Smalltalk need not be slow by delivering a number of hard real-time systems written in Smalltalk, one of which was an oscilloscope.
On one project, accusations about the Smalltalk performance alternated with those about database performance. It turned out that neither was the primary cause of performance delays; rather, it was the overhead of many small requests going to the database. When the team modified the design to create as few requests as possible, each asking for the maximum database processing, performance improved by several orders of magnitude.
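The effect of that redesign can be sketched with a back-of-the-envelope cost model. The overhead numbers below are assumptions chosen only to show why batching wins, not measurements from that project.

```python
# Hypothetical cost model for the request-batching redesign described above.
# Each database request carries a fixed round-trip overhead.
ROUND_TRIP_MS = 50   # assumed per-request overhead
WORK_MS = 1          # assumed per-row processing cost

def cost_one_at_a_time(n_rows):
    # One request per row: pay the round-trip overhead n times.
    return n_rows * (ROUND_TRIP_MS + WORK_MS)

def cost_batched(n_rows):
    # One request asking the database to do the maximum processing.
    return ROUND_TRIP_MS + n_rows * WORK_MS

slow = cost_one_at_a_time(1000)
fast = cost_batched(1000)
```

With these assumed numbers, 1,000 rows cost 51,000 ms one-at-a-time but only 1,050 ms batched; the language executing the requests was never the bottleneck.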
Nonetheless, many projects have given up on Smalltalk after being disappointed with the performance of their system. Therefore, manage the performance issue carefully. Work in the following way:
Allocate time in your project plan for performance improvements.
Buy a performance measuring tool and use it.
Establish maximum acceptable delays for your functions.
Make your system function correctly before going to the next step.
Make your system function fast enough by doing the following:
Measure the delay of your function. If it is acceptable, move on.
Do not overoptimize the performance. You may, of course, change the performance requirements.
Assuming performance is not acceptable, find the bottleneck. It is usually in one or two small areas. Improve the performance of as few places as possible. Do not overoptimize the performance of any area that is not a bottleneck.
To improve the performance of a bottleneck, change the data structures or the algorithms used. Do not use a different language unless there is a compelling argument for doing so.
Once the system's function is adequate, stop optimizing.
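The measure-and-compare step of this procedure can be sketched as a small timing harness. The budget value and the function being timed are hypothetical stand-ins for your own functions and thresholds.

```python
import time

# Hypothetical performance budgets: maximum acceptable delay per function.
budgets_ms = {"lookup_customer": 200.0}

def lookup_customer():
    # Stand-in for a real system function.
    time.sleep(0.01)

def within_budget(fn, budget_ms, runs=5):
    # Measure the average delay; if it is acceptable, move on
    # (do not overoptimize).
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    elapsed_ms = (time.perf_counter() - start) * 1000 / runs
    return elapsed_ms <= budget_ms

ok = within_budget(lookup_customer, budgets_ms["lookup_customer"])
```

A harness like this makes "fast enough" an explicit, testable threshold rather than a matter of opinion, which is the point of establishing maximum acceptable delays up front.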
With this procedure, you will be able to write your system in Smalltalk, and still get acceptable performance.
DESIGN AND CODING STANDARDS. Buy some books to help you establish best practices and good habits for your group and then use them. These are the titles I recommend as the starter set: Design Patterns, E. Gamma, R. Helm, R. Johnson, and J. Vlissides (Addison-Wesley, 1994); Smalltalk with Style, S. Skublics, E. Klimas, and D. Thomas (Prentice-Hall, 1996); and Smalltalk Best Practice Patterns, K. Beck (Prentice Hall PTR, 1997).
Smalltalk is not small.
Managing C++

THE MISCONCEPTION: "C++ IS LIKE C". It has C in its name. It is advertised as "a better C." But, C is a small language, whereas C++ is a big language, much harder to learn and to use. Even its standards committee is trying to stop adding features because it is getting so big. That is just the language; the class library is or will become as large as the Smalltalk class library. So C++ is not really like C.
When you use C++, you must deal with three hazards. The first is the complexity of the language itself; the second, the restrictions the language places on the design; the third, the impact of the language on the design process. More specifically:
It requires designers to introduce complexity into the design.
It requires significant recompilation when a class changes.
It requires programmers to keep track of the heap.
The first issue is design complexity. The rules of visibility and type-checking oblige designers to work some fairly complicated schemes into their design. Examination of the books recommended at the end of this section, plus a little further investigation, will give you an idea of the level of sophistication and subtlety needed to construct a good design. That complexity is not part of your problem domain; it is complexity added by the rules of the language. Header files, virtual, public, protected, private, friend, const, casts, templates, multiple inheritance, constructors, destructors, reference counting, and run-time type indicators are all topics your designers have to master and keep in mind while designing. Your maintenance team also will have to master these topics, and then, as the system evolves, understand how the original application designers were using them.
The second issue is static type-checking using classes as types, and recompilation. The spirit of object orientation is to reduce the impact of change. In C++, however, function calls are compiled in a fragile way. When the definition of a function's class is changed, even if the function itself isn't, all classes in the entire system that call the function get recompiled. This goes against OO's intention of reducing the impact of change, and it introduces long recompilation periods into your project development. Recompilations of 8 or 12 hours are common. Both the fragility of the system and the time consumed by recompilations add hazards to a project.
The third issue is absence of automatic garbage collection. Explicit control of memory is sometimes needed (protocol and real-time systems; see earlier in this chapter). Most projects, however, need to build the system features as quickly as possible; anything else distracts. Keeping track of objects on the heap distracts, in an energy-consuming way. Automatic garbage collection algorithms have progressed enough by now that manual garbage collection should be a thing of the past.
I consider C++ the most significant technical hazard to the survival of your project and do so without apologies. Still, I expect many of you to decide to use C++ anyway, even after reading this chapter. So here I have the following three goals:
To get you to consider another language.
To guide you past the obvious hazards, should you decide to use it anyway.
To get you to hire someone who can get you past the rest of the hazards.
CONSIDER USING ANOTHER LANGUAGE. If you are lucky, as the Project Amber team was, you will find another language that will work for you. If you are observant, you will be able to detect whether your organization can pass the tests for succeeding with C++. If you decide to use C++ and you are not suited to it, acknowledge that as soon as possible and start over. A number of large projects were delayed but then succeeded when they switched from C++ to Smalltalk after the first year of work. A year of lost time is still less than losing the project altogether.
WHY DOES C++ EXIST AT ALL? C++ was designed, quite deliberately, to occupy a design point different from Smalltalk. Where Smalltalk was slow, C++ was designed to be maximally fast ("pay as you go" is the phrase). Where Smalltalk was incompatible with C, C++ was designed to be compatible. Where Smalltalk had no static type-checking, C++ was designed to have compile-time type-checking. Where Smalltalk hid garbage collection in the run-time system, C++ gave it to the programmer and required no runtime system. These all represent valid language design trade-offs.
C++ sits at the crossroads of object orientation and C, a crossroads that is so important that more than five language systems compete for pole position: Objective-C, C++, C+@, SOM, Java, and some proprietary variations. At the time of this writing, C++ leads that competition, probably because its dominant factors are performance and backward compatibility. The following are guidelines to keep in mind as you evaluate whether your organization should use C++:
Consider using C++ if your staff consists of engineering or systems programming people. Do not consider C++ if you are an IS organization and your staff has a COBOL background. This difference in background matters more than the project nature or size. People with engineering backgrounds are more accustomed to the subtle interactions between the language features they will find in C++.
Consider using C++ if you are doing a low-level, real-time system.
Consider using C++ if you have a severe backward compatibility issue.
Consider using C++ if your key developers will quit if faced with having to use Smalltalk. Also, consider hiring new key developers.
CREATE SUBSETS OF AND STANDARDIZE THE LANGUAGE. The first step in managing C++ is to develop a standard usage of the language, and have people use it. Having a standard approved set of C++ locutions means that everyone who joins the project can read and understand the code. It means that designers will not spend time worrying about or discovering new ways to do simple things. It gives consistency to the system, and saves time.
Creating a standard usage is difficult, and getting people to use it is even more difficult. Use the ideas that follow to guide your effort.
Find an expert to set up the language standards. This person must be technically strong, a bona fide C++ expert who understands the strengths and weaknesses of the language features as they apply to object-oriented development. She or he must also be personally convincing and strong enough to be respected and to persuade others on intricate matters of programming. If you are just starting with C++, and do not wish to hire outside help, send your most experienced and most trusted programmer to C++ school, with the assignment to get this information.
Check out one of the current books on C++ idioms. More appear all the time; some good ones to start with are listed at the end of this C++ section. Numerous organizations have gone to the trouble of sifting through the language to set up conventions, so why not use this information to get a head start? If you find you can base the standards entirely on a book, all the better.
Develop standards that call for a simple and object-oriented use of the language. The best OO C++ programs have the general appearance of:

object.message; object.message; object.message;
and so on. Note the absence of language-specific clutter. These programs are efficient and easy to read. This is, of course, an embarrassment to the language specialists who wish to make use of every nuance C++ provides; they will not be able to brag about all the language features they used. Your challenge is to create a development culture in which efficiency and readability are valued.
Get your group to use the standards. First, get the standards accepted by the technical leaders. Most seasoned lead designers have their personal preferences about using the language, but are willing to set some of them aside in order to get a common design/coding discipline. Some negotiation may be necessary.
Second, hold a half-day internal class or meeting to go over the standards that have been developed. Do not underestimate the value of this meeting. I have seen good standards ignored because such a meeting was not held. The primary purposes of the meeting are to show each programmer that every other programmer is there, to hear all the questions and answers from each person, and to build a culture and establish that the standards are real. The people present will ask very specific questions. Some questions will uncover flaws in or legitimate exceptions to the standards. Other questions will show personal fears that the standards do not have enough power or sophistication to handle some situation. A good answer will allay those fears. A secondary value to the meeting is to save time by answering all the questions at one time, instead of repeating the answers individually over the months that follow.
Establish design and code reviews. You should have these anyway. You particularly need them if you are trying to keep to a subset of C++. If the technical leaders are behind the standards, getting people to accept them is relatively easy, but reminding everyone to use them is still necessary, and design/code reviews serve that purpose.
Allocate time in the schedule for periodic restructurings of the class hierarchy. This will be necessary no matter which language you use. When you use C++, it just will take a little longer to do all the editing and recompiling.
If you follow the preceding suggestions, you will be on your way to managing C++. Jeremy Raw's Eyewitness Account tells more about this.
DESIGN AND CODING STANDARDS. I recommend the following books to help you get started on establishing best practices and good habits for your group: Effective C++: 50 Specific Ways to Improve Your Programs and Designs, S. Meyers (Addison-Wesley, 1992); Accelerated C++: Practical Programming by Example, A. Koenig and B. Moo (Addison-Wesley, 2000); Design Patterns, E. Gamma, R. Helm, R. Johnson, and J. Vlissides (Addison-Wesley, 1994); and Object-Oriented Design Heuristics, A. Riel (Addison-Wesley, 1996).
If you can't avoid C++, find someone knowledgeable enough to set and ensure simple standards.
Managing OO COBOL
THE MISCONCEPTION: "OO COBOL IS LIKE COBOL". Your people will either write paragraphs that are object-oriented or write just COBOL. Selecting OO COBOL will not make your COBOL designers suddenly able to create good objects. The designers of OO COBOL learned from Smalltalk, C++, and the early CORBA work, so as an OO language, it is sound. At the time of this writing, however, there is not yet much project experience with it. What project hazards might we expect for this language?
The hazard has to do with expectations. It would seem reasonable to think that if your developers know COBOL, they should be able to learn OO COBOL, and if they do not master OO COBOL, they can still work in COBOL. But this thinking overlooks the fact that object-oriented design is the design of modules, not the writing of the code. If your designers do not master OO thinking, then their design will not be object-oriented, and it will not help if your programmers have mastered the new language syntax. An OO COBOL design is either objects or just COBOL.
Disciplined Use of C++
Jeremy Raw, Independent Consultant
The most successful big C++ project I've been involved in clearly revealed the strengths and limitations of C++. It was undertaken by a design and coding team of seven people, four of whom were very experienced in either C++ or object-oriented design; the rest were experienced in C programming. The application was to be delivered on a workstation platform on which all the participants had some experience. C++ was chosen as a matter of convenience, since some of the project goals demanded a certain level of "unique" functionality.
The primary pitfall for this project revolved around finding a disciplined, OO way to use the language, despite many temptations to take shortcuts. We adopted strict coding standards at the beginning, some of them obvious and some of them arbitrary. They included:
No "downcasting," other than to specifically designed classes
No visible use of C types in implementations of our project-specific types
No pointers to members
No multiple inheritance
No friend functions
No public data members
We also took the position that implicit type conversions among our own types (other than those planned from derived to base classes) would be frowned on.
Despite a warning from one experienced programmer, the client expected the project to be built with Microsoft Foundation Classes (MFC). We quickly determined that MFC should be treated as an implementation detail and that the system object model would be developed in complete isolation from it. This was for portability, and to make sure our goals were not crimped by incompatible characteristics in MFC's object model.
Our initial coding standards held up quite well. We had planned to use a variant of Hungarian notation (special prefixes on names), but found in practice that the most useful information to code into a name was its scope, or "where it came from": clearly marking out global names, parameters, local names, and class members. The fact that this was a different scheme from that used in MFC also worked to our advantage, since it became very clear what was "ours" and what was "theirs."
We also found that the pressure to "code in C" was extremely intense, particularly as we worked close to MFC code. There was a lot of internal resistance in the group at first to throwing away working code, even after it was shown to be incompatible with the model we were aiming for. We also had trouble retreating to analysis and design once we had started building the application in earnest, although the project's initial shortcomings rapidly convinced us that we would have no choice.
Ironically, as we relaxed about "making it good from the beginning" (see Chapter 5, Managing Precision, Accuracy, and Scale), we had an easier time concentrating on the design, disengaging egos, and improving the structure of the object model and the application itself.
The project as a whole was a generally positive experience for everyone involved. We concluded that if we had it all to do again, we would have made an even bigger point of not implementing anything in final form until there was no alternative, adhering to one programmer's motto: "Easy to write, easy to read."
Case material contributed by Jeremy Raw, used with permission.
The second possible faulty expectation is to think that by the time your designer has been trained in object thinking, a COBOL-like writing style will be the most efficient. Recall that retraining in object-think swamps all other costs. Smalltalk is so simple that the entire language can be taught in an afternoon and mastered the next day. What eats up learning time are: (1) OO concepts, (2) design principles, and (3) the contents of the class library. OO COBOL has those costs, plus the cost of learning how and when to do things differently in OO versus standard COBOL. At the time of this writing, Smalltalk is becoming the legitimate successor to COBOL for object-oriented applications on the workstation. It is now available on the mainframe.
The third hazard has to do with the class library. Smalltalk's class library has been growing for several decades, and the library for C++ for over 10 years. The OO COBOL class libraries are in their infancy. You may be faced with spending money to develop software that you should, in principle, be able to buy.
USING OO COBOL. When, then, should you consider OO COBOL? Suppose there is a large amount of COBOL work being done on the mainframe, and your lead developers want to experiment with object-oriented structure on the mainframe. OO COBOL allows them a reversible path. They can apply some object-structuring techniques in a few places, and stay compatible with the rest of the team. For other appropriate times, we shall have to wait and see.
An OO COBOL design is either objects, or just COBOL.
Managing Java

Java is the newest arrival at the crossroads of C and object orientation. What you lose in the move to Java is exactly what C++ chose to optimize: backward compatibility with C, no need for a run-time system, and control over performance (pay-as-you-go performance features). What you gain with Java is less design complexity, greater design stability, and increased portability. With both C++ and Java, you leverage your programmers' experience in C, and your developers still have to learn object design principles and the class library.
Although more complex than C, Java is considerably simpler than C++, and avoids the key problems I associate with C++: language complexity; a fragile, static type structure based on classes as types; and no automatic garbage collection.
Java is a smaller language because it gives up compatibility with C in favor of being completely object-oriented. It uses dynamic type-checking, and allows inheritance of interfaces, as opposed to forcing one to inherit interfaces plus all the implementation details. This feature simplifies some of the complexity C++ designers face when using multiple inheritance. Finally, it provides automatic garbage collection, which eliminates a major area of concern for developers. It is a fully OO language that comes with a run-time package, as does Smalltalk.
Java is most often targeted at internet and intranet applications because the run-time portability pays back the cost of moving to this new language most quickly. Java, of course, is a general-purpose language, and may be used for any sort of application.
Although Java is evolving quickly, it will for a time suffer from immature class libraries and development environments. Smalltalk has a 20-year head start, and C++ has a 10-year head start. All the general Smalltalk and C++ advice given earlier applies to your Java project: Take the time to design, set standards, and follow the standards. Given the use of a run-time system, you can expect performance problems from Java for some time to come, so this advice from Smalltalk also holds: Develop time budgets, get the program to work correctly, and improve the design where needed until the performance goals are met.
The Sam Griffith Eyewitness Account (on pages 62 and 63) describes using Java in 1996 through 1997.
Java is easier than C++, but still young.
Does your choice of tools affect your survival? Yes, if you do not know exactly what you need them for. If you buy a tool believing that it will somehow guide your people in their thinking, then you have added risk to your project.
Know what each person is going to produce (see the discussion of tools in Chapter 4). Once you know that, you can buy, mix, and adapt all sorts of tools to your needs. As Humpty Dumpty said to Alice in Lewis Carroll's Through the Looking-Glass, "The question is, which is to be master, that's all." Do not buy the latest CASE tool, work through all the diagrams it supports, and hope that the software will somehow pop out; it will not. Most of the later design work cannot be generated from prior information, but requires new information and careful thought. Recheck your expectations about tools.
In over a dozen project debriefings, I asked the teams for their tool priorities. They were consistent in naming
Versioning and configuration management
The common characteristic of these most highly valued tools is that people cannot or will not perform these tasks manually. In general, tools considered useful record and track work, or analyze it to give your people insights as to what they did. Recording and tracking tools include text and drawing editors, group communications tools, versioning systems, project-tracking systems, and code-packaging systems. Analyzing tools include run-time performance monitors, code-consistency testers, compilers, linkers, and metrics-gathering tools.
You can build some good, small tools on your project; one example is a small code generator that creates accessor functions and declaration files from simpler specifications. Such tools are fairly easy to write, and save hours of keystrokes. You might write a small metrics-gathering tool to monitor code complexity for design reviews. Most projects have people both capable of writing and delighted to write such small tools.
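As an illustration of the kind of throwaway generator meant here, the following Python sketch emits Java-style fields and accessors from a simple field specification; the "name:Type" input format and all names are hypothetical:

```python
# Sketch of a small in-project code generator: it emits Java-style fields
# and accessors from a simple "name:Type" specification (hypothetical format).
def generate_accessors(class_name, field_specs):
    lines = [f"public class {class_name} {{"]
    fields = [spec.split(":") for spec in field_specs]
    # Declare one private field per specification.
    for name, type_ in fields:
        lines.append(f"    private {type_} {name};")
    # Emit a getter and setter pair per field.
    for name, type_ in fields:
        cap = name[0].upper() + name[1:]
        lines.append(f"    public {type_} get{cap}() {{ return {name}; }}")
        lines.append(f"    public void set{cap}({type_} v) {{ {name} = v; }}")
    lines.append("}")
    return "\n".join(lines)
```

Run over a few dozen domain classes, a script on this order saves the hours of keystrokes the passage above has in mind.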
By upper-CASE tools, I mean those tools that let the team draw diagrams of the object model, and that claim to generate code, check consistency, or reverse-engineer existing code. They deserve a special mention as they can create extra work for an already busy team, draining valuable energy from the project. The extra work may push the team into a depression caused by a sense of bureaucratic overload.
This subject is politically sensitive. Every year, tool vendors claim they have made dramatic tool improvements, eliminating the problems from previous tools. Then, some executives feel they must run the project with the latest computer-based tools. The record of the upper-CASE tool industry is poor enough, however, that I recommend extreme sobriety in judging these tools. Later in this chapter, I give specific suggestions about needed improvements to tools.
Over a period of four years in the early and mid-1990s, I debriefed numerous projects on several continents, both inside and outside IBM. I always asked about upper-CASE tools. I often got an embarrassed head-shake: They had not thought enough of them to give them serious consideration. A few projects related how they had investigated and taken classes in various tools. After examination, they decided that the tools detracted from the project, rather than adding value. Tom Morgan of the B.U.G. project said he had concluded that drawing notations do not scale up to large projects. All of B.U.G.'s critical work was done in text, and they easily built very effective cross-referencing and consistency-checking tools. In more recent work, I asked the same questions and got similar answers. Although there have been improvements with respect to reverse-engineering and code generation, they are small improvements when compared to the total purchase, training, and time cost of the tools.
Using Java
Sam Griffith, Objects Methods Software
Interactive Web Systems worked on a project to build a wireless network configuration engineering support tool. The tool was supposed to support a very progressive GUI, and replace a system that ran on a mainframe. The engineers enter data about a module they need to configure, and the system munges on the data, figures out what equipment is needed and where to place it in the module, draws a picture of the module, and then lets the user rearrange the equipment in that module using drag-and-drop.
Why did we use Java?
We used Java because our client wanted their applications to run over their intranet, and because Java support in internet browsers is strong and getting stronger. We did look at using Sun Microsystems' Safe-TCL, which is available for Macs, PCs, and UNIX boxes. However, it requires a plug-in and is also a relatively obscure language to PC programmers.
One of the things we had to do was to educate the client on the current state of Java and its support of various platforms. They had heard all the hype and none of the downsides. The client was a little disappointed in the current state of things, but optimistic that the situation would improve by roll-out time.
Java versus C++
Java is definitely an easier language to program in than C++, with many fewer "gotchas." One look at the book C++ FAQs and a potential C++ programmer may be scared to death or ready to become a new member of the sadomasochistic community. :) With Java there is no need to know the many different ways that const may be interpreted based on its position in the source: const before a function, const before a function argument, const after the function, and so on. Java also makes memory management a nonissue. It has a nice feature that it borrowed from Objective-C called interfaces, which are used in place of C++'s multiple inheritance. Interfaces allow programmers to easily codify exactly what must be implemented, and allow the compile-time and run-time systems to verify support of a particular set of messages. The run-time verification is what C++ doesn't have.
Java versus Smalltalk
Java is not as mature or as portable as Smalltalk. Java tries to be platform-independent, but the GUI libraries are its downfall. This is evidenced by the team at Marimba (four of the original Java team members) feeling obliged to create their own GUI library. Using only the very basic low-level classes that the AWT provides, they have created a rich GUI library that is much better and more extensive than the AWT that comes with Java. It should be noted that Microsoft is doing the same thing with a new class library it is creating. Other things that Smalltalk has that can make problem solving and programming easier include blocks (closures, for Lisp fans), programmer-definable control structures, and keyword message syntax; and everything, including numbers, is an object. The Smalltalk environment is also light-years ahead of Java's. In Smalltalk, one can extend the environment just by adding code and saving the image. I can't do that with any of the Java development environments. Most Java development environments are written in some other language.
We dealt with Java's GUI issues by using the Bongo GUI toolkit from Marimba and Symantec's Café development environment. Café gave our team an environment that is modeled after Smalltalk, and Bongo gave us a decent GUI class library. Our client easily accepted that Marimba charges a fee for using its technology. The Marimba tools had other advantages, such as ease of distribution, that far outweighed the fees in the client's eyes.
Our team consisted of several former Smalltalk programmers. Out of that group, one member also had extensive experience in Objective-C, C++, CLOS, Object Pascal, and several other OO languages. The team found that learning Java was easy. It was the immaturity of the GUI libraries and development tools that they were most frustrated with. Our team has not found one development environment in which we can debug reliably, so we constantly fall back on the old standard of printing our state to the console.
Despite the immaturity of Java and the associated tools, the team came away with an overall good feeling. Yes, we are living on the bleeding edge, but we do get quick turnaround on editing, compiling, and debugging; it's not quite as fast as Smalltalk or Lisp environments, but much better than any C++, Pascal, or other statically compiled environment. Our turnaround time even beats Visual Basic.
Hey, Java's the future, so why fight it? Plus, it's easier than C++!
Case material contributed by Sam Griffith, formerly of Interactive Web Systems, used with permission of author.
Exactly what should you want from your object modeling tool? The following subsections describe three of the possible answers.
DRAWINGS. You will want to discuss the object model in open meetings, with managers, testers, database designers, domain experts, user representatives, object designers, and others. It is not practical to bring the Smalltalk browser or C++ listings to the meeting and expect the different parties to make decisions by looking at the text. A drawing of the classes, their relationships, responsibilities, and attributes facilitates the conversation, and helps to draw out questions.
Do you need a CASE tool for this? Many people use a drawing package, one specially designed for diagramming or drawing designs. A CASE tool costs more and is usually harder to use. As Dave Thomas of OTI repeatedly says, "Why should I turn my best OO designers into draftsmen?" Whichever tool you use for drawings, understand what you get for your money.
CONSISTENCY. Some CASE tools track changes to class, relationship, and function names so that changing the name on one part of the drawing changes it on all parts. Some tools will alert you when a class grows past a certain size. Again, take a look at how easy it is to enter and change the drawing. I provide a test for this issue shortly.
CODE GENERATION. On a recent, fairly large IS application, my colleagues and I estimated that perhaps 5 percent of the total system code could have been generated from the business object model. That was primarily names of data items and simple accessor functions for them. In the case of C++, there is enough boilerplate header material to be written that a tool's assistance is quite handy.
The problem comes when you must add some computation that cannot be generated automatically. At that moment, the drawing of the object model, which moments before was an asset, becomes a liability: a burden of double maintenance. From then on, your team will have to maintain both the drawing and the final code. I have yet to interview an industry project that successfully managed double maintenance. On one project, I set up selected points at which designated individuals resynchronized the drawings and the code. Before I left the project, the drawings were already out of date compared with the code, and were not put right again.
The double-maintenance problem is diabolical, because multiple design documents are almost impossible to keep in sync, and there is no known way around the problem, yet. Double maintenance pertains to non-CASE material as well, such as simple drawings. The only difference is that drawing editors do not claim to solve it.
Every year I hear about a tool that is "about to come out that will take care of your concerns." A year later, the tool proves hard to use, causes as much work as it saves, and creates a double-maintenance burden. Is there a fair test for these tools? I think so, and the next section describes it.
The Scanner Challenge
How many changes are required before the tool beats a pencil drawing scanned into Lotus Notes? The pencil (pen) is still the fastest drawing tool, and still can put the most information into a square inch. As changes are made to the design over time and the drawings are updated, a good drawing tool, or CASE tool, will start to pay for itself. I estimate that 30 to 40 major changes would have to be made to a single page of a pencil drawing before the CASE tool would pay back the initial cost of entering the drawing. If nonerasable marker were used, and the entire page had to be redrawn, then the CASE tool might pass the pen after about 15 changes to the page.
That in itself is not a fair comparison because the CASE tool drawing is made available to everyone on the project over the computer network, whereas the pen drawing is stuck in someone's filing cabinet. To make the pen drawing available, it has to be scanned into a distribution tool or document editor. It takes time to turn on the scanner, scan in the drawing, and paste it into a note. I would estimate that a CASE tool drawing might beat a scanned drawing at about 10 changes per page.
As CASE tools improve, that number should drop. And so it becomes a fair challenge to ask each year, "How many changes does it take before the CASE tool beats a pencil drawing scanned into Lotus Notes?"
Estimate how often you plan to update each page of drawings. That will help you see how much effort the CASE tool will cost you, or save you. On my last project, drawings were made four times: once before design review, once after design review, once before shipping to QA, and once on product release. The CASE tool did not pay its way in simplified changes.
Minimum CASE Tool Requirements
When will an upper-CASE tool offer value and not get in the way? When it meets the following three minimal criteria. They are not easy to meet, and to date, no upper-CASE tool I have seen meets them.
Single point of maintenance. The tool generates code from the drawing, but need not generate all of the code just from the drawing. It is permissible and probably necessary for the tool to support a mixture of drawings, text, high-level notation, and programming language code. What is important is that each item (drawing item or text item) be the only source code for whatever it describes. It must never be necessary to edit two places for one change (this is called single point of maintenance). If the drawing item changes, the program code should change also; if it is necessary to change some program text, the tool should preserve and protect that change. This requirement eliminates the double-maintenance burden of current tools. No tool I know of can do this.
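No tool I know of meets this requirement, but a toy Python sketch shows the mechanism it calls for: regeneration that preserves and protects hand-edited text instead of clobbering it. The marker comments and function below are invented for illustration:

```python
import re

# Toy sketch of single-point-of-maintenance regeneration: hand edits live
# between invented "// BEGIN HAND <name>" / "// END HAND" marker comments,
# and regeneration splices them back instead of clobbering them.
HAND_REGION = re.compile(r"// BEGIN HAND (\w+)\n(.*?)// END HAND", re.S)

def regenerate(old_source: str, generated: str) -> str:
    """Re-emit freshly generated code, carrying forward any hand-edited
    regions found in the old source."""
    # Collect hand-edited regions from the old file, keyed by region name.
    saved = {m.group(1): m.group(2) for m in HAND_REGION.finditer(old_source)}
    def splice(m):
        # Prefer the preserved hand edit; fall back to the new placeholder.
        body = saved.get(m.group(1), m.group(2))
        return f"// BEGIN HAND {m.group(1)}\n{body}// END HAND"
    return HAND_REGION.sub(splice, generated)
```

In this scheme the hand-edited region has exactly one home, so a change to the drawing regenerates everything around it without forcing anyone to edit two places for one change.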
Tailorable mappings. The code the tool generates has to be tailorable. Each project has its own programming conventions and standards, and within a single project, there may be different generation requirements for different kinds of classes (persistent versus nonpersistent, for example). At this time, some CASE tools support tailorable mapping.
Logical versus physical views. Drawing a line between two objects should not always result in code being generated. Sometimes a line means, "I don't yet know how this object talks to that object," or "We already know, and it is in the model, but I wish to hide how this object talks to that object to simplify the picture for the moment." The first is used in early stages of design, or analysis. The second is used to present a finished design in a "rolled-up" or "high-level" view. I know of no tool that does this.
Without at least these capabilities, busy developers will continue to ignore or resist the tool. There are other desiderata for such CASE tools. However, these three are already hard enough.
A useful tool records, tracks, or analyzes work to give people insights into what they did.
The Cutting Edge
The newest, most marvelous idea in the industry will probably do you more damage than good.
In the early 1990s, IBM's System Object Model (SOM) was hot; then it was Taligent's OO operating system, then SOM again, then the CORBA standard, Distributed Objects Everywhere, Visual Programming, and so on. Many organizations considering object technology felt that to be current, they were obliged to use or consider each of these. In 1996, Taligent suddenly quit. Companies dependent on Taligent had to scramble to work out a new strategy. Project Stanley (see Chapter 2) was one whose leaders' insistence on using cutting-edge technology was a major cause of its failure.
If you are an expert in an area, you know it. If you have done distributed programming, then you can evaluate Distributed Objects Everywhere, CORBA, Distributed SOM, Distributed Smalltalk, or distributed whatever. If you have not done distributed programming, then you cannot evaluate these technologies, and probably should stay away from them. If IBM and HP have not come to market with, for example, a CORBA platform, assume CORBA is not a good place to put your money just now. Select a cutting-edge technology only if you can handle the cut.
If it is not part of your specialty, buy it. This first part of the strategy is easy. Spend your development money where you have an opportunity to gain an edge on your competitors. Let a company that specializes in the new area work through the learning and development curve. You will be able to buy its solution for 1 to 5 percent of the development cost. If someone has not figured out how to bring it to market yet, assume you will not have time to do it on your busy project.
If you cannot buy it today, assume it cannot be done currently. The average project is less than two years long. If you rely on another company to deliver this new software in its advertised time frame, you add significant risk to your project. History is not on your side. Assume it will not become a reality during the timespan of this project. Investigate it again at the start of the next project.
Beware of programming houses and consulting companies eager to work on the latest industry fad using your dollars. Contracting companies habitually learn new technology by bidding it on your project. Visit a project on which the technology was previously deployed. If it has not been deployed yet, you are in the same situation as before. Try to work through this project without the new technology.
Be on the cutting edge only if you can handle the cut.