Eliminate Unisys dependencies
There are numerous dependencies on the Unisys OS 2200 operating system that must be systematically identified and eliminated. A plan is needed for dealing with each type of dependency, both to ensure that dependencies of the same type are handled consistently in the final system and to reduce the risk of introducing defects during the porting process.
Micro Focus COBOL does not have the same system functions as Unisys COBOL. This requires that Unisys system function calls be converted to Solaris system functions. Each invocation of Unisys system functions must be carefully reviewed to determine the best way of handling them in the new environment. The legacy system uses over 150 different system calls for Unisys Executive subroutines, the DPS, messages, and other functions.
Except for Unisys system functions, the existing Unisys COBOL 85 and Micro Focus COBOL are quite similar. The differences lie largely in file assignments, the handling of binary fields for bit manipulation, and the use of Unisys versus Solaris system subroutines. Automated source code translation ensures that language differences are handled the same way from one program to the next, making maintenance easier. However, the ability to automate such changes depends on the difficulty of the change and the sophistication of the tools being used.
Simple changes, such as changing "SOURCE-COMPUTER. UNIVAC-1100-60" to "SOURCE-COMPUTER. SUN-SOLARIS" in the CONFIGURATION SECTION of program elements, can be automated easily using either editor macros or simple text-processing scripts.
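A change of this kind can be sketched as a small text-processing script. The following is illustrative only: the file extension, directory layout, and the assumption that source files can be rewritten in place are all hypothetical, and the regex tolerates variable spacing in the legacy literal.

```python
import re
from pathlib import Path

def retarget_source_computer(text: str) -> str:
    """Replace the Unisys SOURCE-COMPUTER entry with the Solaris one.
    \\s* tolerates the variable spacing seen in legacy sources."""
    return re.sub(
        r"SOURCE-COMPUTER\.\s*UNIVAC-\s*1100-60",
        "SOURCE-COMPUTER. SUN-SOLARIS",
        text,
    )

def retarget_tree(root: str) -> int:
    """Apply the change in place to every COBOL source file under `root`
    (assumes a .cbl extension); returns the number of files changed."""
    changed = 0
    for path in Path(root).rglob("*.cbl"):
        original = path.read_text()
        updated = retarget_source_computer(original)
        if updated != original:
            path.write_text(updated)
            changed += 1
    return changed
```

Because the script is deterministic, it can be re-run against a fresh copy of the source base at any time, which is the property that makes the automated approach attractive.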
Simple text replacement is not always viable when the program text to be changed depends on the surrounding context. To successfully automate these changes, the tool must often be aware of both syntactic and semantic elements of the language. Still, most changes that involve identifying a pattern and performing a text replacement can be automated.
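As a minimal sketch of what "aware of syntactic elements" can mean in practice, the helper below tracks which COBOL division a line belongs to and applies a substitution only there. It is a deliberately crude scanner, not a parser, and the division-tracking approach is an assumption about how such a tool might be built.

```python
import re

def replace_in_division(lines, division, pattern, replacement):
    """Apply a regex substitution only to lines inside the named division.
    Tracking division headers (e.g. 'DATA DIVISION.') is a crude form of
    syntactic context -- enough for changes that plain text replacement
    would apply too broadly."""
    header = re.compile(r"^\s*([A-Z-]+)\s+DIVISION\s*\.")
    current = None
    out = []
    for line in lines:
        m = header.match(line)
        if m:
            current = m.group(1)
        if current == division:
            line = re.sub(pattern, replacement, line)
        out.append(line)
    return out
```

For changes that depend on semantic context (for example, how a data item is actually used), even this is not enough, and a tool built on a real COBOL parser is needed.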
In many cases, it is difficult or impossible to find a pattern that can be identified and replaced. This may be simply because there was no standard pattern of usage applied in building the legacy system. These cases can be handled by either manually modifying each problem instance or even writing special case code that "automates" the modification. While this almost always involves more work than simply making the change, it provides the porting team with the ability to evolve the modification process and then re-execute the entire process on the original source base.
Before deciding on an automated or manual approach, it may be useful to assess the extent of the problem, that is, how many times the problem occurs and how many source files are affected. This may help to determine if taking the time to automate the modification is necessary or worthwhile.
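A quick census of the source base can supply those numbers. The sketch below is a hypothetical helper (the .cbl extension is an assumption) that reports both the total occurrence count and the number of affected files for a given pattern.

```python
import re
from pathlib import Path

def count_occurrences(root: str, pattern: str, glob: str = "*.cbl"):
    """Count how many times `pattern` occurs under `root` and how many
    files contain it, to help decide whether automating a modification
    is worthwhile.  Returns (total_occurrences, files_affected)."""
    rx = re.compile(pattern)
    total = 0
    files_hit = 0
    for path in Path(root).rglob(glob):
        hits = len(rx.findall(path.read_text(errors="replace")))
        if hits:
            files_hit += 1
            total += hits
    return total, files_hit
```

A problem that occurs three times in two files is a candidate for manual change; one that occurs hundreds of times across the source base justifies the investment in automation.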
The Unisys system uses the Executive Control Language (ECL) to process a run. These control statements can invoke Executive functions or cause the execution of a program. In the new Sun Solaris environment, control statements are usually written as UNIX scripts in a shell programming language, such as the Korn shell. Programs must be developed to convert Unisys ECL to UNIX shell scripts, or the conversion must be done by hand. In some cases, database triggers or stored procedures written in Oracle's PL/SQL language may replace the ECL. In other cases, the functionality of the original ECL may no longer be needed. All cases must be analyzed to determine the intended functionality before the optimal migration strategy can be selected.
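The mechanical core of such a converter is a statement-by-statement mapping. The sketch below handles a few common ECL statement types; the specific translations are illustrative rather than a complete or authoritative mapping, and anything it does not recognize is flagged for manual review rather than silently dropped.

```python
def ecl_to_ksh(ecl_lines):
    """Translate a few common ECL statements into Korn-shell lines.
    Real ECL statements (@XQT, @ASG, @FREE, ...) carry options this
    sketch ignores; unrecognized statements are flagged for review."""
    out = ["#!/bin/ksh"]
    for line in ecl_lines:
        stmt = line.strip()
        if stmt.startswith("@XQT "):
            # @XQT runs a program; in ksh the program is simply invoked.
            out.append(stmt[len("@XQT "):].strip())
        elif stmt.startswith("@ASG,"):
            # File assignment has no direct ksh analogue; mark for review.
            out.append("# TODO review file assignment: " + stmt)
        elif stmt.startswith("@FREE "):
            out.append("# TODO review file release: " + stmt)
        else:
            out.append("# UNTRANSLATED: " + stmt)
    return out
```

The TODO markers matter: as the text notes, some ECL maps to shell commands, some to PL/SQL, and some to nothing at all, so a converter can only propose translations for a human to confirm.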
Some of the existing COBOL programs actually create and run ECL dynamically on the Unisys system. These programs build ECL statements and then call the Unisys Executive subroutine to execute them. A total of 293 programs and 24 procs in the legacy system were identified that make calls to this subroutine. During the migration, the generated ECL statements must be replaced with equivalent UNIX shell commands, and the calls to the Unisys Executive subroutine replaced with calls to the UNIX system() function.
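The call-site half of that change lends itself to a scripted rewrite. In the sketch below, 'ECLEXEC' is a stand-in for whatever name the legacy code actually uses for the Executive subroutine, and the rewrite target assumes a Micro Focus-style CALL "SYSTEM" interface; both are assumptions, and every rewritten site still needs review because the argument must now hold a shell command rather than ECL text.

```python
import re

def retarget_executive_calls(text, exec_name="ECLEXEC"):
    """Rewrite COBOL calls to the Unisys Executive subroutine as calls
    to the UNIX system() interface.  'ECLEXEC' is a placeholder name.
    The USING operands are preserved unchanged, which is why each site
    must be reviewed: the buffer they name now carries a shell command."""
    pattern = re.compile(r"CALL\s+['\"]%s['\"]" % re.escape(exec_name))
    return pattern.sub('CALL "SYSTEM"', text)
```

The harder half, changing the logic that builds the ECL text so that it builds shell commands instead, is exactly the context-dependent kind of change discussed above and is unlikely to be fully automatable.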
Port Existing Functionality to Micro Focus COBOL
The legacy system uses a technique referred to as pseudo processing. Normally, a transaction is processed as soon as the user inputs it into the system by entering it on the screen. Some transactions, however, are not processed right away, but rather are stored in a transaction file and processed at a later time.
The purpose of the pseudo processing is to avoid flooding the system with hundreds of transactions all at once. The Pseudo Reader programs ensure that only one transaction executes at a time by waiting for a transaction to complete or time out before starting the next one.
The question of how to handle pseudo processing in the migration is an interesting one. The existing mechanism appears to be, at best, a crude solution to the problem. It is possible that the new system will have sufficient bandwidth to process all transactions as they arrive. If this is not the case, there are a number of solutions to this problem offered by commercial infrastructure products. For example, Oracle incorporates a Database Resource Manager (DBRM) that can prevent the execution of operations that are estimated to run for a longer time than a predefined limit, as well as provide other mechanisms for guaranteeing performance. One or more of these mechanisms may be used to address the problem. However, it is important to consider the goals of the effort when deciding how to handle this problem. Any solution implemented as part of this pre-componentization effort is likely to be transitional. Therefore, the best solution may be the solution that requires the least effort. If the pseudo-processing capability can be migrated easily as part of the overall system migration, it may be simpler to retain the (obsolete) functionality than to eliminate it.
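If the capability is retained, its essential behavior is small: run deferred transactions one at a time, waiting for each to complete or time out before starting the next. A minimal sketch of that serialization, with a hypothetical per-transaction handler standing in for the real transaction processing:

```python
import threading

def pseudo_reader(transactions, process, timeout=30.0):
    """Serialize deferred transactions: run one at a time, waiting for
    each to finish or time out before starting the next -- the behavior
    the legacy Pseudo Reader programs provide.  `process` is a
    hypothetical per-transaction handler; the timeout value is arbitrary."""
    results = []
    for txn in transactions:
        done = threading.Event()
        outcome = {}

        def run(t=txn):
            outcome["result"] = process(t)
            done.set()

        worker = threading.Thread(target=run, daemon=True)
        worker.start()
        if done.wait(timeout):
            results.append(("ok", outcome["result"]))
        else:
            # Timed out: record the fact and move on to the next one.
            results.append(("timeout", None))
    return results
```

Whether anything like this is needed at all depends on the capacity question raised above; a commercial mechanism such as Oracle's Database Resource Manager may make any hand-built serialization unnecessary.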