
Strategies

In the following sections, we examine the various strategies available for a migration effort, including the following.

  • Refronting
  • Replacement
  • Rehosting—technology porting
  • Rearchitecting—reverse engineering
  • Interoperation
  • Retirement

When planning a migration project, consider how your environment could benefit from the strategies described in this section.

Refronting

Many legacy applications have excellent functionality, but are not user friendly. Data entry for the application is accomplished by using a series of screens that frequently contain cryptic names for fields and unintuitive menus, which result from limited screen space. These interfaces were based on CRT technology that was available 20 to 30 years ago.

Rather than rewriting an entire application, it might be possible to change just the data entry portion of the application. Refronting, or adding a more aesthetic interface to an existing application without changing its functionality, is an option. Users will have access to the same data, but will be able to access it in a more efficient fashion without the use of expensive terminals, cabling, or peripheral interconnects.

Where desired and appropriate, a browser-based solution can be developed. In the case of mainframe replacement, 3270 data entry screens can be replicated over a network. Web-enabling an application can provide significant cost reduction. Different approaches for refronting include screen scraping, HTML generation, source code porting, and other techniques.
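The screen-scraping approach mentioned above can be sketched in a few lines: parse the fixed-width character buffer the legacy screen produces into named fields, then emit an HTML form. The field positions and names below are invented for illustration; in a real project they would come from the legacy application's screen definitions.

```python
# Minimal sketch of one refronting technique: "screen scraping" a
# fixed-width terminal screen into named fields, then rendering an
# HTML form. The SCREEN_MAP layout is hypothetical.

from html import escape

# (row, start_col, end_col) for each field -- assumed layout
SCREEN_MAP = {
    "customer_id": (0, 8, 16),
    "order_date":  (1, 8, 18),
}

def scrape(screen_rows, screen_map):
    """Extract named fields from a list of fixed-width screen rows."""
    return {
        name: screen_rows[row][start:end].strip()
        for name, (row, start, end) in screen_map.items()
    }

def to_html_form(fields):
    """Render the scraped fields as a simple HTML form."""
    inputs = "\n".join(
        f'<label>{escape(name)}: '
        f'<input name="{escape(name)}" value="{escape(val)}"></label>'
        for name, val in fields.items()
    )
    return f"<form method='post'>\n{inputs}\n</form>"

screen = [
    "CUSTID: C-10042        ",
    "ORDDT:  2003-06-30     ",
]
fields = scrape(screen, SCREEN_MAP)
html = to_html_form(fields)
```

A production refronting layer would of course sit behind a web server and write the captured values back to the legacy transaction, but the core of the technique is this positional extraction step.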

Modern graphical user interface (GUI) technology can also be integrated into a legacy application to support a clearer representation of the required input. Conversely, for reasons of efficiency, the data entry screens might be replicated in the new technology "as is" to eliminate the need to train the data entry staff.

Some of the user acceptance issues identified by this example might reveal themselves when a rehost strategy is adopted and a COTS product upgrade involves a change to the input form's hosting technology (for example, when ASCII forms are replaced by a web browser).

The refront strategy requires an architectural model so that new components can invoke old, migrated components with minimum change to the migrated components. Such a migration project requires the application of architectural skills.

Replacement

The refronting strategy is really a variation of the much broader replacement strategy. Using the replace approach, the legacy application is decomposed into functional building blocks. Once it is broken down in this manner, portions of a generic, often complex, custom-written legacy application can be replaced with a COTS application. Of course, the package must be able to run on the target OS.

When evaluating replacement strategies, consider packages that offer better functionality and robustness than the existing, deployed application components. Make sure the vendor's solution is well tested and accepted in the marketplace, and verify that it is configurable, enhanceable, and well supported by the vendor. Product longevity and backward compatibility must also be taken into account.

One of the key drivers for the applicability of this strategy is the competitive dynamics of the software supply industry. Any custom (or bespoke) application owned by an organization is always competing with the market, whether its competitive position is evaluated implicitly or explicitly. The marketplace is also driven by a sedimentation process. ISVs seek to maximize the business value of their software products, and sedimentation refers to supporting functionality moving from the application implementation space to middleware (or utility software). From there, the functionality moves to the OS and often to hardware. Print spoolers and job schedulers, for example, have been extracted from the application space and are now usually provided by utility software suppliers or by infrastructure vendors. Sometimes functionality sediments even further, as when web server load balancing moved from the application layer to become an OS feature and is now implemented in networking hardware.

The sedimentation process is an opportunity for migration planners because it makes state-of-the-art supporting functionality available to the enterprise. This occurs because ISV developers can leave the development and maintenance of that functionality to alternative providers and concentrate on transactional logic. By migrating some functionality through the replace strategy, an enterprise can follow this trend and reduce its in-house code maintenance burden, albeit by transferring it to a third party. Cost can be reduced, or developer effort can be refocused where it yields more benefit, but cost is not eliminated. The replace strategy enables in-house developers and maintainers of the utility code lines to be redeployed on more business-critical code lines and modules.

When considering replacement as a strategy, the option to replace the application's code with a new third-party software product might be attractive. If this approach is chosen, the migration project must do the following:

  • Document the current business process and data model.

  • Perform a gap analysis between the proposed application and the current state with respect to business process.

  • Create a transformational data model, where appropriate.

These steps require traditional systems and business analysis skills.
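The gap-analysis and data-model steps listed above can be illustrated with a toy sketch: compare the documented current business processes against what a candidate COTS package covers, and map a legacy record into the package's target data model. All process and field names here are invented for illustration.

```python
# Toy sketch of a replacement-project gap analysis: plain set
# operations over documented process names. Real gap analyses compare
# detailed process and data models, but the shape is the same.

current_processes = {"order entry", "credit check", "custom rebates", "invoicing"}
package_covers    = {"order entry", "credit check", "invoicing", "returns"}

gaps    = current_processes - package_covers   # rebuild, drop, or keep in-house
surplus = package_covers - current_processes   # new capability the package brings
covered = current_processes & package_covers   # direct replacements

def transform(legacy_row):
    """Map one legacy record into the package's target data model.
    Field names on both sides are hypothetical."""
    return {
        "CustomerId": legacy_row["CUST_NO"].strip(),
        "CreatedOn":  legacy_row["OPEN_DT"],
    }
```

Functions like `transform` become the transformational data model in executable form: run over an extract of the legacy database, they populate the replacement package's schema.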

A more powerful option might be to adopt a new application solution and, where the organization's business logic no longer yields competitive advantage, change that logic to match the package's optimum business process. This leaves migrators with the problem of identifying the legacy data that is still used and migrating it to the new software solution. It also requires the development of a rollout plan that encompasses the enterprise's user community. Such a rollout is likely to be expensive, so the cost/benefit analysis of this approach needs to be solid and substantial. The work involves data modeling skills and potentially programs for transforming the data into the target data model and populating the new database. This approach allows the enterprise to move from applications built to deliver functional competitive advantage to software that lets the organization compete through superior cost advantage, which mandates that the replacement product be competitively inexpensive to deploy and run.

Replacement can be a quick, low-risk solution, although the replacement of complete applications has large implications for business acceptance and rollout. Replacing a homegrown ERP solution with a COTS package, for example, can take upwards of two years. Effort is required to ensure that business processes and logic conform to the capabilities of the COTS component, rather than the other way around. For some applications, the cost of acquiring custom logic for these software packages can be equivalent to maintaining and modifying a custom code base, depending on the function that package provides, which is why it might be more appropriate to adopt the COTS vendor's assumed business process. Not all business processes, and hence not all applications, are designed to enable functional competitive advantage. For instance, a customer relationship management (CRM) package deployed to replace a specific business function will require more maintenance than configuring a replacement print spooler. If the proposed source modules for the migration represent only a subset of the target package's functionality, it might make sense to identify additional business processes to encapsulate within the CRM solution, replacing more code and increasing the potential benefits. This illustrates the trade-offs available when planning replacement strategy-based migrations.

There are three clear options within the replacement strategy.

  • Use a COTS package to replace or retire the source modules.

  • Use a COTS/utility package to replace sedimented functionality.

  • Use operating system functionality to replace sedimented application functionality.

The last option in the preceding list uses functionality that has been integrated into the existing OS. Examples of this include complicated memory management schemes that were implemented due to older memory limitations, coarse-grained parallelism that is used instead of threading models, or shared memory that is used as an improvised IPC mechanism.
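As a concrete illustration of replacing sedimented application functionality with what the OS and its runtime now provide, a legacy design that forked one process per work item can hand scheduling to a standard thread pool instead. A minimal sketch:

```python
# Sketch of replacing coarse-grained, homegrown parallelism with the
# threading support now provided by the platform. The per-item work
# function is a stand-in for whatever each forked child once did.

from concurrent.futures import ThreadPoolExecutor

def process_item(item):
    # stand-in for per-item work previously done in a child process
    return item * item

items = [1, 2, 3, 4]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(process_item, items))
```

The application sheds its own scheduling and lifecycle code; the pool, provided by the platform, handles it.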

The advantages of moving from a homegrown solution to a COTS-based solution include:

  • Integration with other internal and third-party external applications

  • The release of the budget associated with inflexible development resources

  • An improved opportunity to tap skilled resources from established labor markets supporting both the business and IT communities

Replacement can act as a strategy on its own, and it can also be applied to components within an alternative strategy. Interestingly, as a strategy, it potentially yields the highest benefits and involves the highest degree of cost, yet when applied to components within an alternative strategy, it can be a quick and low-risk strategy.

Rehosting—Technology Porting

Rehosting involves moving complete applications from a legacy environment with no change in functionality. There are several ways this can be accomplished for custom-written applications.

  • Recompilation. As previously mentioned, an application can be ported to the new environment. There are two approaches for doing this. The first is primarily associated with developing or acquiring a compatibility library that provides functionality identical to that of the APIs found on the original OS and supports third-party products. For example, Sun provides compatibility libraries for some of the major competing operating systems, such as HP-UX. An alternative approach is to use intelligent code transformation tools to alter the original source code to correctly call the new operating system's APIs. Both of these approaches have the benefit of capturing the changes required during the migration, although the second might limit backward compatibility.

  • Emulation. This approach introduces an additional software layer to emulate the instruction set used in the source binaries. While introducing another software layer between the application and the hardware can affect performance, it eliminates the need for recompilation. When adopting this strategy, it is important to understand that the old environment has not really been left behind. The application will be developed and compiled using the old environment and will only execute in the new environment. By their nature, emulation solutions incur additional cost above that of the target platform environment. This results from the need to supplement the OS with the emulator, which is rarely free.

Although emulation is a useful approach, if source code is available, it is more common for an application to be recompiled to the native instruction set because native code runs faster. Emulation is most useful when migrating applications that are written in interpreted languages or when the original execution environment was tightly coupled within the OS. BASIC, PICK, or MUMPS are examples of development or runtime environments that are suitable for emulation solutions.

Most emulators are for interpreted languages; therefore, the source code is available to the organization. However, source code engineering and reverse engineering rights might not have been granted in the right-to-use license. If you intend to use an emulation solution or reverse engineer a solution, ensure that you are licensed to do so in the environment where it will be used.
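The emulation layer described above can be illustrated with a toy interpreter for an invented accumulator machine. A real emulator implements the source platform's actual instruction set; this only shows the shape of the fetch-decode-execute loop that sits between the legacy "binary" and the new host.

```python
# Toy emulation layer: interpret a list of (opcode, operand) pairs
# for an invented accumulator machine. Opcodes are hypothetical.

def emulate(program):
    """Run the program; return the final accumulator value."""
    acc = 0
    for opcode, operand in program:
        if opcode == "LOAD":
            acc = operand
        elif opcode == "ADD":
            acc += operand
        elif opcode == "HALT":
            break
        else:
            raise ValueError(f"unknown opcode: {opcode}")
    return acc

legacy_binary = [("LOAD", 40), ("ADD", 2), ("HALT", 0)]
result = emulate(legacy_binary)
```

The performance cost the text mentions is visible even here: every legacy instruction costs several host operations to decode and dispatch, which is why recompilation to native code is preferred when source is available.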

  • Technology porting. Emulation is a technique that supplements the target environment with the capability to execute code (usually interpreted) that runs natively on the original system. Many applications are developed and written in a superstructure software environment that is installed as a layered product on the source system. The most common of these are applications created by relational database management system (RDBMS) vendors, many of whom support an array of hardware platforms and guarantee a common API across those platforms. The advantage of this approach is that one common API owner, the software ISV, owns the API on both the source and target systems. While the discovery stages of a migration project are still required, the APIs on the source and target systems remain the same.

The leveraging of the ISV solution is often an opportunity to upgrade the ISV product version to obtain new functionality or to obtain superior support from the ISV. For instance, the transaction processing system known as CICS relies on a well-understood series of APIs. These APIs and their functionality have been ported or reimplemented on the new target Solaris OS. Applications using these APIs are compiled to run native instructions on the new system.
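The compatibility-library idea behind the recompilation approach can be sketched as follows: preserve the old platform's API names, but implement them in terms of the new platform's native calls, so application code is rebuilt unchanged. The legacy function name below is invented for illustration.

```python
# Sketch of a compatibility shim: the legacy API surface is kept,
# reimplemented over the new platform's native facilities. Only this
# shim knows about the new API; calling code is untouched.

import platform

def legacy_get_host_name():
    """Hypothetical legacy API, reimplemented on the new platform.

    Application code that called legacy_get_host_name() on the old
    system keeps compiling and running against this shim.
    """
    return platform.node()  # the new platform's native call
```

This captures the migration changes in one place, exactly as the text describes for compatibility libraries, at the cost of carrying the shim forward with the application.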

Rehosting offers the advantage of low development risk and enables familiar legacy applications to be quickly transferred to a more cost-effective platform that exhibits lower TCO and a faster return on investment (ROI). Extensive retraining of users is not needed because the architecture, interface, and functionality do not change. Rehosting is an excellent approach for companies desiring to decrease their maintenance and support costs.

Rehosting is, by definition, a quick fix. Rehosting does not change the application or the architecture. This means that new technology that is available in the target environment might not be properly utilized without some modification of the application. Rehosting is a preferred solution when the current business logic and business process remain competitive in the enterprise's markets and are worth preserving. Rehosting offers the possibility of using cost savings accrued through switching development and runtime environments to fund full rearchitecture projects, where warranted.

Rearchitecting—Reverse Engineering

Rearchitecting is a tailored approach that enables the entire application architecture to migrate to the new OS, possibly using new programming paradigms and languages. Using this approach, applications are developed from scratch on a new platform, enabling organizations to significantly improve functionality and take full advantage of the target system's potential.

Applications poor in IT effectiveness and functionality are the best candidates for rearchitecting. This approach is best utilized when time is not a major factor in the decision. Most rearchitecture projects require a skilled development staff that is well versed in the new technology to be implemented.

The downside to this approach is that it requires new or additional training for users, developers, and technical staff. In addition, rearchitecting requires the most time and is the most error prone of all of the possible solutions. Sometimes, business rules can be well hidden in user interface code or database management systems, as was the case with DECforms, DEC FMS, and RDBMS triggers. The ability to extract all the necessary business logic from the application's source can be severely inhibited by poor coding methodology and practice, such as the hard coding of business parameters.

Despite these problems, rearchitecting and reverse engineering can still be the correct strategies; the problems then become project risks. These risks can be mitigated by the application of appropriate business acceptance testing with internal and external users.

Rearchitecting does, however, open the opportunity to improve the business logic and processes and to change the developer productivity model.

A technique particularly appropriate to rearchitecture is reverse engineering. It is an axiom that the business logic encapsulated in the source code is the business logic implemented, and thus the source code is the most accurate place to discover the business logic. One of the key problems of software development is that most usability errors in software are introduced by poor business process documentation and even poorer translation into software idioms. Some software environments have embedded dictionary or repository functionality. Where these exist, they may be supplemented with original author or third-party tools to enable the extraction of business logic and the recreation of that business logic in new environments. With these tools, the process can be reversed: the dictionary can be parsed, the user world view can be generated, and the implementation source code can be generated.

A classic example is the RDBMS world, where data definition language scripts for a database implementation can be generated from the database implementation itself. This is tool based, and tools might be proprietary to a single RDBMS or environment, or they might be open, running across multiple environments. Database schema generators are particularly useful for migrating from one RDBMS to another, such as from Microsoft's SQL Server to Oracle or Sybase.
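The schema-generation idea can be demonstrated with SQLite, which keeps each object's CREATE statement in its `sqlite_master` catalog, so regenerating DDL from a live database is a simple query. Other RDBMSs expose similar catalogs (for example, `information_schema`) or dump tools; the table here is invented for illustration.

```python
# Sketch of reverse engineering DDL from a live database rather than
# from (possibly stale) source scripts, using SQLite's built-in
# sqlite_master catalog.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")

# The catalog stores each table's original CREATE statement verbatim.
ddl = [
    row[0]
    for row in conn.execute(
        "SELECT sql FROM sqlite_master WHERE type = 'table'"
    )
]
```

Fed into another database's SQL dialect (usually via a translation step), statements recovered this way become the starting point for an RDBMS-to-RDBMS migration.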

Interoperation

In certain cases, it might be advantageous to leave an application where it is and surround it with new technology as the enterprise requires. Interoperability is a strategy that should be considered in the following cases:

  • If business requirements are being met and IT effectiveness is high, it might be desirable to leave the application in its current environment, provided that environment is capable of interacting with current technology.

  • Unfortunately, business drivers—for example, the existence of a leasing or outsourcing contract—might dictate that an application should stay where it is for some period of time. This is one of the risks of abandoning your IT environment to a third party. Over time, outsourced applications become orphans within the IT infrastructure. They are not fully integrated into the IT environment, and they most likely do not have a development staff and run on outdated hardware that is no longer cost effective.

Many ISVs currently provide technology that enables legacy applications and storage technology to interoperate with newer technology. Intelligent adapters exist that support interactions between the mainframe and more modern computing alternatives. It is also possible to compile an older language such as COBOL or PL/1 into Java™ bytecode, enabling it to seamlessly interact with a modern application server and other components of a Java™ 2 Enterprise Edition (J2EE) environment.

When choosing this strategy, it is important to understand the vendor's commitment to the existing product line, as well as any future maintenance and product licensing costs. In addition, consider the availability of third-party software and current technological trends. Where possible, open standards should be favored to allow a wide choice of competitive options.

Retirement

Changes in technology can obviate the need for specific functionality in an application or an overall solution. As middleware or third-party products mature, they might render the functionality implemented in the application obsolete. In this case, legacy utilities or legacy application functionalities can be retired because they are no longer required or are implemented elsewhere in the solution.
