This chapter is from the book

5.3 COM+ Component Services

The .NET Framework leverages many existing Windows services to make it a more robust application environment. One technology that deserves particular attention is COM+ Component Services. Along with COM and MTS, it is a predecessor of the .NET Framework. To see how COM+ Component Services fits into the .NET Framework arena, let's explore these technologies a little.

5.3.1 Overview of COM

The Component Object Model (COM) was designed to address the shortcomings of conventional object-oriented languages like C++ and of traditional binary software distribution. COM is not a particular type of software but rather a philosophy of programming. This philosophy is manifested in the COM specification, which explicitly states how a COM object should be constructed and what behaviors it should have.

COM objects are roughly equivalent to normal classes, but COM defines how these objects interact with other programs at the binary level. By binary, I mean compiled code, with all the methods and member variables of the class already built into an object. This "binary encapsulation" allows you to treat each COM object as a "black box." You can call the black box and use its functionality without any knowledge of the inner workings of the object's implementation. In the Windows environment, these binary objects (COM objects) are packaged as either DLLs or executable programs. COM is also backed by a series of utility functions that provide routines for instantiating COM objects, interprocess communication, and so on.

COM was the first methodology to address object-oriented software reuse, and it has enjoyed great commercial success; many third-party software vendors provide COM objects to perform a wide range of tasks, from e-mail to image processing. COM is also highly useful for creating components called business objects. Business objects are COM objects in the strict sense, but they are used to encapsulate business rules and logic. Typically these business objects are tied to database tables, and they move data in and out of the database according to the business rules implemented in the COM object.

Generally, several smaller business objects work together to accomplish a larger task. To maintain system integrity and to prevent the introduction of erroneous data into the application, transactions are used. A software service called Microsoft Transaction Server (MTS) is used to manage these transactions. We'll cover the function of MTS (and its successor, COM+) in Section 5.3.3.

5.3.2 Overview of Transactions

Simply stated, a transaction is a unit of work. Several smaller steps are involved in a transaction. The success or failure of the transaction depends on whether or not all of the smaller steps are completed successfully. If a failure occurs at any point during a transaction, you don't want any data changes made by previous steps to remain. In effect, you want to initiate an "undo" command, similar to what you would do when using, say, a word processor. A transaction is committed when all steps have succeeded. A failed transaction causes a rollback to occur (the "undo" operation).

Well-designed transactions conform to ACID principles. ACID is an acronym for Atomicity, Consistency, Isolation, and Durability.

  • Atomicity means that either the operation that the component performs is completely successful or the data that the component operates on does not change at all. This is important because if the transaction has to update multiple data items, you do not want to leave it with erroneous values. If a failure occurs at any step that could compromise the integrity of the system, the changes are undone.

  • Consistency means that the system's data remains in a valid state whether the transaction succeeds or fails; a failed transaction leaves the state exactly as it was before the transaction began.

  • Isolation means that a transaction acts as though it has complete control of the system. In effect, this means that transactions are executed one at a time. This process keeps the system state consistent; two components executed at the same time that operate on the same data can compromise the integrity of the system.

  • Durability means that once a transaction has been committed, its results survive system failures. For example, if a hard drive crashes in the middle of a transaction, the system can restore a consistent state from a transaction log stored on another disk to which the system recorded.

A classic example of a transaction operation is a bank transfer that involves a transfer of funds from one account to another (a credit and a debit). Such a transaction moves through the following steps.

  1. Get the amount to be transferred, and check the source account for sufficient funds.

  2. Deduct the transfer amount from the source account.

  3. Get the balance of the destination account, and add the amount to be transferred to the balance.

  4. Update the destination account with the new balance.

Suppose a system failure occurs at step 4. The source account had the transfer amount deducted but the amount was not added to the destination account. Therefore, the money from the source account gets lost. Clearly, this is not good because the integrity of the database has been damaged.

Each of the account transfer's steps can be checked for success or failure. If a failure occurs before all values have been updated, the program needs to undo the deduction made to the source account. That's a rollback. If every step succeeds, the program needs to apply all the changes made to the database. That's when a commit operation is performed.
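The commit-or-rollback logic described above can be sketched with an ordinary database transaction. The following is a minimal illustration using Python's sqlite3 module; the table layout and account names are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance REAL)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("source", 100.0), ("destination", 50.0)])
conn.commit()

def transfer(conn, src, dst, amount):
    """Move funds between accounts; commit on success, roll back on any failure."""
    try:
        # Step 1: check the source account for sufficient funds.
        (balance,) = conn.execute(
            "SELECT balance FROM accounts WHERE name = ?", (src,)).fetchone()
        if balance < amount:
            raise ValueError("insufficient funds")
        # Steps 2-4: deduct from the source, then add to the destination.
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                     (amount, src))
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                     (amount, dst))
        conn.commit()      # all steps succeeded: commit the transaction
    except Exception:
        conn.rollback()    # any failure: undo every change made so far
        raise

transfer(conn, "source", "destination", 30.0)
```

If a failure occurs between the two UPDATE statements, the rollback ensures the deduction from the source account never becomes visible, so no money is lost.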

5.3.3 Automatic Transactions

Transactions have been in widespread use since the early days of enterprise computing. Many database systems include internal support for transactions. Such database systems contain native commands to begin, abort, and commit transactions. This way, several updates to database data can be made as a group, and in the event of a failure, they can be undone. Using a database's internal transaction-processing system is referred to as manual transaction processing.

Automatic transactions differ from manual transactions in that they are controlled by a system external to the database management system (DBMS). Earlier versions of Windows (95/98/NT) provided automatic transaction services through Microsoft Transaction Server (MTS). MTS works by coordinating database updates made by COM components grouped into a logical unit called a package. An MTS package defines the boundary of the transaction, and each component in the package participates in the transaction. After a component performs a piece of work (such as updating the database), it informs MTS whether it performed its share of the transaction successfully. MTS then decides whether to continue based on that signal. If the step was unsuccessful, the transaction is aborted immediately, and MTS instructs the DBMS to undo any changes made to the data. If the step was successful, the transaction continues with the remaining steps. If all steps execute successfully, MTS commits the transaction and tells the DBMS to commit the changes to the data.
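The coordination pattern just described can be sketched in a few lines. This is an illustrative model only, not the actual MTS API; the Component and run_transaction names are invented, and the success/failure report loosely mirrors MTS's SetComplete/SetAbort notifications:

```python
class Component:
    """A unit of transactional work that reports success or failure to the coordinator."""
    def __init__(self, name, work):
        self.name = name
        self.work = work      # callable that performs the actual update

    def run(self):
        try:
            self.work()
            return True       # analogous to SetComplete: this step succeeded
        except Exception:
            return False      # analogous to SetAbort: this step failed

def run_transaction(components, commit, rollback):
    """Run each component in order; commit only if every step succeeds."""
    for component in components:
        if not component.run():
            rollback()        # abort immediately and undo all changes
            return False
    commit()                  # all steps succeeded: commit the changes
    return True
```

As in MTS, a single failed step aborts the whole unit of work; the commit callback runs only after every participant has reported success.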

5.3.4 COM+ Applications

With the release of Windows 2000 came the next version of COM, dubbed COM+. COM+'s raison d'être is the unification of COM and MTS. COM+ also offers performance improvements over MTS by implementing technologies such as object pooling, which maintains an active set of COM component instances. Other performance-enhancing features include load balancing, which distributes component instances over multiple servers, and queued components, which use Microsoft Message Queue Server to handle requests for COM+ components.

The services that were formerly provided by MTS are known as COM+ Component Services in the COM+ model. COM+ Component Services works in a similar manner to MTS. Packages are now referred to as COM+ applications. Participating transactional components are grouped into applications in the same way components were grouped into packages under MTS.

Individual COM+ components in an application can be assigned different levels of involvement in an automatic transaction. When setting up COM+ applications, each component can have the levels of automatic transaction support shown in Table 5-2.

Table 5-2 COM+ Automatic Transaction Support Levels

Disabled - No transaction services are ever loaded by COM+ Component Services.

Not Supported - This is the default setting for new components. The component always executes outside a transaction, regardless of whether one has been initiated by its caller.

Supported - The component may run inside or outside a transaction without any ill effects.

Required - The component must run inside a transaction. If the component is not called from within a transaction, a new transaction is automatically created.

Requires New - The component always needs its own new transaction in which to run; one is created even when the caller is already inside a transaction.

5.3.5 COM+ Security

Security is of paramount importance, especially for applications intended to run on the Internet. In the past, programming security features into an Internet application was largely a manual effort. Often it consisted of custom security schemes that did not necessarily leverage the existing security infrastructure provided by the operating system. Besides being difficult to maintain, such security systems are typically costly to develop.

COM+ Component Services provides a security infrastructure for applications that uses Windows 2000/XP users and groups. COM+ security is declarative, which means you designate which users and groups have permission to access a COM+ application. This is done by defining roles for application access.

A role is a defined set of duties performed by particular individuals. For example, a librarian can locate, check out, and shelve books. The person fulfilling the librarian role is permitted to perform such duties under the security policies defined for that role. An administrator is responsible for assigning users and groups to roles. The roles are then assigned to a COM+ application.

This role-based security is not only easy to implement (it can be done by the system administrator) but also typically requires no security code in the components themselves. When a call is made to a component running under COM+ Component Services, COM+ checks the user/group identity of the caller and compares it against the roles assigned to the component. Based on that comparison, the call is allowed or rejected.
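The access check described above can be sketched as a simple lookup. This is only an illustration of the idea; the role, user, and component names are hypothetical, and in the real system COM+ performs this check against Windows user and group accounts:

```python
# Hypothetical role assignments: role name -> users/groups in that role,
# as an administrator might configure them.
roles = {
    "Librarian": {"alice", "library-staff"},
    "Administrator": {"bob"},
}

# Roles permitted to call each COM+ component.
component_roles = {
    "CheckOutBook": {"Librarian", "Administrator"},
}

def is_call_allowed(caller, component):
    """Allow the call only if the caller belongs to a role assigned to the component."""
    return any(caller in roles.get(role, set())
               for role in component_roles.get(component, set()))
```

A caller in no assigned role is rejected without the component's own code ever running, which is what makes the scheme declarative.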

You can provide additional security checking by using procedural security. This type of security is implemented programmatically using special .NET classes designed for interaction with COM+ Component Services.

5.3.6 .NET Classes and COM+ Component Services

Thus far, our discussions about COM+ Component Services, transactions, and security deal specifically with COM+ components. COM+ predated the .NET Framework and has had much success in enterprise-wide applications developed using Microsoft Visual Studio 6.0. But how does .NET fit into all of this?

COM+ still remains a dominant technology and is a significant part of Windows. The architecture for .NET managed components was designed to take advantage of all the features COM+ Component Services has to offer (object pooling, transaction processing, security, and so on) by providing classes to implement those features. These concepts are very important when developing Web applications, too.
