
Client/Server Basics

Visual Basic 6 Client/Server How-To is a practical step-by-step guide to implementing three-tiered distributed client/server solutions using the tools provided in Microsoft's Visual Basic 6. It addresses the needs of programmers looking for answers to real-world questions and assures them that what they create really works. It also helps to simplify the client/server development process by providing a framework for solution development.

by George Szabo

How do I...

Clearly, the future of application development lies in standardized distributed components where the business logic can reside within its own tier and be located on centralized servers. Rather than recompiling and deploying 1,000 client applications, you would modify and redeploy your business services on their own centralized servers. How will client applications be able to use business logic that exists on another machine? An object framework is essential to achieving this goal. This object framework is embodied in the Component Object Model (COM) and Microsoft's ActiveX standard. The Visual Basic 6 Professional and Enterprise Editions enable developers to exploit this object framework as they create a new generation of client/server solutions that take advantage of the latest technologies.

Microsoft has crafted Visual Basic to allow the creation of reusable components--invisible to the client--that can be deployed on remote machines and accessed by clients of the components' services. This is done through support for the Distributed Component Object Model (DCOM) as well as HTTP. Services can be grouped into three logical categories: user services, business services, and data services. These logical areas can contain numerous physical components that can reside anywhere from the client's machine to a remote server across the world, depending on the business problem that needs to be solved. Robust, scalable, maintainable systems are what it's all about.

1.1 Discover Client/Server and Other Computing Architectures

This section introduces three system architectures: centralized, file server, and client/server. You will explore a high-level view of these architectures and the weaknesses that encouraged the introduction of a client/server option.

1.2 Understand File Server Versus Client/Server Database Deployment

Some developers believe you can develop a client/server application by using a Microsoft Access database file (MDB) placed on a network file server. This chapter reviews the significant differences between deploying a database file on a file server and deploying an SQL database engine on a network server.

1.3 Learn About the Two-Tier Versus Three-Tier Client/Server Model

Many client/server systems have been developed and deployed. Most of them have been two-tier applications. The two-tier model has benefits as well as drawbacks. This section explores the advantages and disadvantages of a two-tier versus a three-tier approach.

1.4 Investigate the Component Object Model

The key to a distributed client/server application is the capability to break apart the physical restrictions of a single compiled EXE and to partition the business model into sharable, reusable components. This section explores the critical importance of the Component Object Model (COM) in making this possible.

1.5 Discover the Service Model

When an application is no longer a single physical entity but rather consists of a collection of partitioned logic, it is important to have a design strategy that allows a structured approach to the creation of client/server applications. The Service Model promotes the idea that, based on the services they provide, all physical components fall into one of three categories: user services, business services, and data services.

1.6 Understand Client/Server Deployments Using Components

Understanding that application logic must be partitioned into physical components and that these components should be designed logically according to the service model, this section illustrates three typical client/server deployments: single-tier, two-tier, and a multitier distributed deployment model.

1.7 Learn More About the Client/Server Development Tools Included with Visual Basic 6

To help implement new ideas, you need new tools. Visual Basic 6 Enterprise Edition comes with some special tools that enable you to create and remotely deploy components. This section introduces you to these tools and their role in the development of a Visual Basic 6 client/server solution.

1.8 Create a SourceSafe Project

Developing component-based client/server applications provides the opportunity for powerful teamwork. Each person will build a piece of the puzzle. Visual Basic comes with Visual SourceSafe, which enables you to catalog code as well as manage team projects in Visual Basic. This section provides a glimpse of this useful tool, which extends Visual Basic into a new league of application development tools.

1.1 How do I...

Discover client/server and other computing architectures?


COMPATIBILITY: VISUAL BASIC 4, 5, 6

Contrary to many predictions over the past decade, the mainframe computer is here and is not going away any time soon. During the '60s and '70s, companies that needed real computing power turned to the mainframe computer, which represents a "centralized" system architecture. Figure 1.1 shows a diagram of two critical components: the server and the client machines.

Figure 1.1 The centralized architecture

Of course, in this centralized architecture the only thing that moves between the client and the host machine is the marshaling of keystrokes and the return of terminal characters. Marshaling is the process of packaging interface parameters and sending them across process boundaries. In the mainframe environment, keystrokes are marshaled from the terminal to the host. This is arguably not what people are referring to when they discuss client/server implementations. Pros for a centralized architecture include excellent security and centralized administration because both the application logic and the data reside on the same machine. Cons begin with the price tag. Mainframe computers are expensive to buy, lease, maintain, use--the list goes on. Another disadvantage of this centralized architecture is the limitation that both the application and database live within the same mainframe process. There is no way to truly partition an application's logic beyond the mainframe's physical limitations.

During the 1980s, the personal computer charged into the business world. With it came a wealth of computing resources like printers, modems, and hard-disk storage. Businesses that could never have afforded a mainframe solution embraced the personal computer. Soon after the introduction of the personal computer to the business world came the introduction of the local area network (LAN) and the use of file server architectures. Figure 1.2 demonstrates a simple file server architecture.

Figure 1.2 File server architecture

The file server system created a 180-degree change in implementation from the mainframe. As depicted in Figure 1.2, application logic was now executed on the client workstation rather than on the server. In the file server architecture, a centralized server, or servers, provided access to computing resources such as printers and large hard drives. Pros of this architecture are a low-cost entry point and flexible deployment. A business could buy a single computer, then two, and so on. A file server architecture is flexible; it enables you to add and reduce computer resources as necessary. Cons of a file server architecture include the fact that all application logic is executed on the client machine. The file server serves files; that is its job. Even though an application's files might be located on a network drive, the application actually runs in the client machine's memory space, using the client's processor. This means that the client machine must have sufficient power to run whatever application is needed or perform whatever task needs to be performed. Improving the performance and functionality of business applications is always a hot topic until the discussion includes the need to upgrade personal computers to take advantage of new application enhancements.

Even after personal computers became a powerful force in the business workplace, they still lacked the powerful computing resources available in a mainframe. The client/server application architecture was introduced to address issues of cost and performance. Client/server applications allowed for applications to run on both the user workstation and the server--no longer referred to as a file server (see Figure 1.3).

Figure 1.3 The client/server architecture

In this architecture, two separate applications, operating independently, could work together to complete a task. A well-known implementation of this concept is the SQL-based database management system (DBMS). SQL stands for Structured Query Language. In Figure 1.3 you can see that, unlike the file server architecture, the request that goes out to the server is not simply a request for a file (in the form of disk input/output requests, which are returned as a series of input/output blocks). Instead, actual instructions can be communicated to an application running on the server, and the server can execute those instructions itself and send back a response.

Client/server refers to a process involving at least two independent entities, one a client and the other a server. The client makes a request of the server, and the server services the request. A request can take the form of an SQL query submitted to an SQL database engine. The database engine in turn processes the request and returns a resultset. In this example, two independent processes work together to accomplish a single task. This exemplifies the client/server relationship.

Windows printing through the Print Manager is an example of a client/server relationship. A Windows application, such as Word or Excel, prepares your document and submits it to the Print Manager. The Print Manager provides the service of queuing up requests and sending them to your printer; it monitors the job's progress and then notifies the application when the job is complete. In this example, the Print Manager is the server; it provides the service of queuing and processing your print job. The application submitting the document for printing is the client. This example demonstrates how a client/server relationship can exist between applications that might not be database-related.

The most popular client/server applications today revolve around the use of SQL database management systems (DBMS) such as Oracle and Microsoft SQL Server. These applications, often referred to as back ends, provide support for the storage, manipulation, and retrieval of a business's persistent data. These systems use Structured Query Language (SQL) as a standard method for submitting client requests. If you are not familiar with SQL, you can learn more from several good books that are available, such as Sams Publishing's Teach Yourself SQL in 21 Days. Microsoft's SQL Server comes with an online help file that also can help you with proper SQL syntax.

Comments

Although both the mainframe and file server-based systems continue to provide service to business, they fail to provide a truly scalable framework for building competitive business solutions. The major factor is that logic must be executed on either the mainframe in a centralized architecture, or on the client in a file server-based architecture.

As stated earlier, a client/server application is composed of at least two pieces: a client that makes requests and a server that services those requests. For faster, more cost-effective application performance, these pieces can be separated and application logic can be distributed between them. In the next section you will review a critical difference between deploying a database file on a file server and implementing a database system such as SQL Server or Oracle on a network server, a difference that explains why performance can be so dramatically better for client/server applications.

1.2 How do I... Understand file server versus client/server database deployment?


COMPATIBILITY: VISUAL BASIC 4, 5, 6

With the popularity of Microsoft Access and the proliferation of systems that use the Microsoft database file (MDB) to store data, it must be mentioned that even though the MDB allows multiuser access, it is not a true client/server implementation. When you use an MDB as part of your application, you are using a file server implementation. To take full advantage of what a client/server architecture has to offer, you must understand the difference between a file server-based implementation and a client/server-based implementation. Figures 1.4 and 1.5 demonstrate the fundamental difference between these two architectures.

In Figure 1.4, the query is never sent to the server; instead, the query is evaluated and processed at the client. The query logic to access the MDB realizes that it needs a table of data to process the request, so it requests the entire 30,000-row table across the network before it applies the WHERE clause of the Select statement, which specifies that you are looking for a record with a Social Security Number equal to 555-55-5555. When an SQL statement is used against an MDB, it is processed by the client machine, and only a file I/O request is sent across the network to retrieve the required data in the form of disk blocks. No logic is executed on the server except the transferring of file disk blocks. This is not what is referred to as "client/server," but is simply a file server. Placing an MDB out on a network drive does allow multiuser access, but only because of client-side logic that references a shared record-locking file for the MDB file in question. The lock file takes the name of the MDB with an extension of .LDB.


NOTE: Don't confuse Microsoft Access the application with Microsoft Access the database. In the example used here the MDB refers to a situation in which you use Access to store and retrieve information directly in its own proprietary database format. Access also allows you to attach to true DBMSs such as SQL Server or Oracle databases. In this case you can achieve true client/server performance.

Figure 1.4 An SQL process on a file server-based system using an MDB

In the server-based architecture, the actual SQL statement is sent across the network and processed by an application running locally on the server machine, as shown in Figure 1.5. Because the SQL statement is processed on the server, only the results must be sent back to the client. This is a vast improvement over the file-based architecture. If your query is looking to find an individual based on Social Security Number, a resultset of one matching record (rather than the whole 30,000-record table) would be passed back over the network. A major benefit of a client/server application is reduced network traffic and, in most cases, an incredibly faster execution time.
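
To make the contrast concrete, here is a minimal Visual Basic sketch that performs the same Social Security Number lookup both ways. The file path, server name, and connection string are hypothetical, and the code assumes project references to the DAO and ADO libraries; it illustrates the two access paths rather than production code.

    ' File server path: Jet evaluates the query on the CLIENT, so table
    ' pages travel across the network as file I/O.
    Dim db As DAO.Database
    Dim rsJet As DAO.Recordset
    Set db = DBEngine.OpenDatabase("\\FileServer\Data\Employees.mdb")
    Set rsJet = db.OpenRecordset( _
        "SELECT * FROM Employees WHERE SSN = '555-55-5555'")

    ' Client/server path: the SQL statement itself is sent to the server,
    ' and only the matching row comes back across the network.
    Dim cn As ADODB.Connection
    Dim rsSql As ADODB.Recordset
    Set cn = New ADODB.Connection
    cn.Open "Provider=SQLOLEDB;Data Source=DBSERVER;" & _
            "Initial Catalog=HR;Integrated Security=SSPI"
    Set rsSql = cn.Execute("SELECT * FROM Employees WHERE SSN = '555-55-5555'")

In both cases the client code looks much the same; the crucial difference is where the WHERE clause is evaluated.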

The differences shown in Figures 1.4 and 1.5 clearly illustrate a significant advantage of a client/server implementation.

Consider the following: It would be impractical to give each employee a high-speed duplex laser printer, but by centralizing the printer and allowing people to share it as a resource, everyone benefits from it. The same is true with the database server. Because the query is processed by the server where the database engine is located and not on the client's machine, a company can throw money into a powerful server and all the clients will benefit from the extra muscle.

Figure 1.5 An SQL process on a client/server-based system using an SQL database management system

Comments

Use of an MDB on a file server does not mean that queries are actually processed by the server. Only when full back-end database systems like Oracle and SQL Server are deployed does a query actually get processed by the server instead of the client.

Deploying an MDB on a network file server is not always the wrong thing to do. Implementation of true back-end database systems requires higher levels of expertise than a simple Microsoft Access MDB deployment. If the amount of data being stored and retrieved is small (you must be the judge of this), then a file server approach might be the better choice. Clearly, network traffic will become an issue as a system grows, but you can always graduate your MDB file to an SQL Server database when the time is right.

Tools like the Upsizing Wizard (available from Microsoft) make the migration easier. After you decide that implementing a client/server solution is the right choice, you will need to choose between a two-tier and three-tier model. In the next section you will review the differences between two-tier and three-tier or n-tier client/server models.

1.3 How do I... Learn about the two-tier versus the three-tier client/server model?


COMPATIBILITY: VISUAL BASIC 4, 5, 6

It is becoming clear that the issue is not one of simply processing transactions and generating reports, but rather of creating an information system that can change with business needs--needs that mandate tighter budgets and higher quality. To respond to the challenges being presented by the business environment as well as the Web, a new three-tier or N-tier client/server approach has been introduced. N-tier refers to the idea that there are no limits to the number of tiers that could be introduced to the client/server model. To begin this discussion, it is important to review the current two-tier approach.

Two-Tier Client/Server Model

The two-tier model is tied to the physical implementation: a desktop machine operating as a client, and a network server housing the back-end database engine. In the two-tier model, logic is split between these two physical locations, the client and the server. In a two-tier model, the front-end piece is commonly developed in PowerBuilder, Visual Basic, or some other 4GL. The key point to remember is that, in a two-tier model, business logic for your application must physically reside on the client or be implemented on the back end within the DBMS in the form of triggers and stored procedures. Both triggers and stored procedures are precompiled collections of SQL statements and control-of-flow statements. Consider a situation in which you set up a series of stored procedures to support a particular application's needs. Meanwhile, the developers of five other applications are making similar efforts to support their own needs, all in the same database. Sure, there are naming conventions and definitions of who owns what object, but the bottom line is that this scenario makes implementing and maintaining business rules downright ugly.

Paradigms that implement a strict two-tier architecture make the process of developing client/server applications look easy--for example, the data window in PowerBuilder, where a graphical window of fields is magically bound to the back-end data source. In Visual Basic, use of any data controls that provide a graphical link to the back-end data source creates a two-tier client/server application because these implementations of application development directly tie the graphical user interface to back-end data access. The upside is that data access is simplified, and very rapid development of applications is therefore possible. The GUI is bound directly to the data source, and all the details of data manipulation are handled automatically. This strength is also a weakness. Although data access is simplified, it is also less flexible. Often you will not have complete control over your interactions with the data source because it is being managed for you. Of course, this extra management uses additional resources on the client and can result in poor performance of your applications.
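
As a rough illustration of how directly the GUI is tied to the data source in this model, here is a minimal Visual Basic sketch using the ADO Data Control. The control name, TextBox name, connection string, and field are all hypothetical.

    ' Form with an ADO Data Control (adoOrders) and a bound TextBox (txtAmount).
    Private Sub Form_Load()
        adoOrders.ConnectionString = "Provider=SQLOLEDB;Data Source=DBSERVER;" & _
                                     "Initial Catalog=Sales;Integrated Security=SSPI"
        adoOrders.RecordSource = "SELECT OrderID, Amount FROM Orders"
        adoOrders.Refresh
        Set txtAmount.DataSource = adoOrders   ' the GUI is bound straight to the data
        txtAmount.DataField = "Amount"
    End Sub

Any business rule about Amount now has to live either in this form or in a trigger or stored procedure on the server; there is no middle tier to put it in.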

A two-tier client/server model has several critical limitations:

  • Not scalable. A two-tier approach cannot grow beyond the physical boundaries of a client machine and a server machine, so there is nowhere to shift work as the number of users and the volume of data grow.

  • Unmanageable. Because you cannot encapsulate business rules and deploy them centrally, sharing common processes and reusing your work is difficult at best.

  • Poor performance. The binding of the graphical interface to the data source consumes major resources on the client machine, which results in poor performance and, unfortunately, unhappy clients.

Three-Tier Client/Server Model

The limited effectiveness of two-tier client/server solutions ushered in an improved model for client/server development. The three-tier client/server model is based on the capability to build partitioned applications. Partitioning an application breaks up your code into logical components. The service model, discussed in How-To 1.5, suggests that these components can be logically grouped into three tiers: user services, business services, and data services. After an application has been developed by using this model and technique, each component can then be deployed to whichever machine will provide the best performance, depending on your situation and the current business need. Figure 1.6 shows a physical implementation of the three-tier client/server model. How-To's 1.4 and 1.5 discuss partitioning, components, and the service model in depth.

The following benefits illustrate the value of distributed three-tier client/server development:

  • Reuse. The time you invest in designing and implementing components is not wasted because you can share them among applications.

  • Performance. Because you can deploy your components on machines other than the client workstation, you have the ability to shift processing load from a client machine that might be underpowered to a server with extra horsepower. This flexibility in deployment and design enables you, as a developer, to take advantage of the best possible methods for each aspect of your application's execution, and results in better performance.

  • Manageability. Encapsulation of your application's services into components enables you to break down large, complex applications into more manageable pieces.

Figure 1.6 A three-tier client/server physical implementation

  • Maintenance. The centralization of components for reuse has an added benefit. They become easier to redeploy when modifications are made, thus keeping pace with business needs.

Comments

Three-tier development is not the answer to every situation. Good partitioning and component design take time and expertise, both of which are in short supply. Additionally, three-tier client/server development, like any development, requires the support and commitment of the enterprise's powers that be. Two-tier client/server development is a much quicker way of taking advantage of SQL database engines and can fit the bill if both money and time are running out.

On the other hand, if you are looking to create systems to support a business as it grows and competes in today's marketplace, or a Web-based application that must be ready for success, a component-based client/server model gives a great return on investment. As mentioned earlier, the benefits of a three-tier approach are the ability to reuse your work, manage large projects, simplify maintenance, and improve overall performance of your business solutions. The following section introduces you to the Component Object Model and the concept of partitioning, which play a key role in making three-tier client/server applications possible.

1.4 How do I... Investigate the Component Object Model?


COMPATIBILITY: VISUAL BASIC 4, 5, 6

The Component Object Model (COM) is a general architecture for component software. This means that it is a standard, not an implementation. COM says this is how you should allow components to intercommunicate, but someone else (ActiveX) has to do it. ActiveX accomplishes the physical implementation of COM. Originally, ActiveX was called OLE (Object Linking and Embedding). ActiveX not only includes the OLE implementation of COM but also improves on the OLE implementation by extending capabilities to take advantage of the Internet. This is done in the form of ActiveX controls, as well as support for DCOM (Distributed Component Object Model), discussed in the "Distributed Component Object Model" section of this How-To.

Why is this important? So far, the discussion of client/server has shown the need for a design model that allows encapsulation of critical business logic away from the mire of database design and front-end code. An example of logic used to support the business could be a rule that prohibits orders of more than $500 from being placed without a manager's approval. This business rule can now be implemented with code in a component that is centralized on its own server, which makes it easier to modify if necessary. If the rule changes to also allow supervisors to approve orders of more than $500, the change can be made much more easily and quickly to a centralized component of code rather than by redeploying a new executable to every desktop.
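
A minimal sketch of such a component follows, assuming a class (COrderRules, a hypothetical name) compiled into a centrally deployed ActiveX server; the point is that the policy lives in exactly one place.

    ' COrderRules.cls in a centrally deployed ActiveX server (sketch only).
    Option Explicit

    Public Function RequiresApproval(ByVal Amount As Currency, _
                                     ByVal ApproverRole As String) As Boolean
        ' Policy: orders over $500 need a manager's approval.
        If Amount > 500 Then
            RequiresApproval = (ApproverRole <> "Manager")
            ' If supervisors may later approve as well, the only change is:
            ' RequiresApproval = (ApproverRole <> "Manager" And
            '                     ApproverRole <> "Supervisor")
        Else
            RequiresApproval = False
        End If
    End Function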

So the answer suggested here is to partition the business logic out of the front-end and back-end applications and into its own set of components. The question is, how are these components supposed to talk to each other? How are you going to install these components on a network where your client applications can use them as if they were running locally on their computers? OLE and the Component Object Model are the answer.

In creating the COM, Microsoft sought to solve these specific problems:

  • Interoperability. How can developers create unique components that work seamlessly with other components regardless of who creates them?

  • Versioning. When a component is being used by other components or applications, how can you alter or upgrade the component without affecting all the components and applications that use it?

  • Language independence. How can components written in different languages still work together?

  • Transparent cross-process interoperability. How can developers write components to run in-process or out-of-process (and eventually cross-network), using one simple programming model?

If you are using Visual Basic today, then you have no doubt experienced the benefits of COM. All the third-party controls, as well as Visual Basic itself, take advantage of standards set by COM and implemented through what are referred to as ActiveX technologies. What this means to you is that objects based on the Component Object Model, objects you can write in Visual Basic, C++, or some other language, have the capability to work together regardless of the language used to create them. Because all these components know how to work together, you can purchase components from others or build them yourself and reuse them at any time during the business-system life cycle.

In-Process and Out-of-Process Servers

A component, also referred to as a server, is either in-process, which means that its code executes in the same process space as the client application (this is a DLL), or out-of-process, which means that it runs in another process on the same machine or in another process on a remote machine (this is an .EXE file). From these scenarios you can see that three types of servers can be created: in-process, local, and remote. Both local and remote servers must be out-of-process.

As you create components you will need to choose the type of server, based on the requirements of implementation and deployment. Components can be of any size--from those that encapsulate a few functions to larger, very robust implementations of a company's way of doing business. The powerful aspect of these component objects is that they look the same to client applications as well as to fellow components. The code used to access a component's services is the same, regardless of whether the component is deployed as in-process, local, or remote.
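
The following sketch shows what this looks like from the client's side. The ProgID is hypothetical, and the optional server-name argument of CreateObject (available in Visual Basic 6) is what redirects the call to a remote machine; the code itself does not change.

    Dim rules As Object

    ' In-process DLL or local EXE: resolved through the local registry.
    Set rules = CreateObject("BizRules.COrderRules")

    ' Remote server: same code, plus a machine name (VB6's optional argument).
    ' Set rules = CreateObject("BizRules.COrderRules", "BIZSERVER")

    If rules.RequiresApproval(750@, "Clerk") Then
        MsgBox "This order needs a manager's approval."
    End If
    Set rules = Nothing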

Distributed Component Object Model (DCOM)

The Distributed Component Object Model (DCOM) was previously referred to as Network OLE. DCOM is a protocol that enables applications to make object-oriented remote procedure calls (RPC) in distributed computing environments (DCE). Using DCOM, an ActiveX component or any component that supports DCOM can communicate across multiple network transport protocols, including the Internet's Hypertext Transfer Protocol (HTTP). DCOM provides a framework for the following:

  • Data marshaling between components

  • Client- and server-negotiated security levels, based on the capabilities of distributed computing environments' (DCEs) remote procedure calls (RPCs)


  • Versioning of interfaces through the use of universally unique identifiers (UUIDs)

Comments

It is interesting to note that originally OLE was said to stand for Object Linking and Embedding. Microsoft backed off that definition and said that objects written to support the Component Object Model are collectively called component objects. Because OLE supports the Component Object Model, OLE objects are referred to as component objects. Now Microsoft refers to these component objects as ActiveX. ActiveX components have been extended to support DCOM.

ActiveX is a physical implementation of the Component Object Model that provides the foundation for the creation of components which can encapsulate logic and be distributed to operate in-process, local, or remote. Visual Basic 6 has been extended to enable the creation of ActiveX servers. Visual Basic's capability to create components in the form of ActiveX DLLs (in-process servers) and ActiveX EXEs (local or remote servers) makes three-tier client/server applications easier to create than ever before.

Using Visual Basic, you can create applications that are partitioned into several separate physical components. Those components can then be placed transparently on any machine within your network, as well as across the Internet, and they can talk to each other. The following section introduces you to the service model, which suggests a logical rather than a physical way of viewing how applications should be partitioned into components.

1.5 How do I... Discover the service model?


COMPATIBILITY: VISUAL BASIC 4, 5, 6

The service model is a logical way to group the components you create. Although this model is not language specific, this book discusses the service model and how it is implemented by using what is available in Visual Basic 6. The service model is based on the concept that every tier is a collection of components that provide a common type of service either to each other or to components in the tier immediately adjacent.

The following three types of services are used in the creation of business solutions:

  • User services

  • Business services

  • Data services

Each of these types correlates to a tier in a three-tier client/server architecture. Figure 1.7 shows physical components (DLLs, EXEs, database triggers, and database-stored procedures) grouped logically into the three service types. Note that DLL components and EXE components can be used to encapsulate logic in any tier. In fact, the only objects that are not in every tier are triggers and stored procedures because they are database specific.

Figure 1.7 Physical components grouped logically within the services tiers

Figure 1.7 also shows a very important benefit of using components, which is the ability to make a component's services available to more than a single application. Notice that the shaded areas overlap where components are used by both application 1 and application 2. Reuse is a powerful aspect of the service model. The following basic rules for intercomponent communication must be followed in the service model:

  • Components can request services from fellow components in their current tier and from components in the tier immediately above or below.

  • Requests cannot skip tiers. User services components cannot communicate directly with components in the data services tier and vice versa.

Often the service model is referred to as a layered approach. The typical use of the term layer refers to a process in which one layer must speak to the next layer and move from top to bottom and then back up. This does not correctly describe the way components communicate within the service model because a component can interact with other components in the same layer as well as those above and below it. The service model is meant to help you decide how to partition application logic into physical components, but it does not deal with the actual physical deployment of the software components. By understanding the three service tiers, you can begin to make decisions about which application logic you should encapsulate within a single component as well as within the various tiers. The following sections discuss the different types of services defined in the service model.

User Services

Components in the user services tier provide the visual interface that a client will use to view information and data. Components in this layer are responsible for contacting and requesting services from other components in the user services tier or in the business services tier. It is important to note that even though a component resides in the user services tier, one of the services provided to a user is the ability to perform business functions. User services play a role in doing business. Even though the business logic may be encapsulated in components of the business services tier, the user services component enables the user to have access to the whole process.

User services are normally, but not always, contained in the user application. A user service such as common company dialog boxes could be compiled into a DLL and made available locally on a client's machine. Perhaps you want to implement a standard set of error messages, but you don't want to deploy it to every machine. You could take that user service, compile it into an ActiveX EXE, and deploy it remotely on a shared server so that everyone could use it.


WARNING: If the Error message component is placed on a central server, it should only contain text string error messages and should not display a dialog box. If a remote ActiveX server displays a dialog box, it appears on the server rather than on the user's workstation. Refer to Chapter 6, "Business Objects," for more information on this topic.
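
Following that warning, a remotely deployable error-message service might be sketched as follows (the class name and error codes are hypothetical): it returns strings only and leaves the MsgBox call to the user services code running on the workstation.

    ' CErrorText.cls on the shared server: text only, no UI.
    Option Explicit

    Public Function MessageText(ByVal ErrCode As Long) As String
        Select Case ErrCode
            Case 1001: MessageText = "Customer not found."
            Case 1002: MessageText = "Credit limit exceeded."
            Case Else: MessageText = "Unexpected error " & CStr(ErrCode) & "."
        End Select
    End Function

    ' On the client workstation, the dialog box is displayed locally:
    '     MsgBox errSvc.MessageText(1002)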

Business Services

Because user services cannot directly contact the data services tier, it is the responsibility of the business services components to serve as the bridge between the other tiers. Business components provide business services that complete business tasks such as verifying that a customer is not over his or her credit limit. Rather than implementing business rules through a series of triggers and stored procedures, business components provide the service of implementing formal procedures and defined business rules. So why go through all the trouble of encapsulating the business logic in a business component or set of components? For robust, reusable, maintainable applications.

Business services components also serve to buffer the user from direct interaction with the database. The business tasks that will be executed by business services components, such as entering a patient record or printing a provider list, should be defined by the application's requirements. One overwhelming reason to partition out business services into components is the knowledge that business rules have the highest probability for change and, in turn, have the highest probability for requiring the rewriting and redeployment of an application.

Business rules are defined as policies that control the flow of business tasks. An example of a business rule might be a procedure that applies a late charge to a person's bill if payment is not received by a certain date. It is very common for business rules to change more frequently than the tasks they support. For this reason, business rules are excellent targets for encapsulation into components, thus separating the business logic from the application logic itself. The advantage here is that if the policy for applying late charges changes to include the stipulation that they cannot be sent to relatives of the boss, then you will need to change the logic in your shared business component only, rather than in every client application.

Data Services

Data services involve all the typical data chores, including the retrieval and modification of data as well as the full range of other database-related tasks. The key to data services is that the rules of business are not implemented here. Although a data services component is responsible for managing and satisfying the requests submitted by a business component, or even a fellow data services component, implementing the rules of business is not among its responsibilities.

Data services can be implemented as objects in a particular database management system (DBMS) in the form of triggers or stored procedures. Alternatively, the data services could provide access to heterogeneous data sources on multiple platforms on any number of servers or mainframes. A properly implemented data services tier should allow changes to take place in the data services tier and related data sources without affecting the services being provided to business services components.

Comments

The service model is a logical--not a physical--view of working with components and application partitioning. Sometimes physical deployment of components might parallel the component's tier assignments, but this is neither necessary nor desired. In Figure 1.8, components in the service model have been mapped to one of three physical locations. They can reside on the client, on a network server (typically a business server), or on a second network server (typically a database engine server).

Figure 1.8 Simple physical deployment of components on a network

Figure 1.8 helps to illustrate the following key points with regard to the service model and physical deployment:

  • Triggers and stored procedures from the data services tier must be deployed in the database back end. The reason is that these objects are direct implementations of the database engine and are stored within the database itself. This is shown by the direct mapping of the stored procedures and triggers symbols within the data services tier to the database server.

  • To share a single common source of services, a component must be an out-of-process remote Automation Server (ActiveX EXE) and be deployed on a central server so that everyone can access it. In Figure 1.8, an ActiveX EXE within the user services tier is mapped to the business server so that multiple users can access it.

  • Not all business logic is best deployed remotely. An in-process DLL executes faster than an ActiveX EXE and might therefore be a wiser choice. In Figure 1.8, a DLL in the business services tier represents this type of scenario. DLLs generally cannot be utilized remotely, so the business services DLL must be deployed to the user's workstation rather than to the business server.

One final comment: dual personality is a powerful aspect of components that affects their design. Any component can be both a client and a server. Consider a situation in which you design a business services component to calculate book royalties. The royalty calculation is a service that the Visual Basic component provides to any requester. To create the calculation, the business component must have information about the author's contract with the publisher. Your business services component becomes a client of services rather than a server as it requests contract information from a data services component. In this example, the royalty component is acting not only as a server of royalty information but also as a client of data services. This is a much more powerful implementation than the typical and more rigid client/server relationship in which a client can't be a server, and vice versa.
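
A minimal sketch of the royalty example, with hypothetical ProgIDs and method names, shows both roles in a dozen lines:

    ' CRoyalty.cls - a business services component with dual personality.
    Option Explicit

    Public Function RoyaltyDue(ByVal AuthorID As Long, _
                               ByVal UnitsSold As Long) As Currency
        Dim dataSvc As Object
        Dim rate As Double

        ' Acting as a CLIENT of the data services tier...
        Set dataSvc = CreateObject("DataSvcs.CContracts")
        rate = dataSvc.GetRoyaltyRate(AuthorID)
        Set dataSvc = Nothing

        ' ...while acting as a SERVER of royalty calculations to its caller.
        RoyaltyDue = CCur(UnitsSold * rate)
    End Function

In the next section you will review the physical deployment scenarios available for client/server applications that utilize a three-tier client/server architecture.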

1.6 How do I... Understand client/server deployments using components?


COMPATIBILITY: VISUAL BASIC 4, 5, 6

The service model encourages the creation of components that encapsulate common reusable functionality in a physical package--the ActiveX DLL or ActiveX EXE. VB6 also makes possible the creation of ActiveX controls in the form of OCXs, as well as ActiveX documents, which are applications that live inside a browser. Compiled into these physical formats, these pieces can be deployed in a practically infinite number of topologies. Before you review deployment options, you should understand the characteristics of the ActiveX DLL and ActiveX EXE. Both are ActiveX component objects and share a common interface based on the standards defined by the Component Object Model.

When considering which is the proper container for a particular component, you should consider the following about these two physical component implementations. A DLL is an in-process server. In-process refers to the fact that the DLL operates in the same process space as the application using it. Because a DLL is operating in the same process, it loads much more quickly than an EXE would. Additionally, a DLL cannot be deployed remotely, at least not without a trick or two. The trick being referred to here is that a DLL can be deployed remotely if the DLL is parented by an EXE component. The EXE would instantiate the DLL on the remote machine and provide an interface to the DLL's methods and properties. This process is referred to as containment. The parent component marshals requests between a client and the DLL.

In the case of an EXE, it is important to know that there are basically two types of ActiveX EXEs. The first type of EXE is deployed locally on a client machine (local ActiveX Automation Server); the second is a remotely deployed EXE (remote ActiveX Automation Server). ActiveX EXEs are always out-of-process servers, which means that they run in their own process space. Out-of-process servers can be deployed remotely on a network server and shared among all applications that have access to that server. On the downside, ActiveX EXEs take significantly longer to load than a DLL. Access to methods and properties of an ActiveX EXE component is much slower than when you're working with a DLL. After you place the EXE on a server, network traffic becomes an additional concern with regard to execution speed.

One final consideration is crash protection, which is important when you design components. A DLL operates in-process; if it dies, it takes the application with it because they share the same process space. On the other hand, an ActiveX EXE runs out-of-process; if a problem occurs, it might die, but the application or component calling it will not. This enables the calling application or component to handle the problem by either restarting the EXE or performing some other type of recovery. Fault tolerance can be designed into your systems to provide greater support for mission-critical execution. Finally, it is important to note that ActiveX EXEs can run on separate threads, whereas DLLs can run only within the thread of execution of the application or component calling them. Table 1.1 highlights the considerations presented thus far. You should keep these in mind when selecting the physical container for your component.

Table 1.1  ActiveX server types

COMPONENT   TYPE                    PROS                                 CONS
DLL         In-process              Quick execution                      Local (in-process) deployment only; no crash protection
EXE         Local out-of-process    Crash protection                     Slower than a DLL
EXE         Remote out-of-process   Remote execution; crash protection   Up to 100 times slower; affected by network traffic

Physical Deployments

The following four client/server deployments use the three-tier strategy discussed so far:

  • Single server

  • Business server

  • Transaction server

  • Web server

All figures in this section include the service model diagram from How-To 1.5. Each component has been given a letter (from A to L) to uniquely identify it. There is no specification as to whether a component is a DLL or an EXE, but you can refer to Figure 1.7 or 1.8 to reference this attribute.

Single-Server Deployment

In the single-server model shown in Figure 1.9, all components are split between the client machine and the network server. B, F, E, and J are all shared items; therefore, they had to be deployed on the network server so that others could have access to them.

It is true that components installed on a workstation can be shared with other workstations in a peer-to-peer configuration, but this is a very poor implementation idea. A workstation usually has less processing power than a server. Another reason to avoid deploying shared components on workstations is the headache it causes when you are trying to keep track of it all.

The single-server deployment model also runs the DBMS back-end engine on the network server. All the data services components are deployed on the network server as well. Application 1 is shown running on workstation 1. Notice that not only is there a user services component on workstation 1, but there is also a business services component, identified by the letter D. This is to suggest that there might be a need for locally deployed business services components, perhaps due to the need for speed of execution. Although speed is not one of the factors considered at the logical service model level, and partitioning components based on it is generally not a good idea, you might find yourself in a situation in which speed is the number one concern. In this case, either local or in-process deployment of a component is possible. It doesn't take much to change a component from a DLL to an EXE with Visual Basic. For additional information, see Chapter 6, "Business Objects."

Figure 1.9 Single-server deployment of components on a network

Application Server Deployment

A second step in the deployment scenario is the business server deployment plan. Figure 1.10 shows the same service model diagram, but this time the physical deployment includes an additional network server, referred to as an Application server. Its purpose is obvious: to provide a centralized location for all shared business components. Although meant as a home for your business components, the Application server usually houses all components that must be centrally shared. This might include user services components, such as component B in Figure 1.10.

Notice in Figure 1.10 that a user services component represented by the letter B is on the Application server. This makes sense because the Application server is a good centralized location for deployment and maintenance in this scheme.

All the data services components have been deployed to the Data server. If you refer to Figure 1.7 or 1.8 in the previous section, you will note that two of the four data services components, the triggers and stored procedures, are labeled J and L in Figure 1.10. These components must be deployed on the same machine as the DBMS because they are integrated objects of the DBMS. In this deployment, all data services have been kept together to allow centralized administration of these pieces.

Figure 1.10 Business server deployment of components on a network

The components kept on each workstation have not changed from the previous deployment scenario to this one. This is worth noting because it suggests that after you set up your workstations and their applications, you can continue to enhance deployment schemes in the server arena transparent to the workstations. This powerful feature is made possible by the Remote Automation Connection Manager utility, which is discussed in the "Distributed Transaction Server Deployment" section.

One last point about this deployment diagram is that the connections of all workstations lead to the Application server. This might or might not be the actual physical implementation. Both the Business server and the Data server could be on the same network and be just as available, in which case Figure 1.10 simply shows the allowed communication path: User services talk to business services, which talk to data services, and so on. This does not have to be only a logical deployment, however. If open database connectivity (ODBC) drivers are not installed on any of the workstations, and all communication with the data services components requires ODBC, then you have physically prevented this path. By eliminating a workstation's capability to directly access data, you can create a much more secure environment, if that is a primary concern for your deployment. Remember that after ODBC is installed on a user's workstation, a person could effectively install any number of data access packages that utilize ODBC drivers, thus creating a potential security risk. There are several security measures that you could take to prevent a renegade user from directly accessing production or warehouse data. Avoiding the installation of ODBC on every workstation is one step you could take.

Distributed Transaction Server Deployment

The third scenario discussed in this section is transaction server deployment. The word distributed is used in the name of this scenario to differentiate it from the use of Microsoft Transaction Server services, which can be utilized in all scenarios. Figure 1.11 shows a distributed transaction server deployment scheme. What does a transaction server do? It is an application whose purpose is to maintain and provide a pool of ActiveX server component objects in memory while providing security and context to the use of these components in a transaction. Remember that EXE components must be started and loaded into their own process space each time they are used. This is a huge cost to incur when you need to use one.

To offset the load-time cost of ActiveX EXEs, a transaction server creates a pool of these components. The transaction server then stands ready to pass clients an object reference to these preloaded components. After the client receives a reference to a preloaded component, the client can use the component directly without funneling requests through the transaction server. This is an important point. If all requests had to be funneled through the transaction server, the transaction server would soon become a bottleneck. The transaction server simply preloads components and hands out their addresses on request. When the client is finished with the component, it is released, and a new instance of the component is loaded into the pool to await the next client request.

What is a transaction? A transaction is a unit of work. If you make your components available using Microsoft Transaction Server, you can create transactions using multiple components; if any of them fails, you can roll all your actions backward. Let's take a banking transaction as an example. If you go to an automated teller machine and request money, it is very important that all the parts of that transaction be successful; otherwise, it's not a successful transaction (unit of work). Imagine your reaction if you inserted your card, entered your PIN, and the machine debited your account but never gave you your money. Obviously, that would be an unsuccessful transaction. This understanding--that a successful transaction contains multiple actions that must all succeed--is what Microsoft Transaction Server provides. Of course, you must code the components and the transaction properly.

Figure 1.11 introduces the use of a transaction server. The transaction server becomes a type of switchboard operator, passing component references to requesting clients. If workstation 1 in Figure 1.11 requested use of component E, then workstation 1 would be able to use component E directly until the reference was released. This is depicted by the dotted line from workstation 1 to component E.

Figure 1.11 Physical deployment utilizing a transaction server and distributed components

Another important aspect of the transaction server deployment scenario is that components can be moved from one component server to another, based on load, in order to improve performance. As the components are moved from one location to another, you will only need to register the new location with the transaction server. The transaction server has the following responsibilities:

  • Keeping a pool of ActiveX servers instantiated

  • Passing requesting applications a reference to these servers

  • Terminating references to the ActiveX server when it is no longer being used

  • Validating usage of a component

  • Managing the transaction participation of components

The transaction server sits between the workstations and the application servers.

Keep a Pool of ActiveX Servers Instantiated

In order for the transaction server to do its job, it must first be able to instantiate the components that it must maintain on the server. You will need to set up the transaction server machine so that it has access to all the other component servers. Additionally, you will need to use the Remote Automation Connection Manager application to configure the network locations of these components. The benefit of this scenario is that you configure the location of the components at the transaction server machine. If a component is moved, you reconfigure the address at the transaction server, not at the workstation level. This is of huge importance in large deployments.

Pass References to These Servers to Requesting Applications

When a client requests a service of a component being maintained by the transaction server, the transaction server hands the client a reference to an available component in the pool. Depending on how many different components are being managed by the transaction server, this might not be a minor activity. This introduces the question of granularity when designing and implementing components in your system.

Granularity refers to how finely you will partition your services. For example, will you put 100 services in a single component because they all have to do with financial calculations, or will you give each calculation its own component? The larger component is easier to locate because all the calculations are in a single physical package, but giving each calculation its own physical package makes it easier to test and debug. Smaller, more granular components also seem to have a higher probability of reuse. To truly benefit from Microsoft Transaction Server you must create small stateless components and allow MTS to maintain the context of what is going on. This is covered in more detail in Chapter 6, "Business Objects."

Terminate References to the ActiveX Server when It Is No Longer Being Used

When a client finishes using a component, it drops all references to the component. When no references to an ActiveX server remain, the server terminates and shuts down. When a component is terminated, the transaction server must adjust the pool and prepare for more requests. The transaction server can be configured to maintain pool levels at different values throughout the day, depending on expected demand.

Validate Usage of a Component

Another aspect of the transaction server is its capability to implement security. Part of the transaction server's design can and should be to know who is requesting a service. This information can be used to implement a security model. The transaction server could use the login and password to validate use of a component. Microsoft Transaction Server provides a management tool that enables you to define access to components based on roles. If a person belongs to a particular role, like administrator, then the rules and security context assigned to the administrator role are given to this person along with access to the component.

Manage the Transaction Participation of Components

When you create components that will participate in a transaction, it is important to make the components as atomic and stateless as possible, thus enabling them to be used and released quickly. If a transaction requires more than one component, which it usually does, then Microsoft Transaction Server provides the necessary transaction management. The participation of components is managed, and actions are either committed or rolled back based on the overall success of the total transaction. Remember that components must be coded to take advantage of participation in a transaction. Only some ODBC database drivers, level three and above, provide this type of transaction participation support. SQL Server provides transaction commit and rollback via MTS. If you are using a different DBMS, you must verify that it will work properly through MTS. For more information about components and MTS, please refer to Chapter 6, "Business Objects."
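
As a hedged sketch of what "coded to take advantage" means, the following method assumes a project reference to the Microsoft Transaction Server Type Library and uses hypothetical component and method names; the component votes on the outcome of the transaction it runs inside.

    ' SubmitOrder in an MTS-hosted business component (sketch only).
    Public Sub SubmitOrder(ByVal CustomerID As Long, ByVal Amount As Currency)
        Dim ctx As ObjectContext
        Dim inv As Object

        Set ctx = GetObjectContext()
        On Error GoTo Failed

        ' CreateInstance enlists the child component in the same transaction.
        Set inv = ctx.CreateInstance("DataSvcs.CInvoices")
        inv.Post CustomerID, Amount

        ctx.SetComplete          ' vote to commit the unit of work
        Exit Sub
    Failed:
        ctx.SetAbort             ' vote to roll the whole transaction back
    End Sub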

Web Server Deployment

The Web server deployment scenario is the fourth and final one discussed in this section. Figure 1.12 shows a Web server deployment scheme. This scenario can be as simple as a browser requesting static pages or as complex as providing online banking. Unfortunately, the simplicity that the browser gives the user in accessing Web sites and applications translates directly into hard work for the developer. The key difference that the Web-based application architecture introduces is a standard application container on the client machine (usually a browser) and the capability to download components to the user's machine on demand.

Figure 1.12 Physical deployment utilizing IIS Web Server, MTS application servers, and distributed components

What does this mean? Well, it means that the first three scenarios focused on keeping business rules on centralized servers to make it easier to deploy changes. Because the Web server automatically downloads components to the client's machine, the need to run business logic from centralized servers is no longer as important. Now the focus is on running components of logic where they make the most sense from an execution standpoint. If you are running over the Internet you probably will want to download anything you can to the client and let it run locally with as few requests to the server as possible. This is recommended because you can't guarantee bandwidth and connectivity. If you are running on an intranet--a Web server available on a company's internal network--you might choose to mix up your deployment, based on the power of the client machines, your servers, and available bandwidth.

Comments

In the single-server, application server, and distributed transaction server deployment scenarios, the components deployed to the workstations did not change. This emphasizes the goal, which is to centralize the application logic that must be maintained and updated. It is much more cost effective to maintain components on a centralized server than to change the configuration on 100 workstations. This comes at a cost in execution time, however, so it is not a cure-all. Some components might have to be distributed to every workstation. The point here is that the physical deployment opportunities are vast. Unfortunately, they are also fraught with uncertainty.

The Web server deployment strategy adds a new twist to this whole understanding of where components can and should be deployed. Consider that components are not permanently installed on the client's machine but instead are downloaded and used as they are needed. The Web server can easily automate the installation of needed components and manage versioning on the client's machine, effectively reducing the need for centralized servers to run the components. The true value of this will be realized when Web servers and application transaction servers can perform true load balancing by dynamically choosing where components will run best. In this scenario, factors such as network traffic, server load, and availability would play a role in deciding whether a component runs on server 1, runs on server 2, or is downloaded to the client and run locally. Unfortunately, today's systems have not yet reached this point--but they are moving in this direction. Be aware that until automatic load balancing is a reality, each situation in which you must deploy a component architecture and three-tier client/server application will require its own solution. Hopefully, this section has given you some ideas.

1.7 How do I... Learn more about client/server development tools included with Visual Basic 6?


COMPATIBILITY: VISUAL BASIC 6

There is a significant difference between what takes place in a two-tier client/server application and the implementation and deployment of a three-tier application. In a two-tier approach, business logic is integrated into the application that sits on the user's workstation, or it is integrated into the back-end database system in the form of triggers and stored procedures. With a three-tier approach, the business logic that represents what the company is all about is given its own tier, made possible by the capability to partition executable logic out of both the front-end application and the back-end database engine. The difference resides in the partitioning of applications and the creation of components. The use of components, both on the local machine and deployed remotely, introduces a serious need for new tools. Visual Basic 6 comes with a variety of new tools:

  • Microsoft Visual Modeler

  • Application Performance Explorer

  • Visual Component Manager

  • Remote Automation Connection Manager

  • Automation Manager

  • Client Registration Utility

  • Microsoft Transaction Server and NT Option Pack 4.0

  • SQL Server 6.5 (Developer Edition)

  • SQL Server debugging service

  • Microsoft Data Access Components

  • Posting Acceptor

  • SNA Server

  • Database Access Methods and Tools

  • Visual SourceSafe Client and Server Components

The rest of this section provides a brief overview of the above-mentioned tools and data access methods that accompany Visual Basic 6.0 Enterprise Edition. These tools play a vital role in making three-tier client/server application development feasible and desirable.

Microsoft Visual Modeler

Developing systems with components requires a great deal of planning. It is critical to understand both the logical and the physical aspects of the solutions you are designing. This tool is a subset of a fuller-featured product, Rational Rose, from Rational Software.

Visual Modeler enables you to create a logical view of your solution that contains the classes and their relationships to each other. You can also create a component view that describes the physical structure of the system being created; finally, you can roll all this into a deployment view that shows the physical location of the components and how they will connect. Figure 1.13 shows Microsoft Visual Modeler with its three-tiered diagram. When you are ready, Visual Modeler can translate your work into Visual Basic classes and code. You can reverse-engineer existing Visual Basic code into the modeler as well.

Figure 1.13 Visual Modeler in 3-tier presentation mode

Application Performance Explorer (APE)

Testing the performance of component-based systems has been a difficult if not impossible task. The Application Performance Explorer (APE) enables you to specify different scenarios to get a true gauge of performance on your equipment over your network. Figure 1.14 shows the Application Performance Explorer running a test. It is critical that you understand the benefits and consequences of your design decisions. The Application Performance Explorer enables you to do this in a test environment. You can set up tests to run automatically and even target peak times on your network.

Figure 1.14 Application Performance Explorer (APE) running a test

Visual Component Manager

The promise of components is that they will make the long hours of work you put into building them pay off by letting you reuse them. The sad truth is that reuse is not a sure thing. For reuse to take place, you must make sure that everyone can easily find components that can be reused and then, having found them, that everyone can use them. The Visual Component Manager, shown in Figure 1.15, is provided as a tool to help you accomplish this task.

The Visual Component Manager enables you to add and remove components from a catalog that everyone can share. You can also track important information about each component, allowing people to reuse the component. Finding the component is only half the trick to using it; the other half is understanding the component's interface.

Figure 1.15 Visual Component Manager in tree view with wizard

Each component has a property sheet that enables you to enter information that describes the component and its interface. Importing components into the Visual Component Manager registers them for use. You and the people who will be creating and reusing components will have to decide how to implement the Component Manager's features. This is well worth the effort, even if the person you are sharing the components with is yourself.

Client Registration Utility

The Registry in both NT and Windows 95 serves as a library in which all the objects a system uses are registered. For a component to be available for use under Windows 95 or NT, it must be registered in the Registry. This registration can take place in several ways. If the component is an EXE, you simply execute it; it registers itself on your system. If you have Visual Basic on your machine and compile a component into an EXE or DLL, that component is registered automatically. You can also use the Setup Wizard to create a setup program that not only installs the component and registers it in the Registry, but also provides a method for uninstalling it.

The Client Registration Utility provided with Visual Basic enables you to register components from the command line. To use the Client Registration Utility, you must compile your components with the Remote Server Support Files option checked. This option is available when you generate an EXE in Visual Basic by choosing the Options button. The Remote Server Support Files option will generate a file with the .VBR extension. This file provides a client machine's Windows Registry with information it needs to run an ActiveX server that exists on a remote computer.

In today's world of choice, you get two versions of the Client Registration Utility: CLIREG32.EXE and CLIREG16.EXE. CLIREG32.EXE registers a component so that 32-bit applications can access it; CLIREG16.EXE registers your component for use by 16-bit applications. If both 16- and 32-bit applications will reference the component on a single machine, you must run both registration utilities. If you must register a DLL on a client machine to run locally, you should use REGSVR32.EXE for 32-bit Windows environments or REGSVR16.EXE for 16-bit machines. This utility allows for local registration of ActiveX servers. Because a DLL cannot be executed like an EXE, nor accessed directly via Remote Automation, you must install it by using this utility. REGSVR32 and REGSVR16 can be found under your Visual Basic directory in the Clisvr subdirectory.
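For example, the registrations described above might look like the following from a command prompt (the file names are hypothetical, and CLIREG32 accepts additional switches for the network address and protocol, so check its command-line help for the full syntax):

    rem Register a .VBR file so 32-bit clients can find the remote ActiveX server
    clireg32 MyServer.vbr

    rem Register an ActiveX DLL for local use, and later remove the registration
    regsvr32 MyComponent.dll
    regsvr32 /u MyComponent.dll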

Remote Automation Connection Manager

The Remote Automation Connection Manager is similar to a phone directory; it provides an easy way to tell your system where to find a component. Figure 1.16 shows the Remote Automation Connection Manager's Server Connection tab and ActiveX classes list. You enter the connection information, and the utility stores it in the Registry. When an application or component tries to contact a component, it looks in the Registry to find information about that component--its network address, protocol, and, importantly, its security settings.

Figure 1.16 A remote configuration, using the Remote Automation Connection Manager

The list contains all the ActiveX classes that have been registered. To use the Remote Automation Connection Manager to set up access to a remote component class, you highlight the component's class name in the list and enter the network address, network protocol, and authentication level to be used (see Table 1.2). Additionally, you can choose either standard Remote Automation or the Distributed Component Object Model (DCOM) protocol by simply selecting it on the Server Connection tab.

Table 1.2  Sample entries for the Remote Automation Connection Manager

OPTION                 SAMPLE ENTRIES
Network Address        IP address or associated name
Network Protocol       TCP/IP, Named Pipes, or other installed option
Authentication Level   No Authentication

The real power of the Remote Automation Connection Manager is its capability to easily repoint a component's Registry entry from a local reference to a remote one. Figure 1.16 shows the Remote Automation Connection Manager highlighting a component set up to be accessed remotely.

When the component's address in the Registry has been changed to a remote machine, the Remote Automation Connection Manager displays two component symbols connected by a line, with the label "remote." Switching a component between local and remote access is easy: you simply select Local or Remote from the Register menu. Behind the scenes, entries in the Registry are changed to point requests for service to a remote or local location. When you call someone on the phone, it really doesn't matter where they are, as long as you have the phone number and they pick up when you call. That is the idea behind Remote Automation and components: everyone uses a phone to talk to others, even if they are in the same house (on the same computer). This enables components to be deployed anywhere; all that must be done is to change the number in the phone book (the Registry) to the current phone number.
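The payoff is location transparency in client code. In the following minimal sketch (the ProgID and method are hypothetical), the same lines work whether the Registry points the class at the local machine or at a remote server; nothing in the code changes when the component is redeployed:

    Dim Calc As Object
    Set Calc = CreateObject("FinServer.FinCalc")   ' the Registry decides local or remote
    MsgBox Calc.MonthlyPayment(150000@, 0.08, 360)
    Set Calc = Nothing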

Security is worth mentioning here. The two types of security in Remote Automation are as follows:

  • Access control. This type of security ensures that only certain types of objects are remotely available. You can also make sure that only specified users can have access to certain objects.

  • Authentication. This type of security, which ensures that data sent from one application is identical to the data received by the other, protects against someone intercepting your data as it goes from one point to another.

There are many ways to implement security. On one end of the spectrum, you could implement no security and just trust that people won't access things they shouldn't. This method is easy to maintain because you simply ignore the risk.

On the other end of the spectrum, you could lock everything up and assign access to only the logged-on person at a single station. Figure 1.17 shows the contents of the Client Access tab of the Remote Automation Connection Manager.

Figure 1.17 The Client Access tab of the Remote Automation Connection Manager

If you are running Windows NT, you will want to set the System Security Policy to Allow Remote Creates by ACL. ACL stands for Access Control List and is a method used by NT to determine whether a user running an application has adequate permission to access the class. This is very powerful because it enables the NT operating system's security model to kick in, allowing for a centralized method for handling security.

With Allow Remote Creates by ACL selected, a request for a remote ActiveX component object is processed as usual by the Automation Manager (discussed later in this section). The Automation Manager impersonates the client user and tries to open the remotely deployed object's class identification (CLSID) key with query permissions. If the open fails, the Automation Manager returns an error. If the open succeeds, the Automation Manager stops impersonating the client user, creates the requested object, and returns a reference as usual.

If you are using Windows 95, Allow Remote Creates by ACL is not a valid choice; you can only specify Allow Remote Creates by Key. If you select this option in the Remote Automation Connection Manager on the client and also check Allow Remote Activation, the system grants access to any application that presents the correct value stored under the object's CLSID in the Registry. If you use Windows 95 as a server, you should also set the authentication level to No Authentication, because Windows 95 does not support the full security model that NT does.

Automation Manager

The Automation Manager is an application responsible for connecting remote clients to ActiveX Automation servers. This multithreaded application must be running on any machine that acts as a server for components, making them available for use by other machines. Figure 1.18 shows the Automation Manager and the two status counts it displays. If you are not using Microsoft Transaction Server to provide access to components, you will need to use this program.

Figure 1.18 The Automation Manager waiting for a client request

The Automation Manager must be running on the server machine. As requests are received, the Automation Manager tracks and increments its connection count. As references to ActiveX components are passed out, the object count is incremented; as those references are released, the counts are correspondingly reduced. The Automation Manager (a 32-bit application) can be found in your Windows system directory under the name AUTMGR32.EXE.

Database Access Methods and Tools

Communication between the business services components and the data services tier is a critical piece of the client/server puzzle. Visual Basic comes with five data access methods: Data Access Objects (DAO), Remote Data Objects (RDO), the Open Database Connectivity (ODBC) API, ActiveX Data Objects (ADO), and OLE DB. Following is a brief description of each method. Microsoft recommends moving to ADO and OLE DB for the future. Be aware that not all the functionality found in DAO, RDO, and ODBC is currently available in ADO and OLE DB, although future releases will change this. For a fuller discussion of these access methods, see Chapter 3, "Data Objects."

Data Access Objects (DAO)

Visual Basic 6 comes with support for the Joint Engine Technology (JET). JET provides an object-oriented implementation of data access called Data Access Objects (DAOs). This method of accessing data enables a developer to use data objects and collections to handle the tasks of data access. The implementation of data access objects is closely tied to the Microsoft database file structure called MDB. The MDB allows the storage of tables as well as query definitions, macros, forms, reports, and code. Data access objects enable you to get at only the tables and queries stored in the MDB. Data access objects automate much of the task of dealing with data, including managing connections, record locking, and fetching result sets; DAOs also provide for access to ODBC-compliant data sources.
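A minimal DAO sketch, assuming a project reference to the Microsoft DAO object library; the file, table, and field names are hypothetical:

    ' Open an MDB and walk a result set with DAO.
    Dim db As DAO.Database
    Dim rs As DAO.Recordset

    Set db = DBEngine.OpenDatabase("C:\Data\Orders.mdb")
    Set rs = db.OpenRecordset("SELECT CustomerName FROM Customers", dbOpenSnapshot)

    Do While Not rs.EOF
        Debug.Print rs!CustomerName
        rs.MoveNext
    Loop

    rs.Close
    db.Close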

Remote Data Objects

Optimizing the methods for accessing ODBC-compliant data sources, while at the same time simplifying the process, is a huge task. Remote Data Objects (RDO)--a thin layer that sits on top of the ODBC API--is provided to accomplish it. Because this layer is thin, it adds little overhead to data access, which is critical in a production environment where speed of execution is an evaluating factor. The RDO model is similar to the DAO object model and enables developers to use objects and collections to execute data-related tasks such as submitting a query, processing results, and handling errors.
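A minimal RDO sketch, assuming a project reference to the Remote Data Objects library and an existing ODBC data source; the DSN, login, and query are hypothetical:

    ' Connect to an ODBC data source and run a query with RDO.
    Dim cn As rdoConnection
    Dim rs As rdoResultset

    Set cn = rdoEngine.rdoEnvironments(0).OpenConnection( _
        dsName:="PubsDSN", Prompt:=rdDriverNoPrompt, Connect:="UID=sa;PWD=")
    Set rs = cn.OpenResultset("SELECT au_lname FROM authors", rdOpenForwardOnly)

    Do While Not rs.EOF
        Debug.Print rs!au_lname
        rs.MoveNext
    Loop

    rs.Close
    cn.Close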

Open Database Connectivity (ODBC) API

Of all the methods for accessing data, the Open Database Connectivity (ODBC) API is the most efficient in terms of execution speed. In terms of programming, it requires the most time and the most caution. Because this is an application programming interface (API) you have full control over the very intimate details of data access.

Both DAO and RDO use the ODBC API layer when accessing ODBC-compliant database engines such as Oracle and Microsoft SQL Server. The ODBC API is cryptic and difficult to use but provides more control and better execution speed. Remote Data Objects is generally recommended because it offers the best balance: its data access speed rivals that of using the ODBC API directly, and its object-oriented syntax makes programming easy. But if you are looking to the future, you might want to pay special attention to ActiveX Data Objects (ADO), covered in more detail in Chapter 3, "Data Objects."

Microsoft Data Access Components (MDAC)

With all this discussion of Data Access Objects, Remote Data Objects, and Open Database Connectivity, the real story is found in what Microsoft calls "universal data access." The Microsoft Data Access Components are the cornerstone technologies that will make universal data access possible. These technologies include ActiveX Data Objects (ADO), Remote Data Service (RDS, previously known as the Advanced Data Connector, or ADC), Open Database Connectivity (ODBC), and OLE DB.
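A minimal ADO sketch, assuming a project reference to the Microsoft ActiveX Data Objects library; the connect string and query are hypothetical:

    ' Open a connection and run a query with ADO.
    Dim cn As ADODB.Connection
    Dim rs As ADODB.Recordset

    Set cn = New ADODB.Connection
    cn.Open "Provider=SQLOLEDB;Data Source=MyServer;" & _
            "Initial Catalog=pubs;User ID=sa;Password="

    Set rs = cn.Execute("SELECT au_lname FROM authors")
    Do While Not rs.EOF
        Debug.Print rs!au_lname
        rs.MoveNext
    Loop

    rs.Close
    cn.Close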

Posting Acceptor

Component development relies on the capability to deploy solutions easily to a variety of servers. To deploy solutions whose components must be installed and registered on Windows NT server machines, Microsoft provides a solution that utilizes Internet Explorer 4.0 and Posting Acceptor 2.0 on the servers where the components will be deployed; Internet Information Server is necessary as well. If the components you are placing on a server will be used only locally by Active Server Pages, you won't need this solution. It is more likely, however, that components will be used in many more contexts than local Active Server Pages alone.

SNA Server

Most companies have their share of legacy systems. Microsoft is providing SNA Server to ease the integration of legacy applications and data with modern network systems. You can install services for host connectivity and the SNA Server Software Development Kit. The version of SNA Server that comes with Visual Basic 6 includes the OLE DB Provider for VSAM and AS/400 as well as an ODBC driver for DB2, and COM Transaction Integrator for CICS and IMS. Serious enterprise solutions can use this functionality to create complete solutions that can leverage existing systems.

SQL Server 6.5 (Developer Edition)

A completely functional version of SQL Server is now included with Visual Basic 6. This version, called the Microsoft SQL Server 6.5 Developer Edition, is limited to a maximum of five simultaneous users. The license specifies that it is intended for use in designing, developing, and testing software products that are designed to operate in conjunction with Microsoft SQL Server.

SQL Server Debugging Service

One of the greatest challenges to developers is debugging. Distributed environments and toolsets only make it worse. Visual Basic 5 came with an SQL Server debugging service. Visual Basic 6 adds the capability to debug SQL code from within the Visual Basic programming environment itself. This is a very powerful step toward fully integrated debugging capabilities.

Visual SourceSafe Client and Server Components

With the release of Visual Basic 6, Microsoft included a complete system for team development of enterprise-wide solutions. Team development requires that the environment provide a mechanism for organizing the work of multiple developers, and Visual SourceSafe provides the capability to manage large-scale team development.

SourceSafe comprises two applications. The first is the administrative module, shown in Figure 1.19, that enables you to maintain a roster of developers who work on various projects.

The following rights can be assigned per user and per project:

  • Read. User can use the file in read-only mode.

  • Check Out/Check In. User can check out files for use and make modifications, and can also check the files back in.

  • Add/Rename/Delete. User has the ability to add files to the project as well as to rename and delete files within the project.

  • Destroy. User has the ability to permanently remove files from the project and physically destroy them.

Figure 1.19 Visual SourceSafe Administrator

The second application is the SourceSafe Explorer, shown in Figure 1.20, which presents a window that resembles the Windows Explorer.

Figure 1.20 The Visual SourceSafe Explorer

You can use this application to check in and check out files for a project. You can also view the history of activity with a file, see the differences between files that have been changed, and create reports that enable you to manage a project.

Visual SourceSafe has also been provided as an add-in to the Visual Basic development environment. You can add it by using the Add-In Manager from the Visual Basic menu bar. Visual SourceSafe adds the following options to your Tools menu:

  • Get Latest Version. Enables you to bring physical copies of files from the storage library maintained by SourceSafe on your network drive.

  • Check Out. Used for selecting files that you want to work with exclusively. Optionally, you can keep them checked out or release the files so that other people can check them out.

  • Check In. After you have finished working on a file or project, you use the Check In command to update the SourceSafe code library with your changes.

  • Undo Check Out. This option enables you to effectively cancel a check out on the files you are working with. In a situation in which you check out a file or project, make changes, and then decide that you want to begin again, you can cancel the check-out process.

Comments

The development of components is a very powerful idea whose time has come. Creating components is facilitated by the additional tools that make management and implementation possible. It is important to note that some of these tools are available only with the Enterprise Edition of Visual Basic. As a way to get started, the next section walks you through the process of installing and adding a project to Visual SourceSafe.

1.8 How do I... Create a SourceSafe project?


COMPATIBILITY: VISUAL BASIC 5, 6

Problem

I would like to use a simple and integrated source code control process with my team but I don't know where to start.

Technique

Using source code control is an important part of team development. It also provides the individual developer with the benefit of having a secure place for code, the ability to share files between projects, an online history of changes, and more. Visual SourceSafe is provided as part of Visual Studio. There are two methods for adding a project to Visual SourceSafe. The first method involves the use of the Visual SourceSafe Explorer; the second is performed in the Visual Basic environment when you have the Visual SourceSafe Add-In installed. For this quick start on using Visual SourceSafe, you will use the Add-In method from Visual Basic.

Steps

To use Visual SourceSafe from the Visual Basic development environment, you must make sure that Visual SourceSafe has been installed on your machine and that a valid login for you exists in the SourceSafe Administrator. To add a login to SourceSafe, start the Visual SourceSafe Administrator program. Press Ctrl+A to add a user. Enter your name and password and press OK. Now you have a valid login with SourceSafe. After that is finished, complete the following steps:

1. Start Visual Basic. You do not need to specify a project at this time.
2. Select Add-Ins from the menu, and then select Add-In Manager. A dialog box appears, showing the add-ins available on your system. If you do not see Source Code Control Add-In, then Visual SourceSafe has not been properly installed on your system. You will need to reinstall it before you can proceed.
3. Select Source Code Control, make sure that the Load Behavior is Startup/Loaded, and then click OK.
4. Open a project that you would like to add to Visual SourceSafe. As the project loads, you will be prompted automatically to add it to SourceSafe. For this How-To, reply No.
5. Select the Tools menu. You will notice an entry on the menu for SourceSafe. Select this option (see Figure 1.21).
The SourceSafe Add-In menu has the following options:
  • Create Project from SourceSafe. This option enables you to open a project that is already in SourceSafe but has never been checked out to you.

  • Add Project to SourceSafe. This adds the current project to the SourceSafe Code library.

  • Run SourceSafe. This runs the Visual SourceSafe Explorer.

  • Options. Use this to set options for SourceSafe.
6. Select Add Project to SourceSafe. You will be prompted by a login screen. Enter a valid login and password.

7. When presented with a SourceSafe dialog box, enter the name of this project in the Project field and click OK.

Figure 1.21 The SourceSafe menu

8. You will be prompted to select the files that make up the project. Select them and click OK. SourceSafe adds your project to the source code control library.

How It Works

When you install Visual SourceSafe on your machine, it also installs the Source Code Control Add-In. When you install this add-in into the Visual Basic development environment, it enables you to add a project from the Visual Basic menus instead of starting the Visual SourceSafe Explorer and creating the project there. By choosing to add the currently open project to Source Code Control, you automatically start Visual SourceSafe and are prompted to create a project entry in the source code library. After the entry is made, you are prompted to add the files that make up the project, and then you are finished.

Comments

The add-in for Visual SourceSafe does a great deal to simplify the process of using source code control. Much of the process is automated, including prompts that urge you to add projects to the source code library. After a project becomes part of Visual SourceSafe, you will be able to check files in and out right from the Visual Basic development environment. Simply highlight the file in the Project window and use the right mouse button to see a menu of options for checking files in and out from SourceSafe (see Figure 1.22).

Figure 1.22 Menu choices available when you right-click on a file in the Project window

Before you add your projects, you might want to add all your developers in the SourceSafe Administrator so that you can set access rights for them on each project. The SourceSafe Administrator does allow you to set individual rights by project. A shortcoming of Visual SourceSafe is the lack of group rights: everything is on an individual basis, which can be difficult if you deal with a large number of people or projects and want to control access to the code.

 
