Windows Enterprise Technologies
Windows 2000 introduced a number of important new technologies and updated several core technologies that the .NET Enterprise Servers build upon. Windows .NET Server extends some of those technologies with new capabilities. Together, they provide advanced services that are especially useful to large organizations. These so-called enterprise technologies include
The Microsoft Management Console. Also known as the MMC, this application provides a one-stop shop for administrative tools and utilities. Prior to the MMC, each Microsoft product included its own graphical user interface for administration and management. Administrators often had to open a half-dozen applications or more in order to perform their day-to-day management tasks. The MMC, however, acts as a universal framework for administrative tools. Each service or server product (including DNS, WINS, and the .NET Enterprise Servers) provides one or more snap-ins. Snap-ins fit within the MMC, and provide the graphical user interface for a service or server product. You can configure the MMC with as few or as many snap-ins as you need, enabling you to manage your entire network from within a single window. Windows and the .NET Enterprise Servers provide preconfigured sets of snap-ins, called consoles, and you can create your own custom consoles, as well.
Clustering Services. First introduced in Windows NT Server 4.0, Enterprise Edition, Microsoft's clustering services enable multiple computers to work together as a single large server, and enable computers to back each other up, essentially taking over the workload of a failed computer to keep critical network services up and running.
Internet Information Services. Also introduced in Windows NT Server 4.0, Internet Information Services (IIS) is a Web server platform. In addition to providing HTTP services to Web browsers, IIS can act as a File Transfer Protocol (FTP) server, newsgroup server, and much more.
Certificate Services. This feature of Windows 2000 enables a server to act as the basis of a public key infrastructure (PKI), issuing, managing, and revoking digital encryption certificates that can be used to encrypt information, act as digital signatures, and much more.
Directory Services. Windows 2000's most well-known new feature, Active Directory, provides enterprise-class directory services to the Windows operating system. Directory services are designed to act as a central repository for information about users, network services such as printers, and other information in an enterprise. Active Directory uses a distributed architecture to handle the workload of large organizations, and features an extensible structure (called a schema) that enables applications to add their configuration information to the directory. Active Directory also provides fault tolerance, which protects the information in the directory from the failure of an Active Directory server (called a domain controller).
Terminal Services. First introduced as a standalone product named Windows NT Server 4.0, Terminal Server Edition, Terminal Services is fully incorporated into the base Windows 2000 Server operating system. Terminal Services is a remote control solution, enabling someone at a remote computer to see and control the server's desktop and other graphical user interface elements, just as if they were standing in front of the server itself.
In the next few sections, I'll discuss each of these technologies in more detail, show you how they work, and show you how to use them.
Microsoft Management Console
One of the biggest complaints administrators had about Windows NT was the number of tools they had to use in order to administer a network. Each major service (DNS, WINS, DHCP, directory services, and so forth) required the use of a separate administration tool, and each tool had a slightly different style of user interface. At first glance, a typical Windows 2000 Server computer doesn't seem to be any better. As shown in Figure 3.2, the Administrative Tools folder on the Start menu can still have more than a dozen icons for the various administration tools included in the operating system.
Figure 3.2 Whenever you install a new service in Windows, its administrative tool icon is added to the Start menu automatically.
In Windows NT, each of these icons represented a completely different application. In Windows 2000, however, each icon simply represents a different preconfigured MMC console. Each preconfigured console contains just one or two snap-ins, which enable the console to administer just one or two aspects of the operating system. These consoles offer a major improvement over Windows NT, because the snap-ins each offer a very similar look and feel, making it easy to learn new snap-ins simply because they behave so much like the snap-ins you're already familiar with. More importantly, though, you can use the MMC to configure your own custom consoles, which enables you to aggregate multiple administrative functions within a single window.
All About Snap-Ins
The important thing to remember about the MMC is that, by itself, it doesn't do anything. The MMC simply provides a shell for one or more snap-ins, and it's the snap-ins that do all the work of administering a particular portion of the operating system. You can think of the MMC as an empty meeting room in a large company. By itself, the room isn't very useful. When you want to do any actual work, you fill the meeting room with experts from throughout the company. For example, if you want to see how the company's sales are doing, you call in the sales manager. If you want to know about the latest marketing campaign, you call in the marketing manager. The meeting room is large enough and flexible enough to have multiple experts, enabling you to work with, for example, both the human resources department and the marketing department at once.
Snap-ins are simply DLL files that have been written to meet the Microsoft specifications for snap-ins. Those specifications are available in Microsoft's Platform SDK, which you can access online at http://msdn.Microsoft.com/library. The Platform SDK documentation is also available on CD-ROM or DVD-ROM through Microsoft's MSDN Library subscription program. Snap-ins are installed along with core operating system services, or along with third-party applications. For example, the DNS management snap-in is installed when you install the DNS service in Windows.
Windows actually installs a couple of snap-ins that you can't access by default. One of these snap-ins is the Active Directory Schema Management snap-in, which enables you to work directly with the Active Directory schema. Windows installs the Schema Management DLL by default, but doesn't make it available for your use because schema management should be performed only by experienced administrators. By leaving the DLL hidden to start with, less experienced administrators aren't tempted to start messing around with the schema. To make the snap-in available, you simply have to register the DLL. Registering is a process that the DLL performs to make the rest of the operating system aware of the DLL's presence. To register the Schema Management snap-in, open a command line window and type regsvr32 schmmgmt.dll. Press Enter, and a dialog box should pop up saying that the self-registration process was successful. After registering the DLL, you'll be able to add the Schema Management snap-in to an MMC console.
Most third-party applications automatically register their snap-ins during installation. However, if you have problems locating a particular snap-in, you can try re-registering the DLL using the regsvr32 command-line utility. You can safely use regsvr32 even if the DLL is already registered; registering a snap-in DLL more than once won't do any harm.
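The registration step can also be scripted. Here's a minimal Python sketch that builds the regsvr32 command line (the /s switch suppresses the confirmation dialog) and only actually invokes it when running on Windows; the helper function name is my own, not part of any Microsoft tool.

```python
import subprocess
import sys

def register_snap_in(dll_name, silent=True):
    """Build, and on Windows run, the regsvr32 command for a snap-in DLL.

    The /s switch suppresses the confirmation dialog, which is useful
    when registering snap-ins from a script rather than interactively.
    """
    cmd = ["regsvr32"]
    if silent:
        cmd.append("/s")
    cmd.append(dll_name)
    if sys.platform == "win32":
        subprocess.run(cmd, check=True)  # raises CalledProcessError on failure
    return cmd

# The Schema Management snap-in from the text:
print(register_snap_in("schmmgmt.dll"))  # ['regsvr32', '/s', 'schmmgmt.dll']
```

Because the helper returns the command as a list, you can log or inspect it before running it against a production workstation.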
Although you may be perfectly happy to use the built-in consoles that Windows provides, you'll be a much more efficient administrator if you learn to take advantage of the MMC's capability to create customized consoles. This is especially true if your environment will contain several of the .NET Enterprise Servers. For example, BizTalk Server relies both on SQL Server and on IIS. Rather than using a separate console to administer each product, you might want to create a console that has the snap-ins for BizTalk, SQL, and IIS all in one convenient window. Combining snap-ins into one console makes for more efficient administration of core network services, too. For example, let's create a console that includes the snap-ins for the three main network services associated with TCP/IP: DHCP, WINS, and DNS.
Before you start creating a new console, you need to know a little bit about the MMC's operating modes, and what they're used for. The preconfigured consoles provided by Windows run in User mode, which allows you (or another administrator) to use the consoles, but doesn't allow you to modify them. The MMC offers another mode, called Author Mode, which allows you to modify the snap-ins included in a console and save your changes.
If you're a senior administrator, you may use Author Mode to create customized consoles. You can then switch the consoles into User Mode and distribute them to junior administrators in your company. The other administrators will be able to use your customized consoles, but they won't be able to modify them.
So, before you can create a new console, you need to get the MMC running in Author Mode. You can't do that by launching a pre-configured console. Instead, you have to launch the MMC all by itself, by selecting Run from the Start menu, typing MMC, and clicking OK. Windows will launch the MMC with a completely blank console in Author Mode, as shown in Figure 3.3.
Figure 3.3 Notice that the MMC uses a multiple-document interface (MDI), which allows you to have several console windows open within the main MMC window.
With the new MMC window open, select Add/Remove Snap-ins from the File menu. The MMC will display a list of snap-ins that are already in the console, although at this point the list will be empty. Click the Add button to display a list of all registered snap-ins, as shown in Figure 3.4. Simply double-click a snap-in to add it to the console. When you're finished, click OK on the list of snap-ins, and then OK again on the list of installed snap-ins. The MMC will now display each of your snap-ins in the left-hand pane.
Depending on the version of the MMC that you're using, the Add/Remove Snap-ins option may be located on the Console menu instead of the File menu. Older versions of the MMC (such as the one included with Windows 2000) use the Console menu; newer versions (including those in Windows XP and Windows .NET Server) use the File menu.
Figure 3.4 Only snap-ins that are installed and registered on your computer are displayed in the list.
You'll usually want to administer your network from your administrative workstation, rather than directly from a server console. That means you'll need to get all of your administrative snap-ins installed on your workstation. Most of the .NET Enterprise Servers offer a Client Tools installation mode, which enables you to install just the necessary snap-ins (along with preconfigured consoles, in most cases) on your workstation. Simply insert the .NET Enterprise Servers' installation CD into your workstation's CD-ROM drive, and select the Client Tools installation option. Most third-party products have a similar installation option.
The core Windows 2000 (or Windows .NET Server) administrative snap-ins can be installed by using the Admin Tools installer package, which is located on the product CD under the Support\Tools folder. Insert the Server product CD into your workstation's CD-ROM drive, browse to the appropriate installer package (which is a file with an MSI filename extension), and double-click it. The installer package will add the administrative snap-ins to your system, enabling you to create consoles that contain the Server administrative snap-ins.
Most snap-ins will require a little bit of extra configuration before you can use them. For example, the DHCP, DNS, and WINS snap-ins need to be pointed at the DHCP, DNS, and WINS servers that you want to administer. In most cases, you can simply right-click the snap-in name and select Connect or Connect To from the pop-up menu. The snap-in will prompt you for a server name, and add that server's name to the list. Figure 3.5 shows our sample DHCP, DNS, and WINS console with a server configured for each snap-in.
Figure 3.5 Most snap-ins allow you to specify multiple servers to connect to, enabling you to administer, for example, all of the DNS servers on your network from a single window. Servers are displayed in a hierarchical list.
Some snap-ins only allow you to connect to a single server. Those snap-ins are mostly older ones, since Microsoft's newest snap-in specification requires snap-ins to allow multiple servers. If you do find yourself with a snap-in that will only connect to a single server, keep in mind that you can add multiple copies of the snap-in to a single console. Each copy can connect to a different server, effectively enabling you to administer multiple servers from the same window.
Having problems locating a snap-in in the list of snap-ins? See "Can't Find Snap-In" in the "Troubleshooting" section at the end of this chapter.
Once you've finished adding snap-ins to your console and configuring the snap-ins for your environment, you may want to change some of the console's properties, such as its mode. To do so, select Options from the File menu. You'll see the dialog box shown in Figure 3.6.
Figure 3.6 The Options dialog allows you to specify a name for your console, as well as changing its icon, if desired.
The MMC actually offers four modes:
Author mode provides full access, including the capability to add or remove snap-ins, create new windows, and so forth.
User mode (full access) allows users to do the same things an author can do, but prevents them from saving their changes to your console. Users can still modify the console and save their changes to a new console file, leaving your original console intact.
User mode (limited access, multiple window) prevents users from changing the console, and only allows them to use the portions of the console that were visible when you saved it. Users do have the ability to open new windows in the MMC, but cannot close the windows you configured.
User mode (limited access, single window) is the same as User mode (limited access, multiple window), but prevents the user from opening new windows.
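The four modes can be summarized as a simple capability table. The Python sketch below follows the descriptions above; the mode names and capability flags are my own shorthand, not a Windows API.

```python
# Capabilities of each MMC mode, per the list above (illustrative names).
MMC_MODES = {
    "author": {
        "add_remove_snap_ins": True, "open_new_windows": True,
        "save_changes": True,
    },
    "user_full": {
        "add_remove_snap_ins": True, "open_new_windows": True,
        "save_changes": False,   # edits must be saved to a new console file
    },
    "user_limited_multi": {
        "add_remove_snap_ins": False, "open_new_windows": True,
        "save_changes": False,
    },
    "user_limited_single": {
        "add_remove_snap_ins": False, "open_new_windows": False,
        "save_changes": False,
    },
}

def can(mode, action):
    """Look up whether a given MMC mode permits a given action."""
    return MMC_MODES[mode][action]

print(can("user_full", "save_changes"))            # False
print(can("user_limited_multi", "open_new_windows"))  # True
```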
When you're done configuring your console's options, select Save As from the File menu to save your console to an MSC file. You can then distribute the MSC file to other administrators for their use.
Other administrators will only be able to use your new console if the snap-ins within the console are already installed and registered on their machines. Those administrators will also need permissions to perform whatever tasks the console enables them to perform; the console is just a tool to perform those tasks and doesn't grant any permissions.
Creating and distributing custom consoles is a great way to help other administrators in your organization be more efficient. Unfortunately, though, each console provides each administrator with full access to the capabilities offered by the console's snap-ins. In other words, you can't use custom consoles to limit what a junior administrator can actually do. The MMC does offer a method of restricting functionality: Taskpads.
Normally, clicking on an item in the left pane of the MMC displays that item's information on the right side. For example, if you're using the Local Users and Groups snap-in, clicking the Users folder displays the local user accounts in the right pane. This default behavior allows any administrator with sufficient access permissions to do any task that the snap-in permits, such as creating new users, deleting users, and so forth. You may, however, wish to assign only a portion of those tasks (such as adding users) to junior administrators. You can restrict their access permissions, but they may become confused because the MMC seems to offer them the ability to perform tasks (such as deleting users) which they don't actually have permissions to perform. The MMC offers Taskpads to help restrict the information that another administrator can see, offering a more efficient environment for their administrative tasks.
While Taskpads restrict what an administrator can do in the MMC, they do not restrict an administrator's actual permissions. If an administrator were able to gain access to a fully enabled MMC console, they could do anything their permissions allowed them to do. Taskpads should be used to display the tasks an administrator has permission to do, not to prevent an administrator from doing things that they do, in fact, have permission to do.
In this respect, Taskpads work a little bit like a bank account. Suppose you're the joint owner of a bank account, and your co-owner doesn't want you to be able to write checks from the account. Simply taking your checkbook away might seem to accomplish their goal, but all you have to do is obtain more checks, because your permissions on the account allow you to withdraw the money. To really restrict you, the co-owner has to change the permissions on the account to prevent you from withdrawing money, even if you obtain more checks.
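To make the distinction concrete, here's a small Python sketch; the roles, task names, and permission sets are entirely illustrative. The point is that the Taskpad only filters what is shown, while the operating system's security checks decide what is actually allowed.

```python
# Illustrative roles and rights; in reality, the OS security subsystem
# holds these, not the console.
PERMISSIONS = {"junior": {"add_user"}, "senior": {"add_user", "delete_user"}}

# The Taskpad we distribute shows only one command.
TASKPAD_COMMANDS = {"add_user"}

def visible_tasks(role):
    """What the Taskpad shows: the same for everyone, regardless of role."""
    return TASKPAD_COMMANDS

def allowed(role, task):
    """What the OS permits: checked per-operation, regardless of the UI."""
    return task in PERMISSIONS[role]

# Hiding delete_user doesn't revoke it: a senior admin who opens a
# fully enabled console can still delete users.
print(allowed("senior", "delete_user"))  # True
# And showing add_user wouldn't grant delete rights to a junior admin.
print(allowed("junior", "delete_user"))  # False
```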
Taskpads replace the normal view that the MMC displays for a snap-in. Taskpads also offer the capability to add predefined commands, which can be a subset of the commands offered by a particular snap-in. To create a new Taskpad, follow these steps:
Configure the MMC with any snap-ins that are necessary.
Select the item in the left pane that you want to assign a Taskpad view to.
From the Action menu, select New Taskpad view. The MMC launches the New Taskpad View wizard.
On the Taskpad Display screen of the wizard (shown in Figure 3.7), select the type of view you want the Taskpad to use. Most Taskpads work well with a horizontal or vertical view.
Figure 3.7 Select a view type that best displays the information provided by the snap-in.
Complete the Taskpad wizard by providing a name and description for the Taskpad, and indicating whether the new view should apply only to the item currently selected in the left pane of the MMC, or whether it should apply to all items of the same type.
Once completed, the wizard will launch the New Command wizard, which enables you to add a new command to the Taskpad.
Select the command type, as shown in Figure 3.8. You can add a command that navigates to another portion of the console, runs a command-line command, or executes a task provided by a snap-in. In this case, I'm using the Local Users and Groups snap-in, and I'll create a command that enables users to add a new user, which is one of the functions provided by the snap-in.
Figure 3.8 Use the Navigation command type to link to other areas of the console.
Select the command that you want to execute. Since I indicated that I wanted to create a command for a snap-in function, the wizard allows me to select the appropriate function, as shown in Figure 3.9.
Figure 3.9 Use the lower half of the screen to select the snap-in that contains the desired function.
Provide a name and a description for the new command. The name will be displayed in the Taskpad itself, while the description will be displayed when the user hovers their mouse pointer over the command. Make your descriptions as useful as possible so that users will be able to figure out what the command does. Figure 3.10 shows an example for my Add Users command.
Figure 3.10 Keep command names relatively short so that they'll fit easily within the Taskpad display.
Finally, select an icon for the new command. The MMC enables you to select from a number of built-in icons, or you can use an icon from an external file. Figure 3.11 shows the icon I've selected for my Add Users command, which is a standard user icon.
Figure 3.11 Try to use standardized icons whenever possible, since most administrators will recognize them more easily.
Once you're done with the New Command wizard, the Taskpad will display the new command, as shown in Figure 3.12. You can run the wizard again and again to create additional commands for your new Taskpad.
Figure 3.12 In this Taskpad view, commands are grouped at the bottom of the screen, while view items are listed at the top.
You can further restrict the MMC by hiding the left pane, so that only the Taskpad is visible. This effectively makes the MMC a single-task interface, allowing users to navigate only through the navigation commands that you provide on the Taskpad.
Taskpads are a great way to simplify common operations, even for senior administrators. If you plan to distribute consoles to junior administrators and want to reduce the complexity of the user interface, Taskpads offer the perfect solution.
Clustering Services
Microsoft uses the word cluster to mean any group of two or more computers that work as a single computer. Microsoft provides three clustering technologies: Network Load Balancing (NLB), Component Load Balancing (CLB), and the Cluster Service. The Cluster Service and NLB are included with the Advanced and Datacenter editions of Windows 2000 Server and all editions of Windows .NET Server 2003; Application Center also includes NLB and adds CLB to the mix. The Cluster Service offers what many people consider to be the classic form of clustering, in which two servers provide complete redundancy for one another and essentially act as a single unit. I'll cover the Cluster Service in the next few sections.
→ For more information on NLB and CLB, see "Technology Capabilities," p. 180
How Clusters Work
Clusters consist of two or more servers (support for 3- or 4-way clustering is provided in Datacenter Server) that are physically connected to one another. Each server within a cluster is referred to as a node. While the nodes within a cluster do not need to use identical hardware, many administrators do prefer to use the same hardware to make administration and maintenance easier.
Each cluster node is required to have its own internal hard drives, which are used to run the operating system and any clustered applications. Each node is connected to a shared external storage subsystem (either by SCSI cables or by Fibre Channel connections). The nodes also have two network connections: One network connection allows clients to access the cluster, while the other is a private network connection used only by the nodes. This private network connection is used primarily to carry the cluster's heartbeat, which is a signal sent regularly by active nodes to tell the other nodes that everything is okay (more on the heartbeat in a bit).
It's possible for a cluster to use only one network, which would carry both client traffic and the heartbeat. That configuration is not recommended, because heavy client traffic can interfere with the heartbeat signal and cause undesired effects, such as a spurious failover.
Figure 3.13 shows a typical cluster, which includes two computers, two network connections, and a shared external storage subsystem.
Figure 3.13 Note the dedicated network connection that carries the heartbeat signal between the cluster nodes.
Cluster nodes are regular Windows servers. They have their own unique IP addresses, their own unique server names, and so forth, which are referred to as private resources. They also run the Cluster Service, which is a special piece of software that makes clustering possible. Clusters have a number of shared resources, including a name for the cluster, a unique IP address for the cluster, the external storage subsystem, and so forth. Each node runs its own copy of the operating system, as well as a private copy of any clustered applications (such as SQL Server). Application data is shared between the nodes, and is stored on the external storage subsystem.
When you power up the first node in a cluster, it figures out that the other node isn't running, and takes control of the cluster's shared resources. That means the first node gains exclusive access to the external storage subsystem, and the first node responds to incoming traffic directed at the cluster's name and IP address. In this state, the first node is said to be the active node, since it is doing the work in the cluster. The first node starts sending out a heartbeat signal over the cluster's private network connection, as well. When you power up the second node, it detects the heartbeat signal and configures itself as the passive node. The passive node remains connected to the external storage subsystem, but doesn't actually have any access to the data on it. Also, any clustered applications on the second node are stopped. Figure 3.14 shows the cluster's operations at this point.
Figure 3.14 All incoming client traffic is seen by both nodes, but only the active node responds.
The passive node's job is a lot like the vice president's: Wait for the active node to fail. The passive node monitors the heartbeat coming from the active node. If the heartbeat stops for more than a second or two, the passive node sends a reset signal over the shared connection to the external storage subsystem. If the active node is still functioning, it will see that reset signal and perform its own reset, reasserting control over the cluster. However, if the active node has failed, the passive node will gain control of the external storage subsystem and begin the process of failover. During failover, the passive node takes ownership of all cluster resources that were previously owned by the active node. The node is aided in this task by the quorum resource, a special set of data written to the external storage subsystem that contains cluster configuration information. The now-active second node starts all clustered applications, which are able to access their data on the external storage subsystem. The active node also begins responding to the cluster name and IP address. The cluster's new operational mode is shown in Figure 3.15.
Figure 3.15 The formerly active node is now considered the passive node.
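The failover logic described above can be sketched as a toy state machine. The Python below is illustrative only; the real Cluster Service's timings, bus-reset handshake, and quorum handling are considerably more involved.

```python
# Toy simulation of the heartbeat/failover protocol described above.
# Names, roles, and the threshold value are illustrative.

class Node:
    def __init__(self, name):
        self.name = name
        self.role = "passive"
        self.alive = True

def failover(active, passive, missed_heartbeats, threshold=2):
    """Return the node that owns the shared resources afterwards."""
    if missed_heartbeats < threshold:
        return active                 # heartbeat healthy; no change
    # The passive node resets the shared storage connection.
    if active.alive:
        return active                 # active node sees the reset and
                                      # reasserts control of the cluster
    # The active node really failed: the passive node reads the quorum
    # resource, starts clustered applications, and begins answering
    # the cluster name and IP address.
    passive.role, active.role = "active", "failed"
    return passive

a, b = Node("node1"), Node("node2")
a.role = "active"
a.alive = False                       # simulate a hardware failure
owner = failover(a, b, missed_heartbeats=3)
print(owner.name)  # node2
```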
Failover generally takes about 30 seconds, depending on how long the clustered applications require to start. Client computers attempting to access the cluster may notice a brief delay, but the failover usually occurs quickly enough to prevent any client errors. With the failed node officially out of the loop, an administrator can take corrective action (such as replacing any failed hardware) without affecting the services that the cluster provides to the network. Once the failed node is fixed, the administrator can transfer cluster operations back to it by commanding the Cluster Service to perform a failback operation. The administrator can even configure failback to occur automatically at a designated time, such as during less-busy evening hours.
Some administrators will build clusters using different servers. A more powerful server acts as the primary node, while a less powerful (and less expensive) server acts as the failover node. This technique allows you to build less expensive clusters. In the event of a failure, the performance of clustered applications might not be as good, but the applications will continue to run until you can repair the more powerful primary node.
Different Cluster Configurations
At first glance, clusters seem to be an awfully expensive way to provide fault tolerance on a network. After all, one pretty powerful server is just sitting there most of the time, waiting for its cluster partner to fail. These so-called active-passive clusters are very expensive, and few companies choose to use them. Instead, many companies prefer to build active-active clusters. In these clusters, each node performs useful work, while the other nodes provide a failover option. In effect, you're really building two clusters out of the same machines. One server is the active node for one cluster, and the passive for another. The other server is the passive node for the first cluster, and the active for the second. This technique requires a separate external storage subsystem for each active node, as shown in Figure 3.16.
Figure 3.16 Each active node in an active-active cluster uses a unique cluster name and IP address.
In the event that one node fails, the survivor will become the active node for both clusters, taking control of both external storage subsystems, both cluster names, both cluster IP addresses, and doing the work of two servers. While it may not perform as quickly as two separate servers could, this situation is better than losing critical network services due to a server failure.
In order to maintain acceptable performance in the event of a node failure, plan your clusters so that each node only has to work at 50-60% of capacity under normal conditions. That leaves enough headroom to handle the extra workload when a failure occurs.
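A quick arithmetic check shows why the 50-60% rule matters: in an active-active pair, the surviving node must absorb its partner's entire workload, so the two normal loads must add up to no more than one server's capacity. This Python fragment is just that arithmetic, with loads expressed as whole percentages.

```python
def survivor_load(load_a, load_b):
    """Combined load (in percent) on the surviving node after its
    partner fails and it takes over both clusters' workloads."""
    return load_a + load_b

print(survivor_load(55, 55))  # 110 -> overloaded; clients will see slowdowns
print(survivor_load(50, 45))  # 95  -> the survivor can carry both workloads
```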
The active-active concept can be extended to 3- and 4-way clusters. Again, a separate external storage subsystem is required for each active node, as shown in Figure 3.17. Clustered applications must support 3- and 4-way clustering. For example, Exchange 2000 Server supports 4-way clusters, but only allows 3 active nodes per cluster. At least one node must remain passive in order for failover and failback to function correctly.
Only the Datacenter edition of Windows Server supports 3- and 4-way clustering. In addition, any applications you install may need to specifically support 3- and 4-way clustering.
Figure 3.17 In 3- and 4-way clusters, each active node uses a unique cluster name and IP address.
Creating and Administering Clusters
At first glance, creating a new cluster seems easy enough: Simply install the Cluster Service and run through the Create Cluster wizard on the first cluster node. The wizard will ask you for some key pieces of information:
The unique name that you want the cluster to use.
The unique IP address that you want the cluster to use.
Which network interface on the first node will be used for client communications, and which will be used for cluster communications.
Which drive letter represents a shared external storage subsystem.
Which drive letter will be used for the quorum resource. This must also be on a shared external storage subsystem, although it doesn't have to be the same one as used for application data.
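The wizard's questions amount to a small configuration record. This Python sketch (field names and values are entirely illustrative) captures them, along with a couple of the sanity checks the wizard performs, such as parsing the IP address and requiring separate client and heartbeat networks.

```python
import ipaddress

# Illustrative answers to the Create Cluster wizard's questions above.
cluster_config = {
    "cluster_name": "CLUSTER1",      # unique name for the cluster
    "cluster_ip": "192.168.1.50",    # unique IP address for the cluster
    "client_nic": "Public",          # interface for client communications
    "heartbeat_nic": "Private",      # interface for cluster communications
    "shared_data_drive": "S:",       # on the external storage subsystem
    "quorum_drive": "Q:",            # shared, but may differ from data drive
}

def validate(cfg):
    """Minimal sanity checks, loosely mirroring what the wizard enforces."""
    ipaddress.ip_address(cfg["cluster_ip"])           # must parse as an IP
    assert cfg["client_nic"] != cfg["heartbeat_nic"]  # separate networks
    # The quorum drive may or may not share a subsystem with the data
    # drive, so no check is made there.
    return True

print(validate(cluster_config))  # True
```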
On the second node, you simply point the configuration wizard at the quorum resource and tell it which network interfaces to use; the remaining cluster configuration information is read from the quorum resource.
Unfortunately, the reality of configuring a new cluster isn't quite as easy as the theory. That's because clustering is heavily dependent on cluster-compatible hardware, and that hardware usually requires special configuration that Windows simply can't help you with. While the correct steps to configure a cluster will differ depending on your server hardware vendor, here's a list of general issues that you'll have to be prepared for:
All hardware must be certified as cluster compatible. If you use uncertified hardware, Microsoft may not provide you with product support if you run into clustering problems. Your hardware vendor can tell you if the hardware you've selected is compatible with Windows clustering.
External storage subsystems are usually connected to add-on storage adapter cards, rather than to the SCSI adapter that may be built in to your server. Those add-on cards often have to be installed in a specific expansion slot, and may require specialized drivers in order for clustering to work.
External storage subsystems usually have to be configured and formatted prior to installing the cluster service. That means you'll have to use the vendor's configuration utility, which is usually provided on CD-ROM along with the add-on storage adapter card.
Your hardware vendor may provide alternate instructions for installing the cluster service and running the configuration wizard. For example, IBM Netfinity servers using the IBM ServeRAID 4L storage adapter required you to start the cluster configuration wizard, cancel it at a specific point, run a special IBM-provided utility, and then re-run the wizard. These vendor-specific steps aren't usually easy to figure out without specific instructions, so make sure your vendor can provide the correct cluster setup steps.
Quirks in server hardware can make cluster installation so complex that many vendors offer preconfigured clusters. These usually consist of two servers, which are shipped with Windows already installed and configured for clustering. The vendors also offer on-site setup services to make sure your cluster gets up and running smoothly. I strongly recommend that you take advantage of these services: When I worked with one of Dell's first cluster offerings, two of their engineers spent an entire day and night getting the cluster up and running.
Most hardware vendors have gotten their cluster offerings down to a science, since clustering has now been available for more than five years. But hardware still plays such a crucial role that I'd rather the vendor's engineers spent their time getting a new cluster up and running, rather than me spending my time!
Once your cluster is up and running, you can manage it using the Cluster Manager, an MMC snap-in. When you run Cluster Manager, you'll need to give it a server name to connect to. Keep in mind that the cluster itself has a server name, which will connect you to the cluster's active node. For that reason, I recommend you tell Cluster Manager to connect as follows:
Under normal circumstances, connect to the cluster name. That way you're assured of connecting to the cluster's active node.
If you want to manually fail the cluster over to the passive node (if, for example, you need to perform maintenance on the active node), connect directly to the passive node. From there, order the failover by instructing the active node to fail over; the passive node takes control of the appropriate shared resources, and when the failover is complete, the node you're connected to is the cluster's new active node.
If you have problems connecting to the cluster's name, try to connect directly to the server that you believe is the active node. If you're successful, then you need to check your network's name resolution systems to make sure the cluster name is correctly registered.
Having problems resolving the cluster's name and IP address? See "Cluster name resolution problems" in the "Troubleshooting" section at the end of this chapter.
Windows includes a number of applications and functions which can be clustered, and many other Microsoft and third-party applications work well in a clustered environment. For example, Windows enables you to create file shares on a cluster by using the Cluster Manager. These file shares must point to files located on the cluster's shared storage subsystem, and the file shares will be handled by the cluster's active node. You can also create clustered print shares by using Cluster Manager, provided that the appropriate print drivers are installed on each cluster node.
Other cluster applications fall into two categories: cluster-aware applications and clusterable applications. Cluster-aware applications are specifically designed to take advantage of Windows clustering. These applications usually provide a specialized installation routine that installs the application on both cluster nodes at once, and automatically creates the necessary shared cluster resources in Cluster Manager. Cluster-aware Microsoft applications include SQL Server Enterprise Edition, Exchange Server Enterprise Edition, and some components of Commerce Server 2002.
Clusterable applications don't specifically support clustering, but work in such a way that they can be clustered if you want them to. Clusterable applications must exhibit the following characteristics:
The application must be a client-server or multi-tier application, and must communicate exclusively via TCP/IP.
The application's client piece must provide a minimum timeout of 30 seconds for network communications failures, and must be able to reconnect to the server piece after a timeout occurs. This behavior ensures that the client can continue to function after a cluster failover occurs.
The application's server piece must allow you to store the application's data files on one logical drive, while storing the program files on another. This ensures that the application can be installed on a cluster node, and that the application's data can be located on a cluster's shared storage subsystem.
The application's server piece must run as a Windows service. This behavior enables the Cluster service to stop and start the application on the appropriate cluster nodes.
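The timeout-and-reconnect behavior required of the client piece can be sketched as follows. This is an illustrative sketch only; the function and parameter names are hypothetical, not part of any real clustering API:

```python
def call_with_reconnect(send_request, connect, retries=3):
    """Sketch of the client-side behavior a clusterable application's
    client piece needs: tolerate a network timeout, reconnect, and
    retry, so the client survives a cluster failover."""
    conn = connect()
    for attempt in range(retries):
        try:
            return send_request(conn)
        except TimeoutError:
            # The active node may have failed over; reconnect (the
            # cluster name now resolves to the surviving node) and retry.
            conn = connect()
    raise ConnectionError("server unreachable after failover retries")

# Simulate a failover: the first request times out, the retry succeeds.
calls = {"n": 0}
def fake_connect():
    return "connection"
def fake_send(conn):
    calls["n"] += 1
    if calls["n"] == 1:
        raise TimeoutError("failover in progress")
    return "ok"

result = call_with_reconnect(fake_send, fake_connect)
```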
Third-party software vendors will indicate whether their products are cluster-aware. You can also look for Microsoft's "Designed for Windows Advanced Server" or "Designed for Windows Datacenter Server" logos: Software carrying one of those designations has been tested for cluster compatibility.
Internet Information Services
Internet Information Services, or IIS, is the Web server platform included with every copy of Windows Server. Far more than just a plain Web server, IIS provides the capability to act as a File Transfer Protocol (FTP) server, a rudimentary Simple Mail Transport Protocol (SMTP) server, and as a basic Network News Transport Protocol (NNTP) server. While not all of the .NET Enterprise Servers require IIS, all of them provide better functionality if IIS is available, and most provide additional features, such as Web-based administration, that rely on IIS.
IIS is also the portion of the Windows operating system most targeted for security attacks, and is one of the most publicly watched pieces of software Microsoft has ever released. Its tight integration with the operating system gives IIS capabilities that no other Web server has; unfortunately, that same integration can be used against you if IIS is compromised. For that reason, it's important that you understand exactly how IIS operates, and what features it provides, so that you can make an informed decision about the role IIS will play on your network.
When released, Windows .NET Server will provide a much more secure version of IIS. Much of that security comes from the fact that Windows .NET Server will not enable IIS by default, as previous versions of Windows have done. Many of the complaints leveled at IIS result from the fact that the software is installed by default in a relatively insecure configuration; by requiring administrators to actively install and configure IIS, Windows .NET Server will avoid "I didn't know it was there!" security problems.
IIS consists of a number of subsystems, each of which provides a specific type of Internet publishing capability. IIS itself acts as a framework in which these four subsystems operate. The subsystems are
Web publishing (HTTP)
File transfer (FTP)
Email transfer (SMTP)
Newsgroups (NNTP)
The basic unit of management within IIS is the site. A site is a single logical entity to which users connect to interact with a publishing subsystem. For example, when you connect to http://www.Microsoft.com using your Web browser, you are connecting to a Web site running under IIS. When you use an FTP client to connect to ftp.Microsoft.com, you're connecting to an FTP site running under IIS. A single Windows server can host multiple sites of the same or different types. On low-traffic intranet Web servers, for example, it's not uncommon for a single server to host a separate Web site for each department in the company. Each site can have its own configuration, Web pages, security permissions, and so forth. Figure 3.18 shows the basic IIS architecture, and how multiple sites can be hosted on a single computer.
High-traffic Internet Web sites usually use only a single Web site per server, simply because it's all the server can do to keep up with the one Web site. In fact, really high-traffic Internet Web sites often use multiple IIS servers to host a single Web site, distributing the incoming traffic among themselves by using a product such as Application Center.
Figure 3.18 Each IIS site can be stopped and started independently, as if each was running on a dedicated computer.
IIS also provides a programming interface called the Internet Server Application Programming Interface, or ISAPI. ISAPI allows software developers to customize IIS's behavior and to extend IIS's functionality. ISAPI applications are usually written as DLL files. Each IIS site can implement its own list of ISAPI applications, and you can also configure a list of global ISAPI applications that are used by all sites running under IIS. ISAPI applications that process information before IIS sends it to the requesting client are referred to as filters. Figure 3.19 shows an example of this process. In the example, a user requests a Web page from an IIS Web site. IIS reads the Web page from the server's hard disk, and then passes the Web page to an ISAPI filter. The ISAPI filter does whatever it wants with the page, and then passes the information back to IIS, which transmits the finalized page to the user's Web browser.
Figure 3.19 If you use multiple ISAPI filters, you can configure the order in which they execute, to ensure that they work properly together.
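The ordered filter pipeline described above can be sketched in a few lines. This is an illustrative sketch, not the real ISAPI interface; the filter functions are toy stand-ins:

```python
def apply_filters(page, filters):
    """Sketch of the filter pipeline: IIS passes the page through each
    filter in its configured order, and each filter receives the
    previous filter's output."""
    for f in filters:
        page = f(page)
    return page

# Two toy "filters": one expands a server-side tag, one appends a footer.
expand_tag = lambda p: p.replace("<%DATE%>", "01/01/2002")
add_footer = lambda p: p + "<!-- served by IIS -->"

result = apply_filters("<HTML><%DATE%></HTML>", [expand_tag, add_footer])
```

Because each filter sees the previous filter's output, the execution order you configure matters: swapping the two filters above would place the footer inside the text the first filter scans.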
The most commonly used ISAPI extension is asp.dll, which implements Microsoft's Active Server Pages technology.
Active Server Pages and ASP.NET
Microsoft introduced Active Server Pages, or ASP, in version 3.0 of IIS. ASP is implemented as an ISAPI extension, which processes all Web pages with an .asp filename extension. The extension scans through the Web page looking for specific tags, which enclose programming code. Once the extension finds the appropriate tags, it executes the programming code, returning the results to IIS for transmission to the client. Listing 3.1 shows a sample ASP page, which displays the current date and time.
Listing 3.1 Sample ASP page
<HTML>
<BODY>
<% Response.Write Date() & Time() %>
</BODY>
</HTML>
In 2002, Microsoft released an all-new version of ASP called ASP.NET. ASP.NET files have an .aspx filename extension, and are processed by a different ISAPI extension. Thanks to that architecture, a single IIS Web site can process both ASP and ASP.NET pages: .asp pages are processed by the ASP ISAPI extension, and .aspx pages are processed by the ASP.NET ISAPI extension. ASP.NET has the same purpose as ASP (to enable dynamic, server-side Web pages) but uses a completely different programming model. The new programming model is based on Microsoft's .NET Framework, and is designed to provide faster execution of dynamic Web pages and easier programming techniques for software developers.
Creating Web Sites
Creating a new Web site is easy: Just open the Internet Services Manager console (a preconfigured MMC console), right-click the IIS item in the left pane, and select New from the pop-up menu. Then, select the type of site you want to create: Web, FTP, SMTP, or NNTP.
You'll only be able to create new sites if you've installed the appropriate subsystem on the computer. For example, if you haven't installed the SMTP service, you won't be able to create new SMTP sites. To change the installed subsystems, use Add/Remove Programs from the Control Panel. Select Add/Remove Windows Components, locate the Internet Information Services entry, and modify the subcomponents of IIS to include the FTP, Web, SMTP, or NNTP subsystems, as appropriate for your needs.
Many of the .NET Enterprise Servers will attempt to create their own new Web site when you install them, or modify an existing site. For example, Exchange 2000 Server tries to set up Outlook Web Access under the Default Web Site. Read the product's installation guide carefully, and make sure that you've already installed the correct IIS subsystems to support the product's installation prerequisites.
Because IIS can host multiple sites of the same type, it needs some way to distinguish between them. That way, when a user connects to the server, IIS will know which site the user is trying to reach. When a user attempts to connect to a Web site, their browser sends three pieces of information: a target IP address, a host header, and a port number. Each IIS Web site must be configured with a unique combination of IP address, host header, and port number. For example, suppose that you create a new Web site on a server with only one IP address. Your first Web site can use the server's IP address, a blank host header (meaning that IIS won't check the host header at all), and the default Web port of 80. When you create a second Web site, though, you'll need to change at least one of those parameters. You have three choices:
Use a unique IP address. This is a popular choice, as it provides the easiest use and most compatibility. You can add an additional IP address to the Windows server, and then configure the new Web site to use the new IP address, a blank host header, and the default port of 80.
Use a unique port number. You can always change the port number that IIS uses. However, if your Web site uses any port number other than 80, your users will have to specify the port number in the URL they type into their Web browsers. For example, if you configure a Web site to use port 8080 (a popular alternative to 80), users will need to use a URL that includes the port number, such as http://www.company.com:8080.
Use a unique host header. When Web browsers attempt to access a Web site, they send along the name of the Web site they're trying to reach in a host header. IIS can read the host header from the request and direct the connection to the appropriate Web site. Host headers provide an easy way to host multiple Web sites with a single IP address. Unfortunately, host headers rely on version 1.1 of the HTTP protocol. Older Web browsers don't support version 1.1, and some Internet proxy servers strip version 1.1 information from outgoing requests, which limits the effectiveness of host headers. Note that host headers can't be used in conjunction with the encryption provided by Secure Sockets Layer (SSL), since the headers themselves are encrypted and can't be read.
To change either the IP address or port of a Web site, simply modify the site's properties, as shown in Figure 3.20. You can select an IP address from a drop-down list, which contains all of the IP addresses assigned to the server. You can also change the port number. To change the host header used by a site, you'll need to edit the advanced configuration properties, which are shown in Figure 3.21.
Figure 3.20 Select "All Unassigned" from the IP address drop-down list to have a Web site respond to all IP addresses that are not already assigned to another site.
Figure 3.21 Be sure to test host header functionality with the Web browsers your users will be using to ensure that no compatibility problems exist.
FTP sites, SMTP sites, and NNTP sites all follow the same rules, although they only allow you to configure a unique combination of IP address and port number, since none of those protocols support a host header.
Windows includes Certificate Services, an optional component that enables you to implement your own Public Key Infrastructure (PKI), which is a system of certification authorities that can issue digital certificates for various purposes, or to extend another company's PKI into your organization. The purpose of Certificate Services is to issue digital encryption certificates to computers, services, and individuals within your organization.
A certificate is a two-part, or asymmetric, digital encryption key. One part of the key is referred to as a public key, and is intended to be freely distributed. The other part of the key is referred to as a private key and is intended to be used only by the certificate holder. The exact uses of the public and private keys depend on the use for which the certificate was issued. Certificates are only issued for a few common purposes:
Digital signatures. Digital signatures are most often used for email. In this use, the sender's private key is used to encrypt a copy of an email. The email is sent unencrypted to the recipients, along with the encrypted copy of the email. Recipients obtain the sender's public key from a certificate authority (such as Certificate Services), and use the public key to decrypt the encrypted copy of the email. If the decrypted copy matches the original unencrypted email, then the recipients know that the sender is the one who signed it (because only the sender's private key could have encrypted the message), and that the message is unchanged since it was sent (because the two copies match).
Data encryption. In this use, a sender obtains the recipient's public key, and uses it to encrypt data. The recipient can then use their private key to decrypt the data. The data is secure, because only the protected private key can be used to decrypt the data. Anyone can encrypt the data, though, so data encryption does not take the place of a digital signature, which verifies the identity of the sender.
Identification. This use is very similar to digital signatures. Typically, a very small amount of data is encrypted by the sender, using his private key. The recipient, often a server that is attempting to authenticate the sender, uses the sender's public key to decrypt the data, thus verifying the sender's identity. If the recipient is unable to decrypt the data, then the sender's identity is unverified.
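The sign-with-private-key, verify-with-public-key pattern behind both digital signatures and identification can be illustrated with textbook RSA arithmetic. This is a toy sketch only: the numbers are the classic small-prime example, nowhere near a secure key size, and a real implementation would sign a cryptographic hash of the message:

```python
# Toy RSA keypair (p=61, q=53): public modulus n, public exponent e,
# private exponent d. Illustration only -- not a secure key.
n, e, d = 3233, 17, 2753

def sign(digest):
    # "Encrypt" the digest with the private key; only the holder of d can.
    return pow(digest, d, n)

def verify(digest, signature):
    # Anyone can decrypt with the public key and compare.
    return pow(signature, e, n) == digest

sig = sign(65)
ok = verify(65, sig)        # digest matches: sender and content verified
tampered = verify(66, sig)  # digest differs: message changed after signing
```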
Certificates are often used together. For example, one of the most common uses of certificates is to enable secure connections on the World Wide Web, through the Secure Sockets Layer (SSL) protocol. In SSL, a Web server is configured with a digital encryption certificate. When a user requests a secure connection, the server sends its certificate to the user's Web browser, which verifies it; that process tells the browser that the server belongs to the correct company, that the certificate is still valid, and so forth. The browser then generates a new, unique encryption key, called a session key, encrypts it with the server's public key, and sends it to the server, which decrypts it using its private key. Because only the server's protected private key can decrypt the session key, the key remains secret. The session key is used to encrypt further communications between the Web server and the client, thus protecting sensitive information such as credit card numbers.
Sometimes, the user's Web browser may receive an expired certificate, or the certificate might not match the Web site that the user believes they are accessing. In those cases, the browser will display an error message, alerting the user to the discrepancy and allowing them to decide whether or not to proceed.
Certificate Services is capable of issuing certificates for all of these uses. Certificate Services is actually designed to be used in a hierarchy, to help distribute the load of certificate processing. Remember, Certificate Services must not only issue certificates, but also be available to verify certificates' authenticity and to provide public keys upon request. Figure 3.22 shows a typical Certificate Services hierarchy.
Figure 3.22 The Certificate Services server at the top of the hierarchy is said to be the root for the organization.
Certificate hierarchies are possible because of a fourth type of digital certificate: a certificate-signing certificate. This type of digital certificate enables a Certificate Services computer to issue certificates for the three common uses. In a certificate hierarchy, the root server creates its own certificate-signing certificate. Child servers obtain a certificate-signing certificate from the root, and can in turn issue certificate-signing certificates to their own child servers, and so forth.
It's important to understand that a digital certificate is essentially a statement of authentication. In other words, if you obtain an identification certificate, everyone who accepts it is assuming that the issuing server did a good job of physically verifying your identity before handing you the certificate. Commercial certificate authorities (CAs) often require you to visit a notary or other official to verify your identify before they will issue you a certificate.
When a recipient receives a digital certificate as part of a communications exchange, the recipient has to decide whether or not they trust the CA that issued the certificate to have done a good job of verifying the certificate holder's identity. This trust is configured in a Certificate Trust List, or CTL. For example, Internet Explorer comes preconfigured with a number of CAs that it trusts, as shown in Figure 3.23. Internet Explorer will accept without question any certificate issued by a CA in the CTL. If you set up your own certificate hierarchy, you'll need to ensure that your users trust your certificate root, or CA, so that they'll accept the certificates you issue. There are three ways to get your CA added to users' CTL:
Within an Active Directory domain, you can configure Group Policies to add your CA's root certificate to the CTL of domain users. This is a great way to set up a trusted internal certificate hierarchy, but it doesn't do any good for users who aren't a part of your domain.
You can obtain a certificate-signing certificate from an already-trusted commercial CA. This means that the commercial CA is actually the root, and the commercial CA will usually require a review of your certificate-issuing procedures before it will issue you the certificate. The review process can be long and complex, but it's the best way to become trusted by the general population of the Internet.
You can simply try to convince your users that you should be trusted. When users receive a certificate issued by a non-trusted CA (such as yours), they will usually have the option to accept the certificate and add the CA to their CTL. If you've convinced users to do so, you'll be on their CTL until they choose to remove you. This technique is effective when you're dealing with a relatively small user population, such as vendors or customers who are accessing a private Internet site.
Figure 3.23 Internet Explorer includes most popular commercial CAs on its default CTL.
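The trust decision itself amounts to walking a certificate's issuer chain upward until a CA on the CTL is found. The sketch below models certificates as simple subject-to-issuer pairs; the CA and host names are hypothetical, and real chain validation also checks signatures, validity dates, and revocation:

```python
ctl = {"Commercial Root CA", "Contoso Root CA"}   # trusted roots (the CTL)

issuers = {                                # subject -> who signed it
    "www.contoso.com":    "Contoso Issuing CA",
    "Contoso Issuing CA": "Contoso Root CA",
    "Contoso Root CA":    "Contoso Root CA",      # self-signed root
}

def is_trusted(subject):
    """Walk up the issuer chain; accept only if we reach a CA in the CTL."""
    seen = set()
    while subject not in seen:
        if subject in ctl:
            return True
        seen.add(subject)
        subject = issuers.get(subject, subject)   # step to the issuer
    return subject in ctl
```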
Sometimes, a CA needs to revoke a certificate that it has issued. This commonly occurs when a user leaves an organization, or when the CA discovers that the certificate holder provided false information in order to obtain the certificate. Revoked certificates are added to a Certificate Revocation List, or CRL. Client computers can obtain the latest CRL from a CA, and use the CRL to automatically ignore any certificates contained in the list.
Installing Certificate Services
You install Certificate Services by using the Add/Remove Programs utility on the Control Panel, and selecting the Add/Remove Windows Components option.
After you install Certificate Services, you will not be able to rename the computer or change its domain membership. Make sure that you will not need to make these changes before installing Certificate Services.
When you install Certificate Services, you'll be prompted to select the type of CA you want to create. As shown in Figure 3.24, you can choose a standalone CA, which acts as a root server with a self-signed certificate-signing certificate, or a standalone subordinate CA, which requires an existing root server. If the server is a member of an Active Directory domain, you can also select one of the two Enterprise CA options. These work exactly the same as the standalone options, but store certificate information in Active Directory rather than on the local server.
Figure 3.24 As shown here, if you're not installing on a domain member, the Enterprise CA options won't be available.
If you're creating a root CA, you'll need to create the CA's information record. This includes information about the CA itself, the owning organization, and an administrative contact, as shown in Figure 3.25.
Figure 3.25 Be sure to fill in all of the requested information, or your CA's root certificate will be incomplete and possibly useless.
Once Certificate Services is installed, you can manage it using the Certification Authority MMC snap-in. Certificate Services creates a preconfigured MMC console with the snap-in for your convenience.
By default, Certificate Services sets up a Web site on the local server, which can be used to request certificates. Shown in Figure 3.26, this Web site is accessible from the URL http://servername/certsrv, and enables users to request basic certificate types through a Web-based interface. If you wish to accept certificate requests through other means, you'll need to obtain customized software that sends the requests directly to Certificate Services.
Figure 3.26 The default Web-based interface serves as a useful learning tool, and enables users in your organization to request basic types of certificates.
When users visit the default Certificate Services Web site, they will need to select the type of certificate they want to request. Figure 3.27 shows the certificate types that users can request by using the Web site.
Figure 3.27 The Certificate Services web site offers two types of certificates by default.
Once the user enters the necessary information to request a certificate, the request is submitted to Certificate Services. By default, Certificate Services accepts all requests and places them on hold pending administrator action. You can add customized certificate policy modules, which automate the certificate-issuing process. For example, you might create a certificate request process that asks users for personal information, such as their social security number. A customized policy module could then verify that information against a company database to check the user's identity, and automatically issue the certificate.
By default, however, you'll have to launch the Certification Authority console and view the Pending Certificates folder to see certificate requests. As shown in Figure 3.28, you can right-click any request to issue or deny the certificate.
Figure 3.28 Issuing a certificate moves it to the Issued Certificates folder. Denied certificates are held separately for later review, if necessary.
Once a certificate is issued, the user can visit the Certificate Services Web site, as shown in Figure 3.29, to collect and install the new certificate. Once installed, users can verify their certificates' information by using Internet Explorer's Certificates screen, as shown in Figure 3.30.
Figure 3.29 The Web site will also show users a list of any denied certificates, ensuring that users know the current status of each request.
Figure 3.30 Issued certificates include a private key, which is stored in the user's local computer.
Certificates form an important part of any network infrastructure, and are utilized by many of the .NET Enterprise Servers. For example, Exchange 2000 Server supports the use of digital certificates for message signing and encryption, and all of the Web-integrated .NET Enterprise Servers can benefit from the security offered by SSL and digital certificates.
Windows includes Active Directory, which is a new generation of directory services based on a number of industry standards. Active Directory, or AD, is designed as a scalable, extensible directory that can provide a central repository for all kinds of enterprise information. Out of the box, Windows itself can use AD to store user and group information, DNS information, digital certificates, and Distributed File System configuration information. Several of the .NET Enterprise Servers rely on AD, including Exchange Server and SharePoint Portal Server, which use AD to store user and security information. In the future, you can expect all of the .NET Enterprise Servers to at least provide support for AD integration, if they don't require it outright.
Internet Security and Acceleration (ISA) Server is a good example of a .NET Enterprise Server that can integrate with Active Directory, but doesn't have to. ISA Server provides additional features and functionality when used in conjunction with Active Directory.
Microsoft recognized that slower-than-expected adoption of AD was hurting sales of products such as ISA Server (and other products that need Active Directory to work best). So, in Windows .NET Server 2003, Microsoft introduced Active Directory Application Mode (AD/AM), which can provide Active Directory support for applications without requiring you to implement a full Active Directory domain for authentication of your users.
AD architecture can fill a book by itself, so in the next few sections I'll simply provide an overview of AD domains, forests, and trees, and of the tools you can use to administer an AD environment.
The basic unit of organization in AD is a domain. A domain represents a common administrative area, such as an entire company's users and computers. Large companies may break their administration down across geographic or departmental lines, resulting in multiple domains. These domains can be organized into trees, where a single parent (or root) domain supports a hierarchy of child domains, as shown in Figure 3.31. Multiple trees can be joined together into forests, which enables the members of various trees to access the resources in other trees. Figure 3.32 shows an example of multiple domain trees joined into a forest. Note that trees must be joined to a forest when the tree is created.
Figure 3.31 Note the naming convention that requires child domains to build upon the domain name of their parent.
Figure 3.32 The trees in a domain can be of unequal size. All that's needed is for the root of each tree to trust the other trees' roots.
Domains themselves can be broken down into organizational units, or OUs. OUs enable a domain administrator to delegate the administrative burden of the domain across the organization. For example, Figure 3.33 shows a single domain with several OUs. Each OU represents one of the company's departments. A domain administrator could delegate the authority to reset user passwords in each department to that department's administrative assistant. Doing so helps to spread the IT management workload, providing better service to end users and reducing the overall cost of ownership for the network.
Figure 3.33 All OUs in a domain share certain domain-wide policies, such as password length requirements, user naming conventions, and so forth.
AD is physically implemented by domain controllers, which are Windows server computers that have AD installed. You install AD by running dcpromo.exe on any Windows 2000 Server (or higher) computer. The DCPromo utility installs Active Directory, and enables you to create a new domain or join an existing one.
AD is a multi-master directory, which means that each domain controller (DC) contains a fully writable copy of the AD database. Each DC can accept changes to the directory, and it replicates those changes in a predetermined fashion with other DCs. This replication process is continuous and automatic between the DCs in a single physical site. If your domain is split amongst multiple physical locations, then you can configure AD with a site topology that reflects your locations' physical wide-area network (WAN) connectivity. Between sites, you can control the frequency of AD replication, to help make the most efficient use of your WAN bandwidth.
Because each DC contains a complete copy of the AD database, each DC is essentially equal. If one DC fails, clients can continue to function by contacting other DCs. There are, however, a few functions that are performed by only one DC in a domain or forest. These functions, or roles, are called Flexible Single Master Operations roles, or FSMOs. By default, the first DC in a forest holds all of the FSMOs. You can move the FSMOs to another DC to reduce the "all of your eggs in one basket" risk. If a DC holding a FSMO fails, the other DCs do not automatically assume the role; you need to manually transfer the role to another DC if the failed DC will not be returned to service. Networks can operate for a while without certain FSMOs, although certain network operations may be hindered. The FSMOs are
PDC Emulator. This FSMO emulates the function of a Windows NT domain Primary Domain Controller. It provides backward-compatibility for older client computers, and enables Windows NT Backup Domain Controllers to participate in an Active Directory domain. The PDC Emulator is also the authoritative source for time information within a domain. Each domain has its own PDC Emulator.
RID Master. The Relative ID Master is responsible for generating relative ID numbers, or RIDs, for new AD objects. Each domain has its own RID Master. Each domain controller in a domain periodically contacts the RID Master to obtain a block of IDs, which the domain controller then assigns to newly-created objects.
Schema Master. Only one DC in an entire forest holds this FSMO, which is responsible for handling any updates made to the AD database schema.
Infrastructure Master. This FSMO is responsible for updating object references for other domains. For example, when you rename a user account, and that user is a member of a group in another domain, the infrastructure master contacts the other domain with the user's new name. Each domain has its own Infrastructure Master.
Domain Naming Master. This FSMO controls the addition and removal of domains to the forest. Like the Schema Master, only one DC in an entire forest holds this FSMO.
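You can check which DCs hold the five roles, and move them, from the command line as well as from the snap-ins. Assuming you've installed the Windows Support Tools (which provide netdom), a session might look like the sketch below; NEWDC is a placeholder server name.

```
C:\> netdom query fsmo

C:\> ntdsutil
ntdsutil: roles
fsmo maintenance: connections
server connections: connect to server NEWDC
server connections: quit
fsmo maintenance: transfer PDC
fsmo maintenance: quit
ntdsutil: quit
```

The transfer commands perform a graceful handoff; if the current role holder has failed and won't return to service, ntdsutil's seize commands take the role instead.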
Another special function that a DC can hold is the global catalog, or GC. The GC is a special subset of the information in the AD database, and it is replicated across the entire forest. The GC contains information considered to be universally interesting: the attributes most commonly queried forest-wide, such as user email addresses and logon names, but not less frequently needed attributes such as a user's street address or office phone number. DCs acting as GC servers play a special role during the logon process, and provide important support functions for .NET Enterprise Servers such as Exchange Server. For example, in an Exchange 2000 Server environment, GC servers provide address book name resolution for mail clients. Increasing the number of GC servers in a site increases replication traffic, but may be necessary to support other network operations, such as those performed by some of the .NET Enterprise Servers.
Windows includes four graphical user interface tools, in the form of MMC snap-ins, for managing AD. They are
Active Directory Users and Computers. This snap-in enables you to manage user and computer accounts within AD, as well as manage OUs and other security and organizational objects.
Active Directory Sites and Services. This snap-in provides the capability to configure sites and inter-site replication, as well as certain site-wide services, such as site-based Group Policies.
Active Directory Domains and Trusts. This snap-in enables you to manage domains and the trusts between them. All domains within a tree (or a forest) trust one another by default, enabling users in any portion of the tree to access resources elsewhere in the tree. Between trees, however, you have to manually configure trusts.
Active Directory Schema. This seldom-used snap-in (which is not registered by default) enables you to view and modify the AD schema.
Do not attempt to modify the AD schema unless you know exactly what you're doing. A single wrong keystroke or mouse click can destroy Active Directory for an entire forest. By default, only members of the Schema Admins group have permission to modify the schema, so to protect the schema you should carefully control the user accounts in that group.
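Because the Active Directory Schema snap-in isn't registered by default, you have to register its DLL once before the snap-in appears in the MMC's Add/Remove Snap-in list:

```
C:\> regsvr32 schmmgmt.dll
```

After that, you can add the snap-in to a custom console just like any other.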
Active Directory administration is beyond the scope of this book. However, your local bookstore carries a number of AD administration titles designed for all different levels of skill and experience. I've also written an eBook titled Active Directory Administration Tips and Tricks, which you can download for free from http://www.aelita.com/ebook.
Way, way back in Windows history, a company named Citrix licensed the Windows NT 3.51 source code from Microsoft. They used the source code to produce a customized version of Windows NT 3.51 called Citrix WinFrame. Microsoft cross-licensed the technology for Windows NT 4.0, producing Windows NT 4.0 Terminal Server Edition. For Windows 2000, Microsoft incorporated the technology into the base operating system, and Terminal Services, an optional component of nearly every edition of Windows 2000 and .NET Server, was born.
How Terminal Services Works
Essentially, Terminal Services is remote control software, not wholly unlike software such as pcAnywhere or Carbon Copy. The purpose of Terminal Services is to enable a user (or administrator) to sit at a desktop computer and work with the server as if they were sitting right in front of that server. Users see the server's desktop, have full control of the mouse and keyboard, and so forth. By default, Terminal Services even installs a printer that points to the user's locally installed printer, so that anything the user prints from a server-based application will come out of their local printer.
Windows .NET Server builds on the printer map-back capability by mapping the user's local hard drives to drive letters on the server, playing sounds from the server on the user's computer, and mapping the user's serial port to the server's. These capabilities require the new Terminal Services client, which is included with Windows XP Professional.
The big difference between Terminal Services and products such as pcAnywhere is that Terminal Services enables multiple users to connect to a server at the same time. Each user gets their own private desktop, and isn't aware of the other users' actions. Citrix named this capability "MultiWin."
In Windows 2000 and higher, Terminal Services operates in one of two modes. Remote Admin mode allows up to two administrators to connect to a server (this mode is installed by default in Windows .NET Server, and is optional in Windows 2000). Application Server mode allows multiple end users to connect to the server to run applications. Application Server mode requires the purchase of additional Terminal Services licenses, and the installation of a Terminal Services Licensing Server on your network. One popular use of Application Server mode is to allow remote users to connect and run complex business applications on the server. Because the application actually runs on the server, not the user's computer, the amount of network bandwidth required is very small, and quite suitable for a dial-up connection.
Users connect to a Terminal Services computer through the Terminal Services client, which uses the Remote Desktop Protocol (RDP) to communicate with the server. The client is configurable, enabling users to specify screen resolution, color depth, and other options, which can help optimize RDP traffic and improve the Terminal Services experience over a slow dial-up connection.
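With the newer Remote Desktop Connection client included in Windows XP, many of those options can be set right on the command line. The switches below are a sketch; SERVER1 is a placeholder, and you should verify the switches with mstsc /? on your client version.

```
:: Connect to SERVER1 in an 800x600 window
mstsc /v:SERVER1 /w:800 /h:600

:: Connect to SERVER1 full-screen
mstsc /v:SERVER1 /f
```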
Windows .NET Server includes Terminal Services in Remote Admin mode by default, and for good reason: Remote administration gives you more flexibility and control as an administrator. I recommend installing Terminal Services in Remote Admin mode on all of your Windows 2000 Server computers, as well (unless you've already installed it in Application Server mode, of course).
Managing the .NET Enterprise Servers can be much easier when you can remotely access your servers' desktops from any location, including your office desktop computer, your home machine, or even a portable device running Windows CE over a wireless network connection.
Installing Terminal Services
Installing Terminal Services is straightforward: Simply use the Add/Remove Programs utility in the Control Panel, select the Add/Remove Windows Components option, and then select Terminal Services. Select the mode you want, Remote Admin or Application Server, and Windows takes care of the rest. In Windows .NET Server, of course, it's already installed in Remote Admin mode, so you only need to perform the installation process if you want to switch to Application Server mode.
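If you prefer scripted builds, the optional-component manager (sysocmgr) can add Windows components unattended. Treat the fragment below as a sketch: the TSEnable component name is my assumption based on Windows 2000 unattended-setup conventions, so verify it against the unattended-setup reference for your Windows version before relying on it.

```
; ts.txt - passed to:  sysocmgr /i:%windir%\inf\sysoc.inf /u:c:\ts.txt
[Components]
TSEnable = On   ; assumed component key for Terminal Services
```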
When Terminal Services installs, it creates a special folder with the Terminal Services client for both 16-bit and 32-bit Windows environments (Windows CE versions are available for download from Microsoft's Web site). The client installation folder is located in \System32\Clients. A utility in that folder can create floppy disks for 32- or 16-bit Terminal Services clients. The same utility is available through a Start menu shortcut named Terminal Services Client Creator.
Configuring Terminal Services
Windows provides two MMC snap-ins for Terminal Services administration. The first, Terminal Services Configuration, enables you to configure default Terminal Services settings such as the maximum amount of time a user can connect, how long disconnected sessions remain open (waiting for a user to reconnect), and so forth, as shown in Figure 3.34.
Figure 3.34 Terminal Services Configuration can also be used to configure server-wide options, such as the default level of security permissions.
The other snap-in, Terminal Services Manager, enables you to monitor and manage current user connections on the server. If you run Terminal Services Manager from within a Terminal Services session, you have the option to shadow other user connections. Shadowing enables you to watch what other users are doing, and, with their permission, take control of their session to perform tasks. Shadowing is a useful help desk tool, because it enables a support technician to "stand over the user's shoulder" and help solve problems, without having to physically go to the user's desk or location. As shown in Figure 3.35, Terminal Services Manager also enables you to view connection statistics and other information for each current session.
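Much of the same session information is also available from the command line within a Terminal Services session. The sketch below lists the server's current sessions and then shadows one by its session ID; the ID shown is a placeholder, and by default the shadowed user is still prompted for permission.

```
:: List current sessions and their IDs
query session

:: Shadow (remotely control) session ID 2
shadow 2
```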
Figure 3.35 You can use Terminal Services Manager to send pop-up messages to current sessions, alerting users to upcoming maintenance or asking them to log off (for example).