
Setting Up a Front-End NLB Cluster

Date: Nov 21, 2003


Why do some Web sites sustain lots of activity with little or no downtime? The secret may be that the site is clustered to spread the workload across multiple servers.

I can't remember the last time either msn.com or msnbc.com was down. Can you? Have you ever wondered why?

For some time now, Microsoft and other large companies that offer services over the Web have had their sites clustered and load balanced, which helps to keep the sites up and running constantly. In this article, you'll learn how to set up a front-end cluster for your own system, using the Windows Server 2003 (Win2k3) Network Load Balancing (NLB) driver. Then maybe your site can stay up and stable, as these sites do.

Clustering Basics

Win2k3 offers several types of clustering services; for this article, we'll focus on the simple NLB cluster scenario.

Businesses typically cluster their servers into two parts: a front-end cluster and a back-end cluster. NLB is ideal for a front-end cluster configuration, and Microsoft Cluster Service (MSCS) is commonly used for a back-end cluster configuration.

Front-End Cluster Configuration

Think of the front end as what the user first encounters when reaching a Web site that has been clustered. The process goes something like this:

  1. The user issues a static request—that is, a request for a Web page with static content (content that's not generated dynamically, such as from a database).

  2. The NLB front-end clustered server group routes the user's request to an available Internet Information Services (IIS) server.

  3. The server handles the request.

  4. The process is repeated with each subsequent request, but with a different server in the group handling the new request.

Because the requests are divided among the group of clustered servers, thus improving both performance and availability of the Web site, the cluster is said to be load balanced.

NOTE

All these processes are transparent to the user, and it appears that just one machine is handling the requests; therefore, the set of clustered servers is often referred to as a virtual server.
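
To make the idea concrete, here's a minimal sketch, in Python, of how a virtual server can rotate incoming requests across its nodes. It's purely conceptual—the real work is done by the NLB driver at the network layer—and the node names and request paths are hypothetical.

    # A conceptual sketch (not the NLB driver itself) of how a "virtual
    # server" can rotate incoming requests across its cluster nodes.
    # The node names and request paths are hypothetical.
    from itertools import cycle

    nodes = ["web1", "web2"]        # the two IIS servers in the cluster
    next_node = cycle(nodes)        # simple round-robin rotation

    requests = ["/index.htm", "/products.htm", "/about.htm", "/contact.htm"]

    for url in requests:
        server = next(next_node)    # each new request lands on a different node
        print(f"{url} -> handled by {server}")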

NLB is good for a front-end Web cluster configuration because it takes incoming TCP/IP requests and distributes them across several machines at the network layer, but it's not ideal for a back-end clustered configuration. Why not?

Suppose a machine in this front-end cluster must be taken off the network to add new hardware or for maintenance. The NLB cluster won't route requests to that machine while it's offline, instead using the other servers in the group to handle requests. When you plug the machine back into the network, NLB detects it.

All well and good. But what happens if a service, such as IIS, is no longer functioning on a machine in the cluster? NLB can't detect that the service isn't running; instead, the NLB cluster keeps sending requests to a machine that no longer has the Web server running. Not only will this error cause a performance penalty, but the user who happens to get that machine will get a "Page cannot be displayed" message. Therefore, you shouldn't use NLB for mission-critical services such as messaging or database services.
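
If you do want to catch a dead Web service on an otherwise healthy node, you need a check outside NLB. Here's a rough sketch, assuming hypothetical host names on the xeonlabs.com domain, of a script an administrator might run to spot a node whose IIS is no longer answering.

    # A rough out-of-band health check (not part of NLB) that flags a node
    # whose Web service has stopped answering. Host names are hypothetical.
    import urllib.request
    import urllib.error

    nodes = ["http://web1.xeonlabs.com/", "http://web2.xeonlabs.com/"]

    for node in nodes:
        try:
            with urllib.request.urlopen(node, timeout=5) as resp:
                print(f"{node} OK (HTTP {resp.status})")
        except (urllib.error.URLError, OSError) as err:
            # NLB would keep routing traffic here; this script can at least alert you.
            print(f"{node} FAILED: {err}")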

Back-End Cluster Configuration

Think of the back end as the servers that handle requests coming from your Web servers—for example, requests to an email server or a database server. The process goes something like this:

  1. The user issues a dynamic request—that is, a request for a Web page with dynamic content generated from a database.

  2. The NLB front-end clustered server group routes the user's request to an available IIS server.

  3. IIS processes the request with a call to a database server on the MSCS configured back-end cluster.

The two clustered configurations—front end and back end—work together to create a "layered" cluster using the two types of clustering services. Makes sense, doesn't it? You have to access the front end to reach the back end.

If a service or machine in a back-end cluster fails, another in the cluster takes control and handles all the requests (unlike a load-balanced configuration, where several machines share the requests). In essence, a back-end cluster is more reliable than a front-end cluster; it's just not as scalable. Windows 2000 Advanced Server allowed an MSCS cluster with two nodes; in Win2k3 Enterprise Edition, you can have up to eight nodes per cluster.
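
The difference between the two models is easy to see in a sketch. Below is a purely conceptual Python illustration of active/passive failover—one node owns the workload until it fails, and only then does the standby take over—using hypothetical node names.

    # A conceptual sketch of active/passive failover (the MSCS model), as
    # opposed to load balancing: one node owns the workload until it fails.
    # Node names are hypothetical.
    nodes = ["sql-node-a", "sql-node-b"]
    active = nodes[0]

    def handle_request(query, failed_nodes):
        global active
        if active in failed_nodes:
            # Failover: the surviving node takes over the whole workload.
            active = next(n for n in nodes if n not in failed_nodes)
        return f"'{query}' served by {active}"

    print(handle_request("SELECT 1", failed_nodes=set()))           # sql-node-a
    print(handle_request("SELECT 2", failed_nodes={"sql-node-a"}))  # failover to sql-node-b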

Setting Up Your Front-End NLB Cluster

It's time to get our hands dirty. We'll set up a simple front-end NLB cluster using two Windows 2003 servers with IIS 6.0 installed.

NOTE

I won't go into details on every dialog box option in the following example; I'll just provide enough information to get you started.

Use the checklist below to make sure that you have all the proper hardware and software for setting up the NLB cluster:

  - Two servers running Windows Server 2003
  - IIS 6.0 installed on each server
  - Two network adapters in each server
  - A crossover cable to connect the two private adapters

This is all the hardware and software you need for the cluster, but you'll also need a DNS server running on your subnet if you're using NetBIOS over TCP/IP (enabled by default).

NOTE

NetBIOS over TCP/IP is the network component that performs computer name–to–IP address mapping. This service often checks your DNS server for machine–to–IP mappings when resolving a name request on your network.

Before entering parameters for the network connections, use the Add or Remove Programs applet in Control Panel (Add/Remove Windows Components) to make sure that IIS 6.0 is installed and running on each box (it's not installed by default). Each server requires two network cards: one for the private network (the heartbeat), and the other for the public network. The private adapter requires a crossover cable that runs between what will be the two clustered servers. The public adapter is the one with Internet connectivity—the adapter that handles all client requests.

TIP

In Win2k, there was no convenient way to set up or manage NLB clusters from one server. Win2k3 offers a new tool for this purpose, called the Network Load Balancing Manager, NLB Manager for short. I recommend learning more about NLB Manager for managing large NLB clusters. Because we're only setting up a two-node cluster in this example, we'll enter the parameters manually. This will give you a better understanding of what NLB Manager does automatically. (After we set up the cluster, you can connect to it using the NLB Manager if necessary.)

Private Connections

Assuming that you have everything ready to go, let's get started by entering the parameters for the private network. Designate one of the network cards in each box to be the private adapter. This adapter doesn't accept client requests; it communicates with the other cluster node. The numbers entered here must be private—and not in use by any other network cards on the network. A common practice is to use the 10.0.0.0 private IP range. Start by entering the TCP/IP properties for this connection as shown in Figure 1.

Figure 1 Entering the private connection parameters.

No gateway is necessary, as this connection doesn't need Internet connectivity. You don't need a DNS server for this connection, but I like to use one on the network anyway. Confirm these parameters, and rename this connection as Private. Now set up your other private connection on the other server. Don't use the same IP address; instead, use the next one in the range (for example, 10.0.0.2).
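
Once both private adapters are configured, it's worth confirming that the heartbeat link actually works. A quick sketch: run the standard Windows ping command from the 10.0.0.1 machine against its partner (the script below just wraps that call in Python).

    # Wraps the standard Windows ping command to confirm the heartbeat link,
    # run from the 10.0.0.1 machine against its partner at 10.0.0.2.
    import subprocess

    partner = "10.0.0.2"
    result = subprocess.run(["ping", "-n", "2", partner],
                            capture_output=True, text=True)
    if result.returncode == 0:
        print(f"Private link to {partner} is up.")
    else:
        print(f"No reply from {partner}; check the crossover cable and IP settings.")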

Public Connections

Now we can move on to the public connections. Enter your network card's public IP address in the TCP/IP Properties dialog box for this connection. Figure 2 shows that I used another private IP range, but only because I'm working behind a firewall. The numbers I entered also have an external IP mapped to them; your situation may be similar. Be sure to include your Internet gateway and DNS server.

Figure 2 Entering the public connection parameters.

Load Balancing Properties

In the Network Load Balancing Properties dialog box, the settings on the Cluster Parameters tab should look something like those shown in Figure 3. In the Cluster IP Configuration section, enter the cluster's virtual IP address—the IP number that ties together all machines in the cluster. Any machine in the cluster takes on this address when handling a request. Use an IP number that's not used anywhere else on your network. Because both servers in the cluster will use this address, the settings here must be identical to those of the other machine in the cluster.

Next, enter the subnet mask and cluster domain name. The cluster domain name is simply a way of keeping track of a cluster in Active Directory. It has nothing to do with a domain that resolves to a Web site. Just think of it as an internal name for your cluster, much like the name your machine gets when it joins a domain. For the other parameters, accept the default selections.

NOTE

Check Multicast support only if you're using a single network card in each server; that doesn't apply in our example.

Figure 3 Entering the NLB cluster parameters.

On the Host Parameters tab, enter the server's static IP address—not the cluster IP address—and subnet mask in the Dedicated IP Configuration section (see Figure 4). Notice that these are the same numbers you entered in the TCP/IP Properties dialog box for the connection. Leave the unique host identifier set to 1. Each machine in the cluster needs a unique ID, and this is it; when setting up the parameters for the other server, we'll use 2 as the unique host identifier.

Figure 4 Entering the NLB host parameters.
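
Before moving on, it can help to double-check the values you've entered on both machines: the cluster-wide settings must match exactly, while the host ID and dedicated IP must be unique per node. The sketch below expresses those rules as simple assertions; the IP addresses are hypothetical, and the cluster name is the one used in this example.

    # Expresses the parameter rules as assertions: cluster-wide values must
    # match on both nodes; per-host values must differ. The IP addresses are
    # hypothetical; the cluster name is the one used in this example.
    host1 = {"cluster_ip": "192.168.1.50", "cluster_name": "cluster1.xeonlabs.com",
             "host_id": 1, "dedicated_ip": "192.168.1.10"}
    host2 = {"cluster_ip": "192.168.1.50", "cluster_name": "cluster1.xeonlabs.com",
             "host_id": 2, "dedicated_ip": "192.168.1.11"}

    # Cluster parameters must be identical on every node...
    assert host1["cluster_ip"] == host2["cluster_ip"]
    assert host1["cluster_name"] == host2["cluster_name"]
    # ...while the per-host values must be unique.
    assert host1["host_id"] != host2["host_id"]
    assert host1["dedicated_ip"] != host2["dedicated_ip"]
    print("Cluster and host parameters are consistent.")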

There's one last task before you can move on to setting up the other server's public connection. You may have received a message indicating that you should enter the cluster IP in the TCP/IP settings for the connection. To do this, click the Advanced button in the TCP/IP Properties dialog box for the connection, and add your cluster IP to the IP address list for this connection (see Figure 5).

Figure 5 Adding the cluster IP to TCP/IP.

That's it! You're done with this machine. Set up the public connection for your other server in the same manner. All that changes is the static IP address for the server. The cluster parameters should be identical.

When both machines have been configured, go to your DNS server and enter a zone for the cluster, using the cluster's DNS name (in our example, cluster1.xeonlabs.com). Tie the cluster IP to this name using an A record, and make sure that each machine in the cluster is tied to its public static IP in DNS.
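
A quick way to confirm the A record is to resolve the cluster's DNS name and compare the answer against the cluster IP. The sketch below uses the example's cluster name; the expected IP is hypothetical, so substitute your own.

    # Resolves the cluster's DNS name and compares the answer with the
    # cluster IP. The name comes from this example; the IP is hypothetical.
    import socket

    cluster_name = "cluster1.xeonlabs.com"
    expected_ip = "192.168.1.50"

    try:
        _, _, addresses = socket.gethostbyname_ex(cluster_name)
        if expected_ip in addresses:
            print(f"{cluster_name} resolves to the cluster IP: {addresses}")
        else:
            print(f"{cluster_name} resolves to {addresses}; check the A record.")
    except socket.gaierror as err:
        print(f"Name lookup failed: {err}; check the DNS zone.")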

Wrapping Up

Okay, now you're really done. Test your cluster IP by pinging it. To test it with a Web site, assign the cluster IP on each server to a domain name for the Web site you want to test. For this test to work, the Web site's domain name must be registered with a DNS server. Make sure that the domain name is mapped to the cluster IP address. If you don't have a registered domain name to work with yet, no worries; just use the cluster IP address instead. The files for the same Web site should be on each server. Have a little fun with it—bring the site up, and then unplug the primary node. Notice that it only takes a couple of seconds before the second cluster node takes over. To test failback, plug the primary node back in and unplug the second node.
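
If you'd like something more precise than watching the browser, the sketch below polls the site through the cluster IP once a second and reports whether it answers, which makes the brief outage during failover easy to see. The cluster IP shown is hypothetical.

    # Polls the Web site through the cluster (virtual) IP once a second and
    # reports whether it answers, so the gap during failover is easy to see.
    # The cluster IP shown is hypothetical; substitute your own.
    import time
    import urllib.request
    import urllib.error

    cluster_url = "http://192.168.1.50/"

    while True:
        start = time.time()
        try:
            with urllib.request.urlopen(cluster_url, timeout=3):
                status = "up"
        except (urllib.error.URLError, OSError):
            status = "DOWN"
        print(f"{time.strftime('%H:%M:%S')}  site is {status}")
        time.sleep(max(0, 1 - (time.time() - start)))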

Something I didn't get into in this article is how to control traffic distribution across the nodes in your cluster. For example, you may want to have a node in your cluster handle all the traffic for a given user at any given time. Sessioned Web applications, such as those using Active Server Pages, require the user to keep making requests to the same server so that the application can maintain the user's session state.

NOTE

The term sessioned applications refers to Web applications that maintain user information across Web pages, using the Web server's memory.

The evolution of ASP to ASP.NET has fixed this problem: sessioned ASP.NET applications work with clusters, maintaining session state across the machines in a cluster. To control how much traffic—and, if necessary, on which ports—each cluster node will handle, see the Port Rules tab in the NLB Properties dialog box (the dialog box was shown earlier, in Figure 3).
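
The affinity setting on the Port Rules tab is what makes that stickiness possible: with single affinity, all requests from a given client IP go to the same node. Here's a conceptual sketch of that idea—hashing the client address to pick a node—not the NLB driver's actual algorithm; the node names and client addresses are made up.

    # A conceptual sketch of "single" affinity: hash the client's IP so that
    # every request from that client lands on the same node, which is what a
    # sessioned ASP application needs. Not the NLB driver's actual algorithm;
    # node names and client addresses are made up.
    import hashlib

    nodes = ["web1", "web2"]

    def pick_node(client_ip: str) -> str:
        digest = hashlib.md5(client_ip.encode()).hexdigest()
        return nodes[int(digest, 16) % len(nodes)]

    for ip in ["203.0.113.7", "203.0.113.7", "198.51.100.20"]:
        print(f"{ip} -> {pick_node(ip)}")   # repeated client IPs map to the same node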
