This chapter is from the book

On the surface, a switch looks much like a hub, although the price tag might be a giveaway—switches are considerably more expensive than hubs. The main reason for the price disparity is that switches do much more than hubs and offer many more advantages. Figure 3.4 shows an example of a 32-port Ethernet switch. If you refer to Figure 3.2, you'll notice few differences between the appearance of the high-density hub and that of this switch.

Figure 3.4 A 32-port Ethernet switch. (Photo courtesy TRENDware International, http://www.trendware.com.)

As with a hub, computers connect to a switch via a length of twisted-pair cable. Multiple switches can be used, like hubs, to create larger networks. Despite their similarity in appearance and their identical physical connections to computers, switches offer significant operational advantages over hubs.

As discussed earlier in the chapter, on a hub, data is forwarded to all ports, regardless of whether the data is intended for the system connected to the port. This arrangement is very inefficient; however, it requires very little intelligence on the part of the hub, which is why hubs are inexpensive.

Rather than forwarding data to all the connected ports, a switch forwards data only to the port to which the destination system is connected. It looks at the Media Access Control (MAC) addresses of the devices connected to it to determine the correct port. A MAC address is a unique number that is programmed into every NIC. By forwarding data only to the system to which the data is addressed, the switch dramatically decreases the amount of traffic on each network link. In effect, the switch channels (or switches, if you prefer) data between the ports. Figure 3.5 illustrates how a switch works.
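The learn-and-forward behavior can be sketched in a few lines of Python. This is a minimal illustration of the idea, not a real switch implementation: the class, the MAC-table dictionary, and the simplified frame (just source and destination addresses) are all invented here for the example. One detail worth noting: when the destination MAC is not yet in the table, a switch floods the frame to every port except the one it arrived on.

```python
# A simplified sketch of how a switch decides where to forward a frame.
# The class, MAC table, and port numbering are illustrative only.

class Switch:
    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_table = {}          # MAC address -> port number

    def receive(self, in_port, src_mac, dst_mac):
        """Handle a frame arriving on in_port; return the outgoing port(s)."""
        # Learn: remember which port the source MAC was seen on.
        self.mac_table[src_mac] = in_port
        # Forward: if the destination is known, use only that port;
        # otherwise flood to every port except the one the frame came in on.
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]
        return [p for p in range(self.num_ports) if p != in_port]

sw = Switch(num_ports=4)
print(sw.receive(0, "aa:aa", "bb:bb"))   # destination unknown: flood -> [1, 2, 3]
print(sw.receive(1, "bb:bb", "aa:aa"))   # aa:aa was learned on port 0 -> [0]
print(sw.receive(0, "aa:aa", "bb:bb"))   # bb:bb now known on port 1 -> [1]
```

Once both systems have sent a frame, the switch delivers traffic between them on exactly one port each way, which is the traffic reduction the text describes.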

Figure 3.5 How a switch works.

You might recall from the discussion of Ethernet networking in Chapter 2, "Cabling and Connectors," that collisions occur on the network when two devices attempt to transmit at the same time. Such collisions cause the performance of the network to degrade. By channeling data only to the connections that should receive it, switches reduce the number of collisions that occur on the network. As a result, switches provide significant performance improvements over hubs.

Switches can further improve on the performance of hubs by using a mechanism called full-duplex. On a standard network connection, the communication between the system and the switch or hub is said to be half-duplex. In a half-duplex connection, data can be either sent or received on the wire, but not at the same time. Because switches manage the data flow on the connection, a switch can operate in full-duplex mode—it can send and receive data on the connection at the same time. In a full-duplex connection, the maximum bandwidth is double that of a half-duplex connection—for example, 10Mbps becomes 20Mbps and 100Mbps becomes 200Mbps. As you can imagine, the difference in performance between a 100Mbps network connection and a 200Mbps connection is considerable.
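The doubling is simple arithmetic: the link still runs at its rated speed in each direction, but full-duplex lets both directions run at once. The short Python sketch below makes the figures concrete; the function name and the idea of quoting a single aggregate number are conveniences for this example, not standard terminology.

```python
# Maximum aggregate throughput of a link, in Mbps.  Full duplex doubles
# the total because data can flow in both directions at the same time;
# half duplex carries traffic in only one direction at a time.

def max_throughput_mbps(link_speed_mbps, full_duplex):
    return link_speed_mbps * 2 if full_duplex else link_speed_mbps

for speed in (10, 100):
    print(f"{speed} Mbps link: half-duplex {max_throughput_mbps(speed, False)} Mbps, "
          f"full-duplex {max_throughput_mbps(speed, True)} Mbps")
```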

The secret of full-duplex lies in the switch. As discussed previously in this section, switches can isolate each port and effectively create a single segment for each port on the switch. Because there are only two devices on each segment (the system and the switch), and because the switch is calling the shots, there are no collisions. No collisions means no need to detect collisions—thus, a collision-detection system is not needed with switches. The switch drops the conventional carrier-sense multiple-access with collision detection (CSMA/CD) media access method and adopts a far more selfish (and therefore efficient) communication method.

To use a full-duplex connection, you basically need three things: a switch, the appropriate cable, and a NIC (and driver) that supports full-duplex communication. Given these requirements, and the fact that most modern NICs are full-duplex-ready, you might think everyone would be using full-duplex connections. However, the reality is a little different. In some cases, the driver is simply not configured to use full-duplex. For example, NetWare 4 required that a parameter be passed when the driver was loaded to take advantage of a full-duplex connection.


It's important to remember that a full-duplex connection has a maximum data rate of double the standard speed, and a half-duplex connection is the standard speed. The term half-duplex can sometimes lead people to believe that the connection speed is half of the standard, which is not the case. A simple way to remember this is to think of the half-duplex figure as half the full-duplex figure, not half the standard figure.


The process that switches perform is referred to as microsegmentation.

All Switches Are Not Created Equal

Having learned the advantages of using a switch and looked at the speeds associated with the network connections on the switch, you could assume that one switch is just as good as another. This is not the case. Switches are rated by the number of packets per second (pps) they can handle. Good-quality, high-end switches can accommodate 90 million pps and higher. When you're buying network switches, be sure to look at the pps figures before making a decision.
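To put pps figures in perspective, you can estimate the packet rate a single port must sustain at wire speed. The rough calculation below assumes minimum-size 64-byte Ethernet frames plus the 8-byte preamble and 12-byte inter-frame gap each frame occupies on the wire; it is a back-of-the-envelope sketch, not a vendor benchmark.

```python
# Back-of-the-envelope: packets per second needed to keep one port
# busy at wire speed with minimum-size Ethernet frames.  On the wire,
# each 64-byte frame also carries an 8-byte preamble and is followed
# by a 12-byte inter-frame gap.

def wire_speed_pps(link_mbps, frame_bytes=64, overhead_bytes=8 + 12):
    bits_per_frame = (frame_bytes + overhead_bytes) * 8
    return link_mbps * 1_000_000 / bits_per_frame

print(round(wire_speed_pps(100)))    # roughly 148,810 pps per 100 Mbps port
print(round(wire_speed_pps(1000)))   # roughly 1.49 million pps per gigabit port
```

Multiply the per-port figure by the port count and you can see why a high-end switch needs a rating in the tens of millions of packets per second.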

Troubleshooting Network Connection Speed

Most NICs can automatically detect the speed of the network connection they are connected to. However, although the detection process is normally reliable, on some occasions it may not work correctly. If you are troubleshooting a network connection and the autodetect feature is turned on, try setting the speed manually (preferably to a low speed) and then give it another go. If you are using a managed switch, which is discussed later in this chapter, you might have to do the same thing at the switch end of the connection.

Switching Methods

Switches use three methods to deal with data as it arrives:

  • Cut-through—In a cut-through configuration, the switch begins to forward the packet as soon as it has read the destination address; it does not wait for the entire packet to arrive. No error checking is performed on the packet, so the packet is moved through very quickly. The downside of cut-through is that because the integrity of the packet is not checked, the switch can propagate errors.

  • Store-and-forward—In a store-and-forward configuration, the switch waits to receive the entire packet before beginning to forward it. It also performs basic error checking.

  • Fragment-free—Building on the speed advantage of cut-through switching, fragment-free switching reads only the first 64 bytes of the packet before forwarding it. Collision fragments are always smaller than 64 bytes, so this check is enough to avoid propagating them without waiting for the entire packet.

As you might expect, the store-and-forward process takes longer than the cut-through method, but it is more reliable. In addition, the delay caused by store-and-forward switching increases with the packet size. The delay caused by cut-through switching is always the same—only the address portion of the packet is read, and it is always the same size, regardless of the size of the data packet. The difference in delay between the two methods is considerable. On average, cut-through switching is 30 times faster than store-and-forward switching.
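The size dependence is easy to model. The sketch below assumes a 100Mbps link and that a cut-through switch forwards after reading the 14-byte Ethernet header (which contains the destination address); both figures are illustrative assumptions, and real switches add processing overhead this model ignores.

```python
# Rough serialization-delay model for the two switching methods on a
# 100 Mbps link.  The 14-byte header read by the cut-through switch and
# the zero processing overhead are simplifying assumptions.

LINK_MBPS = 100
BITS_PER_BYTE = 8

def delay_us(bytes_read, link_mbps=LINK_MBPS):
    """Microseconds needed to clock bytes_read in from the wire."""
    return bytes_read * BITS_PER_BYTE / link_mbps

# Cut-through forwards after the header, whatever the packet size;
# store-and-forward must take in the whole packet first.
for packet_size in (64, 512, 1518):
    print(f"{packet_size:5d} B  cut-through {delay_us(14):6.2f} us  "
          f"store-and-forward {delay_us(packet_size):6.2f} us")
```

The cut-through column never changes, while the store-and-forward column grows in direct proportion to the packet size, which is exactly the behavior described above.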


The time it takes for data to travel between two locations is known as the latency. The higher the latency, the bigger the delay in sending the data.

It might seem that cut-through switching is the obvious choice, but today's switches are fast enough to be able to use store-and-forward switching and still deliver high performance levels. On some managed switches, you can select the switching method you want to use.
