
How Ethernet Works

If you ask a bona fide network engineer how Ethernet works, they'd say something like this: "Using MAC addresses to distinguish between machines, Ethernet transmits frames of data across baseband cables using CSMA/CD." This is an accurate description, and after we go over all the concepts in it, it will even make sense. You can go ahead and underline that definition and come back in 10 minutes. You'll be a new person.

Media Access Control (MAC) Address

Every Ethernet network card has, built into its hardware, a unique six-octet (48-bit) hexadecimal number that differentiates it from all other Ethernet cards in the universe. This unique name allows data to be sent specifically to one computer to the exclusion of all others. The MAC address of my desktop computer, for example, is 00-C0-4F-33-AF-7C. You can find out what yours is by running ipconfig /all from a DOS prompt, or ifconfig -a from a UNIX prompt.
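
As a quick illustration (the helper names here are my own invention), you can pull a MAC address apart in Python to see the vendor prefix (OUI) in the first three octets, plus the two flag bits that live in the first octet:

```python
import re

def parse_mac(mac: str) -> bytes:
    """Parse a MAC address written as six hex octets (e.g. 00-C0-4F-33-AF-7C)."""
    octets = re.split(r"[-:.]", mac)
    if len(octets) != 6:
        raise ValueError(f"expected 6 octets, got {len(octets)}")
    return bytes(int(o, 16) for o in octets)

def describe_mac(mac: str) -> dict:
    raw = parse_mac(mac)
    return {
        "oui": "-".join(f"{b:02X}" for b in raw[:3]),  # vendor prefix (first 3 octets)
        "multicast": bool(raw[0] & 0x01),              # I/G bit: group vs. individual address
        "locally_administered": bool(raw[0] & 0x02),   # U/L bit: overridden vs. burned-in
    }

print(describe_mac("00-C0-4F-33-AF-7C"))
# {'oui': '00-C0-4F', 'multicast': False, 'locally_administered': False}
```

The broadcast address FF-FF-FF-FF-FF-FF has its multicast bit set, which is why every node on the segment accepts it.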

The Ethernet Frame

The basic unit of data transmission on an Ethernet network is a frame. The Ethernet frame defines the data layout at Layer 2, the data link layer of the OSI model. The length of an Ethernet frame is normally no less than 64 octets and no more than 1,518 octets. The exception to the minimum length is half-duplex gigabit Ethernet (802.3z), which uses carrier extension to stretch short transmissions to 512 octets on the wire while the frame itself still starts at 64 octets. The format of an 802.3 Ethernet frame consists of these components:

  • A preamble, which consists of 56 bits of alternating 0s and 1s. The preamble provides all the nodes on the network a signal to synchronize with.

  • A start frame delimiter, which marks the start of a frame. The start frame delimiter is 8 bits long with the pattern 10101011.

  • A destination, the MAC address of the network node to which the frame is addressed.

  • A source, the MAC address of the transmitting node.

  • A Length/Type field two octets long. If the value in this field is 1500 (0x05dc hex) or less, it indicates the number of octets to follow in the data field; this is the IEEE 802.3 length interpretation. If the value is 1536 (0x0600 hex) or greater, it identifies the network-layer protocol; network engineers know this as the original Ethernet 2.0 (Ethernet II) frame type. In most networks today, this value will be 2048 (0x0800 hex), which is the assigned protocol type for IP. The other most commonly found protocol type values are 33079 and 33080 (0x8137 and 0x8138 hex), which are used for Novell IPX.

  • Data, the reason the frame exists and the information being sent across the network. The minimum length of this field is 46 octets, and the maximum is 1500. If the Data field is less than 46 octets long, then the pseudofield Pad is used.

  • Pad, a "field" that is used to lengthen the data field to a minimum size of 46 octets. The pad is normally filled with a zero-octet pattern.

  • A Frame Check Sequence field four octets long. Ethernet frames use CRC-32, a 32-bit cyclic redundancy check, to test for errors. CRC-32 is a more accurate error-detection method than simple checksums.
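
Putting those fields together, here's a minimal sketch in Python of how an Ethernet II frame body might be assembled, including the pad and the CRC-32 frame check sequence. (The preamble and start frame delimiter are generated by the hardware, so they're omitted; the function name is my own, and zlib.crc32 is used because it implements the same reflected CRC-32 polynomial Ethernet uses.)

```python
import struct
import zlib

def build_frame(dst: bytes, src: bytes, ethertype: int, payload: bytes) -> bytes:
    """Assemble the frame from destination address through FCS."""
    if len(payload) < 46:                  # pad the data field up to the 46-octet minimum
        payload = payload + b"\x00" * (46 - len(payload))
    body = dst + src + struct.pack("!H", ethertype) + payload
    fcs = zlib.crc32(body)                 # CRC-32 over destination through data/pad
    return body + struct.pack("<I", fcs)   # FCS goes out least-significant octet first

frame = build_frame(b"\xff" * 6, b"\x00\xc0\x4f\x33\xaf\x7c", 0x0800, b"hello")
print(len(frame))  # 64 -- the minimum legal frame length
```

A 5-octet payload gets padded to 46 octets, and 6 + 6 + 2 + 46 + 4 lands exactly on the 64-octet minimum.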

Two alternate Ethernet frame types are far less common to see on your network: the IEEE 802.3 Subnetwork Access Protocol (SNAP) and Novell 802.3 Raw Encapsulation frame types. Unless you have legacy network operating systems on your network, such as older NetWare servers (pre-4.x) or Apple Macintosh computers speaking AppleTalk (802.3 SNAP), you most likely will not run across these frame types. Another source for these alternate frames is certain network-connected printers with IPX enabled. The networked printer will try to autodetect the frame types in use by broadcasting its existence using all four frame formats (Ethernet II, Novell raw 802.3, 802.2 LLC, and 802.3 SNAP).

The normal maximum length of an Ethernet frame does not include the preamble or the start frame delimiter. The frame check sequence is calculated starting with the destination MAC address through the data/pad field. With a maximum data payload of 1,500 octets, 6-octet source and destination address fields (12 octets together), a 2-octet length/type field, and a 4-octet frame check sequence, the total is 1,518 octets. The 802.3z standard added an "extension" field that simply appends bits to the end of the frame to bring the total carrier event up to a minimum of 512 octets, if necessary.

In 1998, the IEEE 802.3 working group published the 802.3ac standard that added an optional four octets used for virtual local area network (VLAN) tagging. VLAN tagging is achieved by using the reserved value 0x8100 in the location that would normally be the Length/Type field. The next two octets are composed of the following three fields:

  • User Priority field—This field is 3 bits in length and is used to define the priority of the Ethernet frame. This is utilized to bring quality of service (QoS) classifications to a variable-length packet-based networking medium such as Ethernet.

  • Canonical format indicator—This field is 1 bit in length. It indicates whether the MAC addresses are in canonical form, and matters mainly when bridging to Token Ring networks.

  • VLAN Identifier field—This field is 12 bits in length. It is used to identify the VLAN associated with the Ethernet frame.

The Length/Type field will then follow the inserted VLAN tag.
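
As a sketch (the function names are my own), the 802.1Q tag's bit fields can be packed and unpacked in Python like this:

```python
import struct

TPID = 0x8100  # reserved Length/Type value that signals a VLAN tag

def make_vlan_tag(priority: int, cfi: int, vlan_id: int) -> bytes:
    """Pack the 4-octet tag: TPID, then 3-bit priority, 1-bit CFI, 12-bit VLAN ID."""
    assert 0 <= priority < 8 and cfi in (0, 1) and 0 <= vlan_id < 4096
    tci = (priority << 13) | (cfi << 12) | vlan_id
    return struct.pack("!HH", TPID, tci)

def parse_vlan_tag(tag: bytes):
    tpid, tci = struct.unpack("!HH", tag)
    assert tpid == TPID, "not a VLAN-tagged frame"
    return tci >> 13, (tci >> 12) & 1, tci & 0x0FFF

print(parse_vlan_tag(make_vlan_tag(priority=5, cfi=0, vlan_id=100)))
# (5, 0, 100)
```

The 3 + 1 + 12 bits fit exactly into the two octets that follow the 0x8100 marker, which is why the whole tag adds only four octets to the frame.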

Carrier Sense Multiple Access/Collision Detection (CSMA/CD)

The big thing about Ethernet is CSMA/CD, which stands for carrier sense multiple access/collision detection. This governs how computers talk to one another over an Ethernet network. It means this in plain English:

  • Computers listen before they talk (carrier sense). If another computer is talking, they keep their mouths shut.

  • Any computer on the network can talk as long as no one else is talking (multiple access).

  • If two computers talk at the same time, a collision occurs. The computers recognize that they both tried to talk at the same time (collision detection). They wait a random amount of time and then retransmit.

Several key parameters are needed for CSMA/CD to work correctly.

First, there must be a carrier signal on the medium that other nodes can sense (thus the frame's preamble).

Next, the transmitting node must be capable of completely transmitting the minimum packet size and reliably detecting whether a collision has occurred. The minimum time necessary to transmit the minimum packet size is known as the slot time. For 10Mbps and 100Mbps Ethernet, the slot time is 512 bit times, which not coincidentally happens to be the minimum length of an Ethernet packet (64 octets is 512 bits). For gigabit Ethernet, the slot time is 4096 bit times.

If a collision occurs, several things happen. If a transmitting node detects a collision, it transmits a jam signal that is 32 bit times in length. The node then stops transmitting and waits a random number of slot times before trying to transmit again. This is known as back-off. Each collision that occurs while trying to transmit the same packet will trigger an increase in the back-off delay that the node may wait before trying to transmit again.

The back-off delay is chosen from a range that doubles with each collision (binary exponential back-off), capped by the back-off limit of 10 doublings. Therefore, for the first collision the node may wait 0 or 1 slot times to retransmit. After the second collision, it will wait 0 to 3 slot times, and so on, up to a maximum possible 1,023 slot times. If the frame cannot be transmitted after 16 attempts, the packet is discarded and will need to be handled by higher layers in the OSI stack.
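
The back-off rule can be sketched in a few lines of Python (the names are my own, not from any standard):

```python
import random

BACKOFF_LIMIT = 10   # the exponent stops growing after the 10th collision
MAX_ATTEMPTS = 16    # the frame is discarded after 16 failed attempts

def backoff_slots(collision_count: int) -> int:
    """Pick a random back-off delay, in slot times, after the nth collision."""
    if collision_count > MAX_ATTEMPTS:
        raise RuntimeError("excessive collisions: frame discarded")
    k = min(collision_count, BACKOFF_LIMIT)
    return random.randrange(2 ** k)  # 0 .. 2^k - 1 slot times

for n in (1, 2, 10, 16):
    print(f"collision {n}: wait 0..{2 ** min(n, BACKOFF_LIMIT) - 1} slot times")
```

Running it shows the range growing from 0–1 slot times after the first collision to a ceiling of 0–1,023 from the tenth collision onward.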

The Inter-Frame Gap (IFG) is the minimum quiet period between frames; it should be 96 bit times in length. This means that the IFG is 9.6 microseconds for 10Mbps Ethernet, 960 nanoseconds for 100Mbps Ethernet, and 96 nanoseconds for gigabit Ethernet.
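
Those IFG figures fall straight out of the bit rate, as this small (illustrative) calculation shows:

```python
def bit_time_ns(rate_bps: float) -> float:
    """Duration of one bit on the wire, in nanoseconds."""
    return 1e9 / rate_bps

for name, rate in [("10 Mbps", 10e6), ("100 Mbps", 100e6), ("1 Gbps", 1e9)]:
    ifg_ns = 96 * bit_time_ns(rate)  # the inter-frame gap is always 96 bit times
    print(f"{name}: IFG = {ifg_ns:g} ns")
```

At 10Mbps a bit lasts 100 ns, so 96 bit times is 9,600 ns (9.6 microseconds); each tenfold speed increase shrinks the gap tenfold.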

Any frame received by a node that is less than 64 octets in length is automatically assumed to be a fragment from a collision and is discarded. The astute among you will notice that timing is critical in an Ethernet network and that the slot time effectively defines the maximum physical size of your network segments. If a transmitting station detects a collision after the slot time has elapsed—that is, after the first 64 octets of its packet have already gone out on the wire—this is known as a late collision. Late collisions are bad and require a reworking of your Ethernet architecture.

CSMA/CD-Free, or Full-Duplex

In 1997, the IEEE 802.3 working group published the 802.3x standard that defined the specifications for full-duplex Ethernet. In half-duplex Ethernet operation, only one node can transmit at a time—thus the need for CSMA/CD. This is similar to most speakerphones; the person at the far end cannot hear you speak while they are speaking, because the microphone is automatically squelched so that the remote person does not receive feedback.

Full-duplex Ethernet can be used only in point-to-point links connecting two nodes. Because it does not use CSMA/CD, it cannot be used on a shared medium. The cabling medium must support two simultaneous transmission paths; the separate transmit and receive pairs in UTP cabling normally suffice.

Full-duplex Ethernet, therefore, is not bound by the limits of CSMA/CD; thus, physical network segment size can be significantly larger. Additionally, because there is no possibility of collisions, full-duplex Ethernet links achieve a greater throughput. Finally, because both nodes can theoretically transmit at the same time, many marketing specialists list the total aggregate speed of the full-duplex link as twice that of half-duplex links.

Popular Ethernet Is Baseband

Ethernet is a baseband network, which means that it has only one channel for data transfer. As with a telephone line, all the computers on the network have to share this one data path. The opposite of baseband is broadband, in which several data streams can travel through a wire at the same time using a technique known as frequency-division multiplexing. What the heck is frequency-division multiplexing? Well, it's shifting each signal into its own frequency band, making it distinctly different from the other signals so that the receiving hardware can distinguish them—the way your cable TV box receives a hundred channels at once over the same wire.
