This chapter is from the book

Transport Virtualization—VNs

When segmenting the network pervasively, all the scalability, resiliency, and security functionality present in a nonsegmented network must be preserved and in many cases improved. As the number of groups sharing a network increases, the network devices must handle a much higher number of routes. Any technologies used to achieve virtualization must therefore provide the necessary mechanisms to preserve resiliency, enhance scalability, and improve security.

Chapter 2, "Designing Scalable Enterprise Networks," discussed network design recommendations that provide high availability and scalability through a hierarchical and modular design. Much of the hierarchy and modularity discussed relies on the use of a routed core. Nevertheless, some areas of the network continue to benefit from the use of Layer 2 technologies, such as VLANs, ATM, or Frame Relay circuits. Thus, a hierarchical IP network is a combination of Layer 3 (routed) and Layer 2 (switched) domains. Both the Layer 2 and the Layer 3 domains must be virtualized, and the virtualized domains must be mapped to each other to create VNs.

One key principle in the virtualization of the transport is that it must address the virtualization of the network devices and their interconnection. Thus, the virtualization of the transport involves two areas of focus:

  • Data-path virtualization— Refers to the virtualization of the interconnection between devices. This could be a single-hop or multiple-hop interconnection. For example, an Ethernet link between two switches provides a single-hop interconnection that can be virtualized by means of 802.1q VLAN tags; for Frame Relay or ATM transports, separate virtual circuits provide data-path virtualization. An example of a multiple-hop interconnection would be that provided by an IP cloud between two devices. This interconnection can be virtualized through the use of multiple tunnels (generic routing encapsulation [GRE] for example) between the two devices.
  • Device virtualization— Refers to the virtualization of a networking device or the creation of logical devices within the physical device. This includes the virtualization of all processes, databases, tables, and interfaces within a device.
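
As a concrete illustration of the data-path side, the following is a minimal Cisco IOS sketch of both interconnection types; the interface numbers, VLAN IDs, and addresses are hypothetical:

```
! Single-hop data-path virtualization: an 802.1q subinterface
! carries one VLAN's traffic over the shared physical link
interface GigabitEthernet0/1.20
 encapsulation dot1Q 20
 ip address 10.1.20.1 255.255.255.0
!
! Multiple-hop data-path virtualization: a GRE tunnel carries
! one segment's traffic across an IP cloud between two devices
interface Tunnel0
 ip address 10.99.0.1 255.255.255.252
 tunnel source Loopback0
 tunnel destination 172.16.2.2
```

Additional subinterfaces or tunnels would be defined in the same way, one per segment to be kept separate.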

In turn, within each networking device, there are at least two planes to virtualize:

  • Control plane— Refers to all the protocols, databases, and tables necessary to make forwarding decisions and maintain a functional network topology free of loops or unintended blackholes. This plane could be said to draw a clear picture of the topology for the network device. A virtualized device must possess a unique picture of each VN it is to handle, hence the requirement to virtualize the control-plane components.
  • Forwarding plane— Refers to all the processes and tables used to actually forward traffic. The forwarding plane builds forwarding tables based on the information provided by the control plane. As with the control plane, the forwarding plane must be virtualized so that each VN has its own unique forwarding table.

Furthermore, the control and forwarding planes can be virtualized at different levels, which map directly to different layers of the OSI model. For instance, a device can be VLAN aware and therefore virtualized at Layer 2, yet have a single routing table, Routing Information Base (RIB), and Forwarding Information Base (FIB), which means it is not virtualized at Layer 3. The different levels of virtualization come in handy, depending on the technical requirements of the deployment. Sometimes Layer 2 virtualization is enough (a wiring closet, for instance). In other cases, virtualization of other layers might be necessary.

For example, providing virtual firewall services requires Layers 2, 3, and 4 virtualization, plus the ability to define independent services and management on each virtual firewall, which some may argue is Layer 7 virtualization. We delve into firewall virtualization in Chapter 4. For now, we focus on the virtualization of the transport at Layers 2 and 3.

VLANs and Scalability

Time and experience have proven the scalability benefits of limiting the size of Layer 2 domains in a network. Much of this experience comes from campus networks, where highly resilient topologies with redundant links are possible. This link redundancy intrinsically creates network loops that must be controlled by mechanisms such as spanning tree. The broadcast nature of a Layer 2 domain is the main reason these redundant links behave as loops rather than as redundant active paths capable of load balancing. Hence, the lack of load balancing and the complexity involved in managing large, highly resilient spanning-tree domains make a routed infrastructure much more appropriate for large-scale, highly available networks. Thus, experience has taught us that meshed Layer 2 domains have their role in the network, but they must be kept small in scale. Keep in mind that we are referring to highly meshed, resilient Layer 2 domains such as those you would find in a campus. This type of problem is less common in the WAN, where point-to-point connections tend to be at the base of the architecture and are for the most part routed. Nevertheless, the introduction of technologies that extend Layer 2 domains over an IP infrastructure has brought many of the spanning-tree concerns to the table in the metro-area network (MAN) and WAN.

When you are virtualizing a network, it is tempting to revisit ideas such as end-to-end VLANs. After all, mapping a group of users to a specific VLAN to create an isolated workgroup was one of the original thoughts behind the creation of VLANs. Should the VLAN traverse the entire enterprise, we could say the transport has been virtualized. This type of solution will have all the scalability problems associated with large Layer 2 domains and is therefore not desirable.

Nevertheless, the use of VLANs has its place as a way of segmenting the Layer 2 portion of the network. In an enterprise campus, this is generally the mesh of links between the access and distribution layers. Remember, the recommendation is to reduce the size of the broadcast domains to something manageable, not necessarily to eliminate them, because too much IP subnet granularity would also represent a management challenge. So, to segment the access portion of the network, VLANs are very useful.

The network must preserve its hierarchy and therefore its routed core. As the periphery (access/distribution) continues to be switched (as opposed to routed), VLANs must be used for segmentation purposes. Thus, a VLAN in a wiring closet would represent the point of entry into a VN.

Because these VLANs are terminated as they reach the routed core, it is necessary to map them to segments created in the routed core. The next section looks into what is necessary in the core. From the access perspective, the VLANs must map to the corresponding segments created in the core to achieve an end-to-end VPN that spans both the switched and routed portions of the network.

We focus our analysis on a network with a routed core and a switched access. This model is widely adopted because it has been proven, optimized, and recommended by Cisco for many years.

Virtualizing the Routed Core

You can achieve the virtualization of the routed portion of the network in many ways. At the device level, the available traffic separation mechanisms can be broadly classified as follows:

  • Policy-based segmentation
  • Control-plane-based virtualization

Policy-Based Segmentation

Policy-based segmentation restricts the forwarding of traffic to specific destinations, based on a policy and independently of the information provided by the control plane. The policies are applied onto a single IP routing space. A classic example of this uses an access control list (ACL) to restrict the valid destination addresses to subnets in the VN.

Policy-based segmentation is limited by two main factors:

  • Policies must be configured pervasively.
  • Locally significant code points are currently used for policy selection.

The configuration of distributed policies can be a significant administrative burden, is error prone, and causes any update in the policy to have widespread impact.

The code point used for policy selection has traditionally been an IP address and therefore locally significant. Because of the diverse nature of IP addresses, and because policies must be configured pervasively, building policies based on IP addresses does not scale well. Thus, policy-based segmentation using IP addresses as code points has limited applicability. However, other code points could potentially be used. If the code point is independent of the IP addressing and globally significant (uniformly maintained throughout the network), all policies would look alike throughout the network, making their deployment and maintenance much simpler.

Policy-based segmentation with the tools available today (ACLs) can address the creation of VNs with many-to-one connectivity requirements; it would be hard to provide any-to-any connectivity with such technology. This is the case for segments providing guest access to the Internet, in which many guests access a single resource in the network. This is manageable because the policies are identical everywhere in the network (allow Internet access, deny all internal access). The policies are usually applied at the edge of the Layer 3 domain. Figure 3-4 shows ACL policies applied at the distribution layer to segment a campus network.


Figure 3-4 Hub-and-Spoke Policy-Based Segmentation
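
For the guest-access case just described, the policy could be sketched in IOS as an extended ACL that denies internal (RFC 1918) destinations and permits everything else; the ACL name, VLAN number, and address plan are hypothetical:

```
! Guest policy: deny all internal destinations, permit Internet access
ip access-list extended GUEST-POLICY
 deny   ip any 10.0.0.0 0.255.255.255
 deny   ip any 172.16.0.0 0.15.255.255
 deny   ip any 192.168.0.0 0.0.255.255
 permit ip any any
!
! Applied identically at every distribution-layer guest SVI
interface Vlan900
 ip access-group GUEST-POLICY in
```

Because the same ACL is configured at every enforcement point, this many-to-one policy remains manageable.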

As a creativity exercise, you could attempt to design an IP-based policy to provide any-to-any connectivity between guests, while keeping them separate from the rest of the users!

Control-Plane-Based Virtualization

Control-plane-based virtualization restricts the propagation of routing information so that only subnets that belong to a VN are included in any VN-specific routing tables and updates. Thus, this type of solution actually creates a separate IP routing space for each VN. To achieve control-plane virtualization, a device must maintain multiple control/forwarding instances, one for each VN. An example of control-plane-based device segmentation is a VRF.

A VRF could be looked at as a "virtual routing instance." Each VRF will have its own RIB, FIB, interfaces, and routing processes. Figure 3-5 illustrates VRFs.


Figure 3-5 Virtual Routing and Forwarding
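
A hedged IOS sketch of VRF definition follows; the VRF names, route distinguishers, and addressing are hypothetical:

```
! Each VRF gets its own RIB, FIB, interfaces, and routing processes
ip vrf RED
 rd 65000:1
ip vrf BLUE
 rd 65000:2
!
! An interface placed in a VRF populates only that VRF's tables
interface GigabitEthernet0/2
 ip vrf forwarding RED
 ip address 10.1.1.1 255.255.255.0
!
! A routing process can be run per VRF
router ospf 10 vrf RED
 network 10.1.1.0 0.0.0.255 area 0
```

Note that applying `ip vrf forwarding` to an interface removes any existing IP address, so the address is (re)configured afterward.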

The VRF achieves the virtualization of the networking device at Layer 3. After the devices have been virtualized, the virtual instances in the different devices must be interconnected to form a VN. Thus, a VN is a group of interconnected VRFs. In theory, this interconnection could be achieved by using dedicated physical links for each VN (group of interconnected VRFs). In practice, this would be inefficient and costly. Hence, it is necessary to virtualize the data path between the VRFs to provide logical interconnectivity between the VRFs that participate in a VN. The type of data-path virtualization will vary depending on how far the VRFs are from each other. If the virtualized devices are directly connected to each other (single hop), link or circuit virtualization is necessary. If the virtualized devices are connected multiple hops apart over an IP network, a tunneling mechanism is necessary. Figure 3-6 illustrates single-hop and multiple-hop data-path virtualization.


Figure 3-6 Single- and Multiple-Hop Data-Path Virtualization
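
In IOS, both forms of interconnection reduce to mapping a logical interface into the VRF; the following sketch uses hypothetical interface numbers, VLAN IDs, and addresses:

```
! Single hop: one 802.1q subinterface per VRF on the inter-switch link
interface GigabitEthernet0/3.101
 encapsulation dot1Q 101
 ip vrf forwarding RED
 ip address 10.2.1.1 255.255.255.252
!
! Multiple hops: one GRE tunnel per VRF across the routed core
interface Tunnel1
 ip vrf forwarding RED
 ip address 10.3.1.1 255.255.255.252
 tunnel source Loopback0
 tunnel destination 172.16.10.2
```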

The many technologies that virtualize the data path and interconnect VRFs are discussed in Chapters 4 and 5. The different technologies have different benefits and limitations depending on the type of connectivity and services required. For instance, some technologies are good at providing hub-and-spoke connectivity, whereas others provide any-to-any connectivity. The support for encryption, multicast, and other services will also determine the choice of technologies to be used for the virtualization of the transport.

The VRFs must also be mapped to the appropriate VLANs at the edge of the network. This mapping provides continuous virtualization across the Layer 2 and Layer 3 portions of the network. The mapping of VLANs to VRFs is as simple as placing the corresponding VLAN interface at the distribution switch into the appropriate VRF. The same type of mapping mechanism applies to Layer 2 virtual circuits (ATM, Frame Relay) or IP tunnels, which are handled by the router as a logical interface. The mapping of VLAN logical interfaces (switch virtual interface [SVI]) to VRFs is illustrated in Figure 3-7.


Figure 3-7 VLAN-to-VRF Mapping
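
The VLAN-to-VRF mapping can be sketched as placing the VLAN's SVI into the VRF at the distribution switch; the VLAN number, VRF name, and addressing here are hypothetical:

```
! The access VLAN enters the VN at the point where its SVI joins the VRF
interface Vlan10
 ip vrf forwarding RED
 ip address 10.10.10.1 255.255.255.0
```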

So far, we have created a virtualized transport that can keep the traffic from different groups separate from each other. The next section introduces the functionality required at the edge to place or authorize endpoints into the appropriate groups.

The LAN Edge: Authentication and Authorization

At the edge of the network, it is necessary to identify the users or devices logging on to the network so that they can be assigned to the right groups.

The process of identifying the users or devices is known as authentication. Two parameters affect the assignment of a user or device: the identity of the user or device and the posture of the device. The posture of the device refers to its health, measured by the level of software installed, especially operating system patches and antivirus software.

Once identified, the endpoints must be authorized onto the network. To this end, the port on which an endpoint connects is activated and configured with certain characteristics and policies. This process is known as authorization. One example of authorization is the configuration of a port's VLAN membership based on the results of an authentication process. Another example is the dynamic configuration of port ACLs based on the authentication.

In this two-phase process, authorization is the phase more relevant to virtualization. When an endpoint is authorized on the network, it can be associated with a specific VN. Thus, it is the authorization method that ultimately determines the mapping of the end station to a VN. For example, when a VLAN is part of a VN, a user authorized onto that VLAN is thereby authorized onto the VN.

The main authentication scenarios for the enterprise could be summarized as follows:

  • Client-based authentication, for endpoints with client software
    • — 802.1x
    • — NAC
  • Clientless authentication, for endpoints without any client software
    • — Web-based authentication
    • — MAC-based machine authentication

Regardless of the authentication method, the authorization could be done in one of the following ways:

  • Assigning a port to a specific VLAN
  • Uploading a policy to a port, in the form of ACLs, policy maps, or even the modular QoS command-line interface (MQC)
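
As a hedged example of the first authorization option, an 802.1x-enabled port can have its VLAN assigned by the RADIUS server at authorization time; the commands below reflect classic IOS syntax, and the interface and attribute values are hypothetical:

```
! Enable 802.1x authentication and RADIUS-based authorization
aaa new-model
aaa authentication dot1x default group radius
aaa authorization network default group radius
dot1x system-auth-control
!
! The port's VLAN is not configured statically; it is assigned
! from the RADIUS attributes returned at authorization
interface FastEthernet0/5
 switchport mode access
 dot1x port-control auto
!
! IETF RADIUS attributes the AAA server returns to assign VLAN 10:
!   Tunnel-Type             = VLAN
!   Tunnel-Medium-Type      = 802
!   Tunnel-Private-Group-ID = 10
```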

VLANs map into VRFs seamlessly and are the authorization method of choice when using a VRF-based transport virtualization approach. ACL authorization could be used to achieve policy-based transport virtualization. For a transport virtualization approach based on class-based forwarding, the ability to dynamically load a QoS policy onto the access device could prove useful.

The current state of the technology provides broad support for VLAN assignment as an authorization alternative. Where per-user policies are required but only VLAN assignment is available as an authorization method, statically assigning a policy to each VLAN provides the required linkage between user authorization and policy: the policy is in effect applied to the VLAN, and as users are authorized onto different VLANs, they are subject to different policies.
