This is a Tutorial excerpt from LAN Switching by Leigh Anne Chisholm.

If you're not a Certification Zone Subscriber and you would like complete, unrestricted access to the rest of this and every other Tutorial, Study Quiz, Lab Scenario, and Practice Exam available at Certification Zone, become a Subscriber today!

Segmenting a Network Using LAN Switches

LAN switches have replaced bridges in the marketplace as the preferred layer 2 segmentation option and have gained popularity in shared-media environments as replacements for layer 1 Ethernet hubs and Token Ring MAUs. LAN switching offers network administrators a simple way to increase the bandwidth available to end users by providing dedicated bandwidth on each switch port. Today's LAN switches offer the functionality of their predecessors but incorporate new features that make them truly powerful network tools.

The beginning of this Tutorial introduced five characteristics of a well-designed infrastructure:

1. Functional

2. Reliable, Available, and Manageable

3. Scalable and Adaptable

4. Accessible and Secure

5. Efficient and Cost Effective

The selective deployment of LAN switches within a network can help a network administrator design a network that embodies each of these characteristics. Within an intelligent network fabric, LAN switches can provide a multitude of alternate pathways, backing up each switch path within an internetwork at a cost well below that of the conventional routers traditionally used to provide redundancy. LAN switches can be part of a network infrastructure that provides sufficient bandwidth for end users, recovers automatically from failure with little or no impact on end users, and is scalable, adaptable, and cost-effective yet secure.

The transition from shared-media networks such as Ethernet or Token Ring to dedicated-media switched networks can be compared to the transition experienced by telephone companies when party lines (shared telephone networks) were replaced by dedicated subscriber access. Early telephone networks often used "party lines," now generally obsolete, to provide access to the public telephone system. Like a shared-media network, a party line was shared by several subscribers and could be used by only one person at a time; other subscribers had to wait until the line was available. The party line also lacked privacy -- an intrusive neighbor could listen in on a conversation simply by picking up any handset connected to the party line. Shared-media Ethernet and Token Ring are similarly susceptible to eavesdropping on network information by network neighbors.

Figure 9.

The dedicated-service telephone system most of us use today provides dedicated, on-demand access to the telephony network and offers a degree of protection from unwanted intrusion. LAN switches provide a similar type of dedicated communication within a local area network environment. Each port on a LAN switch provides a dedicated connection and forms its own collision domain. The switch can be used as a collapsed backbone -- interconnecting hubs or repeaters -- or can be used to provide dedicated connections to end-nodes.

Ethernet Switching

An Ethernet switch is essentially a multi-port transparent bridge that incorporates all of the functionality of a traditional transparent bridge but adds new, innovative enhancements that create a powerful networking tool. Today's Ethernet LAN switches typically operate at speeds of 10 and 100 Mbps, and switch vendors have started offering Gigabit (1,000 Mbps) Ethernet uplink capability in their switch products. Some switches offer port configurations that support either 10 or 100 Mbps operation, with each port's data rate selected independently of the data rate of its neighboring ports. These switches are capable of converting between 10 and 100 Mbps data rates (asymmetric switching), providing connectivity between segments of different bandwidths.

One of the most attractive aspects of adding LAN switches to an existing Ethernet network infrastructure is that deployment does not require changing cabling or network interface cards, reconfiguring routers, and so on.

Autonegotiation

When a switch has a port capable of operating at either 10 or 100 Mbps, the port will typically support autonegotiation, i.e., the port will be able to determine the type of Ethernet signaling used by the end-system and select the appropriate Ethernet implementation.

When a network interface card has been configured for 10BASE-T operation, it sends a single pulse, called a Normal Link Pulse (NLP), to the switch port to test the integrity of the link. If the link is operational, the link indicator light on the NIC (if present) will light. If a switch receives an NLP, it recognizes that the end-station is capable of only 10 Mbps operation. If this process does not identify the Ethernet implementation, the switch port can transmit a Fast Link Pulse (FLP) identifying the highest-performance Ethernet implementation it has available. The FLP consists of a series of up to 33 pulses (17 clocking pulses interspersed with up to 16 signal pulses) that forms a 16-bit code word. The end-station also transmits an FLP identifying its maximum capability. The two end-points compare 16-bit code words to determine the highest compatible mode of operation (a resolution sketch follows the list below). The IEEE has established a priority ranking from the most desirable mode of Ethernet operation to the least desirable. It is:

- 100BASE-TX full duplex

- 100BASE-T4

- 100BASE-TX

- 10BASE-T full duplex

- 10BASE-T
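The negotiation outcome can be illustrated with a short Python sketch. This is a simplified model rather than the actual bit-level exchange: the advertised capabilities are represented as strings instead of the 16-bit code words carried in the FLP bursts, and the function simply selects the highest-priority mode common to both ends.

# Minimal sketch: resolving autonegotiation to the highest common mode.
# The priority list mirrors the IEEE ranking above; representing modes as
# strings (rather than the 16-bit code-word bits) is a simplification.

PRIORITY = [
    "100BASE-TX full duplex",
    "100BASE-T4",
    "100BASE-TX",
    "10BASE-T full duplex",
    "10BASE-T",
]

def best_common_mode(switch_caps, station_caps):
    """Return the highest-priority mode advertised by both ends, or None."""
    common = set(switch_caps) & set(station_caps)
    for mode in PRIORITY:          # walk from most to least desirable
        if mode in common:
            return mode
    return None

# Example: a 10/100 switch port negotiating with a 10BASE-T full-duplex NIC.
print(best_common_mode(
    ["100BASE-TX full duplex", "100BASE-TX", "10BASE-T full duplex", "10BASE-T"],
    ["10BASE-T full duplex", "10BASE-T"],
))  # -> 10BASE-T full duplex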

Switching Mechanisms

Like bridges, LAN switches operate transparently to the end-user. A LAN switch behaves like a transparent bridge: it builds a forwarding table from the source addresses of the frames it receives, associating each end-node MAC address with the switch port on which that source address was seen, thereby creating a map of the network topology. Unlike a bridge, however, a LAN switch does not necessarily need to receive a frame in its entirety before it makes the decision to forward it.
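The learning behavior just described can be sketched in a few lines of Python. The class and method names below are illustrative only; the point is simply that source addresses populate the table while destination addresses are looked up against it.

# Minimal sketch of transparent-bridge-style address learning, assuming a
# frame is reduced to its source and destination MAC addresses.

class ForwardingTable:
    def __init__(self):
        self.mac_to_port = {}                    # learned topology map

    def learn(self, source_mac, ingress_port):
        # Associate the source MAC with the port it was seen on.
        self.mac_to_port[source_mac] = ingress_port

    def decide(self, destination_mac, ingress_port):
        # Known destination: forward out the learned port (or filter the
        # frame if it arrived on that same port). Unknown destination: flood.
        port = self.mac_to_port.get(destination_mac)
        if port is None:
            return f"flood to all ports except {ingress_port}"
        if port == ingress_port:
            return "filter (destination is on the same segment)"
        return f"forward out port {port}"

table = ForwardingTable()
table.learn("00:10:7b:aa:bb:01", ingress_port=3)
print(table.decide("00:10:7b:aa:bb:01", ingress_port=5))  # forward out port 3
print(table.decide("00:10:7b:ff:ff:02", ingress_port=5))  # flood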

Store-and-Forward Switching

Store-and-forward switching is the traditional frame forwarding method used by bridges. When using store-and-forward switching, the switch receives the entire frame before the frame is forwarded. The switch reads the destination and source addresses, and computes a cyclic redundancy check value on the frame received to determine the integrity of the frame. If the CRC value is bad, the frame is discarded. Otherwise, the switch applies all relevant filters then switches according to the information contained in its addressing table.

Latency for store-and-forward switching is dependent on frame size, i.e., the time it takes to receive a 64-byte frame is different from the time it takes to receive a 1518-byte frame. Latency values for store-and-forward switching can be calculated from the data rate of the port:

• For a 10 Mbps port, latency ranges from 51.2 microseconds to receive a minimum-length Ethernet frame (64 bytes) to 1.21 milliseconds for a maximum-length Ethernet frame (1518 bytes).

• For a 100 Mbps port, latency ranges from 5.12 microseconds to receive a minimum-length Ethernet frame (64 bytes) to 121 microseconds for a maximum-length Ethernet frame (1518 bytes).

Latency values must also take into consideration the latency of the switching process itself. A Cisco Catalyst 1900 series switch requires from 3 to 7 microseconds to switch a frame between ports when using store-and-forward.
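The figures above follow directly from frame length and port speed: latency is the frame size in bits divided by the data rate, plus the switch's own processing time. A minimal Python sketch, treating the Catalyst 1900's quoted 3 to 7 microseconds purely as an illustrative overhead range:

# Store-and-forward latency = time to receive the entire frame, plus the
# switch's own processing time (3-7 microseconds quoted above for a
# Catalyst 1900, used here only as an illustrative range).

def serialization_time_us(frame_bytes, rate_mbps):
    """Microseconds needed to receive frame_bytes at rate_mbps."""
    return frame_bytes * 8 / rate_mbps      # bits / (bits per microsecond)

for rate in (10, 100):
    for size in (64, 1518):
        t = serialization_time_us(size, rate)
        print(f"{rate} Mbps, {size}-byte frame: {t:.2f} us "
              f"(+ roughly 3-7 us switching overhead)")

# 10 Mbps:  64 bytes -> 51.20 us, 1518 bytes -> 1214.40 us (about 1.21 ms)
# 100 Mbps: 64 bytes -> 5.12 us,  1518 bytes -> 121.44 us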

Cut-Through Switching

The technique of cut-through switching was originally pioneered by Kalpana and was implemented on Kalpana switches. In December 1994, Cisco acquired Kalpana, and with it Kalpana's cut-through switching technology.

Cut-through switching improves throughput performance by beginning to forward frames before the entire frame has been received. Since the port does not wait to receive the CRC at the end of the frame, it cannot determine the integrity of the data received. Switches operating in cut-through mode can propagate invalid frames through a network.

Cut-through switches can perform a CRC check as each frame passes through the switch, keeping track of the number of bad frames a port receives. Some switches allow a port to automatically change from cut-through switching to store-and-forward switching if error rates exceed a user-defined threshold, and to revert to cut-through switching when the error rate falls back below that value.
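The adaptive behavior can be sketched as a simple threshold check over recent frames. The 5% threshold and 1000-frame window below are illustrative assumptions, not the configuration model of any particular switch.

# Fall back to store-and-forward when the error rate over recent frames
# exceeds a user-defined threshold; revert to cut-through when it drops
# back below it. Threshold and window size are illustrative only.

from collections import deque

class AdaptivePort:
    def __init__(self, error_threshold=0.05, window=1000):
        self.error_threshold = error_threshold
        self.recent = deque(maxlen=window)   # 1 = bad CRC, 0 = good CRC
        self.mode = "cut-through"

    def record_frame(self, crc_ok):
        self.recent.append(0 if crc_ok else 1)
        error_rate = sum(self.recent) / len(self.recent)
        if error_rate > self.error_threshold:
            self.mode = "store-and-forward"
        else:
            self.mode = "cut-through"
        return self.mode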

FastForward cut-through switching begins forwarding a frame as soon as the destination address is read and determined to be a valid address. FragmentFree cut-through switching waits until the first 64 bytes of the frame have been received. Most collisions occur within the first 64 bytes of the frame. FragmentFree switching attempts to reduce the number of collision frames (and runt frames -- illegal frames less than 64 bytes in length) it propagates through a network.

FragmentFree is the default switching mode on the Catalyst 1900 series switch.

Latency for cut-through switching is relatively simple to calculate (a sketch reproducing these figures follows the list):

• For FastForward switching, a port receives 14 bytes before it begins forwarding the frame. For a 10 Mbps port, latency is 11.2 microseconds. For a 100 Mbps port, latency is one-tenth the time or 1.12 microseconds.

• For FragmentFree switching, a port receives 64 bytes before it begins forwarding the frame. For a 10 Mbps port, latency is 51.2 microseconds. For a 100 Mbps port, latency is one-tenth the time or 5.12 microseconds.
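These values use the same serialization arithmetic as the store-and-forward sketch earlier; only the number of bytes read before forwarding begins changes (14 for FastForward, 64 for FragmentFree, per the figures above).

# Cut-through latency: time to receive the bytes read before forwarding.

def cut_through_latency_us(bytes_before_forwarding, rate_mbps):
    return bytes_before_forwarding * 8 / rate_mbps

for name, header_bytes in (("FastForward", 14), ("FragmentFree", 64)):
    for rate in (10, 100):
        t = cut_through_latency_us(header_bytes, rate)
        print(f"{name} at {rate} Mbps: {t:.2f} us")

# FastForward:  11.20 us at 10 Mbps, 1.12 us at 100 Mbps
# FragmentFree: 51.20 us at 10 Mbps, 5.12 us at 100 Mbps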

The latency of the switching process itself can be considerably higher for cut-through switching than for store-and-forward switching. For example, a Cisco Catalyst 1900 series switch requires 70 microseconds to switch a frame between 10 Mbps ports in cut-through mode.

A major constraint of cut-through switching is that it does not support switching between 10 Mbps and 100 Mbps ports. A 10 Mbps Ethernet switch cannot use cut-through switching on 100 Mbps or FDDI uplink ports, nor can it cut-through switch to a peer port operating at 100 Mbps.

Not all switches, and not all Cisco switches, support cut-through switching. This mode of switching is seen primarily on edge rather than core switches.

Half-Duplex and Full-Duplex Ethernet

Ethernet's original design was based on a single medium: thick coaxial cable. Unlike today's twisted-pair media, the coaxial cable had no separate transmit and receive circuits. A single pathway carried all data, so access to the media had to be controlled to prevent more than one node from transmitting at the same time. Ethernet was therefore designed as a half-duplex technology. Half-duplex transmission allows a signal to travel in either direction, but in only one direction at a time.

Half-duplex operation is similar to radio operation between a pilot and a control tower. When a pilot wishes to speak, he presses the transmit switch on his microphone and addresses the control tower. When the control tower responds, the air traffic controller presses the transmit switch on his microphone and addresses the pilot. When the two attempt to transmit at the same time, neither side receives the transmission; all that is heard is a loud tone indicating that the transmissions have collided.

An Ethernet controller must, however, be able to listen to the data channel while it is transmitting -- in much the same manner that the pilot and air traffic controller must continue to monitor their surroundings to detect the tone signaling that both are attempting to speak at once.

An Ethernet controller can only detect collisions while it is in transmit mode. After transmitting a data frame, an Ethernet controller must wait a minimum of 9.6 microseconds before attempting to transmit a second frame. The 9.6-microsecond interframe gap serves a two-fold purpose: 1) it provides an opportunity for another device to transmit its data, and 2) it ensures the Ethernet controller remains in transmit mode long enough for a collision to propagate back to it from the farthest point on the wire.
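The 9.6-microsecond figure corresponds to the standard interframe gap of 96 bit times at 10 Mbps; a short calculation makes the relationship explicit.

# Interframe gap expressed in microseconds: 96 bit times divided by the
# data rate in bits per microsecond.

def interframe_gap_us(rate_mbps, gap_bit_times=96):
    return gap_bit_times / rate_mbps

print(interframe_gap_us(10))    # 9.6 us at 10 Mbps
print(interframe_gap_us(100))   # 0.96 us at 100 Mbps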

When the Ethernet specifications were updated to include support for twisted-pair and fiber-optic media, separate circuits for sending and receiving data existed, but network devices (or end-nodes) were still connected together via repeaters or hubs, in a logical bus topology. When a collision occurred, it was still propagated along the entire length of the bus.

When a network is segmented, i.e., when a collision domain is split into two or more collision domains using a layer 2 device, the logical bus topology is segmented. When a layer 2 bridge or switch is added to the network, each port divides the collision domain into separate segments. Should a switch be used between two end nodes, each node exists in its own collision domain. No other device is in contention for the media. In such a configuration, where an end-node is the only device in a collision domain, it is possible for the device to transmit and receive simultaneously. The ability to transmit and receive simultaneously is known as full-duplex operation.

Full-duplex operation increases Ethernet's throughput by creating two collision-free 10 Mbps paths or two collision-free 100 Mbps paths -- one for sending and one for receiving. Collision detection on the Ethernet interface is no longer required and is therefore disabled.

There are a few important points that must be remembered with respect to the operation of full-duplex Ethernet:

1. Full-duplex Ethernet is a point-to-point, dedicated link between switches, or end-nodes. Hubs or repeaters are shared media devices, and are not capable of supporting full-duplex Ethernet.

2. Although two 10 Mbps or two 100 Mbps pathways may exist, end-systems are typically either client systems (which receive more data than they send) or server systems (which send more than they receive). Because traffic to and from an end-system is generally not balanced, devices often cannot benefit from the full 20 Mbps/200 Mbps of available bandwidth.

3. The Ethernet controllers of each end-device must be capable of supporting full-duplex operation. Not all Ethernet network cards in use today have drivers that support this functionality.

Fast EtherChannel is a technology that supports the "bundling" of up to four Fast Ethernet connections into a single logical link, increasing bandwidth by up to 400% (up to 800 Mbps when the bundled links operate in full-duplex mode).

While full-duplex transmission is a simple way of increasing network bandwidth, it can readily create congestion problems when switches are deployed without regard for the bandwidth requirements of any given port on a switch -- especially the uplink ports that provide dedicated bandwidth to servers.

Consider a 24-port, 10 Mbps Ethernet switch with a 100 Mbps Ethernet uplink port. Each port provides dedicated 10 Mbps access to one of 24 end-stations, so the theoretical aggregate bandwidth of the 24 ports is 240 Mbps; if each port operates in full-duplex mode, the theoretical aggregate increases to 480 Mbps. The uplink to the server is capable of only 100 Mbps, or 200 Mbps when operating in full-duplex mode. It is therefore very important to monitor bandwidth usage to ensure that actual aggregate traffic does not exceed the capacity of the uplink, or of any other port whose throughput requirements may exceed the available bandwidth.
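The arithmetic in this example is easy to reproduce for any port and uplink combination. A minimal sketch (the function name and the full-duplex handling are illustrative):

# Aggregate access bandwidth versus uplink capacity for a switch.

def oversubscription(ports, port_mbps, uplink_mbps, full_duplex=False):
    factor = 2 if full_duplex else 1          # full duplex doubles each path
    aggregate = ports * port_mbps * factor
    uplink = uplink_mbps * factor
    return aggregate, uplink, aggregate / uplink

print(oversubscription(24, 10, 100))                    # (240, 100, 2.4)
print(oversubscription(24, 10, 100, full_duplex=True))  # (480, 200, 2.4)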

When congestion does occur, LAN switches have several techniques that can be used to control or minimize congestion. Note that not all congestion control features are available on all switches.

Token Ring Switching

Token Ring LAN switches offer the same performance benefits as Ethernet switches: improved network throughput through microsegmentation, dedicated media access, and low latency for inter-segment communication. They are also robust in that they support the transparent bridging, source-route bridging, and source-route transparent bridging standards. Additionally, like the source-route bridge, the Token Ring switch is capable of supporting multiple redundant paths through the network fabric.

Deployment of Token Ring switches, however, has lagged significantly behind the deployment of Ethernet switches in the network fabric. While Ethernet's need for inexpensive segmentation options to control collisions has fuelled demand for Ethernet switches, Token Ring switches have generally lacked a "killer application" -- something that would make them indispensable for every Token Ring network. Token Ring's ability to prioritize data has helped Token Ring networks keep up with the demand for bandwidth better than Ethernet networks have, but as today's networks continue to implement real-time protocols for voice and video, Token Ring switch deployment will help maintain the quality of service these protocols require.

There are two options for deploying a Token Ring LAN switch within an existing Token Ring network infrastructure: 1) the LAN switch can connect end-stations directly to its switch ports (sometimes referred to as port switching), or 2) the switch can be connected to a port on a MAU, acting as a multi-port bridge joining an existing ring (sometimes referred to as segment switching). When port switching is implemented, all ports on the Token Ring LAN switch have the same ring number. When the switch examines the RIF field of a Token Ring frame, it never modifies the RIF, so the ring number of the outbound frame is always the same as that of the inbound frame.

Figure 10.

A Token Ring LAN switch learns the MAC addresses of only those stations directly connected to its ports. The switch also checks the Route Descriptor fields for information concerning the next-hop ring and bridge numbers. When a Token Ring switch receives a frame that contains a RIF, it switches the frame based on the RIF information. If a frame does not contain a RIF field, it is transparently bridged according to its MAC address. If the destination MAC address does not exist in the switch's directly connected MAC address table, the frame is forwarded toward the next hop.
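The forwarding decision just described can be sketched as follows. The frame representation, field names, and table structure are assumptions made for illustration, not an actual Token Ring switch implementation.

# Decide how to handle a Token Ring frame: source-route it if a RIF is
# present, transparently bridge it if the destination MAC is known, and
# otherwise forward it toward the next hop.

def forward_token_ring_frame(frame, mac_table):
    if frame.get("rif"):
        # Source-routed frame: switch on the route descriptors in the RIF
        # (next-hop ring and bridge numbers).
        return ("source-route", frame["rif"])
    port = mac_table.get(frame["destination_mac"])
    if port is not None:
        # Destination is a directly connected station: transparently bridge.
        return ("transparent", port)
    # Unknown destination: hand the frame toward the next hop.
    return ("forward toward next hop", None)

mac_table = {"40:00:00:00:00:01": 2}
frame = {"rif": None, "destination_mac": "40:00:00:00:00:01"}
print(forward_token_ring_frame(frame, mac_table))   # ('transparent', 2)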

