Packet Switching Methods on Cisco Networks

A Little History Lesson

A number of different methods have been developed to improve the performance of networking devices, both by increasing packet-forwarding speed and by decreasing packet delay through a device. Some higher-level methods focus on decreasing the amount of time needed for the routing process to converge; for example, by optimizing the timers used with the Open Shortest Path First (OSPF) protocol or the Enhanced Interior Gateway Routing Protocol (EIGRP).

Optimizations are also possible at lower levels, such as in how a device switches packets or how processes are handled. This article focuses on this lower level, specifically examining how vendors can decrease forwarding time through the development and implementation of optimized packet-switching methods.

The three main switching methods that Cisco has used over the last 20 years are process switching, fast switching, and Cisco Express Forwarding (CEF). Let’s take a brief look at these three methods.

Process Switching

Of the three methods, process switching is the easiest to explain. When using only process switching, all packets are forwarded from their respective line cards or interfaces to the device’s processor, where the forwarding decision is made. Based on this decision, the packet is sent to the outbound line card or interface. This is the slowest method of packet switching because it requires the processor to be directly involved with every packet that enters and leaves the device, and that processing adds delay to each packet. On modern equipment, process switching is used only in special circumstances; it should not be considered the primary switching method.

Fast Switching

After process switching, fast switching was Cisco’s next evolution in packet switching. Fast switching works by implementing a high-speed cache, which the device uses to speed up packet processing. This fast cache is populated by the device’s processor. When using fast switching, the first packet for a specific destination is forwarded to the processor for a switching decision (process switching). When the processor completes its processing, it adds a forwarding entry for that destination to the fast cache. When the next packet for that destination arrives, the device forwards it using the information stored in the fast cache, without directly involving the processor. This approach lowers both packet-switching delay and the device’s processor utilization.
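The demand-populated cache described above can be sketched as follows. This is an illustrative model only; the class and names are hypothetical, and the real fast-switching cache is internal to IOS, not an exposed API:

```python
# Sketch of fast switching's demand-populated route cache (illustrative;
# all names are invented for this example).

class FastSwitchingDevice:
    def __init__(self, routing_table):
        # routing_table maps destination -> outbound interface (simplified)
        self.routing_table = routing_table
        self.fast_cache = {}        # populated on demand by process switching
        self.process_switched = 0   # packets that had to reach the processor

    def forward(self, destination):
        if destination in self.fast_cache:
            # Cache hit: switch the packet without involving the processor.
            return self.fast_cache[destination]
        # Cache miss: the first packet is process-switched, and the result
        # is written into the fast cache for subsequent packets.
        self.process_switched += 1
        interface = self.routing_table[destination]
        self.fast_cache[destination] = interface
        return interface

device = FastSwitchingDevice({"192.0.2.10": "Gi0/1"})
device.forward("192.0.2.10")   # first packet: process-switched
device.forward("192.0.2.10")   # second packet: served from the fast cache
print(device.process_switched)  # only one packet reached the processor
```

Note how only the first packet per destination pays the process-switching cost; every later packet for that destination is served from the cache.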

For most devices, fast caching is enabled by default on all interfaces.

Cisco Express Forwarding (CEF)

Cisco’s next evolution of packet switching was the development of Cisco Express Forwarding. This switching method is used by default on most modern devices, with fast switching being enabled as a secondary method.

CEF operates through the creation and reference of two new components: the CEF Forwarding Information Base (FIB) and the CEF Adjacency table. The FIB is built based on the current contents of a device’s IP routing table. When the routing table changes, so does the CEF FIB. The FIB’s functionality is very basic: It contains a list of all the known destination prefixes and how to handle switching them. The Adjacency table contains a list of the directly connected devices and how to reach them; adjacencies are found using protocols such as the Address Resolution Protocol (ARP).
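Conceptually, a CEF forwarding decision is a longest-prefix match against the FIB followed by an adjacency lookup. The sketch below models that two-step lookup; the addresses, interfaces, and MAC values are made up for illustration, and real CEF uses optimized in-hardware or in-memory structures rather than Python dictionaries:

```python
# Illustrative CEF-style lookup: a FIB of prefixes with next hops, and an
# adjacency table resolving next hops to Layer 2 rewrite information
# (the kind of data ARP would supply). All values here are invented.
import ipaddress

fib = {
    ipaddress.ip_network("10.0.0.0/8"):  "10.1.1.1",   # prefix -> next hop
    ipaddress.ip_network("10.1.0.0/16"): "10.1.1.2",
    ipaddress.ip_network("0.0.0.0/0"):   "192.0.2.1",  # default route
}

adjacency = {  # next hop -> (outbound interface, MAC rewrite)
    "10.1.1.1":  ("Gi0/1", "aa:bb:cc:00:00:01"),
    "10.1.1.2":  ("Gi0/2", "aa:bb:cc:00:00:02"),
    "192.0.2.1": ("Gi0/0", "aa:bb:cc:00:00:03"),
}

def cef_lookup(dst):
    addr = ipaddress.ip_address(dst)
    # Step 1: longest-prefix match against the FIB...
    matches = [net for net in fib if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    # Step 2: ...resolve the chosen next hop through the adjacency table.
    next_hop = fib[best]
    interface, mac = adjacency[next_hop]
    return next_hop, interface, mac

print(cef_lookup("10.1.5.9"))  # matches 10.1.0.0/16 -> via 10.1.1.2 on Gi0/2
```

Because both tables are precomputed from the routing table and ARP, the per-packet work is just these two lookups, with no process-switched "first packet" as in fast switching.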

These tables are stored in the main memory of smaller devices, or in the memory of a device’s route processor on larger devices; this mode of operation is called Central CEF.

An additional advantage of CEF on supported larger Cisco devices is that the CEF tables can be copied to and maintained on individual line cards; this mode of operation is called Distributed CEF (dCEF). With dCEF, the packet-switching decision doesn’t have to wait for a Central CEF lookup: decisions are made directly on the line card, increasing switching speed for traffic moving from interface to interface on any supporting line card. This design also reduces utilization of the backplane between the line cards and the route processor, leaving additional room for other traffic.
