TCP Offload Engine

TCP Offload Engine (TOE) is a technology for accelerating TCP/IP by moving TCP/IP processing from the main host CPU to a separate, dedicated sub-system, thereby improving overall system TCP/IP performance. TCP was originally designed for unreliable, low-speed networks (such as early dial-up modems), but with the growth of Internet backbone transmission speeds (OC-48, OC-192, GigE and 10GigE links) and faster, more reliable access mechanisms (such as Digital Subscriber Line and cable modems), it is now commonly used in datacenter and desktop PC environments at speeds of up to 1 gigabit per second. Software TCP implementations on host systems require substantial computing power: gigabit TCP communication handled entirely in software can fully load a 2.4 GHz Pentium processor, leaving little or no processing capacity for the applications running on the system.
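
As a rough illustration of that claim, the sketch below compares the CPU cycle budget available per byte at gigabit line rate with an assumed per-byte cost for software TCP processing (data copies, checksumming, interrupt handling). The cost figure is an assumption chosen for illustration, not a measured value.

```c
/* Cycle-budget sketch: cycles per byte available on a 2.4 GHz host
 * driving a 1 Gbit/s TCP stream, versus an assumed per-byte cost of a
 * software stack (copies, checksums, interrupts).  The per-byte cost
 * is an illustrative assumption, not a measurement. */
#include <stdio.h>

int main(void)
{
    const double cpu_hz       = 2.4e9;      /* 2.4 GHz host CPU            */
    const double link_bytes_s = 1e9 / 8.0;  /* 1 Gbit/s = 125 MB/s         */
    const double assumed_cost = 20.0;       /* assumed cycles per byte for
                                               software TCP processing     */

    double budget = cpu_hz / link_bytes_s;  /* cycles available per byte   */

    printf("Cycle budget per byte at line rate: %.1f\n", budget);
    printf("Assumed software TCP cost per byte: %.1f\n", assumed_cost);
    printf("Estimated CPU load: %.0f%%\n", 100.0 * assumed_cost / budget);
    return 0;
}
```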

Because TCP is a connection-oriented protocol, it adds processing overhead beyond simply moving data. This overhead includes connection establishment (the three-way handshake), acknowledgement of segments as they are sent and received, checksum and sequence number calculations, sliding-window management, and connection termination.
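
The minimal client below is a sketch of the stack activity behind a single short-lived connection. The peer address is a documentation-range placeholder, and the comments summarise the TCP work the host CPU performs for each call when no offload engine is present.

```c
/* Minimal sketch of the per-connection work a host TCP stack performs
 * for even a short-lived connection.  The server address (192.0.2.1)
 * is a documentation-range placeholder. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in peer;
    memset(&peer, 0, sizeof(peer));
    peer.sin_family = AF_INET;
    peer.sin_port   = htons(80);
    inet_pton(AF_INET, "192.0.2.1", &peer.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);   /* allocate connection state */
    if (fd < 0)
        return 1;

    /* connect(): three-way handshake (SYN, SYN-ACK, ACK), all processed
     * by the host CPU when there is no offload engine. */
    if (connect(fd, (struct sockaddr *)&peer, sizeof(peer)) == 0) {
        const char req[] = "GET / HTTP/1.0\r\n\r\n";

        /* send(): segmentation, checksums, sequence numbers, timers,
         * and later the ACKs returned by the peer for every segment. */
        send(fd, req, sizeof(req) - 1, 0);
    }

    /* close(): FIN/ACK exchange plus TIME_WAIT state to track. */
    close(fd);
    return 0;
}
```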

In addition to protocol overhead, TOE can address architectural issues that affect a large proportion of host-based (server and PC) endpoints. Most endpoint hosts are PCI-bus based; PCI provides a standard interface for adding peripherals such as network interfaces to servers and PCs. PCI is inefficient at transferring small bursts of data from host memory across the bus to the network interface ICs, although its efficiency improves as the burst size increases. The TCP protocol generates a large number of small packets (e.g. acknowledgements), and because these are typically created on the host CPU and transmitted across the PCI bus and out the physical network interface, they reduce the host computer's I/O throughput.
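
A back-of-the-envelope model makes the burst-size effect concrete: if each PCI transaction pays a fixed overhead (arbitration, address phase, turnaround) before any data moves, efficiency rises quickly with burst size. The overhead figure used below is an illustrative assumption; real values depend on the chipset and bus mode.

```c
/* Rough model of PCI bus efficiency versus burst size.  Each
 * transaction is assumed to pay a fixed per-transaction overhead
 * before it can move data; the overhead figure is illustrative. */
#include <stdio.h>

int main(void)
{
    const int bus_width_bytes = 4;   /* 32-bit PCI                   */
    const int overhead_cycles = 8;   /* assumed fixed cost per burst */
    const int burst_sizes[]   = { 64, 256, 1024, 4096 };  /* bytes   */

    for (int i = 0; i < 4; i++) {
        int data_cycles   = burst_sizes[i] / bus_width_bytes;
        double efficiency = (double)data_cycles /
                            (data_cycles + overhead_cycles);
        printf("%5d-byte burst: %4.1f%% of raw bus bandwidth\n",
               burst_sizes[i], efficiency * 100.0);
    }
    return 0;
}
```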

Because a TOE sits on the network interface, on the far side of the PCI bus from the host CPU, it can address this I/O efficiency issue: the data to be sent over a TCP connection can be handed to the TOE across the PCI bus in large bursts, and none of the small TCP packets need traverse the bus.
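
As a sketch of what this means for bus traffic, the following estimate counts the host-bus transfers needed to send one 64 KB application buffer with and without offload, assuming a standard 1460-byte Ethernet MSS and an ACK for roughly every other segment; the figures are illustrative.

```c
/* Host-bus transfers needed to send one 64 KB buffer, with and
 * without TCP offload.  Segment and ACK counts assume a 1460-byte
 * MSS and ACK-every-other-segment behaviour (illustrative). */
#include <stdio.h>

int main(void)
{
    const int buffer_bytes = 64 * 1024;
    const int mss          = 1460;      /* typical Ethernet TCP MSS */

    int segments = (buffer_bytes + mss - 1) / mss;   /* data packets */
    int acks     = (segments + 1) / 2;               /* delayed ACKs */

    /* Without offload: every segment out and every ACK in crosses
     * the PCI bus as its own small transfer. */
    printf("Host-stack TCP:  %d bus transfers\n", segments + acks);

    /* With a TOE: the host DMAs the buffer to the NIC in large
     * bursts; segmentation and ACK handling stay on the card. */
    printf("TOE:             1 large DMA burst (plus completion)\n");
    return 0;
}
```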

TOE offers performance gains particularly for servers and server applications that must support a large and changing number of simultaneous connections. A web server is one example: certain versions of the HTTP protocol (HTTP/1.0 without keep-alive) require a new TCP connection for each object (graphical image, text frame, etc.) on a web page.
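
A quick estimate of the resulting connection churn, using assumed figures for objects per page and page rate, shows the scale of the per-connection work such a server shoulders.

```c
/* Illustrative connection churn for a web server when each page object
 * needs its own TCP connection (HTTP/1.0 without keep-alive).
 * Objects-per-page and page rate are assumed figures, not measurements. */
#include <stdio.h>

int main(void)
{
    const int objects_per_page = 30;    /* assumed images, frames, etc. */
    const int pages_per_second = 100;   /* assumed server load          */

    int connections_per_second = objects_per_page * pages_per_second;

    printf("TCP connections set up and torn down per second: %d\n",
           connections_per_second);
    printf("Each one involves a handshake, ACK processing and teardown "
           "on the host CPU unless offloaded.\n");
    return 0;
}
```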

The term TOE is often used to refer to the NIC itself, although it more accurately refers only to the integrated circuit on the card that processes the TCP headers. TOEs are often suggested as a way to reduce the overhead associated with newer protocols such as iSCSI.

Much of the current work on TOE technology is being done by manufacturers of 10 Gigabit Ethernet interface cards.

An early TOE implementation was developed and patented (USPTO application 20040042487, among others) by Valentin Ossman, who later founded Tehuti Networks Ltd. on the basis of that technology. The resulting TCP acceleration by its NTA (Network Traffic Accelerator) significantly reduces the processing power required of the host PC; several benchmarks showed the processing power required from the computer reduced by a factor of more than five.
