The Future of TCP in TCP/IP
If you put a dollar in a machine to buy a bottle of water and what came out was a bottle only one-third full, would you feel cheated?
Perhaps that is the way we should feel about a few Internet protocols. Reports have it that whether you run a slow 9,600 bit/second modem or a hyper-fast consumer satellite connection at 400 Kbit/second, you are only achieving about 35% efficiency. What if you could achieve 95% efficiency? To put it differently, how would you like to download your files in roughly one-third the time, or watch a movie over the Internet on demand? Likewise, those downloading huge files or those stuck at slower bandwidths would find this a very desirable improvement.
But first it should be said that this article is like a young lad standing on a bank skipping stones across a quiet pond: it will briefly skim the surface of several topics to pick up a little background on each, so we can talk about the future of TCP.
We all know the Internet simply moves files in the form of data bits from one machine to another. What all of us may not know is that the Internet is built on several layers of interacting protocols, where each layer does a specific job. A sandwich is a good example: it has several layers of different food types, like bread, meat, cheese, onion, tomato, lettuce and mustard, and each layer contributes a different flavor and nutrition. So it is with the Internet, with each layer performing a significantly different function, but collectively it becomes a sandwich that moves data bits across the web.
Some readers are familiar with the seven-layer protocol stack of the Open Systems Interconnection (OSI) reference model, which is easily confused with the five-layer TCP/IP stack, so both models are presented here for clarity; in each layer description below, the OSI name is given first and the TCP/IP name second.
Layer 1 is the Physical/Hardware layer consisting of the network transmission medium (cable, radio wave, satellite, etc.), multiplexing equipment and network interface cards.
Layer 2 is the Data Link/Interface layer that does the sending and receiving. When sending, a data-link protocol such as Ethernet organizes the data into frames of appropriate size, each of which includes the physical (hardware) address of the intended receiver. When receiving, it strips off the framing and reassembles the data.
Layer 3 is the Network/Internet layer that does the message routing, including the translation from logical to physical addresses. This is where the Internet Protocol (IP) operates.
Layer 4 is the Transport/Transport layer and it controls the flow of data on the network. Data/packet flow control may be handled by one of the following protocols: Transmission Control Protocol (TCP), User Datagram Protocol (UDP) or Transport Layer Interface (TLI). The TCP is our point of interest and it will be discussed in more detail later.
Layer 5 is the Session/(Application) layer that manages reliable sessions between processes; Remote Procedure Calls (RPC) belong at this layer. In the TCP/IP model this would be embedded in the Application layer.
Layer 6 is the Presentation/(Application) layer that performs the translation of the data between the local computer and the processor-independent format that is sent. In the TCP/IP model this would be embedded in the Application layer.
Layer 7 is the Application/Application layer and this is where the user-level programs and services are found. Terminal Emulation & Remote Logon (Telnet), File Transfer Protocol (FTP) and Simple Mail Transfer Protocol (SMTP) are examples of programs, and Domain Name System (DNS) and Network File System (NFS) are examples of services.
Out of this collection of layers we want to drill deeper into the Network and Transport layers, because that is where we find the TCP/IP protocols of interest.
TCP/IP is a name given to a collection of networking procedures called protocols. The TCP/IP name is derived from two of the fundamental protocols in the collection, IP and TCP. Other core protocols in the suite are UDP and ICMP. These protocols work together to provide a basic networking framework that many other application protocols use. 
The Internet Protocol (IP) is the central unifying protocol in the TCP/IP suite. It provides the basic delivery mechanism for all packets of data sent between systems. All other protocols in the TCP/IP suite depend on IP to move the packets of data across the Internet. An IP packet is composed of a payload (for example, a TCP segment) encapsulated with IP header information such as the destination address and a header checksum.
IP is a "send and forget" (best-effort) service and consequently, it does not guarantee to actually deliver the data to the destination, nor that the data will be delivered undamaged, nor that the data packets will be delivered in the order sent, nor that only one copy of each packet will be delivered. It does send packets with a very simple protocol that will work on a minimally functional system and can be deployed on a wide variety of network technologies. IP does protect its packet header with a checksum routine. If the checksum computed at the receiving end does not match, the packet is discarded.
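The checksum routine IP uses on its header is simple enough to sketch in a few lines. Below is a minimal Python illustration of the standard Internet checksum (the ones'-complement sum defined in RFC 1071); the function and variable names are illustrative, not taken from any real IP implementation.

```python
# A sketch of the Internet checksum (RFC 1071) that IP uses to protect
# its header. Names here are illustrative, not from a real IP stack.

def internet_checksum(header: bytes) -> int:
    """Return the 16-bit ones'-complement checksum of the header bytes."""
    if len(header) % 2:                 # pad odd-length input with a zero byte
        header += b"\x00"
    total = 0
    for i in range(0, len(header), 2):  # sum the data as 16-bit words
        total += (header[i] << 8) | header[i + 1]
    while total >> 16:                  # fold any carry bits back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF              # final ones' complement

# The receiver recomputes the sum over the whole header, checksum field
# included; an undamaged header sums to zero.
header = bytearray(20)                  # a zeroed 20-byte IPv4-style header
header[0] = 0x45                        # version 4, header length 5 words
csum = internet_checksum(bytes(header))
header[10:12] = csum.to_bytes(2, "big") # store checksum in its field
assert internet_checksum(bytes(header)) == 0
```

If a single bit of the header is flipped in transit, the recomputed sum no longer comes out to zero and the receiving IP silently discards the packet, which is why the higher layers cannot rely on IP alone for delivery.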
The Transmission Control Protocol (TCP) is the data integrity protocol. This protocol provides the reliable byte-stream transfer service between two endpoints, but still depends on IP to move packets on its behalf. TCP protects against data loss, data corruption, packet reordering and data duplication by adding checksums and sequence numbers to the transmitted data.
TCP also has a multi-stage flow-control mechanism that continuously adjusts the sender data rate in an attempt to achieve maximum data throughput while avoiding congestion and subsequent packet losses. These tasks are achieved by using "Slow Start", "Congestion Avoidance", "Fast Retransmit" and "Fast Recovery" algorithms. The TCP flow-control and the future of TCP protocol will be discussed in greater detail in the 'Future TCP' section.
The User Datagram Protocol (UDP) provides unreliable packet data transfer and depends on IP to move packets around the network. UDP does not guarantee to actually deliver the data, nor that the packets will be delivered in the order they were sent, nor that only one copy of each packet will be delivered to the destination. UDP does provide a data-integrity check by adding a checksum to the data before transmission (the checksum is optional in IPv4). Transmission is simpler because no return acknowledgement is required or sent.
The Internet Control Message Protocol (ICMP) defines a small number of messages used for diagnostic and management purposes and depends on IP to move packets around the network. The most familiar tools built on ICMP are ping and traceroute (tracert on Windows).
The existing TCP multi-stage flow control mentioned earlier is where we want to direct our attention; TCP itself dates to the 1970s, and its congestion-control algorithms were bolted on in the late 1980s. In a simplified explanation, the existing TCP protocol breaks data files into small TCP segments of about 1,500 bytes each. The sending computer starts at a low rate (slow start), transmits a segment (as an IP packet), and waits for an acknowledgement from the receiving computer that the segment arrived and its checksums matched. When the acknowledgement is received, the next segment is sent at a slightly faster rate, and this process of sending at increasing speed continues until an acknowledgement fails to arrive (times out). On a time-out, the segment is resent at a lower rate (roughly half the previous rate), and the sender keeps backing off until an acknowledgement is received. Then the next segment is transmitted and the process begins again, each subsequent segment going out slightly faster until another time-out occurs. Because of intermittent Internet traffic, the data pipes are constantly fluctuating with congestion, and this wreaks havoc on sustained high-volume packet throughput. Consequently, TCP recovery is a long and constantly interrupted process, and the transmission speed never stays at network capacity levels, yielding very low efficiency.
Clearly, TCP is the area with the greatest potential for efficiency improvement, and there are three versions of TCP in development and testing: HighSpeed TCP, Fast TCP and Scalable TCP.
The HighSpeed TCP protocol is currently being developed and tested out of Lawrence Berkeley National Labs, Berkeley, CA, and it addresses the congestion-control mechanism by making adjustments to the congestion window. The description above of existing TCP implied that just one segment (IP packet) is sent and the sending computer waits for an acknowledgement from the receiving computer. In practice that would be very inefficient; what really happens is that several segments are sent before the first acknowledgement is received. The number of segments in transit is called the congestion window, and a new connection typically starts with a window of just a few segments (often three). When the acknowledgement of the first segment is received, another segment is put into the pipeline, maintaining a constant number of segments in flight. HighSpeed TCP adjusts this congestion window by putting a much larger volume of segments into it when network performance is high, and reducing the number less drastically when congestion occurs. Consequently, this approach keeps more segments in flight over time and thus achieves higher data throughput.
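HighSpeed TCP's window adjustment is spelled out in RFC 3649 as a pair of functions of the current window: the per-round-trip increase a(w) and the on-loss decrease factor b(w). The sketch below follows the RFC's formulas; constant names are mine, and a real implementation would precompute these values in a table rather than call math functions per loss event.

```python
# A sketch of the HighSpeed TCP response function from RFC 3649.
# Below a window of 38 segments it behaves exactly like standard TCP
# (increase 1 segment per RTT, halve on loss); above it, the increase
# a(w) grows with the window and the decrease b(w) shrinks toward 0.1.

import math

LOW_WINDOW, HIGH_WINDOW = 38, 83000     # RFC 3649 reference points
HIGH_DECREASE = 0.1                     # decrease factor at HIGH_WINDOW

def b(w):
    """Multiplicative decrease factor applied to the window on loss."""
    if w <= LOW_WINDOW:
        return 0.5
    frac = (math.log(w) - math.log(LOW_WINDOW)) / \
           (math.log(HIGH_WINDOW) - math.log(LOW_WINDOW))
    return (HIGH_DECREASE - 0.5) * frac + 0.5

def a(w):
    """Additive increase, in segments per round-trip time."""
    if w <= LOW_WINDOW:
        return 1.0
    p = 0.078 / w ** 1.2                # target loss rate at window w
    return w * w * p * 2 * b(w) / (2 - b(w))

for w in (38, 1000, 83000):
    print(f"window {w:>6}: +{a(w):6.1f} seg/RTT, cut by {b(w):.2f} on loss")
```

At a window of 38 segments the numbers match standard TCP, which is what makes the scheme backward compatible on ordinary links; only connections that reach very large windows behave differently.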
The Fast TCP (Fast Active queue management Scalable TCP) protocol is currently under development and testing at Caltech. This protocol adjusts the transmission data rate to achieve high data throughput. It constantly measures the time for a packet to arrive at the destination computer and how long it takes for an acknowledgement to come back. Based on this round-trip time (RTT), the Fast TCP protocol can predict the highest data rate the connection can support without dropping packets, and makes the appropriate adjustments in real time. This approach achieves a higher data transmission rate over time and thus higher data throughput. The referenced report indicates transmission efficiency of 95% with reliable data transport at speeds of 1 to 10 Gbps and the possibility of 100 Gbps in the future.
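The core idea, as published by the Caltech group, is that a rising RTT signals queueing before any packet is lost, so the window can be steered toward equilibrium instead of probing until a drop. The sketch below uses the published update rule (move the window toward baseRTT/RTT x window + alpha); the link model, parameter values and names are invented for illustration.

```python
# A toy sketch of Fast TCP's delay-based window update. At equilibrium
# the rule keeps about `alpha` packets queued in the network, so the
# window parks near capacity instead of oscillating around it.
# The link model below (queueing inflates RTT) is invented for illustration.

def fast_update(w, base_rtt, rtt, alpha=20, gamma=0.5):
    """Move the window partway toward base_rtt/rtt * w + alpha."""
    return min(2 * w, (1 - gamma) * w + gamma * (base_rtt / rtt * w + alpha))

base_rtt, capacity = 0.05, 100   # 50 ms propagation, 100 packets in flight
w = 10.0
for _ in range(50):
    queued = max(0.0, w - capacity)            # packets beyond capacity queue
    rtt = base_rtt * (1 + queued / capacity)   # queueing inflates measured RTT
    w = fast_update(w, base_rtt, rtt)
print(round(w))   # settles near capacity + alpha = 120 packets
```

Note how the window converges and stays put: because the feedback signal (delay) is continuous, there is no need to force a loss to find the capacity, which is where the reported 95% efficiency comes from.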
The Scalable TCP protocol is currently being tested at Cambridge, and it addresses the same transmission-rate mechanism as Fast TCP. Traditional TCP's recovery times after a loss are proportional to the sending rate and the round-trip time; Scalable TCP's recovery times are proportional only to the round-trip time, making the scheme scale to high-speed IP networks. Its recovery is multiplicative (exponential) rather than linear, giving it a slightly higher potential for fast data throughput than Fast TCP.
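The "proportional only to the round-trip time" claim can be made concrete. In Kelly's Scalable TCP proposal the window grows by 0.01 per acknowledgement (about 1% per round trip) and is cut by 1/8 on loss, so the number of round trips needed to regain the old window is the same whatever the window size. The sketch below compares that with standard TCP; the window sizes are illustrative.

```python
# A sketch contrasting post-loss recovery under standard TCP (halve,
# then +1 segment per RTT) and Scalable TCP (cut by 12.5%, then grow
# ~1% per RTT, per Kelly's proposal). Window sizes are illustrative.

def rtts_to_recover_standard(cwnd):
    """Standard TCP: halve the window, then add 1 segment per RTT."""
    target, w, rtts = cwnd, cwnd / 2, 0
    while w < target:
        w += 1
        rtts += 1
    return rtts

def rtts_to_recover_scalable(cwnd):
    """Scalable TCP: cut by 1/8, then grow ~1% per RTT
    (cwnd += 0.01 per ACK, with about cwnd ACKs arriving per RTT)."""
    target, w, rtts = cwnd, cwnd * 0.875, 0
    while w < target:
        w *= 1.01
        rtts += 1
    return rtts

for cwnd in (100, 10000):
    print(cwnd, rtts_to_recover_standard(cwnd), rtts_to_recover_scalable(cwnd))
```

Standard TCP needs 50 round trips to recover a 100-segment window but 5,000 for a 10,000-segment window, while Scalable TCP needs 14 round trips in both cases; that size-independence is what "scalable" means here.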
Each of these three TCP variants should be backward compatible and could coexist on the network with the existing TCP protocol. Fortunately, a new protocol need only be implemented on the sending computer. The new protocols have several Internet and industry committees to clear before one or more is adopted; reports indicate they may be available in a year or two. So as we take our baby steps into the future, we can look forward to a full bottle of water.
Listed below are links to key references used in this article; all links were active and free to the public on 7/5/03.
 Caltech computer scientists develop Fast protocol to speed up Internet
 Sun Product Documentation
 TCP/IP Frequently Asked Questions
 TCP Stacks on Production Links
 HighSpeed TCP
 Scalable TCP - Improving Performance in Highspeed Networks
Ron Fenley worked as an engineer/analyst and retired in 1999. Ron moved to the country and now pursues his interest in computers, basic science and technology. Ron has been a computer enthusiast for 20 years and has been a HAL-PC member for about half that time. Ron can be reached at firstname.lastname@example.org