Tuesday, September 8, 2009

Congestion Avoidance and Control

Summary

This paper describes the key algorithms used to control data transmission and limit congestion in the Internet. The following algorithms are described in detail:

1. Slow-start
2. Round-trip timing
3. Exponential back-off
4. Window sizing

When transmitting data, the sender should send no faster than the slowest link in the path can forward; otherwise queues build up and packets are lost to congestion. The rate of the slowest link can be inferred from the spacing (delay) between the returning ACKs, so each ACK can be used to trigger sending the next packet; the paper calls this "self-clocking." The problem is how to start this system: before any data has been sent there are no acknowledgments to clock it. The paper proposes the slow-start algorithm, which maintains a congestion window, initialized to one packet and increased by one packet for each ACK received. The result is exponential growth in the number of packets in flight, roughly doubling each round trip, until data is lost.
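As a rough sketch of the idea (hypothetical names, not the paper's code), the core of slow-start fits in a few lines of Python:

    # Minimal slow-start sketch. cwnd is measured in packets; each ACK
    # opens the window by one packet, so the window roughly doubles
    # once per round trip.
    class SlowStartSender:
        def __init__(self):
            self.cwnd = 1  # congestion window, in packets

        def on_ack(self):
            self.cwnd += 1  # exponential growth: one extra packet per ACK

        def on_timeout(self):
            self.cwnd = 1   # a loss restarts slow-start from one packet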

The next improvement concerns round-trip timing. An accurate estimate of the round-trip time is important because it determines how long the sender waits for an acknowledgment before assuming a packet has been lost and retransmitting it. The proposed estimator tracks not only the mean round-trip time but also its variation, which grows with load. This avoids "the network equivalent of pouring gasoline on a fire": spurious retransmissions that add traffic exactly when the network is already heavily loaded.
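The paper's estimator keeps exponentially weighted moving averages of both the round-trip time and its mean deviation, and sets the retransmit timeout from the two. A Python sketch (the gains of 1/8 and 1/4 and the factor of 4 are the values commonly used in TCP implementations; the class and method names are illustrative), with the exponential back-off of the timer on repeated retransmissions included:

    class RttEstimator:
        def __init__(self, first_sample):
            self.srtt = first_sample        # smoothed RTT estimate (A)
            self.rttvar = first_sample / 2  # mean deviation estimate (D)

        def update(self, measured):
            # A <- A + g*(M - A); D <- D + h*(|M - A| - D)
            err = measured - self.srtt
            self.srtt += err / 8
            self.rttvar += (abs(err) - self.rttvar) / 4

        def rto(self):
            # Timeout = mean plus a multiple of the deviation, so the
            # timer stretches as variance grows under load.
            return self.srtt + 4 * self.rttvar

        def backed_off_rto(self, retries):
            # Exponential back-off: double the timeout on each retry.
            return self.rto() * (2 ** retries)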

Finally, the paper describes how to avoid congestion once the connection is running. It observes that packet loss is a good indicator of congestion, since loss is usually caused by overflowing buffers along the path. When loss is detected, the endpoints should therefore decrease their sending rate multiplicatively: the congestion window is cut in half whenever a timeout occurs. In the absence of loss, the window grows additively, by roughly one packet per round trip, gently probing for spare capacity.
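A sketch of how the two phases fit together (the initial slow-start threshold here is illustrative, not from the paper):

    class CongestionWindow:
        def __init__(self):
            self.cwnd = 1.0       # congestion window, in packets
            self.ssthresh = 64.0  # slow-start threshold (illustrative value)

        def on_ack(self):
            if self.cwnd < self.ssthresh:
                self.cwnd += 1              # slow-start: exponential growth
            else:
                self.cwnd += 1 / self.cwnd  # additive increase: ~1 packet/RTT

        def on_timeout(self):
            self.ssthresh = self.cwnd / 2   # multiplicative decrease: halve target
            self.cwnd = 1.0                 # then restart from slow-start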

Thoughts

This paper explains the causes of network congestion very clearly and proposes simple, well-described techniques for dealing with it. Overall, I really enjoyed reading this paper.

I am concerned about the use of packet loss as a signal of congestion, especially in wireless networks. If a packet is lost to wireless "noise" rather than to an overflowing buffer, it is unclear that sending more slowly will have any benefit; it may even make sense to send duplicate data faster to improve the chances of it reaching the receiver.
