(This is a co-post of Herman and Benoit at Fiberevolution)
With the debate on net neutrality in full swing in the US, we’ve been hearing about Bandwidth Hogs again. ‘Bandwidth Hog’ is a sound bite that conveys a strong emotion: you can virtually see the fat pig chomping on the bandwidth, pushing back all the other animals in the barnyard with his fat pig shoulders, all the while keeping a shiny piggy eye out to check the farmer isn’t around…
As analogies go, this is quite a mean one, obscuring the technological reality and the design choices telcos have made. (For anybody who is not into the bits and bytes, you can find an overview over here @ Fiberevolution, including an offer you can’t refuse.)
What is the reality?
Let’s take a DSL network as an example.
When you purchase an Internet access line advertised at 8 Mbps download, you get a dedicated connection rated at that speed to a concentration point (switch house, telephone exchange). You cannot “hog” that connection because it is all yours; you are its only user. The telco combines all the dedicated connections in the concentration point and connects them via a shared high-speed link to the nearest Internet Exchange (or peering point).
To save money, they overbook that link, by a factor of 40, 50 or even 100. In practical terms: if everybody tried to use their Internet access fully at the same time, each would experience 1/40th (or less) of their maximum advertised bandwidth because of congestion in the shared link. The shared link is therefore a bottleneck, a chokepoint where congestion will rear its head.
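A back-of-the-envelope calculation makes the overbooking concrete. The line count and link capacity below are made-up illustrative numbers, not figures from any real telco:

```python
# Back-of-the-envelope overbooking arithmetic (illustrative numbers only).
advertised_mbps = 8          # speed each customer bought
users = 500                  # access lines aggregated at the concentration point
shared_link_mbps = 100       # capacity of the shared link to the peering point

overbooking = users * advertised_mbps / shared_link_mbps
print(f"overbooking ratio: {overbooking:.0f}x")          # 40x

# If everyone transmitted flat out at once, each would get:
worst_case = shared_link_mbps / users
print(f"worst-case per user: {worst_case:.2f} Mbps")     # 1/40th of 8 Mbps
```

In practice nobody ever sees the worst case, precisely because of the low odds of simultaneous full use discussed next.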
Fortunately, the chances that we all try to use our access lines fully at the same time are quite low. With the usage patterns of the past, experience taught telcos that they could get away with these overbooking ratios.
When users started to complain, the telco added some capacity at the chokepoint. The fine print tells you that you get a “best effort” connection, which is the legal translation of the reality described above.
It did and does make sense to overbook the shared link, because it reduces the cost of an Internet connection. But the elephant in the room is that all telcos compete on advertised maximum speed, and hardly anybody dares to be clear about this bottleneck.
Time, however, is running out: usage patterns have changed (more video), expectations have been raised, and the bottleneck is becoming noticeable. Some telcos increase their capacity to adapt to their users; others seem to see this as a good excuse to head off Net Neutrality regulation and introduce traffic rationing. After all, with artificial scarcity you can make more money…
Going back to the “hogging” theme, does a heavy user hog the chokepoint?
According to the experts we consulted, the following is a good approximation of the complex reality, first of all for a DSL network.
Let’s assume we have a 1 Gbps shared middle-mile link serving a number of users, each of whom has bought a 10 Mbps Internet access line. In the beginning only one user is active. He initiates a download of a big file from a server with no speed limitations (which makes it easier to explain). The server finds no limitation in the link, sends data at 1 Gbps, and quickly saturates the receive buffer of that user’s access line equipment (a DSLAM, for example). The effect is that packets are dropped, which is a signal to the sending server to reduce its rate to the level where only now and then a packet is dropped (probing for the maximum speed the access line can handle).
This is the basic congestion mechanism of TCP.
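This probing behaviour can be sketched as additive-increase/multiplicative-decrease: the sender speeds up steadily until a drop, then halves its rate. The toy loop below is a deliberate simplification (real TCP has slow start, timeouts, and many variants), with made-up numbers:

```python
# Toy AIMD (additive increase, multiplicative decrease) sketch.
# All numbers are illustrative, not taken from any real TCP stack.
link_capacity = 10.0   # Mbps the access line can actually carry
rate = 1.0             # sender's current rate in Mbps
history = []

for tick in range(50):
    if rate > link_capacity:   # the buffer overflows and a packet is dropped
        rate /= 2              # multiplicative decrease on loss
    else:
        rate += 1.0            # additive increase while all is well
    history.append(rate)

# The rate oscillates just around the line's capacity: a sawtooth that
# keeps "testing" the maximum speed possible on the access line.
print(f"average rate: {sum(history[10:]) / len(history[10:]):.1f} Mbps")
```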
Other users start doing the same thing, and the same happens, until the shared link becomes fully utilized and then some. The congestion floods the shared link’s receive buffer, which starts dropping packets as well.
The result is that all the sending servers reduce their speed simultaneously, until the number of dropped packets is acceptable. The dropping of packets is a statistical phenomenon, with every user having the same chance of being hit.
The effect is a gradual reduction of speed. [Update: but also an increase in latency, which hurts perceived performance; see http://www.dadamotive.com/2009/12/congestion-neutrality-2.html]
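A toy multi-sender sketch (made-up numbers; real TCP dynamics are far more complex) shows how drops that hit every flow with equal probability pull very unequal flows toward similar long-run shares:

```python
import random

# Several senders share one link; when it overflows, one flow is picked
# at random to lose a packet -- the "statistical phenomenon" where every
# user has the same chance. Illustrative numbers only.
random.seed(7)
link_mbps = 100.0
rates = [80.0, 15.0, 5.0]           # three flows start out very unequal
totals = [0.0, 0.0, 0.0]

for tick in range(5000):
    if sum(rates) > link_mbps:      # the shared buffer overflows
        victim = random.randrange(len(rates))
        rates[victim] /= 2          # multiplicative decrease for the victim
    for i in range(len(rates)):
        rates[i] += 0.1             # additive increase for everyone
        totals[i] += rates[i]

averages = [t / 5000 for t in totals]
print([round(a, 1) for a in averages])   # long-run shares end up similar
```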
The straightforward implementation of TCP will lead to a more or less equal distribution of bit rates per TCP/IP session. How many sessions do you create as a user? Very few with email; possibly (but not necessarily) a lot with BitTorrent or ordinary web surfing.
That said, a more sophisticated implementation might handle this differently and reduce the bitrate for all sessions coming from one IP address.
It is all a design choice.
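The difference between the two design choices is easy to put in numbers. The session counts below are made up for illustration:

```python
# Dividing a congested 100 Mbps chokepoint: per-session vs. per-IP fairness.
# All numbers are made-up illustrative values.
link_mbps = 100.0
# One heavy user running 40 parallel sessions, nine users with one each.
sessions_per_user = {"heavy_user": 40, **{f"web_user_{i}": 1 for i in range(9)}}

# Policy A: an equal share per TCP session (naive per-session fairness).
total_sessions = sum(sessions_per_user.values())            # 49 sessions
share_per_session = {u: n * link_mbps / total_sessions
                     for u, n in sessions_per_user.items()}

# Policy B: an equal share per IP address, regardless of session count.
share_per_ip = {u: link_mbps / len(sessions_per_user)
                for u in sessions_per_user}

print(f"per-session: heavy user gets {share_per_session['heavy_user']:.1f} Mbps")
print(f"per-IP:      heavy user gets {share_per_ip['heavy_user']:.1f} Mbps")
```

Under per-session fairness the many-session user takes most of the chokepoint; under per-IP fairness everybody gets the same slice. Same chokepoint, very different outcomes.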
For a cable operator, the bottleneck sits inside the hybrid fiber-coax access network itself. The access channel is shared with others, and the cable modems reduce your maximum speed when it gets busy. The effect is that the users who have bought the highest access speeds feel the reduction first, because the upper limit is reduced to the same level for everybody (if you have bought a low-speed subscription, you will notice it last). So the congestion leads to the highest-paying users complaining first. Harsh maybe, but very straightforward. If you assume that the “bandwidth hogs” have the fastest connections, they will be the first to get the brakes put on when congestion occurs.
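This can be pictured as a single cap that falls for everyone as the channel gets busier, so the fastest tiers hit it first. The sketch below is a simplification of real cable scheduling, with made-up tiers and caps:

```python
# Sketch of a shared cable channel under congestion (not real DOCSIS
# scheduling; tiers and caps are made-up illustrative numbers).
subscriptions = {"premium": 50, "standard": 20, "budget": 5}  # bought Mbps

def effective_speed(bought_mbps, cap_mbps):
    """During congestion, every modem is limited to the same common cap."""
    return min(bought_mbps, cap_mbps)

for cap in (60, 30, 10):  # the common cap falls as the channel gets busier
    slowed = [tier for tier, mbps in subscriptions.items()
              if effective_speed(mbps, cap) < mbps]
    print(f"cap {cap:>2} Mbps -> slowed down: {slowed or 'nobody'}")
```

As the cap drops, the premium tier is throttled first and the budget tier last, which matches the complaint pattern described above.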
So hogging does not exist as claimed; there are only chokepoints, and design choices that companies hide from sight.
The thing that bothers us is the lack of transparency on these crucial design issues. (Benoit is offering free consultancy on datasets if telcos will generate them, so let’s see if someone takes him up on it.)
Instead of Network Neutrality rules we might want to start with Congestion Neutrality rules.