The costs of (over-)provisioning capacity in shared links

Even within the limits of Twitter's 140 characters, people succeed in having intelligent conversations and discussions, as Dean Bubley (@disruptivedean) and Martin Geddes (@martingeddes) did recently. Part of the discussion focused on the cost of provisioning more capacity in shared links as a cure for the delays and losses introduced by statistical multiplexing and congestion in the shared link: like building bigger motorways with more lanes to eliminate traffic jams.
Martin Geddes is crusading for a new approach to managing traffic on the Internet: statistical multiplexing (and above all congestion) introduces delay and loss, which should be traded between applications and possibly users. This would allow you, first of all, to reduce the cost of unused capacity, and then to deliver exactly what is needed. Unused capacity in shared links would become too costly to keep throwing capacity at the problem. (See this post for more details and my doubts.)

Dean Bubley argues that the Internet delivers too much value to too many people to start down a risky new path with scant evidence supporting it. Don't fix something that isn't broken. He basically stated that if the cost of the "waste" of unused capacity is below 10 % (from the perspective of the Internet subscriber, as a percentage of the Internet subscription fee), one should not even bother to consider other options. Just add the capacity and live with it.

As a tinkerer by nature I love facts and numbers about potential outcomes. Ballpark numbers, rough estimates, anything that gives me a feel for whether an issue is worth fretting about. So when Dean set a number, it gave me a challenge: estimate whether the cost of provisioning unused capacity falls significantly below or above the 10 % target.

So what drives the costs?

Dean, Martin and I agree (I believe) that there is no issue with transnational Internet capacity. The cost of transit capacity keeps going down and is negligible in an ISP's budget. Furthermore, the application that demands the most capacity (and is not well behaved) is video, and for video the content delivery networks transport the content themselves, separately from the Internet, to delivery servers located at Internet Exchanges at a minimum, if not deeper in the ISP's network, bypassing the transit networks.
The costs under discussion are those between an Exchange (or other peering point of the ISP) and your local CO (central office), where your access line terminates. (In cable networks and PON networks the access line itself is shared, unlike in VDSL/FttC and home-run FttH. For the sake of simplicity I ignore the sharing in cable and PON: assume it is not the bottleneck.)

Capacity in this part of the network consists of fiber dug into the ground, transmission equipment per fiber, and routers to manage the traffic. When extra capacity is added to prevent the negative effects of delay and loss, one has to buy and install new equipment or, even worse, dig new fiber.

Digging new fiber is costly but adds loads of capacity. A single 40 mm duct (and why leave it at one when you are digging anyway?) carries thousands of fibers if you use high-density fiber cables. Once a fiber is there, Moore's law lets you add capacity at ever-diminishing cost per Gbps (10 Gbps, 40 Gbps, or even a lot more with DWDM).
The biggest outlay is digging and laying the fiber. Equipment and routers are relatively inexpensive, though the bill can add up once you really start to use all the fibers in a duct. The good thing, however, is that you can add them incrementally, if and when required.

Ballpark figures? Let's assume 40k Euro per kilometer for digging in fiber (or USD 80k per mile). For a 100 km stretch you spend 4 mio Euro. Let's assume lighting up a fiber for 10G, including routing, costs 1,000 Euro on average: 1,000 fibers is a million Euro, for a total of 5 mio Euro. All right, let's just take 50k Euro per km as the investment, for simplicity.
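For fellow tinkerers, a minimal sketch of that arithmetic, using only the ballpark numbers above (nothing here is measured data):

```python
# Back-of-envelope: investment per km for a 100 km backhaul stretch,
# using the ballpark numbers from the post.

DIG_COST_PER_KM = 40_000   # EUR/km for digging in fiber
LIT_FIBER_COST = 1_000     # EUR per fiber lit at 10G, incl. routing
STRETCH_KM = 100           # length of the example stretch
FIBERS_LIT = 1_000         # fibers lit in the duct

dig_total = DIG_COST_PER_KM * STRETCH_KM        # 4,000,000 EUR
lighting_total = LIT_FIBER_COST * FIBERS_LIT    # 1,000,000 EUR
total = dig_total + lighting_total              # 5,000,000 EUR

print(f"Total investment: {total / 1e6:.1f} mio EUR")
print(f"Per km: {total / STRETCH_KM:,.0f} EUR/km")  # 50,000 EUR/km
```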
The very, very worst case is having to dig again for all connections from central offices to exchanges. The nice thing about that assumption is that there is an easy reference for how many kilometers that is, including the redundant routes needed for resiliency.

Just take the length of the main roads (motorways and major roads) between cities in a country.

This relies on the observation that the vast majority of a population lives in cities. For instance: the top 300 agglomerations in the USA hold 80 % of the population, and the urban part of Australia (towns of more than 10,000 inhabitants) holds 76 % of the population. And cities are connected by roads in a redundant way (a road rarely dead-ends in a city).
So what happens if we assume we dig fiber for backhaul (and light up a sizable part of it) along the length of all these roads, and divide the cost by the number of households in a country (a proxy for the number of Internet connections)?
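A minimal sketch of that division. The road length and household count below are purely illustrative placeholders, not the figures behind the table that follows:

```python
# Worst-case one-off investment per household: dig and light fiber
# along every main road, then divide by the number of households.

COST_PER_KM = 50_000  # EUR/km, the simplified figure from above

def dig_cost_per_household(road_km: float, households: float) -> float:
    """One-off backhaul investment per household, worst case."""
    return road_km * COST_PER_KM / households

# Hypothetical example: 10,000 km of main roads, 7.5 million households.
print(f"{dig_cost_per_household(10_000, 7.5e6):.0f} EUR per household")  # ~67 EUR
```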

Below are the results for Germany, France, the UK and my own little Netherlands. (I have added rail to see whether that network is more or less extensive than the road network, which turns out to be the case for the UK.)

[Table: Households and roads — results for Germany, France, the UK and the Netherlands]

Yes, you have to add running costs, but remember this figure is way over the top.
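As a sanity check against Dean's 10 % threshold, here is a hedged sketch: it takes the illustrative per-household figure from above, assumes a 10-year depreciation period, a 50 % markup for running costs and a 30 Euro monthly subscription. Every input is an assumption for illustration, not data:

```python
# Sanity check against Dean's 10% threshold (all inputs are assumptions).

ONE_OFF_PER_HOUSEHOLD = 67.0       # EUR, from the hypothetical sketch above
DEPRECIATION_YEARS = 10            # assumed depreciation period
OPEX_FACTOR = 0.5                  # assumed running costs, fraction of capex
SUBSCRIPTION_PER_YEAR = 12 * 30.0  # assumed 30 EUR/month subscription fee

annual_cost = ONE_OFF_PER_HOUSEHOLD / DEPRECIATION_YEARS * (1 + OPEX_FACTOR)
share = annual_cost / SUBSCRIPTION_PER_YEAR
print(f"{annual_cost:.2f} EUR/year = {share:.1%} of the subscription fee")
# ~10 EUR/year, roughly 2.8% -- well below the 10% threshold.
```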

There is a lot that can be said about these results and the assumptions, as some comments in the past have remarked. An ISP cannot indiscriminately add capacity (it comes in chunks); crazy tax laws, big bullies and very dispersed subscribers can all have a serious influence on decisions and costs; managing DWDM can be a pain; etc.
But for me the main observation is that, very likely, the real technical cost of the "wasted" capacity will overall be so low that subscribers will not mind paying it (the real cost, that is, not some artificially inflated figure…).

Will extra capacity solve everything? Maybe not everything. Nobody said TCP/IP or the Internet is perfect. But evolution teaches us that what works is often good enough, and continuous tinkering may lead to surprising results…

 

About Herman

Herman Wagter writes on Dadamotive about the facts and figures behind issues that interest him. His work as an interim manager and consultant has involved him directly in the impact of hyperconnectivity and sustainability on society. As an independent agent and "mobile warrior" he has experienced the pros and cons of how organizations and projects can be structured, and what the effects on the final result can be. In his opinion we are entering an era of profound change, driven by these fundamental forces. Following the trends, discovering the fun and debunking the half-truths is a passion he likes to share with others.