The envelope pushes back

We are all aware of the (theoretical) graphs of the lifecycle of a technology, the S-curves: from new to mature to end-of-life. Very neat, these graphs, but the map is not the territory: in reality it is not easy to tell ahead of time whether a technology is reaching its limits.

There is an indicator (hat tip Hendrik Rood) that is a good telltale:

  • the more a system needs to be tuned as a whole to achieve a certain performance level
    • the tuning of every component becomes interdependent with the settings of more and more other components, which increases complexity exponentially
  • and the more that performance is only achieved within a smaller and smaller operational envelope
    • you need to adjust and retune for different circumstances
  • the closer you are to the limits of the technology
  • and the more expensive it gets

Good examples are F1 cars and fighter jets, where peak performance is everything. F1 cars are completely retuned for every new circuit, down to the gear ratios and the engine management. The know-how needed to tune for maximum performance is rare and sought after. The expense is staggering.

Recent publications on the next promise to squeeze more broadband life out of ancient copper lines show the telltale: VDSL2 vectoring goes beyond VDSL2 in performance, but at the cost of higher complexity and a larger number of dependencies, in a smaller envelope. The end-of-life telltale…

Costas Troulos and Yves Blondeel were gracious enough to enlighten me on the details.

Vectored VDSL2 requires that all lines in the cable are vectored (signals injected and monitored for interference) to achieve the intended gains. To support vectored VDSL2 at the node level across different providers, a way to manage all lines is needed. As there is currently no commercial solution for compatibility between the vectoring equipment of different vendors (and no incentive to create one, as it would increase competition between vendors), there can be only one type/brand of DSLAM equipment for a complete node.
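To get a feel for why every line in the binder has to participate, here is a minimal numerical sketch of the idea behind vectoring: downstream crosstalk cancellation by linear precoding. The binder size, channel matrix and zero-forcing precoder are illustrative assumptions, not the actual G.993.5 algorithms.

```python
import numpy as np

# Toy model of a 4-pair binder: diagonal terms are the direct channels,
# off-diagonal terms are far-end crosstalk (FEXT) between the pairs.
rng = np.random.default_rng(0)
n_lines = 4
H = np.eye(n_lines) + 0.1 * rng.standard_normal((n_lines, n_lines))

x = rng.standard_normal(n_lines)           # symbols intended for each line

# Without vectoring: each receiver sees its own signal plus crosstalk.
received_plain = H @ x

# With vectoring: the DSLAM precodes the joint transmit vector so that the
# crosstalk cancels at the receivers (zero-forcing precoder H^-1).
received_vectored = H @ (np.linalg.inv(H) @ x)

print("crosstalk error without vectoring:", np.linalg.norm(received_plain - x))
print("crosstalk error with vectoring:   ", np.linalg.norm(received_vectored - x))

# The precoder needs the *full* matrix H, i.e. knowledge of and control over
# every line in the binder. A line driven by uncoordinated equipment is a
# column the precoder cannot compensate for -- hence "one vendor per node".
```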

The consequence is that in order to have multiple providers on a node, they would have to:

- choose the same brand of equipment

- cooperate and coordinate the operation of their respective equipment

Possible in theory, highly unlikely in practice, as this allows incumbents to erect yet another barrier for entrants.

Vectoring in combination with ADSL in the same node? No real problem.

The ADSL signal is not hampered by VDSL2 vectoring techniques, so the two can be combined without coordination. (Combining VDSL2 vectoring with ADSL is, however, a strange proposition in the market, at best a temporary situation.)

Telekom Deutschland’s formal application to BNetzA for modifications to copper unbundling regulation (focused on the sub-loop, but with a possible extension to the full loop) centres on interference between VDSL2 (ITU-T G.993.2) and vectored VDSL2 (ITU-T G.993.5), with some unspecified exceptions about ‘other interference’: http://www.t-regs.com/index.php/tweets/281789261211107328/

A very, very clear sign that the envelope is pushing back hard…


Analysis for Ziggo webcare

Dear webcare team,

A technically tricky problem with the internet access at my elderly parents-in-law, which took quite a bit of analysis just to establish what is definitely NOT the cause, does not compress into a simple tweet. A simple contact form does not work either, and a phone call to a helpdesk is laughable when you need to show screenshots to a network engineer. Because that is what it will take.

Fortunately I have a blog, and I can squeeze a link into a tweet. So this is your chance to put the right people onto it. The address in question is not given here, because it does not need to float around in public. It is in the contact form I filled in with the same question: how can I hand over the technical information?

And here is the analysis.

They have been complaining for a long time about internet access that keeps dropping out. By process of elimination we have been able to establish the following:

- the wifi access for the iPads can of course suffer from other access points in the building if they pick the same channels. Scanning the visible access points, moving to another channel, and adding a second access point in the room has ruled that out as the cause.

- a wired UTP line to a Windows computer shows the same symptom

- it does not matter whether you use a Windows laptop, Android or iOS devices, the problem remains (and if you connect through a 3G MiFi access point, everything works fine)

- a power cycle of the Ubee cable modem sometimes brings brief relief, but then the internet connection drops again

- the telephone (VoIP via the cable modem) does work, however, so there is an upstream and downstream RF data connection between CMTS and modem. The DOCSIS channels show an SNR of 38 dB, which should be fine, right?

- via wifi or the wired lines you can reach the management interface of the Ubee modem without any problem. A WAN IP address has been assigned: 212.127.128.244 (MAC address 90:6e:bb:e5:95:c9), and DNS servers are known: 212.54.40.25 as the first one.

- but… a ping to www.google.nl fails (all packets), so no working DNS.

- so a direct ping to the DNS server 212.54.40.25… fails as well.

- a ping from outside to 212.127.128.244… fails

- a traceroute from outside to 212.127.128.244… stalls at 213.51.149.124

There is only one conclusion I can draw: the problem sits at a higher network level, outside the home. All possible causes in the home network have been eliminated.

Food for a network specialist…
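For the record, the local checks above are easy to repeat. Below is a minimal sketch of them (the addresses are the ones mentioned above; the ping flags assume a Linux-style ping, adjust as needed):

```python
import socket
import subprocess

WAN_IP = "212.127.128.244"      # WAN address assigned to the Ubee modem
DNS_SERVER = "212.54.40.25"     # first DNS server reported by the modem
TEST_HOST = "www.google.nl"

def ping(host: str) -> bool:
    """Return True if a single ping gets an answer (Linux-style flags)."""
    result = subprocess.run(["ping", "-c", "1", "-W", "2", host],
                            capture_output=True)
    return result.returncode == 0

def resolves(host: str) -> bool:
    """Return True if DNS resolution works for the host."""
    try:
        socket.getaddrinfo(host, 80)
        return True
    except socket.gaierror:
        return False

print("DNS resolution of", TEST_HOST, ":", resolves(TEST_HOST))
print("Ping DNS server ", DNS_SERVER, ":", ping(DNS_SERVER))
print("Ping WAN address", WAN_IP, ":", ping(WAN_IP))
```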

[Update: 12 March] The WAN IP address can now be pinged (38 ms), but a traceroute is still not possible. Still no internet connection from inside the house, though.

[1 hour later] The WAN address is unreachable again.

[Called the helpdesk] Nice guy, but the suggestion that the UTP cable is the culprit is of course a script from the helpdesk system. It makes no sense, given that you can browse the management interface effortlessly both over wifi and over the cable, and that the WAN IP address cannot be reached. Some pushing leads to a new phone number for the experts, which does cost 75 cents per minute.

[Called the expert helpdesk] These people know more. Replacing the modem seems the best option to them; just call the regular helpdesk again for an appointment.

[Called the helpdesk] After working through the four choice menus once more… "well, that is not possible if you are not at the modem, because we have to check the cables again." Grrr.

[3 days later] Went over to my parents-in-law, called the helpdesk, made an appointment with the technician for the replacement the next day (that much went well).

[Next day] The technician arrives and calls me. He sees the same phenomena and is going to replace the modem. He calls back: despite the replacement no solution, same problems. Puzzling. He keeps looking.

[2 hours later] The technician calls: he seems to have found the problem. The Devolo Home Plug used to bridge the modem to an access point over the mains wiring is pushing the internet connection off the air. We are both surprised that this is possible, but there is a clear, direct correlation. Let's leave it like this and see.

[A week later] Everything still works, so it looks like the problem is solved now.


Webcare?

My parents-in-law are well over eighty. Doing their own banking and tax returns on a (Windows) computer is already quite something for them. The first iPad arrived a year ago and, after some trial and error, became a success. From Wordfeud to teletext and the weather, which birds are singing outside, patience and bridge. Such a success that a second iPad was bought, so they don't have to wait for each other.

Naturally, the technically minded sons-in-law and grandchildren are the go-to helpers whenever something needs to be installed or wired up. A fixed UTP cable to the Windows computer? Extra memory? Updates? Setting up the wifi so that there is enough coverage and no interference with the neighbours? Adding an extra access point so the iPad can also be used at their favourite spot in summer? All small jobs if you know how, and an insurmountable barrier for them.

Helpdesks are a complete disaster for them. For cost reasons they are set up so that you have to fight your way through all kinds of barriers with your question, to prevent you from getting an (expensive) human being on the phone. And then come the standard brush-off answers, roughly of the "is it plugged in?" variety.

Lately they have been complaining about frequently having no internet, and you notice that immediately with Wordfeud games. At first we thought it was the wifi, but after some searching and eliminating, the problem turns out to lie outside the building. And it is not a simple one…

Now it gets interesting: how does such a helpdesk, in this case Ziggo's, deal with that? This falls outside the standard script, and they are dealing with an expert (that would be me :-)).

I will update this post with the experiences; a second post describes the problem itself. That is necessary because there is simply no way to send in a proper analysis. After all, what are the options?

- Fill in the contact form

- Call the helpdesk (queues and limited expertise), but not reachable on Sundays.

- Webcare on Twitter and Facebook (active during the day on Sundays)

Step 1. Filled in the contact form with the question how to send in a thorough analysis of a tricky problem. Curious what the answer will be.

Step 2. Asked webcare the same question via Twitter: tricky internet access problem analysed, with screenshots etc.; how can I hand over that information? Answer (and my jaw drops): "What kind of problem is it?" Helloooo! Awake? In 140 characters? Did you even read the tweet?

Let's try something: what happens if I describe the problem on this blog and then send a link back to webcare?

To be continued…

[Follow-up]

Well, webcare simply does not work: no response whatsoever.

The helpdesk has scripts that are mainly aimed at placing the problem on your side, which I can understand to some extent. It also turns out they have no internet access (browser) at the helpdesk, because they could not read the analysis on my blog… (!). Too distracting, no doubt.

You have to push hard to get through to the experts, and from that moment on things started moving. The technician who came on site was competent, did not walk away at the first hurdle, analysed well and communicated well. Excellent.

And what was the problem in the end? A bizarre effect of a Home Plug on a cable modem connection. I had never heard of that before. It does show how sensitive high-frequency RF communication really is.

 


The costs of (over-)provisioning capacity in shared links

Even within the limits of Twitter's 140 characters, people succeed in having intelligent conversations and discussions. Like Dean Bubley (@disruptivedean) and Martin Geddes (@martingeddes) recently. Part of the discussion focused on the cost of provisioning more capacity in shared links as a cure for the delays and losses introduced by statistical multiplexing and congestion in the shared link. Like building bigger motorways with more lanes to eliminate traffic jams.
Martin Geddes is crusading for a new approach to managing traffic on the Internet: statistical multiplexing (and above all congestion) introduces delay and loss, which should be traded between applications and possibly users. That would allow you, first of all, to reduce the cost of unused capacity, and to deliver exactly what is needed. The unused capacity in shared links would become too costly to keep throwing capacity at the problem. (See this post for more details and my doubts.)

Dean Bubley argues that the Internet delivers too much value to too many people to start going down a risky new path with scant evidence supporting it. Don't fix something that isn't broken. He basically stated that if the cost of the "waste" of unused capacity is below 10% (from the perspective of the Internet subscriber, as a percentage of the Internet subscription fee), one should not even bother to consider other options. Just add the capacity and live with it.

As a tinkerer by nature I love facts and numbers about potential outcomes. Ballpark numbers, rough estimates, anything that gives me a feel for whether an issue is worth fretting about. So when Dean set a number, it gave me a challenge: try to estimate whether the cost of provisioning unused capacity will be significantly below or above the 10% target.

So what drives the costs?

Dean, Martin and I agree (I believe) that there is no issue with transnational Internet capacity. The costs of transit capacity keep going down and are negligible in an ISP's budget. Furthermore, the major capacity-demanding (and not well-behaved) application is video, and for video the content delivery networks transport the content themselves, separately from the Internet, to delivery servers located at minimum at Internet Exchanges, if not deeper inside the ISP's network, bypassing the transit networks.
The costs under discussion are those between an Exchange (or other peering point of the ISP) and your local CO (central office), where your access line terminates. (In cable networks and PON networks the access line itself is shared; not so in VDSL/FttC and home-run FttH. For the sake of simplicity I ignore the sharing in cable and PON: assume it is not the bottleneck.)

Capacity in this part of the network consists of fiber dug into the ground, transmission equipment per fiber, and routers to manage the traffic. When extra capacity is added to prevent the negative delay and loss effects, one has to buy and install new equipment, or, even worse, dig in new fiber.

Digging in new fiber is costly but adds loads of capacity. A single 40 mm duct (and why leave it at one when you are digging anyway?) carries thousands of fibers if you use high-density fiber cables. Once a fiber is there, Moore's law lets you add capacity at ever diminishing cost per Gbps (10 Gbps, 40 Gbps, or a lot more with DWDM).
The biggest outlay is digging and laying the fiber. Equipment and routers are relatively inexpensive, but the bill can add up once you really start to use all the fibers in a duct. The good thing, however, is that you can add them incrementally, if and when required.

Ballpark figures? Let's assume 40k Euro per kilometer for digging in fiber (or USD 80k per mile). For a 100 km stretch you spend 4 million Euro. Let's assume lighting up a fiber at 10G, including routing, costs 1,000 Euro on average: 1,000 fibers is a million Euro, 5 million Euro in total. All right, let's just take 50k Euro per km as the investment, for simplicity.
The very, very worst case is that you have to dig again for all connections from central offices to exchanges. The nice thing about that assumption is that there is an easy reference for how many kilometers that is, including redundant routes for resiliency.

Just take the length of the main roads (motorways and major roads) between cities in a country.

This relies on the observation that the vast majority of a population lives in cities. For instance: the top 300 agglomerations in the USA hold 80% of the population, and the urban part (>10,000 inhabitants) of Australia holds 76% of the population. And cities are connected by roads, in a redundant way (a road rarely dead-ends in a city).
So what happens if we assume we dig in fiber for backhaul (and light up a sizable part of it) along the length of all these roads, and divide the cost by the number of households in the country (a proxy for the number of Internet connections)?
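The arithmetic itself is trivial. Here is a sketch of it with placeholder inputs (the road length and household count are round illustrative numbers, not the figures behind the table that follows):

```python
# Ballpark: one-off investment per household if you dig (and light) fiber
# along all main roads between cities. 50k Euro/km is the rounded figure above.
COST_PER_KM = 50_000  # Euro: digging plus lighting a sizable share of fibers

# Illustrative placeholders -- substitute the real road-network length and
# household count per country to reproduce the table below.
main_road_km = 8_000
households = 7_500_000

total = main_road_km * COST_PER_KM
print(f"total: {total / 1e6:.0f} million Euro")
print(f"per household: {total / households:.0f} Euro (one-off, worst case)")
```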

Below are the results for Germany, France, the UK and my own little Netherlands. (I have added rail to see whether that network is more or less extensive than the roads, which is the case for the UK.)

[Table: Households and roads]

 

Yes, you have to add running costs, but remember that this figure is already way over the top.

There is a lot that can be said about these results and the assumptions, as some comments in the past have remarked. An ISP cannot add capacity indiscriminately (it comes in chunks); crazy tax laws, big bullies and very dispersed subscribers can all have a serious influence on decisions and costs; managing DWDM can be a pain, etc.
But for me the main observation is that, very likely, the real technical cost of the "wasted" capacity will overall be so low that subscribers will not mind paying it (the real cost, that is, not some artificially inflated figure…).

Will extra capacity solve everything? Maybe not everything. Nobody said TCP/IP or the Internet is perfect. But evolution teaches us that what works is often good enough, and that continuous tinkering may lead to surprising results…

 


“Future of Broadband” workshop

Last Monday Martin Geddes and PNSOL organized a workshop (Future of Broadband Flyer) on their vision of the future direction of "multiplexed packet switched networks" (aka broadband and/or the Internet). Contention and discussion guaranteed, as Martin, Neil and colleagues state that we are on a track that leads to doom: rather like the first climate change prophets in a room full of petrolheads :-).

The lively discussion sharpened my thoughts and views: I do not agree with everything that was claimed, yet the issue at hand is worth investigating.

Statistical multiplexing of information packets is a fascinating subject: the theory is complex, and its practice as embodied in the Internet is a revolution in society as big as any in history. As Dean Bubley said, the value generated for society by the Internet in only a few decades is hard to overestimate; our children cannot imagine a world without it, so we should cherish what we have and be very careful in applying "improvements". This warrants both an open mind towards any proposal for improvement and a healthy respect for what has been achieved by the many great minds who kept on researching and tinkering to get us where we are. After all, the Porsche 911 sounds like a bad design with lousy handling on paper, but look what 40 years of improvements got us…

The core insight that caught my attention (long before the workshop) is that statistical multiplexing of information packets has made a global information network like the Internet possible and affordable; yet the price to pay is that "noise", imperfection, is added as the combined load rises. And once "noise" (delay, jitter, loss of packets) has been added, the degradation cannot be reversed: it is a one-way street. In many cases the noise is inconsequential; sometimes it is not.

The best analogy (damn analogies… yet we cannot communicate without them) is a highway with cars. As traffic density increases, at first nothing significant happens, but after a certain level of density is reached the average speed drops (delay) and the variation in arrival times increases (jitter). The chance of an accident rises (loss). Once the traffic density reaches "the cliff" (the maximum level), any minor perturbation causes a collapse of the flow, reducing the throughput to almost nothing (aka a traffic jam). A well-known effect on highways.
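That non-linear onset of delay is easy to reproduce with the simplest textbook queue. Below is a minimal sketch, using an M/M/1 queue as a crude stand-in for a shared link (an assumption chosen for illustration, not a claim about real Internet traffic):

```python
# Mean time in the system for an M/M/1 queue: W = 1 / (mu - lambda).
# Delay explodes non-linearly as the utilisation rho approaches 1.
service_rate = 1.0  # packets per unit time the link can carry (mu)

for rho in (0.1, 0.5, 0.8, 0.9, 0.95, 0.99):
    arrival_rate = rho * service_rate
    mean_delay = 1.0 / (service_rate - arrival_rate)
    print(f"utilisation {rho:4.2f} -> mean delay {mean_delay:6.1f} service times")
```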

The analogy fails in many respects, if only because on a multiplexed packet-switched network the offered load can vary instantaneously and very fast, and loss of packets is acceptable, even used on purpose as a signaling mechanism. On the Internet we have a flow control protocol, TCP, that is designed to reduce the sending speed when somewhere along the path the flow reaches "the cliff". When TCP senses that packets do not arrive at the destination (loss as a signal), it backs off, only to try again later to see if the speed can be raised.

The typical reaction time of this control loop depends on the round-trip time of the packets, sometimes elongated by a network design flaw called "buffer bloat". Any traffic phenomenon, such as a change in offered load, that is faster than TCP's control loop can react to will not be compensated for; it might even have adverse effects, as multiple effects, including TCP's control loop, interact with each other.
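The control loop itself is simple to caricature. Here is a toy sketch of TCP-style additive-increase/multiplicative-decrease; the capacity and window numbers are made up, and the point is only that the sender reacts once per round-trip time, so anything faster than an RTT slips past it:

```python
# Toy AIMD loop: congestion only shows up as loss, one round-trip time later.
link_capacity = 10.0    # packets per RTT the bottleneck can really carry
cwnd = 1.0              # congestion window, in packets per RTT

for rtt in range(1, 26):
    lost = cwnd > link_capacity       # loss is the only congestion signal...
    if lost:
        cwnd = cwnd / 2.0             # ...answered by a multiplicative decrease
    else:
        cwnd = cwnd + 1.0             # otherwise probe upwards, additively
    print(f"RTT {rtt:2d}: cwnd = {cwnd:5.1f} packets{' (loss)' if lost else ''}")
```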

The claim that Martin and colleagues make is that as access speeds increase (FttH, HFC networks) the volatility of the variations in a multiplexed link starts to outpace the TCP control loop, leading to more and more transient "traffic jams" and even collapse. Which could be true: they showed some real-life measurements of the variation in packet delay that indicated there might be a problem. (Much more data is needed, however.)

They use the graph below to make their point. The amount of time (delay) it takes to send a packet of information from sender to receiver depends on:

- distance (the speed of EM waves is finite)

- the number of routers, which convert light to electrical signals and back to light

- serialization delay (you have to wait until the last bit has arrived)

- transient delays (contention in buffers, loss and resending, etc.), also called non-stationarity

Without the transients TCP can do a great job.

 

So far so good: I would like to see many more measurements and analyses of data to determine whether these transients are a) a new phenomenon, b) increasing in number and size, c) the cause of big problems, and d) caused by what we think is the source. Worth the effort.

Assuming the transients do prove to be a serious problem, the question arises what the remedy should be.

(PNSOL proposes (this was not part of the workshop) that the network operator intervenes at the ingress point of a section of the network. The intervention is based on the value of a certain type of packet stream and on the sensitivity of that stream to loss and delay/jitter. For instance: VoIP is sensitive to delay and jitter, not so much to loss; mail is quite insensitive to delay and jitter; so you prioritize VoIP and delay mail. The intervention makes sure that at no point downstream does contention arise, so all loss and delay are introduced at the ingress point. I guess this can work as advertised, but…)
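To make the prioritization idea concrete, here is a generic strict-priority illustration (my own sketch, not PNSOL's actual mechanism): delay-sensitive packets are served first at the ingress point, so the waiting is pushed onto the traffic that can afford it.

```python
import heapq

# Lower priority number = more delay/jitter sensitive = served first.
PRIORITY = {"voip": 0, "web": 1, "mail": 2}

queue = []
arrivals = ["mail", "voip", "web", "mail", "voip", "web", "voip"]
for order, kind in enumerate(arrivals):
    heapq.heappush(queue, (PRIORITY[kind], order, kind))

print("service order at the ingress point:")
while queue:
    _, order, kind = heapq.heappop(queue)
    print(f"  packet {order} ({kind})")   # VoIP leaves first, mail waits
```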

The organizers of the workshop went on to make statements that I question:

- because of the transient non-stationarity effects we need a new flow control paradigm to utilize the resources (capacity) much better; adding bandwidth is not a solution

- operators have an unsustainable (or rather very risky) business model if they do not apply the new paradigm, because they will either be taken by surprise when transients lead to collapse, or go broke adding bandwidth

- networks are to become trading places for "noise/imperfections"

All that I have learned over the years is that bandwidth is cheap and that running a network below maximum utilization will keep the transients low. (Again, if the fast transients are indeed a problem, we need extra measures to remedy that, though not necessarily in the way PNSOL envisions.)

The business models of fixed-line operators are not very dependent on the cost of the amount of bandwidth offered (transit, backhaul or access), provided the physical infrastructure is good enough (aka fiber). Yes, there is a problem if you run over copper or underinvest. The operators balk at the one-time investments needed to move to fiber, as no CEO wants to tell shareholders that the rich dividends will be absent for a decade or so. Wireless is a totally different situation: networks taken by surprise by the demand and the type of demand (signaling load), spectrum shortage claimed as a competitive strategy to keep out contenders, and a shift of revenue from voice and text to lower broadband revenues while investments are required, all create a fuzzy picture of what reality is.

Aiming for 100% utilization so you can delay investments, at the price of the operator deciding what is valuable and what is not, creates a big moral hazard and potentially destroys future innovation. You immediately create an incentive NOT to invest in capacity, and to create artificial scarcity which you as an operator can monetize. The operator gets to decide what is valuable and what is not. A bad deal for society.

As I have argued before, the interpretation of transmitted (or even not-transmitted) data already depends on the particular sender and receiver, and the value of that interpretation is even more specific. So no operator should interfere. Martin proposes that the network becomes a trading place for end-users, trading loss and delay options which the operator merely executes. Even if that were possible (information asymmetry, no option to leave the market, and it being very hard for normal people to make informed decisions all the time), it is a complex and costly solution for a problem created by scarcity, by a lack of investment in infrastructure.

Having said that, our regulators and politicians are at a loss as to how to get the investments in new infrastructure going. I have yet to see regulation that really incentivizes investment. The Network Neutrality debate is about the same issues as discussed here. There is a complex, emergent relationship between network design, network operations (capacity and management), revenue for the network operator and the value experienced by users, whether we like it or not. And it exists today.

Workshops like these help to develop our conceptual framework on how to deal with these issues.

 


Thrift

A recent publication by De Nederlandsche Bank (DNB) on the development of household assets in the Netherlands gives a picture of how much is being saved, most visibly at the pension funds. But the wealth held in deposits and securities is also growing again. At the same time the national debt is rising, as we all know. Statistics Netherlands (CBS) also publishes statistics on household wealth, which cover owner-occupied homes and the mortgage debt standing against them. Interesting to take a look at how our collective wealth is doing.

A round of fact-checking shows that two figures circulate for the value of owner-occupied housing: about 1,400 billion and about 1,100 billion. The latter comes from the CBS; the former cannot be traced to any statistics. The explanation may be that some people include the value of the 2.4 million housing-corporation homes and so arrive at the higher figure. That is not clean accounting, because you would then also have to add the (mortgage) debts of the corporations to the outstanding mortgages on the other side. So let's stick with what the CBS and DNB use, keeping in mind that the corporations hold an extra reserve.

The second important factor is that neither the CBS nor DNB knows how much is stashed away at insurers and banks in capital insurance policies tied to the home (savings and life-insurance mortgages). Some mention a figure of 300 billion, but no statistics can be found for it; only the homeowners' association Vereniging Eigen Huis mentions (as estimated by professors Conijn and Schilder) an amount between 60 and 95 billion Euro. For the graphs that amount has also been ignored: another reserve.

Combining the statistics produces a couple of interesting graphs.

First of all the assets: pensions, owner-occupied homes and deposits/securities. (The value of owner-occupied housing is only available per year, the rest per quarter, which explains the jagged steps.)

It remains spectacular how the pension assets keep on rising despite the sharp dip. If despite that rise we still have a stomach ache about the affordability of pensions, then the pension boards must have been sound asleep in the preceding years… The owner-occupied home is, despite the decline and inflation (say a 10% fall in purchasing power over 5 years), still worth more than in 2006.

 

And we merrily keep on saving into bank accounts.

The second surprise is that deposits and securities together are larger than the national debt. And in recent years they have been growing slightly faster than the national debt increases!


Housing wealth minus mortgages shows a different trend. The "surplus value" of the homes is back at the 2006 level, but you still have to factor in the fall in purchasing power. Nevertheless, taking the hard-to-estimate pot of "capital insurance for the home" into account, mortgage debt is roughly half the value of the homes.

A rich country that is saving itself silly, in other words. With the caveat that the money is unevenly distributed, as Conijn and Schilder also show. The wealth sits with the baby boomers: the younger generations are mostly stuck with a house "under water" and lower pension expectations.

 


Incomplete

One of the joys of life is learning new ideas that open up new lines of thinking, such as Terrence Deacon's work. In his magnum opus "Incomplete Nature", Deacon introduces a new conceptual framework: how constraints on thermodynamic processes result in work (and information), how counteracting processes can lead to emergent new behavior, and how multiple system levels interact and give rise to new interaction levels. The end result is new emergent behavior of a new complexity. His approach to explaining how life can emerge from matter is very compelling.

On a tangential track he introduces three levels of information, each one emergent from the other:

- The Shannon level, using the uncertainty (entropy) of the next symbol to express the capacity of a communication channel

  • Higher uncertainty means more options, which means more carrying capacity

- The Boltzmann level, where the influence (or absence of an influence) of a constraint on the sending process is deduced by interpreting the information flow over the channel

  • Which is particular to a sender and receiver, so multiple interpretations can co-exist!
  • Where the absence of an influence can be interpreted as a signal as well, just as the loss of a packet can be a signal for TCP/IP

- The Darwin level, about the usefulness of the information. This is by definition a normative judgment: normative for the individual receiver (or at least for a selective group), or for the combination of sender and receiver.

Deacon introduces these levels in the context of self-organizing systems that emerge from naturally occurring thermodynamic non-equilibrium processes. The Internet is a designed architecture, something completely different. Yet the similarity between the three interdependent levels he introduces and the interaction between the levels of 1) IP routers plus links, 2) TCP/end-to-end and 3) the Net Neutrality debate is striking.

  • IP routers plus links are (imperfect and shared) communication channels for packets of information.
  • TCP and the end-to-end principle embody the interpretation level on top of the information channel: the absence of an ACK is a signal for the TCP routines in the endpoints to manage the flow, and the interpretation of the content of a packet is done by the endpoints.
  • Net Neutrality is about the usefulness of the information for the endpoints, its value, and who gets to monetize that value.

The interaction can be described as follows.

The basic functionality of a router is (a toy sketch follows the list):

- To receive and de-mux incoming packet streams that arrive over multiple links

- To select the forwarding link for each packet

- To mux and send the packets that share a link on to the next router (or endpoint)
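Here is that mux/de-mux role in miniature (a plain function stands in for the forwarding table; nothing here models queues, errors or real routing protocols):

```python
from collections import defaultdict

# Toy router: packets arrive interleaved over input links, a forwarding rule
# selects an output link per destination, and packets sharing an output link
# are multiplexed onto it in arrival order.
def select_link(destination: str) -> str:
    # Crude prefix check for the sketch; real routers do longest-prefix match.
    return "link-A" if destination.startswith("10.") else "link-B"

incoming = [("link-1", "10.1.2.3"), ("link-2", "192.168.5.6"), ("link-1", "10.9.9.9")]

outgoing = defaultdict(list)              # mux: one queue per output link
for in_link, dst in incoming:             # de-mux: handle each arriving packet
    outgoing[select_link(dst)].append(dst)

for link, packets in outgoing.items():
    print(link, "carries", packets)
```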

A "perfect" router does not add any "imperfection" to a packet stream; it preserves the original characteristics:

- No delay (or variation in delay, aka jitter)

- No bit errors

- No loss of packets

In real life a router cannot be perfect: imperfections are added to the streams and cannot always be removed at the same system level. At a different level some of them can be: packet loss can be recovered by an end-to-end protocol, bit errors can be corrected by redundancy, etc.

TCP/IP uses a specific imperfection (packet loss) as a signal on the Boltzmann level to dynamically manage the maximum sending speed over an unknown and variable route, under varying traffic conditions on shared routes, and towards a receiver of unknown capacity. The feedback loop of TCP/IP is based on NOT receiving an ACK from the receiver. The purpose is to use the available shared information channel cooperatively (as a commons), so that the imperfections are minimized and shared over all streams (aka "best effort"). If all streams tried to grab everything, all would suffer far more imperfections than otherwise. The content of a packet is interpreted by the receiver, based on agreed-upon standards or proprietary bilateral agreements, for a specific purpose. Imperfections at the lower level (or low throughput introduced by flow control) can have a detrimental effect on the interpretation (for example jitter/delay on NTP information in packets).
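The "absence as a signal" point is easy to sketch: below is a toy stop-and-wait sender that treats a missing ACK (a timeout, i.e. nothing arriving at all) as the information that it should retransmit and back off. This illustrates the principle only; it is not how real TCP is implemented:

```python
import random

random.seed(1)

def send_with_backoff(n_packets: int, loss_probability: float) -> None:
    """Toy stop-and-wait sender: the *absence* of an ACK is the signal."""
    interval = 1.0                       # pause between packets, arbitrary units
    seq = 0
    while seq < n_packets:
        acked = random.random() > loss_probability   # pretend channel + receiver
        if acked:
            print(f"packet {seq}: ACK received, pace {interval:.2f}")
            seq += 1
            interval = max(0.1, interval * 0.9)      # cautiously speed up
        else:
            # Nothing arrived -- yet that nothing carries the meaning
            # "loss/congestion", so slow down and retransmit the same packet.
            interval *= 2
            print(f"packet {seq}: no ACK, backing off to pace {interval:.2f}")

send_with_backoff(n_packets=5, loss_probability=0.3)
```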

The interpreted information has a value (usefulness) that is determined by the receiver (and the sender). The value can be reduced by interpretation imperfections (for example jitter/delay on voice information in packets). The value for one receiver can, however, be a (perceived) loss for another party (royalties, texting income, traditional voice income, subscription income, etc.).

Messing with the lower system levels to prevent receivers from getting their hands on that competing value is a known practice among ISPs: for instance, DPI to block VoIP on mobile data networks. Bad engineering can have detrimental effects on VoIP as well: bufferbloat, which keeps TCP/IP from noticing the absence of an ACK in time, ruins a lot, and underprovisioning the capacity of links creates problems too.

The Net Neutrality debate focuses on who gets to monetize the value of that usefulness. In this analysis it is clear that the endpoints (the users) define the value. The conduits should not be allowed to extort that value. The messy part of the debate is created partly by the badly understood interaction between the levels.



Killer app: work

One of the often-cited motivations for Next Generation Networks is flexible work locations (like home) in combination with videoconferencing. No need to commute to the office or travel to meetings. Work as the killer app that drives NGN deployments.

I guess it is not as clear-cut as that.

Yes, the ability to create teams with the best people regardless of their location is great. Setting up impromptu meetings without wasting time on travel is great.

On the other hand, recent research suggests that creative jobs benefit from this freedom, while repetitive (boring) jobs do not.

We have three color receptors in our eyes, which allow us to "see" skin color and therefore gauge the accuracy of someone's proclaimed emotional state (blood oxygenation and the amount of blood near the skin are a telltale). One of the reasons we like physical meetings, and where video falls short.

Last but not least: work is more than earning money; for many it represents a social environment where professional and social contact mix. We like that, as social animals.

Nevertheless, even a limited change in work patterns has the potential to finance FttH build-outs.

Take some data from the Netherlands.

Approximately 90 billion kilometers are driven by car every year. 50% is for commuting and business trips, 50% for personal use (shopping, entertainment and the like). Commuting and business trips are done with, on average, a little over one person per car; personal use with two persons per car.

The pattern over time in commuting and business trips is interesting. The graph below shows business trips in red and commuting in blue. While the economy grew during this decade, the number of kilometers driven for business trips remained more or less stable. Apparently we shifted more and more to electronic communication in professional relationships, increasing productivity. Commuting, on the other hand, rose sharply: by 30%, or in absolute terms by about the number of kilometers driven each year for business trips.

The explanation is twofold: higher education and dual-income families. Higher education means more specialization; more specialization means you need a larger "catchment area" to find jobs that fit your skills. Dual-income families do not relocate easily when only one of the partners finds a new job.

The result is costly, both in wasted time and in commuting costs. 9 billion kilometers at an average of 50 km/h = 180 million man-hours wasted. 9 billion kilometers at 30 cents/km = 2,700 million Euro.

A big reservoir of untapped waste that could easily pay for FttH: a full-scale national build-out would cost "only" 5,000-6,000 million Euro.
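Spelled out, with the same inputs as above:

```python
# Commuting waste in the Netherlands, back of the envelope.
extra_commuting_km = 9e9        # extra kilometers driven per year
average_speed_kmh = 50          # average commuting speed
cost_per_km_eur = 0.30          # variable car cost per kilometer

hours_wasted = extra_commuting_km / average_speed_kmh
money_wasted = extra_commuting_km * cost_per_km_eur

print(f"time wasted : {hours_wasted / 1e6:.0f} million man-hours per year")
print(f"money wasted: {money_wasted / 1e6:.0f} million Euro per year")

# Compare with a nationwide FttH build-out estimated at 5,000-6,000 million
# Euro: two to three years of this commuting waste would cover it.
```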

 


Wirtschaftswunder

"May you live in interesting times" is an ancient Chinese curse: most events you can read about in history books have meant hard times for the individuals involved. We live in interesting times now, paying the price for the (financial) follies of the last decade. Mind-boggling amounts of money are being shifted around, austerity proponents battle stimulus proponents, and the prospects are bleak and uncertain.

Economics as a science has been (rightfully) pushed off Olympus: the extreme belief in the power of mathematical modeling can now be identified as the expression of an inferiority complex towards Physics (see http://www.zerohedge.com/news/guest-post-pseudoscience-economics).

As good a time as any to reset our thinking about what we are calling “The Economy”.

For me it is not some abstract concept to be revered and feared: we are "the economy". Our efforts to create a better life for our nearest and dearest and for our descendants are the core. Everything we have created to support that effort is just that: support, a means to an end.

Once we discovered that more wealth is created by specialization and trade, things took off compared to the peoples who remained hunter-gatherers. Specialization meant we could afford to spend time on knowledge, on research and development that fueled innovations. We could collectively afford to invest in infrastructure that lifted our existence to new heights. (Try doing that on your own: you could own the entire earth, but if you were truly alone on it, you would lead a poor and hard life.)

The price you pay for specialization is dependency on others. Who nowadays can make, all by himself, any of the objects you routinely use or eat at home? We rely on incredibly complex dependencies to lead our wealthy lives. More than that, we rely on people we have never seen and do not know, have never communicated with and have no basis to trust. No small feat, given the limitations and weaknesses of human nature. The seven sins are always lurking just around the corner.

The solution is what some call “Social Technology” as opposed to physical technology. Laws, the justice system, constitutions, money and banks, democracy, corporations with shareholders, labor contracts, regulators, ownership, intellectual property and so on. All designed to compensate for the limitations and weaknesses of human nature  while reaping the benefits of specialization.

Social Technology has to adapt, has to be innovated, as circumstances change. For instance when physical technology gives us new possibilities: just look at the disruptive effects of digitization and worldwide communication networks.

Unfortunately the "laws of human nature" (if any) are quite different from physical laws, making it very hard to predict outcomes. And the only tests we know of are in real life…

The bad news is that we have a propensity to stumble along, changing our social technology  by trial and error (and costly they can be, the errors created by ideologies). The good news is that we are only limited by our will and imagination in what we can change.

Take for instance the deployment of Next Generation Networks, of FttH.

Some decades ago we decided that privatized (shareholder-driven) organizations would do a better job of providing telecommunication services than state-owned (politically driven) organizations. And that competition would drive innovation. There is certainly merit in the idea, but the design flaw in this setup becomes visible now that we need serious investments in new wireline access networks.

The new shareholders inherited a built-out access network whose customers already generated massive free cash flows from the subscriptions we pay. They have become addicted to the payout of these cash flows: going on a diet for 5 to 10 years to finance the build-out of a new access network is not what they want. And as shareholders nowadays do everything but "hold" shares, the brakes are on.

Competition is a great idea but a very expensive hobby if you imagine duplicated physical access networks, as other utilities show. We do not duplicate electricity access, or gas or water or sewer pipes. All available evidence indicates that the same applies to new wireline build-outs: it is very hard to build out one new network, let alone duplicates.

So, if this "social technology" does not work out as planned, is there a change imaginable (other than reverting to state-owned organizations) that would at least create the incentive to invest?

In my opinion there is. The amazing growth of the German economy out of the rubble of WWII (known as the "Wirtschaftswunder") is a good case study. German entrepreneurs have told me that (besides massive debt restructuring) one very clever piece of fiscal regulation fueled investments in the newest technology.

They could put existing old capital equipment, such as production machines, on their books (once) at the price for which the newest models could be bought. The depreciation of this (artificially high) book value was part of the cost price of the goods produced. This of course led to nice cash flows for the companies. The clever part was that this cash flow could not be used for payouts to shareholders (or only after punitive taxation); it could, however, be used for new investments in capital equipment. The result was that German entrepreneurs invested heavily in the newest equipment (which in turn created the market for the manufacturers, etc.). The fiscal regulation was removed after it had done its job.

Imagine the same type of regulation being put into effect for telcos. No more payouts from the cash flows generated by (us, the subscribers on) the old infrastructure until the country is covered by FttH. Cash flows would immediately be directed to new network build-outs. A simple ULL regulation would open up the network for competition. Yes, the shareholders would complain for a while, but it would fuel the required investments and focus everybody on the new future where the growth is. A benefit for shareholders (our pension funds?) as well in the longer run.

It may be unimaginable now for this to happen, but hey, did we expect the current crisis and the extreme measures taken by governments? And a small “wirtschaftswunder” would be quite welcome in these difficult times.


No techies please

(Guest post by @Tinegoedhart)

During your graduation year at a Dutch Havo or VWO your progress is monitored by your tutors and a governmental board. Tests during the year at any given school cannot differ too much from the nationwide exams without questions being asked by the controlling authorities. The open-question parts of the final exams in May/June are scored by your own teacher as well as by a qualified teacher from another school. And of course all exams are handed out at the same time to all students in a given subject. So far so good.

Then there is the middle section (MBO, vocational education). My knowledge does not reach any further than the technical streams of the training routes, but nevertheless… At my company young people are trained to be skilled steel construction workers, welders, pipe fitters, etc. They work four days a week and attend school one day. Of course I pay them for all five days. After two years they graduate as well…

At school there are several theoretical tests. The teacher scores them and that's it. Then there is supposed to be a portfolio spanning two years of drawings, theoretical questionnaires, handmade work objects and reports. The employer (also responsible for the technical part of the training) scores that part.

The third part is the technical exam. Many moons ago, when I was a puppy, you had to go to a technical exam centre with marked steel items according to a list. At the centre you were handed a drawing and had to make an object according to that drawing in a given time. No errors to be erased by grabbing a new piece of steel from the stock. An exam committee judged your effort and scored it. So, as with the higher theoretical levels of education, the tutor was not the scorer.

Last year one of my apprentices went for his final exams… His school filled me in: I was to give him an assignment, which was going to be scored as an exam. Flabbergasted, I asked his teacher how on earth, in this global economy, my work standard could define his exam level for the rest of his life. I ordered, and paid for, an old-school technical exam from the internet. My apprentice was not going to an exam centre, his teacher taught me: my workshop was his exam ground (what about cutting a fresh piece of material if things went wrong, or help from fellow workers during the exam??? I asked, but had to find proper and honest answers myself).

Then, when my apprentice had finished his masterpiece, I called the school again: where should I ship the results to be scored? Astonishment again on my side of the phone: I was the one who had to score the masterpiece. I ordered the teacher over and said that a) I was the tutor, and therefore not the judge, b) I have only one apprentice, so I cannot place his qualities in context, and c) what if, for crying out loud, I was not pleased with my apprentice/employee? The teacher, astonished, scored the masterpiece.

But what does this do to the value of such a certification? How seriously can a parent, an aunt, or the student himself take a diploma obtained this way? Would this influence the percentage of children choosing technical education routes? Can it be added to the reasons Waddell & Meyer give in their blog post "Don't let your babies grow up to be welders": http://www.evolvingexcellence.com/blog/2012/06/dont-let-your-babies-grow-up-to-be-welders.html ?
