This is a free preview of a paid episode. To hear more, visit jollycontrarian.substack.com
“I once got all the way from Glasgow to Edinburgh without a ticket. I walked.”
— Sid Snot, The Kenny Everett Video Show.
“There are more network use-cases in heav’n and earth, Horatio, than are dream’t of in your philosophy.”
— Shakespeare, Spamlet I, vi
An iron-fisted Romanian
“The Bickerings,” ancestral home of the Contrarian clan, is a freezing old pile in Squatney Green. It is cold enough as it is, but made worse on account of the JC’s missus, the Contesă Birgită von Sachsen Rämmerstein, who controls the central heating with an iron fist.
The Contesă grew up in a stone castle in the high Transfăgărășan; her father was a tyrant, and she has therefore grown accustomed to a chilly ambience. The family was grand but impecunious, and she habitually regards any attempt to put temperatures into double figures as evidence of immutable moral decay. “Eef you are cold,” she is fond of saying, “you should put on a hat.”
I am, by these standards, weak. I am often tempted into defiance when she is not looking. Until now my meagre resistance has been mainly useless: the Contesă is gimlet-eyed, and immeasurably helped by our central heating system, which was designed about the time they built the computers for the Apollo programme and has similar functionality. While it can, I am told, schedule and regulate temperatures, this requires an advanced facility with algebra that I, alas, do not have.
Nor will the Contesă countenance my occasional suggestions that we upgrade to a modern central heating system with an intuitive user interface. That would involve massive expenditure and, besides, would capitulate to my lack of Transylvanian fibre.
But recently things have changed. I have identified a way of fitting inexpensive replacement valves on our radiators. They are wifi-enabled and fitted with a smart thermostat. They can be programmed, controlled and adjusted from an app.
I used the meagre allowance the Contesă grants me and bought a set of smart valves. As the northern hemisphere winter grinds its saturated way to a squelchy close, retailers are trying to shift their inventory before the spring arrives, the world warms up and it is too late. The valves are currently on sale. I bought seven and I got a bargain: they were half price.
The problem with central heating systems
Until the internet came along, the problem with upgrading a traditional central heating system was exactly that: it is a centralised system. It has a heavy structure. There is a single central brain with a designed-in “nervous system”, and it is integrated, not articulated: if you want to upgrade any part, you need to upgrade the lot.
The brain controls two systems: a water system, which sends hot water from the boiler out to spur radiators around the house, and an electrical system, which measures temperatures around the house with remote thermostats and sends that information back to the brain. The brain has a “preferred setting” against which it controls how much water it should send out to the radiators. If the thermostats say “it is too hot”, the central system shuts off. If they say “it is too cold”, the central system opens up. There is no great intelligence in the system: it has some kind of time-scheduling function and a temperature gauge, and that is it. More sophisticated systems divide the house into temperature zones, each controlled by a single thermostat.
But beyond that, to micro-manage their local environment, users would have to manually adjust the radiators. Each has its own analog thermostatic valve connected to a switch that gates the pipes running into the heater. If it opens, water flows in. If it closes, water stops. But the manual valves are not connected to the central brain: if a radiator’s local valve is fully off, the radiator will not come on, whatever the central system tells it. The electronic thermostats that talk to the central system’s brain are overridden by the manual ones that do not. On the other hand, if the central system thinks the zone is too hot, it won’t send any water to the radiators, so it won’t matter how the local radiator valves are set.
The system is, therefore, something like a binary logic gate: a radiator heats only if both the electronic and the manual valves are open. It is what lawyers, and grammarians, would call conjunctive: an “and,” not an “or”.
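For the logically inclined, the whole arrangement can be sketched in a few lines of code. This is a toy illustration only, and the function and parameter names are mine, not any real heating system’s API:

```python
# A radiator heats only when BOTH the central system and the local manual
# valve agree: a conjunctive ("and") gate. Illustrative names throughout.

def radiator_heats(central_calls_for_heat: bool, local_valve_open: bool) -> bool:
    """The central thermostat AND the manual valve must both say yes."""
    return central_calls_for_heat and local_valve_open

# The full truth table: the radiator is on in exactly one of the four cases.
for central in (False, True):
    for local in (False, True):
        print(central, local, "->", radiator_heats(central, local))
```

Run it and you will see the radiator comes on in only one of the four combinations: both valves open.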
It all takes quite a lot of — well — plumbing and wiring to install such a system, and therefore quite a lot of disruption if you want to replace it. The electronic thermostats are hardware-controlled and connected by cable, chased into the walls of the house. God forbid I should suggest we move a thermostat and upset the Contesă’s Farrow & Ball™ elephant spunk™ skim-coat wall finish.
Since our control panels were designed in the late 60s, they have little of the functionality we are used to these days. They were not designed to be upgraded. They are not modular. Their programming is hard-coded into ugly little devices dotted around the house. Not just ugly, but dysfunctional: they hail from a time before “user experience” was any kind of design criterion. There are four buttons, embossed with hieroglyphics I don’t understand, and a small liquid crystal display panel that displays different hieroglyphics that I don’t understand either.
It isn’t clear what any of them do. How we originally programmed them is now lost to posterity, and for some years now we have just tolerated the meagre assistance they provide in the depths of winter. For the Contesă, this is business as usual. Over the years I have invested in knitwear. The heating comes on when it deigns to come on, goes off when it deigns to go off and that is that. The Contesă and I shuffle around our frigid house, wrapped up in mittens and scarves.
The problem is solvable because of the ingenious design of the valves. They accord with a principle of network design called the “end-to-end principle”. It is quite unintuitive but, when you get your head around it, utterly brilliant. The design of the internet is fastidiously based on the end-to-end principle.
But — and this is the beautiful thing about design — the internet’s construction in the 1960s long preceded the theory that made it viable. The end-to-end principle explaining why the internet works was not identified or formalised until 1984.
How to design networks
When creating a network of dispersed “users” — call them “endpoints” and the system a “distributed network” — you have design choices to make. Different network designs have different pros and cons and different consequences for scaling, efficiency and task management. It is all rather mathematical.
Direct point-to-point networks
The simplest, in theory, is to link every endpoint in the network directly to every other. We can see this rapidly gets complicated. With a two-endpoint network there is one link. Adding a third endpoint requires two new links. Adding a fourth requires three. The problem grows arithmetically as you add new users: given a total userbase of N, the newest user needs N − 1 fresh links, one to each existing endpoint, and the total link count is N(N − 1)/2. The more endpoints, the more links required to add a single new user.
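The arithmetic is easy enough to sketch. A toy illustration; the function names are mine:

```python
# Link arithmetic for a fully connected (point-to-point) network.
# The n-th user to join must lay n - 1 new links, one to each existing
# endpoint; the total link count across n endpoints is n * (n - 1) / 2.

def new_links_for_user(n: int) -> int:
    """Links the n-th user must lay to reach every existing endpoint."""
    return n - 1

def total_links(n: int) -> int:
    """Total links in a fully connected network of n endpoints."""
    return n * (n - 1) // 2

for n in (2, 3, 4, 10):
    print(n, "endpoints:", new_links_for_user(n), "new links,",
          total_links(n), "links in total")
# 2 endpoints share 1 link; a 3rd adds 2; a 4th adds 3;
# by 10 endpoints the network needs 45 links in all.
```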
The application for which the network is used is important. If all users will be interacting with all other users all the time, this may be the maximally efficient design. An example of this kind of network is a high-performance computing GPU cluster used for AI training: here the point is parallel processing, where every node exchanges data directly with every other node on the “network” (a series of gates on a graphics processor) at maximum speed with minimal latency. But it is a rare case. There aren’t many situations in which a point-to-point network is a great design choice.
Most human networks are not like that. We only have a certain amount of personal bandwidth. We can only read one book at a time, or watch one film at a time. Our interaction with a given network is highly selective and, in fact, unique: my experience of London is mine alone. I go to the Cherry Tree in Ost Finkelstein for my apples. The Contesă goes to an odd little Russian shop to get ingredients for her borscht. She does not need a link to my greengrocer. I don’t need a link to her cabbage purveyor.
In this case a fully-connected network becomes progressively harder to scale and less efficient. The more endpoints in the network, the less likely any given user is to communicate along any given link. A directly linked network, therefore, contains a great deal of redundancy.
Hub and spoke
Another way of designing networks is a hub-and-spoke model, where local users are connected to a single large hub which has a much greater bandwidth connection to other hubs, to which other local users are connected. This is how, for example, railway networks work: there are a small number of “nodes” — stations — and these have a limited set of very-high-bandwidth connections between them. Endpoints — passengers — must make their own way to a node. But “adding new users” is therefore, from a “hub and spoke” network’s perspective, a low-cost, low-complexity activity. It carries a predictable, low marginal cost. Building additional hubs and connectors between them — that is, rails and tunnels — is obviously more expensive, but it is a one-time expenditure that happens infrequently and supports a greater capacity to handle users on the network. It is much, much less wasteful than a point-to-point network.
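To see just how much less wasteful, compare the link counts. An illustrative sketch, assuming the hubs themselves are fully interconnected and every user is wired to exactly one hub:

```python
# Compare link counts: full point-to-point vs hub-and-spoke.
# Assumptions (mine, for illustration): the hubs form a full mesh among
# themselves, and each user needs a single spoke to its local hub.

def point_to_point_links(users: int) -> int:
    """Full mesh: every user wired directly to every other."""
    return users * (users - 1) // 2

def hub_and_spoke_links(users: int, hubs: int) -> int:
    """One spoke per user, plus a full mesh among the (few) hubs."""
    return users + hubs * (hubs - 1) // 2

print(point_to_point_links(1000))      # 499500 links for 1,000 users
print(hub_and_spoke_links(1000, 10))   # 1045 links: 1,000 spokes + 45 trunks
```

A thousand users need half a million links fully meshed, but barely over a thousand through ten hubs.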
But hub-and-spoke models have some odd inefficiencies of their own. For one thing, connection routes on the network may be much longer and more complicated than is needed to cross the physical distance between user endpoints in real space. The London Underground is famous for this sort of thing. Visitors who take the journey from Wood Lane, on the Circle Line, to White City, on the Central Line — which takes about three quarters of an hour via Liverpool Street, or over half an hour with two changes, via Notting Hill Gate and Edgware Road — find themselves deposited across the road from where they started.
Furthermore, knocking out a single hub can break the whole network, at least for anyone connected to it, or depending on it for a through link to another person.
The hub-and-spoke model is, nonetheless, effective in most cases, at least where nodes are not very close to each other. Airlines run a similar arrangement, with regional airports feeding central hub airports like Heathrow and Chicago, which handle long-haul flights between them. Postal services, too, are hub-and-spoke models, often with several layers of hubs arranged as spokes around each other.
But typical social networks are not like that. In urban communities a lot of different networks live on top of each other. There are all kinds of random intersections and interconnections between disparate networks. It is all very fluid. There’s no central control: networks arise and die back as individuals need and use them. These networks don’t have any intelligence of their own: all the intelligence lives within the individual members of the communities. At network endpoints, in other words. Community members figure out which networks to join and what to use them for.
Neither the point-to-point nor the hub-and-spoke network is efficient when people are often close to each other and sometimes distant, and where network needs are constantly in flux. In a dynamic, fluctuating community, users need something that can do a bit of both.
Mesh network
There is, as Tony Blair once said, a third way. (There are doubtless others, but I don’t think you would thank me for embarking on a comprehensive survey of all network ontologies.) In this case there are a great number of nodes, and most endpoints function as nodes too. The only difference between a true endpoint and a node is that an endpoint has only a single connection. Because there are countless nodes, they are not all interconnected: each is connected only to nearby nodes. Distant nodes are connected only indirectly, through one or more intermediate nodes.
Now there are any number of indirect connection paths between any two nodes. The more nodes in the network, the more possible connection paths between them.
This solves all three of the problems identified above, and quite neatly. Firstly, it is easy, and cheap, to add new nodes and endpoints to the network: each needs only a small number of connections, perhaps as few as one, so the “arithmetic increase in cost to connect an additional user” problem does not exist. The network is easy to scale. The marginal cost of adding users is static, and it is borne by the connecting user, not the rest of the network. User pays.
Secondly, it solves the “single point of failure” problem of the hub-and-spoke model. As a mesh network scales, what does increase, geometrically, is “the number of potential connections between any two points”. The bigger the network, therefore — the more nodes it has, and mesh networks tend to have a lot — the more robust it is, and the more resilient to failure. There are no single, or even significant, points of failure. If you knock out a node, that only impacts that node, and any endpoints connected only to that node.
This is, indeed, the fundamental problem that the U.S. Department of Defense’s Advanced Research Projects Agency — ARPA, later DARPA — was trying to solve when it formulated the principles for the ARPAnet, on which the modern internet was founded. The goal was to create a network that could keep operating through its own partial destruction, such as by nuclear strike. A mesh network is largely immune to targeted attack. If you want to knock out the network you must take out all its nodes. The more nodes the network has, the harder it is for a single blow to destroy it.
Thirdly, it solves the hub-and-spoke model’s “stupid-way-of-crossing-the-road” problem, too: since all nodes are connected directly to other local nodes, and will always be connected to those closest to them, there will never be a need to go from Wood Lane to White City via Liverpool Street.
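The resilience claim is easy to demonstrate. Here is a toy sketch: a five-by-five grid of nodes, each connected only to its immediate neighbours, with a breadth-first search standing in for routing. Knock out a node and the message simply goes around it. The grid size and all names are mine, for illustration:

```python
from collections import deque

# A mesh sketch: nodes are points on a 5x5 grid, linked only to their
# immediate neighbours. "dead" holds nodes that have been knocked out.

def neighbours(node, dead):
    x, y = node
    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= nx < 5 and 0 <= ny < 5 and (nx, ny) not in dead:
            yield (nx, ny)

def route_exists(start, goal, dead=frozenset()):
    """Breadth-first search: can a message still get from start to goal?"""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            return True
        for nxt in neighbours(node, dead):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(route_exists((0, 0), (4, 4)))                  # True
print(route_exists((0, 0), (4, 4), dead={(2, 2)}))   # still True: reroute
```

Take out the node in the middle of the grid and the route survives; to sever the corners you would have to take out every node around one of them.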
Problems with mesh networks
Of course, nothing is perfect and mesh networks have their disadvantages too. For one thing, the route any signal takes across the network is likely to be circuitous.
That is a problem if what you are sending is somehow secret: everyone in the communication chain will get to see it. It’s also a problem if you are a control freak or, for some other reason, you need a predictable route. A mesh network is all very seat-of-the-pants, make-it-up-as-you-go-along and ad hoc.
Furthermore, should there be a time or cost implication of sending a message, then mesh networks can be quite inefficient. The larger one gets, the more expensive, and slow, sending “content-rich” messages becomes.
But there has been an information revolution in the last 40 years. Electronic signals move down a wire at close to the speed of light. Speed is not the constraint it once was.
But the resource impact of sending a message across a node — not speed of communication, but volume and format of information sent — presented another problem.
The variety of human communications
There is a down-side to there being an almost infinite number of pathways across a network. Since a given message might travel by any of those pathways, every one of them needs to be able to handle it.
Say you built a physical “mesh” network that employed those cute little Citroën Amis to shuttle your messages between individual loading-bay nodes on the network. The vehicles are smart: they drive themselves, using an algorithm to determine which nodes on the network to pass through. As long as you are transporting small people and the odd parcel it will work serviceably well. But if you want to transfer a live dolphin, the network cannot manage. You would need to re-engineer the whole network, and every point on it, to cope. You are stuck. You would have to start again.
Unless you can figure out a way of working around the chunkiness implicit in a live dolphin.
So, whatever your network topology there is always a design decision to be made: what is the universe of items that can conceivably be transported across this network?
It is an optimising function, rather like the one we perform when buying a car. We know most of our car journeys will be short and involve one occupant with little luggage. For these, a Citroën Ami would be perfectly adequate. Better, in fact, as long as our friends don’t see us. You don’t need a Land Rover with a snorkel to get around the Hampstead Garden Suburb.
But there will be times when we need to collect the kids from karate practice, take old furniture to the dump, or go off-roading in Wales. It is worth “solving” for these contingencies. Every now and then it might even be useful to have a minibus, or a tractor. But we don’t optimise for these extremes: we just hire in the equipment, or the man with a van, as we need it. The “network” has its limits.
Designers of physical networks — even mesh networks — must do the same exercise. They will optimise for known use-cases, but cannot be expected to predict future use-cases that might come along as technology develops. This is a shortcoming of all models of network design: if you build tunnels that are only ten metres wide, that forever precludes putting eleven-metre-wide vehicles on your railway. So, alongside the rail network (hub-and-spoke) there is a road network, which is much more like a mesh. The railway is very good at certain transport functions — passenger commuting, or hauling coal around — but not good for nipping up to the high street to collect your dry cleaning, or ingredients for borscht.
Because the link count in a mesh network is so large, and so chaotic, capacity constraints are a particular limitation. This leads to a different arrangement of structure and intelligence. In a hub-and-spoke network there is a real advantage to heavily engineering and controlling the central parts. It doesn’t matter if some things can’t go on the railway, because there are always other networks (road, sea and air) that can accommodate them. So railways and their purpose-built rolling stock are heavily engineered to work together, and closely controlled by a centralised, intelligent monitoring system.
But central control of a system has its drawbacks. It is a single point of failure.
Any London commuter will know that a central signalling failure can lead to widespread disruption. End users can’t work around it unless they get off the network and use the roads, which are a different kind of engineering proposition. The engineering of roads is minimal and, while in urban settings they are controlled, it is done lightly. If all the traffic signals go down, the network still functions: drivers just have to be a bit more careful.
In any case there are two design principles: engineering and intelligence in the middle, or intelligence and engineering at the edges.
A railway is a heavily engineered, centrally controlled, intelligent network. All the intelligence is in the middle, and the edges are really easy. You don’t need any particular kit to ride a train other than a ticket. You can just sit there. You just have to remember where to get off.
A road is simple, mainly dumb network, with little central intelligence. All the complication, design and intelligence is “at the edges”. Users must bring their own vehicles, and they have to operate them. They have to figure out where to go, by which route, and how to operate their vehicle. The road network is mainly passive. It just sits there. You have to worry about where you are going. The road doesn’t care.
Internet as a dumb network
So there are smart networks and dumb networks. What about the internet? You could be forgiven for presuming the world wide web—surely the most sophisticated distributed network in the known universe—is highly intelligent. In fact, it is not. It is a supremely dumb network. That, indeed, is its very brilliance. The world-wide web could hardly be stupider. All the brilliance is at the edges.
This is partly a function of its genealogy. They built the digital world wide web on a network that was already there, that was designed with a completely different use-case in mind: analog telephone signals.
A traditional telephone mouthpiece worked by converting sound waves into an analog electrical signal—a continuously varying voltage describing those sound waves—which travelled to the exchange, passed through a series of switches and went down another wire to the other caller, where the receiver’s earpiece speaker did the reverse, converting the analog signal back into sound waves. An analog system was a continuous pipe. The exchange would physically dedicate a continuous electrical circuit between callers for the duration of the call. It was like a private, dedicated tunnel. It persisted whether or not anyone was speaking. It was inefficient for data. The internet wanted to send binary digits — lots of ones and zeros — down the pipe. It did that by converting them into the audible tones that the phone line was expecting. That is the famous modem noise — youngsters probably don’t remember it, but for people of about JC’s age it was a thing of marvel and wonder.
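For the curious, the trick can be sketched in a few lines. This is frequency-shift keying, roughly as the early Bell 103 modems did it: each bit becomes a short burst of audible tone, 1070 Hz for a nought and 1270 Hz for a one (those are the real originate-side frequencies; the sample rate and the rest of the scaffolding here are mine, for illustration):

```python
import math

# Frequency-shift keying: smuggling bits down an analog phone line by
# turning each bit into a short burst of tone. The frequency pair is the
# Bell 103 originate-side convention; everything else is illustrative.

RATE = 8000                        # audio samples per second (assumption)
BAUD = 300                         # bits per second, as per Bell 103
FREQ = {0: 1070.0, 1: 1270.0}      # Hz: space (0) and mark (1)

def modulate(bits):
    """Turn a bit sequence into a list of audio samples: the modem squeal."""
    samples = []
    per_bit = RATE // BAUD         # samples devoted to each bit
    for i, bit in enumerate(bits):
        f = FREQ[bit]
        for j in range(per_bit):
            t = (i * per_bit + j) / RATE
            samples.append(math.sin(2 * math.pi * f * t))
    return samples

squeal = modulate([1, 0, 1, 1, 0])
print(len(squeal))   # 130 samples: 5 bits at 26 samples each
```

The receiving modem simply does the reverse, listening for which of the two tones is present and emitting the corresponding bit.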