20 November 2024. Centuries | Internet of Things
The end of the 20th century? // Why the Internet of Things has never taken off [#617]
Welcome to Just Two Things, which I try to publish two or three times a week. Some links may also appear on my blog from time to time. Links to the main articles are in cross-heads as well as the story. A reminder that if you don’t see Just Two Things in your inbox, it might have been routed to your spam filter. Comments are open.
1: The end of the 20th century? It’s the end of something
In my recent piece about interpreting the election of Trump as President of the United States, one of the ‘six narratives’ was headed ‘The final, final end of the post-war settlement’.
This seems like a significant issue, if true, and I’m going to share a couple of posts this week exploring the idea in more detail. The first is drawn from Jason Steinhauer’s piece on his newsletter History Club headlined ‘The end of the 20th century’. It’s a long article, but he lays out his stall early on like this:
[T]his election marked the conclusion of a process whereby society re-organized itself, with one superstructure becoming permanently unwound and a new one crystallizing in its place. Such a process did not happen in one day; it had been unfolding over the past decade. Simply put, November 5, 2024, was the night the 20th century ended.
Obviously, this is something of a conceit, and other people have measured the end of the 20th century differently. Eric Hobsbawm’s ‘short 20th century’, for example, ran from 1914 to 1991.
But I’ll let Steinhauer make his case.
He says the ‘long 20th century’ starts in 1920, with the creation of the League of Nations and its subsequent failure to do anything about the rise of fascism and militarism in the 1930s. But in the wake of World War II, we had another go, setting up the United Nations—League of Nations 2.0—and a whole raft of related institutions:
Imperfect and flawed as they were—and always rife with agendas, politics and hypocrisies—these institutions played major roles in propagating a set of beliefs that animated society in the ensuing decades. Those beliefs included the assumption that institutional structures, when properly funded and supported, could advance diplomacy, education, medicine and, ultimately, peace.
(Sheet music for the League of Nations song. Public Domain via Picryl.)
He argues that these structures were underpinned by a particular set of industrial technologies that were essentially mechanical, industrial, and large. They were also linear:
an assembly line with a beginning, middle and end.
This linearity, he suggests, extended both to the knowledge production practices of the education system, including our universities, and our mass media systems:
Films had beginnings, middles and ends; newspapers had sections that started at A1 and were organized linearly through sections B, C, and D.
Of course, the media landscape was complex and guided by corporate interests:
But the forms were linear, and with them semi-predictable patterns of how consumers would behave and interact.
He acknowledges that there was still a lot of violence in this world—we can all list the big wars and some of the proxy wars—but he argues that the underlying principle was broadly that nations could and should co-operate. And so we got the IPCC, the International Space Station, SALT talks and so on.
He suggests that this starts to unravel at the turn of the 21st century, and again, he spends more time on this, but you can sketch out a series of disruptions in different domains that include the Iraq War, the rise of social media and the Silicon Valley companies that sat behind it, and the financial crisis of 2008. (He mentions bitcoin somewhere in here but I think we can skip that one.)
Mostly, he focuses on social media, which privileges the scroll and the capture of attention. And he makes an argument that our behaviour followed our technologies:
[O]ur behaviors and tastes became more like the technologies we used: nimble and agile became preferred to static and durable. Why be tied to a permanent home, partner or career, when one could be portable, mobile, work remotely, and hop from job to job?
(Source: Fast Company)
This all sounds a bit Tom Peters, and frankly you can over-claim here. If you look at a lot of the sociological data, we still stay in jobs for about the same length of time as we used to, and we stay with our partners for about the same length of time too.
But there was a new socio-economic fragility to it, as Steinhauer acknowledges elsewhere:
[T]his was occurring amid other structural changes, namely the displacement of jobs and the hollowing out of manufacturing communities. While the previous technologies created jobs, the new technologies were displacing them. That displacement was felt acutely among the working classes.
He argues that this has taken two decades to unravel, but two events have accelerated it: the pandemic and Russia’s invasion of Ukraine. Mass vaccination was one of the public health triumphs of the 20th century, but the interaction of social media and lockdown gave us anti-vaxxing at scale. Putin’s attack on Ukraine recreated within the United Nations and NATO the paralysis of the League of Nations in the face of “the madman at the door intent on war.”
He suggests that the Democrats—like most other centre parties in the rich world—are still rooted in the logic of the 20th century. Trump was not:
Trump smartly seized every moment to become a meme for the infinite scroll of social media—dancing to “Y-M-C-A” or working at McDonald’s. He embraced Bitcoin, locked arms with anti-vaxxer RFK, Jr., eschewed the legacy press, gained favor with iconoclasts such as Joe Rogan and Elon Musk, and promised that the jobs “they” took away would return under his leadership... The candidate who promised to “burn it all down” would be the one the voters would hoist up.
One of the things that is happening as the worldview that underpinned the 20th century is being shredded is that democracy is getting chewed up at the same time. Here is Steinhauer’s take on this, globally:
Freedom of the press has been curtailed, those who oppose government are being jailed or murdered, corruption runs rampant, mass surveillance is being imposed, rights and liberties are being stripped, and wars are being waged that put civilians in harm’s way... Our new age is—for the moment—marked by cynicism, distrust, war, surveillance, conspiracy and stridency.
The story he tells here is right in parts, but it’s missing some important elements. The crisis in capitalism doesn’t start in the 21st century, but in the 1970s, when capital decided that increasing equality in the global North was less important than its rate of return. The problem that the centrist parties have right now is that when they won elections they propped up the political and economic structures developed in the 1980s that fostered inequality.
And the global North was always going to face a huge dislocation as China and India reclaimed their places in the global economy after a century of suppression.
Steinhauer falls back on a liberal plea:
[O]ur biggest task is to plug compassion, tolerance, peace, human rights, equal opportunity and democracy into a technological, political and media reality that often promote[s] the opposite.
And sure, fine, whatever. What is so funny about peace, love, and understanding? But there’s a big hole in this argument, and it’s called political economy. I’ll come back to that in the next edition of Just Two Things.
H/t The Browser
2: Why the Internet of Things has never taken off
Like every other futurist in the world, I imagine, I have given those presentations in which I talk about the coming world of smart, connected devices known as the Internet of Things.
So it is worth noting a piece by the software engineer Pete Warden from the end of August asking:
Why has the Internet of Things failed?
He has data:
According to a survey last year, less than 50% of appliances that are internet-capable ever get connected. When I talk to manufacturers, I often hear even worse numbers, sometimes below 30%! Despite many years and billions of dollars of investment into the “Internet of Things”, this lack of adoption makes it clear that even if a device can be connected, consumers don’t see the value in most cases.
To try to understand why this might be, he goes back to the way that the Internet of Things was pitched.
Part of the idea was simply that it was inevitable. In the computing and communications sector in general, things that started out as standalone devices have ended up being connected, computers being the obvious example:
Sun coined the phrase “The network is the computer”, and that philosophy has clearly won in most domains, from Salesforce pioneering Software as a Service, to the majority of user applications today being delivered as web or mobile apps with a data-center-based backend.
For this reason, the extension to everyday devices such as toasters or refrigerators seemed on the face of it to make sense. But not, apparently, to customers.
One obstacle he identifies is just the setup time.
[Y]our fridge or toaster probably doesn’t have a full-featured user interface, and so you’re expected to download a phone app, and then use that to indirectly set up your appliance. This adds multiple extra steps, and anyone who’s ever worked on a customer funnel knows that every additional stage means losing some people along the way.
And, as a customer, you’d need to do that for a whole host of devices.
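The funnel point is easy to quantify. As an illustration only (the per-step retention figure is my assumption, not Warden’s), a short sketch of how drop-off compounds across setup steps:

```python
# Hypothetical funnel: assume each extra setup step (download the app,
# create an account, pair the device, join Wi-Fi, confirm) retains 85%
# of the users who reached it. Losses compound multiplicatively.
step_retention = 0.85
steps = 5

completing = step_retention ** steps
print(f"Share of users finishing a {steps}-step setup: {completing:.0%}")
```

Even with a fairly generous 85% retention per step, fewer than half of users make it through five steps, which is roughly consistent with the sub-50% connection rates Warden cites.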
The second is a lack of customer benefit. Warden talked last year to an engineer who had been hard at work on a connected dishwasher project. (When I say ‘connected dishwasher’, obviously it is already connected to the water system). But his team had failed to find a compelling reason to connect it to the internet:
You could start the dishwasher remotely, but how did that help if you had to be there in person to load it? Knowing when it was done was mildly useful, but most people would know that from when they started it.
Or—as in my house—you could walk into the kitchen later on and notice that it had finished its cycle, which also seems to work quite well. Or, as he points out,
Getting an alert that your fridge door has been left open is nice, but isn’t much better than having an audible alarm go off.
Similarly, all of the big tech companies have been hard at work adding voice interfaces to their systems, but they get used for the most trivial purposes: to set alarms and play audio.
There’s another issue, which is just charging the things.
Unless you want to run ethernet cables everywhere, a network connection requires radio communication, through Bluetooth, WiFi, or cellular data. All of these technologies need at least 100 milliwatts of power to run continuously.
This is trivial if you can run it off mains power, but not if you need a battery. That’s why we end up charging our phones every day.
There’s a way around this, which is to have the device power itself up when we need to use it, but given that part of the pitch of the internet of things was convenience, this doesn’t quite work. (By contrast, we don’t mind waiting for a Garmin or Wahoo device to power itself up, because the context and use case of doing exercise is different.)
Warden doesn’t mention the security or privacy issues associated with the cheap sensors that end up in these devices, although it’s well known that they are a source of vulnerability. And one other area that seems to matter when I think about the internet of things is the way it changes your relationship with the supplier, and not necessarily in a good way.
(‘The internet of anonymous things’. Image: sndrv/flickr. CC BY 2.0)
In the auto sector, connected cars have basically enabled the car companies to enforce expensive service protocols. In the case of the tractor maker John Deere, connectivity has allowed the company to make repair harder. But even less extreme examples provide information that is likely to be more valuable to the provider than it is to the user.
Warden still thinks that the internet of things has a future, but he also thinks it will have more of a future if it focuses on things that have a clearer user benefit:
[L]ike why do I have five remotes on my couch, or why doesn’t my TV turn on instantly like it used to years ago? Most of the issues that are frustrating people with consumer electronics don’t need a network connection to solve. I’d much rather have us building machines that can understand us better, and figure out the monetization strategy after we’re providing value, instead of building features nobody uses because we think they can make money.
It was a reminder to me that E. M. Rogers’ five diffusion factors are still a pretty good guide to how likely users are to start using new products or features.
Relative Advantage: The degree to which an innovation is seen as better than the idea, program, or product it replaces.
Compatibility: How consistent the innovation is with the current values, experiences, and needs of the potential adopters.
Complexity [better thought of as Simplicity]: How difficult the innovation is to understand and/or use.
Trialability: The extent to which the innovation can be tested or experimented with before a commitment to adopt is made.
Observability: The extent to which the innovation provides tangible results.
When I look at these, I think that the Internet of Things gets pretty low scores across the board. I suspect that Warden is being optimistic.
(H/t The Overspill)
j2t#617
If you are enjoying Just Two Things, please do send it on to a friend or colleague.