Welcome to Just Two Things, which I try to publish two or three times a week. Some links may also appear on my blog from time to time. Links to the main articles are in cross-heads as well as the story. A reminder that if you don’t see Just Two Things in your inbox, it might have been routed to your spam filter. Comments are open.
1: How to think about AI
A few months ago I swore a self-denying ordinance here on writing about AI. There was far too much noise and barely any signal. Here quickly, and by way of a recap, are the things I worry about when I think about AI:
that its energy consumption is an order of magnitude higher than conventional internet queries, at a time when we need to reduce energy consumption as we transition away from coal;
that the proliferation of data centres will have bad effects on water systems;
that there’s always a low paid human buried inside the development and delivery systems;
that as the internet is populated by increasing numbers of AI texts, we will suffer ‘model collapse’ that will make large parts of the internet useless; and
that there’s no path to a credible business model that will ever justify the scale of the investment currently being touted by AI’s advocates.
And no, I’m not worried that we’re going to get to the Singularity any time soon.
When I talk about a credible business model, I mean something that generates enough returns to finance the vast investment touted by companies like OpenAI. So far, I’ve seen small, incrementally useful applications in specific domains. These are valuable, but all the same they are basically productivity improvements to existing digital systems. Obviously, when I say this to AI enthusiasts they tell me that I’m suffering from a failure of imagination.
(‘Artificial Intelligence’ (2019), by Shefali Gopinath Madkar. Via Khula Aasmaan. CC.)
This is a long way in to a piece by Drew Breunig that breaks AI use cases into three main types: Gods; Interns; and Cogs. (He slips in a fourth one later on). Here’s his summary of these:
Gods: Super-intelligent, artificial entities that do things autonomously.
Interns: Supervised copilots that collaborate with experts, focusing on grunt work.
Cogs: Functions optimized to perform a single task extremely well, usually as part of a pipeline or interface.
I’m going to pull out some extracts from his description of each one.
Gods
Gods are the human-replacement use cases… Gods require gigantic models. Their autonomous nature – their defining quality – means a low error tolerance and a broad, general model. But a giant LLM surely isn’t enough. We could throw every data center and every database at the problem and still be short.
He’s sceptical about how far this is going to get, because the financial and technical barriers are so high. In the same sceptical vein, he notes that this is the “much-hyped” AGI [artificial general intelligence] that is apparently just around the corner:
(Or maybe they’re just imminent when you’re trying to close one of the largest funding rounds in history? After the round is finalized, you can go back to realism...)
Because of the cost, there are very few players in this bit of the game. But they are very noisy, and they have very good access to policy makers. So the way this affects us is in the form of hype. But this hype
will affect funding, regulation, and politics.
Interns
Interns are the copilots (and a term I’ve shamelessly borrowed from Simon Willison). Their defining quality is that they are used and supervised by experts. They have a high tolerance for errors because said expert is reviewing their output, and can prevent embarrassing mistakes from going further. Interns are focused on the grunt work.
Here’s his list of intern-type programs, mostly in the programming and applications space:
Github Copilot and Cursor are interns for programmers. Adobe Firefly and Visual Electric are interns for designers. Microsoft Copilot and Grammarly are interns for writers. NotebookLM is an intern for researchers. Monterey is an intern for product managers. There are many more examples.
In terms of business models, interns might have a credible path, because they have specific markets in specific domains where you can see specific use cases:
Their model sizes are large, but not massive. One could build new interns – starting from scratch – for millions. If you use an open model, the costs are dramatically lower. Today, Interns are delivering the lion’s share of the realized value from AI. Engineering copilots alone are delivering massive returns, increasing the output and capabilities of expert programmers.
And if you want to improve the performance of an Intern AI, you need to get yourself a better class of expert.
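The ‘supervised by experts’ point is really a design pattern, and a simple one: the model drafts, a human gates. Here is a minimal sketch of that loop. It is illustrative only; the OpenAI client, the model name and the release-note task are my assumptions, not anything from Breunig’s piece.

```python
# A minimal sketch of the "Intern" pattern: the model does the grunt work,
# an expert reviews, and nothing ships without explicit approval.
# The client library, model name and task are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def draft_release_note(diff_summary: str) -> str:
    """The intern's job: produce a first draft from a summary of changes."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Write a short, plain-English release note."},
            {"role": "user", "content": diff_summary},
        ],
    )
    return response.choices[0].message.content


def publish_with_review(diff_summary: str) -> str | None:
    """The expert is the gate: errors are tolerable because nothing leaves unreviewed."""
    draft = draft_release_note(diff_summary)
    print(draft)
    if input("Publish this? [y/N] ").strip().lower() == "y":
        return draft
    return None
```

The interesting part isn’t the model call; it’s the gate. Everything the model produces passes through someone who knows what good output looks like, which is why the error tolerance can be high.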
Cogs
This is the third bucket of use cases.
Cogs are comparable to functions. They’re designed to do one task, unsupervised, very well. Cogs have a low tolerance for errors because they run with little expert oversight.
Cogs, he says, can be built into interfaces, for example as speech translators.
Cogs exist in systems as reliable components. Their focus on one task and a low tolerance for errors mean they are usually built with fine-tuned or heavily prompted small, open models. Cogs are cheap to run and relatively cheap to ship.
This is probably the most widely used type of AI at the moment, but the components are even cheaper than Interns, and the returns are even more narrowly scoped: they accrue to specific applications, not to whole domains. Basically, they’re like plumbing. Cloud platforms, in particular, have gone big on the Cogs use case.
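To make the ‘comparable to functions’ point concrete, here is a minimal sketch of what a Cog might look like in code: one narrow task, wrapped as a function and dropped into a larger pipeline, served by a small open model rather than a frontier-scale one. The Hugging Face library and the particular model are my choices for illustration, not anything from Breunig’s piece.

```python
# A minimal sketch of a "Cog": a single-purpose, unsupervised component built
# around a small open model. Library and model choice are illustrative assumptions.
from transformers import pipeline

# Helsinki-NLP/opus-mt-en-fr is a small, open English-to-French translation model.
translate_en_fr = pipeline("translation", model="Helsinki-NLP/opus-mt-en-fr")


def translate_caption(text: str) -> str:
    """The whole Cog: one function, one task, slotted into a larger pipeline."""
    return translate_en_fr(text)[0]["translation_text"]


if __name__ == "__main__":
    print(translate_caption("The monsters a period creates tell you a lot about the times."))
```

Nothing about this needs a giant model or a human in the loop, which is exactly why Cogs are cheap to run and easy to treat as plumbing.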
A few months earlier, Breunig had done a simpler version of this piece where he broke AI use cases into two types, Sober AI and Hyped AI. Clearly the Cogs and Interns come into the former category. The earlier article came with a useful table.
(Source: dbreunig.com)
By the way, his fourth category is Toys, which is, he thinks, a sub-category of Interns. A toy might be an image generator or a chatbot. He calls them Toys because they are used by non-experts on a casual basis.
Toys have a high tolerance for errors because they’re not being relied on for much beyond entertainment.
All the noise about AI is about the ‘Gods’ and the Hyped AI. All the use and the value is in ‘Interns’, ‘Cogs’, and Sober AI. The problem with the noise about Hyped AI is that it has an enormous opportunity cost. It sucks in investment money which would be better spent elsewhere, and takes up brainspace in the heads of policy makers that would be better used in thinking about the small but useful productivity gains that can be made from the everyday use of Sober AI.
H/t Roblog
2: The horror within
I’m a fan of the cultural writer John Higgs, and he marked Hallowe’en a couple of weeks ago with an engaging post about monsters. As he says near the beginning of the post, the monsters a period creates tell you a lot about the times:
The uptight repressed Victorians feared the sensuous Count Dracula, threatening to nibble at their untouched necks. Mary Shelley wrote Frankenstein as we entered the Age of Enlightenment, fearing the inhuman horrors that a godless scientific age could create. When those raised in the shadow of World War Two entered the Space Age, their memory of fascism and fears of radiation gave us the Daleks.
The post is prompted by a recent documentary about Hammer Studios, which became synonymous with low budget British horror films in the 1950s and the 1960s. He wonders how Hammer would go about creating monsters these days, when there’s less of a shared view of what we should fear. But one obvious candidate, he thinks, is the billionaire:
It’s certainly hard to imagine a billionaire hero being created now, in a similar way to how the twentieth century produced characters like Iron Man and Batman. If a modern story has a billionaire character, you can be certain they will turn out to be a wrong ‘un. Billionaires are the modern expression of a role that has historically been played by feudal barons, slavers, oligarchs, nabobs or factory owners.
He thinks this is partly because the scale of their wealth is more or less impossible for the human mind to grasp. Counting to a million is difficult—it would take a month to do it non-stop, assuming you could survive without sleep that long.
But it would take you more than a lifetime to count to a billion. A billion is a thousand millions, so that’s roughly a thousand of those months. Literally: it would take you around a hundred years.
There is something Lovecraftian about a billion - it is unimaginable, unnatural and impossible to truly comprehend without going mad.
Of course, the other problem with being a billionaire is that you never know if people like you because of your charm, intelligence and general good humour, or if they just seem to like those things because you have so much money. As a result, as I wrote about here recently, they tend to shut themselves away from society.
(Not everyone has this view: other parts of society just think that billionaires should be admired. Higgs shares a clip from BBC Radio 5 Live in which the presenter Emma Barnett seems to think that billionaires are some kind of naturally occurring phenomenon, like, say, tides or the Earth’s orbit, rather than the construction of a particular form of political economy.
”How on earth are you going to stop them?”
she asks disbelievingly of a caller who has just told her that he doesn’t think anyone should be a billionaire. And to think there are people who think that journalism is broken. But I digress.)
But if you are a billionaire, clearly your fortune is not a thing of horror. The real monster—as the billionaire Elon Musk has pointed out to us—is “the woke mind virus”.
And it has to be said that—when you look at some of the British horror films made by Hammer—this has a lot of things going for it. A virus that has captured our brains! This idea happens to have quite a long pedigree. The notion of people whose brains and bodies have been transformed by an alien force is basically the plot of Invasion of the Body Snatchers, made in 1956 and remade in 1978.
The first version was widely seen as an allegory about the fear of Communism in the United States—the director, Don Siegel, said as much. So you can see why far-right billionaires might want to cast this as a monster. Indeed, when you watch the trailer for the 1956 film, you can imagine Musk identifying with the character who has seen the danger but is being ignored by everyone:
As Higgs says:
To those outside this tribe, of course, this idea is nuts. The woke mind virus is not a real thing, they insist, beyond a general sense that we should treat others decently. Were it not for the extent to which attacks on the woke mind virus emboldens white supremacists and misogynists, belief in it would be extremely funny.
Higgs is knocking out a quick Hallowe’en piece here, and indeed the whole piece may just be a long set-up to a gag (see below).
But let me take the idea seriously for a moment. First: billionaires aren’t really good for horror films. As Higgs well knows, they are much better suited to be Bond villains. The billionaire Peter Thiel, with his interest in Seasteading and his plans to build a bunker in Aotearoa New Zealand, is clearly auditioning for this part already.
Once film critics and academics started taking the horror genre seriously, a whole school emerged that basically said that the horror film was about the return of the repressed.
In Pam Cook’s Cinema Book, the section on horror quotes Robin Wood’s 1979 essay written for a season of horror films at the Toronto Festival of Festivals:1
One might say that the true subject of the horror genre is the struggle for recognition of all that our civilisation represses and oppresses: its re-emergence dramatised, as our nightmares, as an object of horror, a matter for terror, the ‘happy ending’ (when it exists) typically signifying the restoration of repression.
So, for example, we can read a whole set of horror movies, from Psycho (1960) to Dressed to Kill (1980), as being about women being in the “wrong” place—wrong, here, being from a male perspective. The Fly (1986) is generally read as a way of writing about AIDS—a mutation that makes you other than fully human. John Carpenter’s Assault on Precinct 13 (1976)—an updated zombie film—is about urban collapse. There’s a whole host of films about the horror of the home, just as home ownership became a centre of social (suburban) aspiration.
An article in Screenrant by Steve Cuffari brings this up to date for the 21st century:
[T]he novelty of 21st-century horror movies is more than just new technology or new societal issues. It is the inclusion of perspectives other than that of the middle-class, heterosexual, white male, which has been dominant for the majority of cinematic history. Get Out is a perfect example of this. The movie takes familiar elements of horror movies—namely body snatching and human sacrifice—and adds an essential layer that is rooted in the Black experience. Its unapologetic criticism of white treatment of Black bodies and Black lives is groundbreaking.
Horror films often deal with emerging ‘structures of feeling’, to use Raymond Williams’ term, in that they get to a psychic concern that is hard to express in other ways, because of the prevailing or dominant views of society at a given time. The obvious question, if we are thinking about monsters, is how climate change will be expressed in horror. Because it won’t be head-on.
Anyway, here’s Higgs’ pay-off: as I said, he may have just been setting up this punchline:
Perhaps only the story of the inhuman crimes of a woke billionaire will bring people together. So if you’ll excuse me, I’m off to work on a script entitled Taylor Swift Will Kill Again.
UPDATE: Gaming the response to Trump administration policies
I wrote here recently about a documentary on a ‘war game’ about a possible contested election in the United States. We know now that that didn’t happen, but it is worth noting a piece in The New Republic (limited paywall) by Rosa Brooks—who’s a law professor at Georgetown University—on a set of simulation games about resistance to policies enacted by a Trump administration. They were hosted by the non-partisan Brennan Center for Justice. It’s a sobering read.
As with the earlier game, this was a soft simulation in which people played roles that they had previously performed in real life (politicians played politicians, generals played generals, etc). It’s quite a long piece, and this is only an update, so I will pull out only a couple of extracts.
What didn’t work well in the game: litigation had some effect, but more at state level. Large scale protests were misrepresented in mainstream and social media, and were used as an excuse for hard-line responses. But:
Our exercises made it clear that mitigation is feasible. We can, for instance, find ways to throw sand in the gears whenever possible. Patriotic civil servants and military personnel faced with ethically problematic directives can raise questions, insist on legal reviews, demand greater clarity, ask for more process... Everyone concerned about autocratic overreach can, and should, seek allies of convenience, wherever they can be found... At the state and local level, governors, mayors, legislators, and other officials committed to rights and the rule of law can push back against federal overreach.
The other danger in the game is that opposition leadership fails to co-ordinate its actions effectively. In the exercises, “the opposition was fragmented and discordant, unable to agree on tactics or messages.” I can see how this might play out in a game, but I’m less persuaded. In the UK, for example, the most effective responses to deportations have been loosely co-ordinated local actions. At the very least, I’d want to see this articulated a bit better: what kind of leadership, designed to achieve what sort of ends, through what means. Because effective responses do matter:
With the help of opposition leadership structures, we can establish and support protection networks that can quickly provide legal assistance, cybersecurity help, communications assistance, and even shelter or physical protection to those who find themselves under threat... This is as critical as anything else: Autocrats succeed by sowing fear and mistrust.
Rosa Brooks has also written a second account of the simulation at The Bulwark, with more details of how it played out rather than a focus on implications.
j2t#615
If you are enjoying Just Two Things, please do send it on to a friend or colleague.
I have a secondhand copy of the book, and mostly it barely seems to have been read. But the horror section has notes and underlinings and exclamation marks scrawled all over it.