11 November 2021. War | Robots.
Goodbye to All That on Armistice Day; Killer robots are already here. We should be worried.
Welcome to Just Two Things, which I try to publish daily, five days a week. (For the next few weeks this might be four days a week while I do a course: we’ll see how it goes). Some links may also appear on my blog from time to time. Links to the main articles are in cross-heads as well as the story.
#1: Goodbye to all that
To mark Armistice Day, I’m republishing a review of Robert Graves’ first world war memoir Goodbye To All That, first published on my ‘Around the Edges’ blog.
I started reading Goodbye to All That during the first, tightly observed, lockdown as a way to transport myself from the confines of the house to a different place and time. It succeeded in doing that.
The book, published a decade after the end of World War One, deserves its classic status.
In Goodbye to All That, Graves is not trying to make an overt point about the evil of war—unlike, say, Siegfried Sassoon—although he has plenty to say about its banality. This means that he has an eye for its quotidian detail.
For example: the prices the French charge the Allied soldiers when they are behind the lines; the organisation of the brothels (different for officers and men, of course); the fact that officers have servants who clean their boots and polish their buttons, even down to the level of Captain (in charge of a Company).
You get a different idea of the fighting as well. Our mental images of the First World War are shaped by the big pushes over the top in which thousands died. (As—spoilers—at the end of Blackadder Goes Forth).
But you get the impression from reading Graves that these didn’t happen that often, which makes sense: you would quickly run out of men. A lot of time was spent rebuilding trenches, both in the front line and the support trenches.
There were smaller actions, designed to change the geometry of the front line. Robert Graves has a disagreement with a senior officer who wants to attack a German salient, apparently out of tidiness. Graves couldn’t understand the logic. As far as he was concerned, if the Germans wanted to hold a position where they were under fire from two sides, that was just fine.
There were also revenge actions, as in one raid made in response to a mining attack (p.171). This was an elaborate affair, with shrapnel barrages designed to push the Germans back into the support trenches — repeated until, eventually, they stopped coming forward between barrages.
When the attack came, it was accompanied by a smoke screen and a barrage on the support trenches, to prevent the Germans coming forwards again. Graves had to write an account for the regimental records, noting that the attackers also carried makeshift pikes: “effective at close quarters, and lighter than the bayonet.”
A few days after finishing the book, I realised that the reason Goodbye to All That endures as a classic is the tension running through the heart of it. Graves comes to hate the war, but he is proud of being a good soldier. There are striking stories about the difference between a good battalion and a poor one.
“The boast of a good battalion is that it never lost a trench… meaning, they had never been forced out of a trench without recapturing it before the action ended.” (p.155)
Arms drill was a factor in maintaining this discipline. The savvy Company commander would check, when he arrived at the front, who was holding the positions on each flank, to decide if he could rely on them.
Graves got to know W. H. R. Rivers, the doctor who more or less invented what we now call 'post-traumatic stress disorder', although it was then more widely known as shellshock. In Goodbye to All That it's referred to as 'neurasthenia' (p.143-44).
For the first three weeks in the front line, an officer was more or less useless. He didn’t know his way around and didn’t recognise “degrees of danger”. Between three and four weeks “he was at his best. Then his usefulness gradually declined”.
At six months, he was still just about alright, but by nine or ten months he had become a drag on the other officers, unless he was given a break on a course, or in hospital. After two years, they tended to become alcoholics (two bottles of whisky a day) and incapable of making good decisions.
Rivers’ medical explanation was that one of the ductless glands (Robert Graves thought it was the thyroid) normally pumped a sedative into the bloodstream, but under stress it stopped doing this. Graves, writing in 1929, reckoned that it had taken ten years for his blood to recover.
#2: Killer robots are already here. We should be worried.
From the history of war to the future: According to a recent United Nations report, autonomous weapons systems—more popularly known as ‘killer robots’—may have already killed people on a battlefield. The story is reviewed in The Conversation by human rights academic James Dawes. But first, a definition and some data:
Autonomous weapon systems are robots with lethal weapons that can operate independently, selecting and attacking targets without a human weighing in on those decisions. Militaries around the world are investing heavily in autonomous weapons research and development. The U.S. alone budgeted US$18 billion for autonomous weapons between 2016 and 2020.
Of course, human rights and humanitarian organisations are less keen on this development. Military organisations themselves might also want to proceed with caution, since any deployment of autonomous weapons systems could destabilise many of the doctrinal assumptions they currently use to guide their operations.
(The Kargu-2, made by a Turkish defense contractor, is a cross between a quadcopter drone and a bomb. Image via Armyinform.com.ua. CC BY 4.0).
In his article, Dawes identifies four big risks:
1. Misidentification. The first risk is already familiar to us from the deployment of remote-controlled drones: “When selecting a target, will autonomous weapons be able to distinguish between hostile soldiers and 12-year-olds playing with toy guns? Between civilians fleeing a conflict site and insurgents making a tactical retreat?”
But it’s not exactly like for like, because autonomous weapons systems can potentially operate at a whole different scale.
The scale, scope and speed of killer robot systems – ruled by one targeting algorithm, deployed across an entire continent – could make misidentifications by individual humans like a recent U.S. drone strike in Afghanistan seem like mere rounding errors by comparison... when AI systems err, they err in bulk.
We probably don’t need to spend more time here on the general risks of AI and machine learning systems. But sometimes, when these systems make mistakes, even the people who wrote the programs can’t work out why.
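To make the scale point concrete, here’s a back-of-the-envelope sketch in Python. Both numbers are hypothetical, chosen only to illustrate the arithmetic: even a very accurate system, making enough decisions, errs in bulk.

```python
# Back-of-the-envelope illustration of "erring in bulk".
# Both figures below are hypothetical, chosen only to show the
# arithmetic; they are not drawn from any real weapons system.

error_rate = 0.001           # assume 99.9% accuracy per targeting decision
decisions_per_day = 100_000  # hypothetical fleet-wide decision volume

expected_errors = error_rate * decisions_per_day
print(f"Expected misidentifications per day: {expected_errors:.0f}")
# Prints: Expected misidentifications per day: 100
```

A human operator making the same mistake makes it once; a single algorithm deployed fleet-wide makes it everywhere it runs.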
2. Low-end proliferation. Although militaries always think they’re going to be able to control the proliferation of weapons, we know in practice that they don’t. Once weapons are out there, they spread. Dawes makes a comparison with the Kalashnikov.
Market pressures could result in the creation and widespread sale of what can be thought of as the autonomous weapon equivalent of the Kalashnikov assault rifle (https://www.npr.org/templates/story/story.php?storyId=6539945): killer robots that are cheap, effective and almost impossible to contain as they circulate around the globe. “Kalashnikov” autonomous weapons could get into the hands of people outside of government control, including international and domestic terrorists.
3. High-end proliferation. Nations could compete to develop autonomous weapons because they believe such weapons reduce the two constraints on waging war: “concern for civilians abroad and concern for one’s own soldiers.” In turn, this alters the cost-benefit calculations that nations make before going to war. Autonomous weapons might, for example, also be set up to carry biological, radiological, or chemical weapons, since the risks to the forces deploying them are lower.
There will doubtless be lots of performative ethical noises (for example about “surgical strikes” and other military myths) that try to reassure everyone else that this is all just fine.
4. Undermining the laws of war. The final consequence is that autonomous weapons undermine the conventions that we have developed to try to manage the conduct of warfare — the Geneva Conventions, for example, the first of which dates back to 1864.
These are predicated on the principle that people who wage war are accountable for the conduct of their soldiers. For example, the right to kill enemy soldiers in combat doesn’t also give you the right to murder civilians.
But how can autonomous weapons be held accountable? Who is to blame for a robot that commits war crimes? Who would be put on trial? The weapon? The soldier? The soldier’s commanders? The corporation that made the weapon? Nongovernmental organizations and experts in international law worry that autonomous weapons will lead to a serious accountability gap.
In short, unless we can ensure that autonomous weapons only exist within a human chain of command, there will be war crimes without an identifiable war criminal.
One of the long trends in warfare is the shift from wars in which combatants are most likely to be killed to wars in which civilians are most likely to be killed. (From memory, World War One was the last war in which combatants were more likely to be killed than civilians.) Obviously, this goes hand in hand with the increasing distance between the weapons and the dead, an idea that goes back at least to Douhet’s 1921 book ‘The Command of the Air’. Laurie Taylor explored this in an interesting episode of the BBC podcast Thinking Allowed (28 minutes).
On this reading, autonomous weapons are the latest step in a pattern that is now a hundred years old.
j2t#205
If you are enjoying Just Two Things, please do send it on to a friend or colleague.