11 December 2023. Trust | Aviation
How to make sure that AIs don’t destroy social trust. // We’ll never have enough cooking oil for long-haul flights. [#524]
Welcome to Just Two Things, which I try to publish three days a week. Some links may also appear on my blog from time to time. Links to the main articles are in cross-heads as well as the story. A reminder that if you don’t see Just Two Things in your inbox, it might have been routed to your spam filter. Comments are open.
1: Making sure that AIs don’t destroy social trust
I spent some of my weekend helping my wife with a complaint about a bank that she has had to take to the Ombudsman, so perhaps I was primed for Bruce Schneier’s extended post on the distinction between trust and reliability.
This is from the first paragraph:
I trusted Uber to arrange a taxi for me, and the driver to get me to the airport safely. I trusted thousands of other drivers on the road not to ram my car on the way. At the airport, I trusted ticket agents and maintenance engineers and everyone else who keeps airlines operating. And the pilot of the plane I flew in... And all the people that prepared and served my breakfast, and the entire food supply chain.
The piece argues, first, that trust is essential to the effective functioning of our societies; second, that these two types of trust, interpersonal trust and social trust, get confused with each other; and third, that this confusion will get worse as businesses and organisations depend more on automated rule-based systems driven by large language models. (He calls these AIs, but for me the algorithmic nature of these systems is their more important characteristic. It would matter less if these were computer-supported human decisions.)
(We assume the food is safe because of the health codes that the restaurant has to follow and the inspection regime that goes with it. CC photo via urbanjunkies.com.)
We can take the first point as read. There’s a lot of research on the importance of trust in societies that work well.
By interpersonal trust, Schneier means the trust we extend to people we know:
When we say that we trust a friend, it is less about their specific actions and more about them as a person. It’s a general reliance that they will behave in a trustworthy manner. We trust their intentions, and know that those intentions will inform their actions.
In contrast, social trust is the mechanism by which we can assume, generally, that people we don’t know won’t harm us. In his example of the plane flight above, it’s the other drivers on the road, who didn’t ram his vehicle, and the people who prepared his food, who didn’t poison his breakfast:
We might not know someone personally, or know their motivations—but we can trust their behavior. We don’t know whether or not someone wants to steal, but maybe we can trust that they won’t. It’s really more about reliability and predictability.
This isn’t completely reliable: people do attack other people on the street from time to time, banks take money from bank accounts without authorisation, and so on. But most of the time, most people are trustworthy enough, and that is enough.
As it happens, Schneier wrote a book about this back in 2012 in which he identified four systems for maintaining social trust. These were:
our innate morals, concern about our reputations, the laws we live under, and security technologies that constrain our behavior. I wrote about how the first two are more informal than the last two. And how the last two scale better, and allow for larger and more complex societies. They enable cooperation amongst strangers.
To spell it out: the first two are the basis of interpersonal trust, while the last two are the basis of social trust. This is how it works:
I don’t trust the cooks and waitstaff at a restaurant, but the system of health codes they work under... Imagine that it’s a fast food restaurant, employing teenagers. The food is almost certainly safe—probably safer than in high-end restaurants—because of the corporate systems of reliability and predictability that is guiding their every behavior.
His post is long (it is based on a talk), so I’m going to focus on his points about how the last two are changing. Because the thing is this: social trust can scale in a way that interpersonal trust doesn’t:
Because of how large and complex society has become, we have replaced many of the rituals and behaviors of interpersonal trust with security mechanisms that enforce reliability and predictability—social trust. But because we use the same word for both, we regularly confuse them. And when we do that, we are making a category error.
This is because we think of the corporation as having behaviours that we associate with trust in other people, when all they are delivering is reliability and predictability:
We might think of them as friends, when they are actually services. Corporations are not moral; they are precisely as immoral as the law and their reputations let them get away with... They refer to themselves like they are people. But they are not our friends. Corporations are not capable of having that kind of relationship.
Corporations are, in our current world, mostly profit-maximising machines driven by investors and financial markets. Schneier is worrying about what happens to social trust when AIs are added to this corporate design. He thinks this is a much more serious problem, and a more immediate one, than the discussions about existential risk.
Because, he says, AIs will be designed by corporations to serve the ends of those corporations, while also behaving as if they are our friends. They will have two features in particular: first, they will be more relational:
This relational nature will make it easier for those double agents to do their work. Did your chatbot recommend a particular airline or hotel because it’s truly the best deal, given your particular set of needs? Or because the AI company got a kickback from those providers?
Second, they will be more intimate:
One of the promises of generative AI is a personal digital assistant. Acting as your advocate with others, and as a butler with you. This requires an intimacy greater than your search engine, email provider, cloud storage system, or phone.
In short:
It will act trustworthy, but it will not be trustworthy. We won’t know how they are trained. We won’t know their secret instructions. We won’t know their biases, either accidental or deliberate. We do know that they are built at enormous expense, mostly in secret, by profit-maximizing corporations for their own benefit.
So how do we ensure that social trust does not get destroyed? The answer: this is why we have governments, because it is “government that provides the underlying mechanisms for the social trust essential to society.” It mostly does this through law and regulation.
But they need to be regulating the right thing. Regulating the AIs themselves won’t work, since they have no intentions. We need to regulate the humans who design, implement and manage them.
There are mechanisms that already exist to ensure social trust which we can adapt for regulating AIs. The first is the notion of “fiduciary duty” that we already see in areas such as health, where a doctor or pharmacist has a fiduciary responsibility to the patient, not to the health management organisation or the pharma company. So we need data fiduciaries.
(We used to trust our pharmacists less. Public domain image via the Look and Learn picture archive.)
The second is the idea of the public model—which is not the same as a corporate model that is free to use, or even an open source model:
A public model is a model built by the public for the public. It requires political accountability, not just market accountability. This means openness and transparency paired with a responsiveness to public demands. It should also be available for anyone to build on top of. This means universal access. And a foundation for a free market in AI innovations.
But: we need governments to mandate these things. Because corporations are not moral. And we need social trust to work, to underpin the way our societies work.
2: We’ll never have enough waste cooking oil for long-haul aviation
There was a lot of noise a couple of weeks ago when Virgin Atlantic flew a jet across the Atlantic powered on “sustainable aviation fuels” (SAF). This was how The Next Web summarised the story:
The Tuesday trip from London to New York on a Virgin Atlantic 787 has been celebrated by airlines and politicians as a “milestone” in the journey to net zero. Scientists and climate campaigners, however, have poured scorn on these claims.
The British government, of course, in the shape of our current Transport Secretary Mark Harper, used the occasion to roll out some standard “magic bullet” remarks about post-fossil transition:
"Today's historic flight, powered by 100% sustainable aviation fuel, shows how we can both decarbonize transport and enable passengers to keep flying when and where they want.
(Greenwashing transatlantic aviation. Photo: Virgin Atlantic.)
There are technical details about the fuel mix online, but it was basically made from waste oil and fats—which isn’t a zero carbon fuel, as it happens. But the important point is that this was a demonstrator flight, partly funded by the Department for Transport, involving a consortium that includes Imperial College London, University of Sheffield, Boeing, Rolls-Royce, and BP.
Of course, the challenge for long-haul flight is that, largely for weight reasons, there is at present no feasible non-fossil alternative to liquid fuels, which is where sustainable aviation fuels come in. For short- or medium-range flights, batteries or hydrogen are likely to provide an alternative.
The sustainability writer Chris Goodall has now popped up on his Carbon Commentary blog to explain that this type of SAF doesn’t even begin to fill the fuel gap on long-haul flights:
Aviation fuel made from waste oil and fats is not zero carbon. Perhaps more importantly, the quantities available for use in the UK and elsewhere are not sufficient to ‘decarbonize aviation’. And published official reports show that the government knows this. The actual share of aviation needs that can be met by these two sources is almost certainly less than 2%, even if these raw materials are entirely used for this purpose, rather than existing uses.
Because he is of an analytical turn of mind, Goodall has done the sums on the used cooking oil (UCO) market in the UK. If you’re trying to run an airline fleet, the sums don’t look good.
The UK produces about 250 million litres of UCO a year, which is mostly used to make biodiesel to power vehicles. But let’s assume for a moment that it was all used for aviation, in which case it would generate around 160,000 tonnes of aviation fuel.
Current UK airport demand for aviation fuel is more than 12 million tonnes. So even if all of that SAF was used as efficiently as possible, it would substitute for 1.3% of UK aviation fuel demand. But that may not be the best way to use it. For example:
Every single takeaway in the country, every restaurant and catering establishment would have to devote all its UCO to one particular use. Biodiesel and other uses (would) have no access to the UCO even though at the moment, for example, McDonalds uses its own waste oil for biodiesel for its distribution fleet.
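As a sanity check, here’s a minimal back-of-envelope sketch of those numbers in Python. The UCO density and process yield are my assumptions (plausible values for a waste-oil-to-jet-fuel process), while the volume and demand figures come from the piece:

```python
# Back-of-envelope check on the used cooking oil (UCO) numbers.
# Density and process yield are assumptions; the UCO volume and
# fuel demand figures are the ones quoted in the piece.

uco_litres_per_year = 250e6      # UK UCO production (from the piece)
density_kg_per_litre = 0.9       # assumed density of used cooking oil
process_yield = 0.7              # assumed mass yield, UCO -> jet fuel

uco_tonnes = uco_litres_per_year * density_kg_per_litre / 1000
saf_tonnes = uco_tonnes * process_yield

jet_fuel_demand_tonnes = 12e6    # UK airport demand (from the piece)
share = saf_tonnes / jet_fuel_demand_tonnes

print(f"SAF from all UK UCO: {saf_tonnes:,.0f} tonnes")   # ~157,500
print(f"Share of UK demand:  {share:.1%}")                # ~1.3%
```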
Other calculations get to similar numbers. And the share would only go up a tiny bit if the fuel were used solely for long-haul flights, although replacing the short- and medium-haul fleet is a 30-40+ year project.
This is Chris Goodall’s conclusion:
For aviation to be fully decarbonised the world will need a mixture of battery aircraft for short trips, some use of hydrogen in medium-sized aircraft and a very large scale replacement for aviation kerosene for long distance travel. Although using UCO is appealing because of the ease of conversion to fuel, the volumes are tiny in the context of the global need.... SAF from waste oils is a dead-end.
Hence the comments about greenwashing. Indeed, the fact that there’s so much noise about such a desperate option tells you something about the extent to which the aviation sector is up against it when it comes to decarbonisation.
The government has also talked about other potential waste sources, such as forestry and municipal wastes, but these suffer from a combination of technical difficulties and small scale.
If we are seriously talking about sustainable alternatives to aviation fuel, we will need to make synthetic fuels from hydrogen and CO2 captured directly from the air. This is at an early stage, and there are huge questions about whether it will scale, and at what price point. Goodall points to Infinium and Norsk e-Fuel as companies active in this area. Other sectors will need synthetic fuels as well. We may just have to get used to the idea of flying a lot less.
(Infographic of Direct Air Capture. Source: Infinium)
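To get a rough feel for why scaling is such a hard question, here’s a hedged sketch of the electricity a power-to-liquid route might need. The energy content and conversion efficiency here are my assumptions, not figures from Goodall:

```python
# Rough scale check on power-to-liquid (PtL) synthetic jet fuel.
# Energy content and PtL efficiency are assumptions; the demand
# figure is the UK number quoted above.

fuel_energy_kwh_per_kg = 11.9    # assumed: ~43 MJ/kg lower heating value
ptl_efficiency = 0.45            # assumed electricity-to-fuel efficiency

elec_kwh_per_kg = fuel_energy_kwh_per_kg / ptl_efficiency   # ~26 kWh/kg

uk_demand_tonnes = 12e6          # UK aviation fuel demand (from the piece)
total_twh = uk_demand_tonnes * 1000 * elec_kwh_per_kg / 1e9

print(f"Electricity per kg of e-kerosene: {elec_kwh_per_kg:.0f} kWh")
print(f"To supply all UK aviation: {total_twh:.0f} TWh/year")   # ~320 TWh
```

On those assumptions, supplying all current UK aviation demand with e-kerosene would need electricity of the same order as the UK’s entire annual generation, which is roughly 300 TWh.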
The Decarbonising UK Transport report, which I co-wrote with Glenn Lyons and Charlene Rohr, is still a pretty good guide to this area.
Notes from readers: writers
Nick Wray responded to the piece on Friday about Monica Ali’s talk on writers and AI with a link to the Scottish newspaper The Herald, which reported on the response of Scottish authors to news that an AI training database contained copies of their work. It’s outside the paper’s usual paywall. The writer Damian Barr summed up the argument:
"AI can produce things that are passable at this point but not delightful or inspiring. It goes to this philosophical question of what do we want art to do. It's not just to reflect something back to ourselves, it's to change us in some way or challenge us and that's about new ways of asking a question and this doesn't do that, it is a dead resource…
"AI offers a profound economic and moral challenge."
j2t#524
If you are enjoying Just Two Things, please do send it on to a friend or colleague.