20 November 2023. Billionaires | AI
Billionaires are a policy problem. Time to rein them in. // Using AI to teach history [#517]
Welcome to Just Two Things, which I try to publish three days a week. Some links may also appear on my blog from time to time. Links to the main articles are in cross-heads as well as the story. A reminder that if you don’t see Just Two Things in your inbox, it might have been routed to your spam filter. Comments are open.
1: Billionaires are a policy problem. Time to rein them in.
The writer and innovator Geoff Mulgan has grown a bit measured since becoming an academic a few years ago, even sometimes a bit bland. And that’s a shame, because it didn’t use to be the case. Mulgan has written some great books.
But a recent article on the billionaire problem, in Prospect magazine, reminded me that he is both a good writer and has a sharp policy brain.
His starting point was attending a couple of events at which there was a decent selection of billionaires, from different parts of the world. His main takeaway from this:
(S)pending time with them crystallised for me just how big a billionaire problem the world now has, and why solving it may be a precondition for successful action on so many of the other problems we face. It also crystallised for me that we have a problem even thinking about billionaires and possible remedies to the problem they present.
(Jeff Bezos, by Deviant Art. CC BY-NC-SA 3.0)
He sees the billionaire problem as being about their pervasiveness, the way they dominate both politics and other areas of life, including business and, notably, philanthropy. This is the scale of the problem:
Over the last decade, globally, the richest 1 per cent have taken about half of all new wealth, and in the two years to December 2021 it was two thirds—twice as much as the other 99 per cent—leaving the world’s 3,000 or so billionaires worth some $13tn. In other words, we are seeing an extraordinary concentration of wealth... There may be a justification for such a remarkable imbalance of rewards, but I’ve never heard it. Similar patterns in the past often led to violent responses.
He points here, by the way, to Eric Vuillard’s book The War of the Poor, which describes how in 16th century Germany the poor “massacred the rich in large numbers” before mercenaries re-imposed order.
As Mulgan observes, no billionaire actually needs all of their money. They could get by, still luxuriously, on an awful lot less. He suggests that part of this is an addiction problem. (Interestingly, Bob Costanza makes a related point in similar language about our societies being “addicted to growth”.)
But part of this is about a view of efficiency in production which isn’t matched by a view of efficiency in consumption—which is completely missing from economics.
Ask a typical economist and they will tell you that while a factory that’s closed half the time is inefficient, it’s perfectly efficient for a rich man to own 20 houses even if most of them are empty most of the time. Yet every undergraduate student of economics learns the concept of diminishing marginal utility—the idea that the more you have of something, the less additional good it does you.
And, of course, climate change is the area where this inefficiency of consumption matters most strikingly. A reminder, since I wrote about this here recently, that the numbers are staggering as well:
(O)ne study of 125 of the world’s richest billionaires found that their average annual emissions are a million times higher than those of someone outside the richest 10 per cent of humanity.
There’s an almost complete lack of political discussion about any of this, largely because billionaires influence the policies of parties across the political spectrum, mostly just by funding them. (There’s a good section in the article on some of the detail here.) The only place that might be a limited exception is China.
Even donations suffer from an efficiency problem, although Mulgan doesn’t quite put it like that. The billionaire who boasts about their gifts to Harvard is pumping money into an institution that doesn’t know what to do with all the money it has already got.
So what is to be done?
Mulgan’s starting point, from his book The Locust and the Bee, is that you can’t understand capitalism without understanding that it always rewards both production and predation. The question is where the emphasis is at any point in time:
There are times when the rewards flow generously to innovators and creators. But there are other times when rewards flow to rentiers, providing unwarranted windfalls to the lucky, the well-positioned or those who have fixed the rules of the game in their own interest... The vast shift of wealth over the last decade or so does not primarily reflect the genius of billionaires, but rather structural flaws in our systems that have given their addiction to money free rein.
There have been some half-hearted proposals from parts of the billionaire class to do something about all of this. Mulgan mentions the “Giving Pledge” and “Patriotic Millionaires”, although it seems hard to track whether they have much impact. So they may need a little help from the rest of us.
(‘Koch Brothers’, by DonkeyHotey/flickr. CC BY 2.0)
Despite the historic lacuna in economics about the effect of extreme wealth, economists are now taking this seriously. Mulgan points to Stiglitz and Piketty, of course, but also mentions Gabriel Zucman (new to me), the winner of the prestigious John Bates Clark medal this year, who has been working on tax questions and has done “important work on the role of tax havens in supporting the billionaire class”. Piketty, for his part, has proposed a tax rate of 90% on wealth over $1 billion.
Mulgan thinks that the political landscape is shifting in this direction, partly because governments need the money, and because the arguments that wealth taxes stunt innovation don’t stand up to scrutiny:
Far too often our systems simply give windfall gains to people who have done little or nothing to make the economy more productive. As one example, the biggest gainers from advances in the UK’s knowledge-based and creative economy have probably been property owners in the big cities, such as the Duke of Westminster.
Mulgan’s own policy agenda would comprise three elements:
A radically different approach to taxes, especially on wealth, “recognising that taxes will need to be quite diverse to handle property... land, financial assets, artworks and so on”
Restrictions on forms of hyper-consumption, such as private jets
Ending plutocratic control of both media and social media.
As he says, we should at least be able to discuss the merits of these as policy options, rather than having these discussions closed down by billionaire influence.
Either way, Mulgan thinks that some kind of crunch is coming. We’ve seen this in the past, at the end of the 18th century, in France, and at the end of the 19th in the United States. The shift is going to have to come from politics and culture, but he still wonders why billionaires themselves are so incurious:
(F)ew billionaires appreciate that the power of their class, their grip both on politics and on communications, and their hoarding and promotion of waste on a massive scale, appals many today and is likely to appal future generations. (But) I have some hope in the next generation, many of whom are much more ecologically and socially conscious than their parents.
2: Using AI to teach history
I was running a workshop on the future of education in a London borough recently, and a head teacher mentioned that his school was using AI to remove the drudge work from their teachers’ day-to-day routines. Apparently this had freed up 20-30% of their time, so they could focus more on the teaching.
This seems a more likely outcome for AI to me than the-sky’s-going-to-fall-in-in-fifty-years discussions that I seem to see more of in my feed.
So perhaps I was primed when I saw an article in Hyperallergic about using ChatGPT in work with students which forced them to engage with it critically.
Sarah Bond teaches history at the University of Iowa, and she runs a screening of Gladiator at the start of her Roman Empire course. She’s as aware as you are that Gladiator is not a documentary guide to Roman history, and this is the point. Her starting point is that if we’re concerned about the use of AI in education, banning it—as some colleges are trying to do—is not going to work:
And yet, from books to alcohol to sex, sociologists largely agree that bans are socially ineffective. As University at Buffalo philosopher Ryan Muldoon wrote, studies show that media and technology prohibitions don’t stop behaviors. Instead, they encourage black markets and clandestine use.
She had tried banning it earlier in the year, but was still getting papers that were being flagged as likely written by an AI. So she tried a different approach:
Instead of assigning my rather routine Gladiator review, I asked students to query ChatGPT about the film’s historical inaccuracies. I also reached out to the person who knows the film’s historical deficiencies inside and out: Harvard scholar Kathleen Coleman, a professor of Classics who served as the historical advisor for Gladiator before it was released... In the end, however, the ahistorical elements of the movie proved to be too much. Coleman asked the filmmakers not to specify “historical advisor” in her credit line.
(Mosaic of gladiators, 3rd century, Rome; National Archeological Museum, Madrid. Photo: Richard Mortel, via Wikipedia. CC BY 2.0.)
It turned out that Coleman was already using AI in her classes. She runs a course on ‘Loss’, in which students have to compose letters of condolence. Coleman put one such letter in front of the class and asked her students to critique it. They didn’t hold back. Afterwards, Coleman revealed that the letter had been written by an AI.
“ChatGPT did an absolutely appalling job and the students just ripped the thing to shreds,” Coleman told me. The lack of emotion and humanity, she remarked, “made them realize that AI is so generalized, so cliche-ridden, and has absolutely no sensibility. It went on for more than a page, in hundreds of words of nothing but blabber.”
Humans, it turns out, are pretty good at detecting the human. So Bond suggests that it’s time to help students learn for themselves the limits of AI. She quotes the information scientist Ted Underwood:
He suggests that experiencing the pitfalls of AI is more instructive than warning students about them. We are also preparing them for a future that, without doubt, will require being AI savvy.
Going back to Gladiator, it turns out that ChatGPT isn’t very accurate when you ask it to list the inaccuracies in the film:
Let’s take ChatGPT’s assertion that the Colosseum had not yet been built during the time period of Gladiator. In reality, the Flavian Amphitheater — best known as the Colosseum — was dedicated 100 years prior to Marcus Aurelius’ death. Students in my course figured out quickly that AI currently struggles with dates and discerning AD/BC and CE/BCE dating systems.
Students also noticed quite quickly that ChatGPT wasn’t very good at citation, as well as its tendency to make stuff up to fill gaps (euphemistically described as “hallucination”).
More significantly, it wasn’t very good at explaining why inaccuracies appeared in the film. The students had to do further research to fill these gaps:
With research beyond AI, they found that anachronistic blunders like horse stirrups, for example, were necessary because the stuntmen needed them as part of their safety contract.
Nor could it help them with the changes that were made to the history shown in the film for what might be considered to be historiographical reasons:
(M)ore glaring errors, such as Gladiator’s allegation that Marcus Aurelius wished to bring back the Roman Republic during his lifetime, demonstrate how Rome’s past is wielded as a mechanism for commenting on contemporary society. The students soon realized that in modern media, we often receive the Rome that producers want, rather than the one that really existed.
There’s more in the piece in this vein, about critical readings of Gladiator that locate it in the moment of its release at the turn of the century. But I think the important point here is about AI, rather than Russell Crowe:
As Coleman remarked, AI is already successful at mimicry, but lacks the ability to be truly creative. Perhaps more importantly, the ability to communicate empathy, loss, and complex emotions remains uniquely human. AI may be able to state that a group of people were conquered by Rome, but it’s up to historians to explain why they continued to resist.
j2t#517
If you are enjoying Just Two Things, please do send it on to a friend or colleague.