15th February 2023. AI | Politics
Talking to ChatGPT about imaginary professors. // The case for Universal Basic Services
Welcome to Just Two Things, which I try to publish three days a week. Some links may also appear on my blog from time to time. Links to the main articles are in cross-heads as well as the story. A reminder that if you don’t see Just Two Things in your inbox, it might have been routed to your spam filter. Comments are open.
1: Talking to ChatGPT about imaginary professors
As regular readers will know, I’ve been trying to track the discussions and experiments about ChatGPT as they emerge. Two reasons: first, it’s clearly a significant development in terms of our cultural engagement with AI, even though there are disputes about its value; and second, relatedly, it’s a breakthrough app in its area, in the same way that Skype was in the world of the videophone, or VisiCalc (the first spreadsheet program) was in the world of business.
So I was interested to see (via John Naughton’s Memex 1.1 blog) a long article by the researcher Ali Minai in which he carried out some interesting experiments with ChatGPT.
But first, he acknowledges that among AI researchers, the subject of ChatGPT is divisive, and that not all of them are as impressed as non-experts tend to be:
(S)ome see ChatGPT as a new stage in the quest for artificial general intelligence (AGI). Now, they seem to imply, we have an AI system that effectively passes the Turing Test almost every time it is used… Skeptics, including prominent AI figures such as Yann Le Cun, Gary Marcus and Francois Chollet, dismiss such claims as mere hype, arguing, as Chollet recently did in a tweet, “Strong AI is not within sight, and the recent advances are not moving us in the direction of strong AI.” LeCun tweeted “On the highway towards Human-Level AI, Large Language Model is an off-ramp”, emphasizing the widely expressed idea that learning from text alone is insufficient to learn about the world.
Although there has already been quite a lot of discussion out there about ChatGPT eating our jobs (a trope of technology discussions that goes back a few hundred years), it’s not yet good enough to do that, at least not for any task “requiring depth of thought or fidelity of information”:
As the system will readily tell you if asked, it has no imagination, and yet – though it vehemently denies this – it is a habitual fantasizer. These inconvenient issues, however, will not keep ChatGPT from being rushed into many real-world applications right now, and that is where it poses several immediate threats.
This blurring of the line between fact and fantasy is a problem, because ChatGPT isn’t very good at providing context. In most places where humans encounter text, we are able to categorise it:
We expect every Washington Post front page headline to be factual, but know that many things stated in “Lord of the Rings” are not. Similarly, when we ask a physicist a question about physics, we expect a factual answer, but we have a different expectation of a storyteller or poet. The problem is that – wittingly or unwittingly – ChatGPT is being seen as both physicist and poet, news reporter and storyteller. And, in most cases, there is no basis on which a user of ChatGPT can tell whether they are being given a fact or being told a story.
(Imaginary professors #1: Prof Coriana Hunch, Dramatic Arts and Dance. © Mica Angela Hendricks via her Busy Mockingbird site)
So Ali Minai found an intriguing way to test ChatGPT’s relationship with truth: asking it to generate a short bio of himself. First he gave it just his name, but that didn’t provide enough of a hook for ChatGPT to get started, so he added the name of his university (Cincinnati), but no more than that. A biography duly arrived, but some quite important details were inaccurate or simply wrong. Minai reflects on the process ChatGPT had gone through to get to this:
(S)ince, as far as I know, it does not search the Web in real-time to answer a query, the response means that, somewhere in its murky brain, there is a little space dedicated to knowing about me in a sketchy sort of way – probably because some of my publications were cited or included in its training data, but this little memory could not be triggered just by my name – it needed information about my institution too. Once triggered, it tried to guess… um, infer… more detail, some of which turned out to be wrong.
So—if he had been more famous, the bio would probably have been more accurate. As a next step, Minai asked ChatGPT for a bio of Willian Balder, a made-up professor at Cincinnati. (I hope this name was chosen as a pun on ‘balderdash’.) A biography emerged soon enough, and the article reproduces it. Minai’s verdict:
As in the case of my bio, one sees here ChatGPT’s compulsive need to invent things when it runs out of actual information – much like a student who is trying to respond to a question for which they know nothing except the information supplied in the question itself. I believe the technical term for this is “bullsh-t”.
(Imaginary professors #2: Prof Diglish, Economics and Business Management. © Mica Angela Hendricks via her Busy Mockingbird site)
And then, finally, he asks ChatGPT about ‘Grimus Caterwaul’, first telling it just the name, and then advising it that Professor Caterwaul was another colleague at the University of Cincinnati. Although this was all within the same conversation, Minai gets very different answers each time.
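If you want to poke at this yourself, here is a minimal sketch of the same kind of probe using the OpenAI Python client (v1+). The model name, prompts and helper function are my illustrative assumptions, not Minai’s setup; he was presumably working in the ChatGPT web interface rather than against the API:

```python
# A rough sketch of the bio probe, using the OpenAI Python client (v1+).
# The model name and prompts are illustrative assumptions, not Minai's own.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment


def ask_for_bio(prompt: str) -> str:
    """Send a single prompt and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # any chat model will do for this experiment
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


# Step 1: a real person, name only — too little for the model to latch onto.
print(ask_for_bio("Write a short professional bio of Ali Minai."))

# Step 2: add the institution and watch the (partly inaccurate) details appear.
print(ask_for_bio(
    "Write a short professional bio of Ali Minai, "
    "a professor at the University of Cincinnati."
))

# Step 3: an entirely invented professor — the interesting part is how
# confidently the model fills in a career that never happened.
print(ask_for_bio(
    "Write a short professional bio of Professor Willian Balder "
    "of the University of Cincinnati."
))
```

One difference worth noting: each API call here starts from a blank context, whereas Minai’s probes were successive turns in a single ChatGPT conversation. He draws some conclusions from all of this: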
First, that ChatGPT is indeed a very powerful tool – one capable of capturing our attention and playing with our minds. It is not a toy.
Second, that, for information it has been trained on explicitly (e.g., individuals with detailed Wikipedia pages and other public information), it has good recall, but once pushed out of this comfort zone, it begins to make stuff up rather than admitting ignorance...
Third, when it does make things up, it does so in a plausible way, making those falsehoods much harder to detect. This is because the entire basis of language models is to generate text that is in context. For applications such as generating movie scripts or fairy tales, this is great, but not when the bot is being queried for accurate information.
From this small experiment, Minai identifies three dangers.
First, we may give too much control to AI systems that are too primitive to understand deep issues or human values. (I’d add credibility to control.)
Second, we are outsourcing human knowledge, imagination and insight to “superficial, inaccurate, and untrustworthy programs.” Although one of the tropes of AI has been the danger of super-human machines outrunning humans, there is a risk that
long before becoming enslaved by super-human machines, we will have ceded control of our lives to sub-human ones.
Third, there is “limitless potential for deliberate misinformation and manipulation”. Whatever we think about the risks of disinformation in social media channels, increasingly human-like, text-based AI will make them far, far worse: the danger to civilized society from fake information disseminated over social media is already well known, but more relatable AI will amplify it beyond imagination. As he concludes:
As these systems get trained on all the text, images, videos, and code that humanity has generated, they pick up the wisdom and the folly, the knowledge and the misinformation, the nobility and the toxicity that pervade the lives of humans, but all without any anchoring in lived experience, shared values, or even full understanding of meaning. All the filters that allow us to keep the wolf within us in check are missing in AI systems, so the wolf is very likely to show up.
The comments are worth looking at as well.
If you want to see more of Mica Angela Hendricks’ imaginary Hogwarts professors, there’s a whole school of them on Instagram.
2: The case for Universal Basic Services
There’s a new report out from NEF (the UK’s New Economics Foundation) making—again—the case for Universal Basic Services. NEF has been a leading advocate of the idea for the past few years now, and seems to have succeeded in framing it in a way that is making some political headway.
(Universal Basic Service. Source: New Economics Foundation)
I’m really just summarising the shorter blog post here, though the full report is also accessible. The excuse for the new report is the role that UBS could play in helping to tackle the climate emergency:
UBS deliver secure social foundations by making sure everyone has access to life’s essentials and promote equity by meeting everybody’s needs, regardless of income or status. They can be designed to curb harmful emissions and safeguard natural resources. By putting human wellbeing and social justice centre-stage, they also help to build democratic support for climate action.
The thinking behind Universal Basic Services is that they are a way of delivering the services people need in order to participate in society, in contrast to market provision.
They can help to deal with climate change by
influencing public attitudes and consumption patterns; by transforming provisioning systems; and by underpinning political programmes to bring about a green transformation.
The post has a short section on each of these.
Influencing attitudes and consumption
UBS promotes universal sufficiency: enough for all so that everyone can have enough. It involves pooling resources, sharing risks and working together to make sure everyone’s needs are met.
One of the important elements here is that discussions about sustainable consumption usually involve consumption at the level of the individual or the household. But shared provision is always more sustainable and ecologically efficient than individual provision:
Take transport, for example. Greenhouse gas emissions from cars and taxis are more than seven times higher than from buses. Accessible, co-ordinated public transport across a large urban area, even without free fares, has been found to reduce car traffic, improve air quality and lower carbon emissions. Free bus fares would accentuate that effect, especially if combined with measures to discourage private vehicles, such as congestion charging and parking fees.
Transforming provisioning systems
One of the features of UBS is that they can effectively raise the regulatory floor of markets through procurement. The report cites an idea called ‘social licensing’, which appeared in a recent Fabian Society booklet, under which providers are required to sign up to a set of standards—public interest obligations—as a condition of delivering services:
This can be used to ensure decent pay and conditions for service workers, as well as limiting profit extraction, establishing quality standards, and coordinating sustainable practice through networks of employees, service users, and suppliers. This way, providers can promote active travel, resource-efficient buildings and local food procurement.
The post uses the example of childcare, which in the UK tends to be expensive and quite heavily regulated. Social licensing would, for example,
promote sustainable practice, for example, covering the way childcare centres are constructed, equipped and maintained; how much energy and non-renewable resources they use; and how children travel to and from home.
Underpinning a green transition
UBS create secure social foundations by providing ‘in-kind’ benefits that enable social, economic, and public participation for everyone. One of the potential outcomes of this is redistribution:
Public services that deliver everyday necessities are worth much more to people on low incomes – who would lose a far bigger chunk of disposable cash if they had to pay for them directly. So UBS is highly redistributive and helps to ensure that the costs of climate mitigation are not loaded onto the poorest.
One of the things I note about this is that if new ideas are to make it to the edge of the political mainstream (meaning that they start to be considered as possible policy options), they need to go through a phase where they are ‘stabilised’, intellectually speaking. This usually involves simplifying them, or at least the narrative around them, to the point where their proponents, and come to that their opponents, are arguing about the same thing.
When I look at the idea of Universal Basic Services as against Universal Basic Income, it seems to have done this more successfully. One of the reasons is that it’s easy to get lost in the detail of UBI schemes, and the UBI waters also get muddied by libertarians, who typically want to use it to restructure welfare provision radically.
Another is that UBS is a smart re-positioning of a familiar if distant idea (William Beveridge’s “five giant evils”) at a time when poverty levels are climbing again, so it doesn’t feel so strange. In theory, the two ideas might go together quite well. In practice, I suspect that only one will make it to the policy table, and that UBS is better placed to do so.
Update: McDonald’s
The food theme keeps producing responses. My thanks to David Ballard for mentioning Chris Arnade’s work writing about, and photographing, the role of McDonald’s in struggling communities in America:
McDonald’s was one of the few spaces in Hunts Point open to the public that worked. While wonderful and well-intentioned nonprofits serve Hunts Point, whenever I asked anyone where they wanted to meet or grab a meal, it was almost always McDonald’s.
When I asked why not the nonprofits or the public parks, the answer would be some variation of “What is that?” or “They’re always telling you what to do.” The nonprofits came with lots of rules and lectures about behavior, with a quiet or not-so-quiet judgment… McDonald’s was Hunts Point’s de facto community center, and if I wanted to understand Hunts Point, I had to spend time in the McDonald’s.
There’s a good interview with Chris Arnade at the LA Review of Books, and the photos are terrific.
j2t#425
If you are enjoying Just Two Things, please do send it on to a friend or colleague.