Welcome to Just Two Things, which I try to publish daily, five days a week. Some links may also appear on my blog from time to time. Links to the main articles are in cross-heads as well as the story.
#1: Telling different technology stories
(Marvin Minsky, Claude Shannon and others at the Dartmouth College Summer Project, 1956. Photograph taken by Gloria Minsky)
In a fascinating long article in the Australian Griffith Review, the technology researcher Genevieve Bell tracks the history of AI back to 1956, and the group of white male researchers who imagined it. It’s a rich story, from the Macy Conferences on cybernetics (curated partly by Margaret Mead and Gregory Bateson), to the Dartmouth College Summer Project on Artificial Intelligence, where this photo was taken, to the Cybernetic Serendipity exhibition at London’s Institute of Contemporary Arts. And from there to some of the moments of liberatory potential that seemed fleetingly to appear—caught best, perhaps, in the poet Richard Brautigan’s line about a future in which we would be “watched over by machines of loving grace.”
Well, that didn’t happen, as Bell points out:
The cybernetic meadows and forests of Brautigan’s imagination have not been realised, and the machines that watch over us now seem to lack loving grace. The AI that was promised in 1956 has not emerged, and technological revolutions have not led us to transcendence or a whole-Earth point of view. According to a 2018 news feature by Nicola Jones for Nature, the world’s data centres consumed in excess of 200 terawatt hours of electricity each year – this is more than the consumption of some whole countries and represents 1 per cent of global electricity demand.
Silicon Valley has been a myth-maker. It has told some stories about the future while simply erasing others. In the current ecological crisis, Bell argues, we need to tell a different kind of story about the future:
One that focuses not just on the technologies, but on the systems in which these technologies will reside… Ultimately, we would need to think a little differently, ask different kinds of questions, bring as many diverse and divergent kinds of people along on the journey and look holistically and critically at the many propositions that computing in particular – and advanced technologies in general – present.
Bell proposes the Brewarrina Aboriginal Fish Traps as a metaphor for these stories—an Indigenous construction that is thousands of years old, combining technical, ecological and cultural knowledge “in a system that was designed to endure”.
While we’re in the weeds with AI, it’s also worth heading over to the LRB blog to read Paul Taylor’s account of machine learning during the pandemic and beyond. Even when the field uses so-called ‘deep’ machine learning, the results are often pretty shallow:
In 2019, an algorithm used to allocate healthcare resources in the US was found to be less likely to recommend preventative measures if the patient is black, because the algorithm is optimised to save costs and less money is spent treating black patients. Around the same time, Timnit Gebru, a leader of Google’s ‘ethical AI’ team and one of the few black women in a prominent role in the industry, demonstrated that commercially available face recognition algorithms are less effective when used by women, black people and, especially, black women, because they are underrepresented in the data the algorithms are trained on.
AI, in other words, replicates existing power and social structures, and seems to have few mechanisms either to correct for this or to deal with the ethics of it.
Timnit Gebru isn’t working at Google any more. She says she was fired; the company says she resigned, and it has done its best to throw shade on the circumstances of her departure. But no-one disputes that it came down to her authorship of a recent draft paper that pointed out some of these home truths about AI rather than putting a happy smiley face on them. MIT Technology Review asked her about her view of events; the interview is jaw-dropping.
And the stench from Google on this issue keeps getting worse: the co-head of its ethics research team, Margaret Mitchell, was also fired a few days ago, apparently for trying to defend the reputation of her colleagues and her research group. (That’s not the reason Google gave for firing her, of course.)
You can’t help but think this was a bit of an own goal by Google. If they’d just let Timnit Gebru publish the paper, it would have quickly vanished into the thickets of the AI research community. As it is, they’ve given the whole issue the prominence it deserves.
(H/t for the Genevieve Bell article to Sahar Hadidimoud)
#2: Making space for the future
(Photo: Scott Smith)
It’s World Futures Day today, so I thought I’d mention an article by the futurist Scott Smith that has a simple tip on how to make the future more visible: “make space for the future”.
He means it literally—at least for those people who are still working in locations with colleagues.
In this moment of deep pandemic-driven dislocation, it’s tempting to think it was a metaphysical, or at least metaphorical, suggestion. Something like hold space for the future. Well, yes, that too. But I did mean something concrete, physical, and practical: create a physical space in your environment for the future to live, be examined, be reconfigured, be contested. A wall, a tabletop, a white board, a shelf, a hallway.
The argument goes like this. Most people think about the future from time to time, but it tends to get squeezed to the edges by all the other things they have to deal with in the moment. A dedicated space means there’s always something there to remind you. It could be a wall, it could be a table, it could be a virtual space like a Miro board; the point is that it’s somewhere rough and ready, where people can put futures signals—pictures, artefacts, ideas—and other people can play with them.
One of the thoughts behind the idea is the “futures room” that some brands and businesses used to maintain as a source of inspiration. On the few occasions I visited such places, though, I found the noise of the Official Future being projected a little deafening, along with the silence of the missing futures that had been closed off.
And in practice, these don’t sound like the sort of space that Scott has in mind. His actual examples all seem a little more unfinished: a wall in a shared room, a tabletop, a shelf. Somewhere where it’s OK to play a bit.
As Scott observes:
[M]aking physical space for the future can help make cognitive and cultural space. We keep and display images because they hold mental space for memories. Surely we can flip this around and find new ways to frame anticipation.
Events: this is a trailer (53 seconds) for the Millennium Project’s contribution to World Futures Day. It’s happening at noon, wherever you are in the world.
j2t#041
If you are enjoying Just Two Things, please do send it on to a friend or colleague.