20 January 2023. Failure | Cities
Succeeding at innovation involves learning to cope with failure. // The three-day office week.
Welcome to Just Two Things, which I try to publish three days a week. Some links may also appear on my blog from time to time. Links to the main articles are in cross-heads as well as the story. A reminder that if you don’t see Just Two Things in your inbox, it might have been routed to your spam filter. Comments are open.
It’s a bit of a bumper weekend issue, since there’s also a note about an article I was interviewed for and yet another ChatGPT update, this time featuring Nick Cave. Do have a good weekend!
1: Succeeding at innovation involves learning to cope with failure
What’s not to like about the idea of the Museum of Failure, developed by Samuel West? He was interviewed by the Bulletin of the Atomic Scientists, and the article is currently outside the paywall. West is a psychologist who identified the fear of failure as an obstacle to innovation, and so created the museum in 2017 as a touring exhibition. It moves from place to place every three months, and is currently in Canada.
(Google Glass. Not successful. Photo: Royal Opera House Covent Garden/flickr. CC BY 2.0)
West was interviewed by Dan Drollette:
Drollette: What exactly is the Museum of Failure?
West: The soundbite is that this museum is a collection of failed innovations from around the world... The whole aim of the museum is to help people recognize that we need to accept failure if we want progress. And by that I mean any kind of progress, not just consumer products and new devices. The main point is that we have to accept failure, because it usually takes several iterations before we get things right—most experiments fail.
One of the implications is that companies—and governments, come to that—need to get better at learning from failures, which is also West’s commercial interest: he works with companies to help them do this. But he’s also really clear that learning from failure isn’t just about parroting the Silicon Valley slogan about moving fast and breaking things.
There’s been a tendency lately, especially in Silicon Valley, to accept the whole motto of “failing forward.” Or “failing fast.” They’re very good at accepting failure there. But they’re not any better than anyone else at learning from failure, because they then make the same mistakes over and over again. You want people to take meaningful risks and learn from them. That’s where the action is, when it comes to innovation.
He got into this area through an interest in how innovation happens. He noticed that there’s no shortage of ideas about how to design innovation processes, but that while failure was a necessary part of innovation, people weren’t very good at failing:
The main obstacles are that people are afraid of failing, and being embarrassed by it—those are two of the biggest obstacles for innovation. This happens even in the world of science, which is supposedly a place where you can’t get criticized if you do something and the experiment fails—especially if you do it the way everyone else before you has done it. But if you do something a little bit different and you fail, then it’s: “You’re not gonna get any more funding, sir.”
And of course, there are lots of ways in which innovations can fail. When I used to train people on using futures as a platform for innovation, I used to have an exercise called, cheesily, The Fifth Element (Luc Besson’s film was still around). We’d look at a failed innovation, and ask what weak signal of change the innovator or entrepreneur had seen that they wanted to act on. Because a successful innovation is a whole social system, not just a set of technologies. West makes a similar point here:
There’s the business model behind it, the user model, the application model of the tech—they each face the same hurdles. These aspects don’t get as much funding; they’re not as sexy as the latest, most fantastic gizmo for renewable energy. But they’re just as important: If nobody wants to use a technology, or it looks stupid, or makes a lot of noise, or is too complex, then it doesn’t matter how elegant the technology is—nobody’s gonna use it.
West thinks that his timing has been good: had he opened before 2017, the Museum of Failure would itself have been a failure. He mentioned a number of people working in the same space now, including the Institute of Brilliant Failures in the Netherlands, and the catchily titled Fuckup Nights:
They organize, like TED Talks in the evenings. It gives people a forum to say “Well, this, I tried this and I tried that.” It’s interesting, for sure, and it’s spread all over the world.
And Dan Drollette added a note here:
The organization’s mission statement says “Fuckup Nights is a global movement and series of events where stories of failure are shared. Each month, at events around the world, we get three or four people to face a room full of strangers to share their own fuckup. Stories of businesses that go bust, partner deals that fall out, products that have to be recalled, we tell it all.”
West has some entertaining examples of products that failed, which do open up the question, “What were they thinking of?”:
In terms of branding—where there was a mismatch—there was the Harley-Davidson Barbie doll. Fat-free Pringles. Green ketchup. Colgate frozen lasagna. But I think the best for what we’re talking about here is 2012’s Google Glass. There was nothing wrong with the technology per se, it was super exciting. But the way they launched it, and the way they implemented the technology to users, screwed it all up.
I’m not sure whether there was “nothing wrong” with the Google Glass technology. I think it’s an example of innovators failing to consider the social contexts of a technology. Google Glass left a lot of questions hanging in the air about what happened to its data exhaust, or, come to that, about the consent of the people you interacted with. Or whether wearing them just made you come across as a self-centred dick.
For my money it was a classic example of a technology promoted by technologists who were blind to its wider implications. Which is why I often use Bill Sharpe’s Technology Axis Model with clients to think about the whole social life of a technology. I’ve written about that here.
2: The three-day office week
There’s a piece at the online London news site On London that gives us a sense of where the London office market is going in the aftermath of the pandemic—and what that might mean for cities.
It seems that the new shape of work in the City of London involves being in the office three days a week:
Mobile phone data reported by the BBC suggests Tuesday, Wednesday and Thursday in the office is now the new normal, with Mark Allan, chief executive of property company Landsec, reporting activity in the City well down on Mondays, and Fridays almost as quiet as weekends. “We’re not going back to how things were pre-Covid,” he said. “We certainly believe there are going to be fewer people in offices for the longer term and we are planning accordingly.”
The City of London freesheet newspaper, City AM, now publishes only a digital edition on Fridays, which also seems like supporting evidence here.
There are still office planning applications in the pipeline, say people whose job it is to talk up the City. But Charles Wright, the article’s author, points out that the trend line is down:
New London Architecture’s latest tall buildings survey shows applications down 13 per cent overall in London – the third consecutive year-on-year fall, albeit with 12 office towers in the City pipeline with an average height of 38 storeys. Will these skyscrapers materialise? Deloitte’s latest London “office crane” developers survey found 55 per cent of respondents expecting a decrease in their development pipeline over the next six months and some 70 per cent expecting a ten per cent fall in demand for office space in London in the long term.
There are similar findings in a report done by Arup, Gerald Eve and the LSE for the central London boroughs’ group Central London Forward. The office demand there is driven by what developers call “the flight to quality”—new, airy, sustainable, etc—which also means that there’s a lot of old office space being taken out of circulation. The business question here is whether this is permanent—and non-quality office space will become a stranded asset—or whether demand will pick up.
The Arup report had some modelling in it that suggested that demand would reach 2019 levels again sometime in the 2030s, depending on which scenario evolved. This would be bad. But since the modelling shows a set of laughable hedgehog diagrams, I think we can be sceptical even of that.
And people aren’t keen on going into offices unless there’s a lively atmosphere around them as well. One of the places this seems to be leading, in London, Washington, and New York alike, is an expectation that city centre areas need to have more people living in them, which might also salvage some of the asset value of low-quality office space through conversion. This also helps spread demand for shops and services over more of the week.
Not everyone’s accepted that this ‘live-work-play’ model is worth pursuing, although perhaps they haven’t looked at the data on how much people hate commuting. But it seems to me that the pandemic has finally done for the idea of urban zoning, which was largely a product of the age of the car. If you want to keep your city centres alive—especially your financial centres—you’re going to need people to want to live there.
Other places: Predictive policing
Tara Yarlagadda writes about the science behind the movies at Inverse, and she recently wrote about the 2002 film Minority Report, since it’s currently streaming on Netflix. Obviously the film, like the Philip K. Dick story it’s based on, conjures up the idea of predictive policing. I talked to her for the piece.
Of course, the same issues that come up repeatedly with AI in general are amplified when it comes to predictive policing. The short story and the film take this to an extreme that police forces haven’t so far matched: the (human) “pre-cogs” know that you’re planning to commit a crime, so you’re detained before you do.
Here’s an extract from the article:
Why commit a murder if it’s nearly guaranteed the police are going to arrest you? The idea in the movie — and more broadly in real life — is that predictive policing could ultimately deter and reduce crime.
“The idea that AI technologies would predict crimes with the same degree of accuracy as in the film is mostly a sci-fi invention in my assessment,” Sven Nyholm, an associate professor of philosophy at Utrecht University and author of the book Humans and Robots: Ethics, Agency, and Anthropomorphism, tells Inverse.
However, Nyholm says it is possible that future AI technology could help police predict crimes with a much greater degree of accuracy than is possible now.
“In other words, the future might not be like what we see in Minority Report. But it might involve much more precise crime prediction than what is currently possible,” Nyholm says.
(Tom Cruise in Minority Report: seeing the future)
One of the things that strikes me about Minority Report—the film, not the story—is that the director, Steven Spielberg, used it to create a visual idea about data that was both influential and dangerous. This is the notion that some kind of digital super-intelligence can be manipulated by an investigator—that all the data is there and just needs to be sorted into the right order.
It looks great in the film, as the pieces are moved around a virtual screen in the middle of the room. But as we know from lots of work since—I’m thinking of books like Cathy O’Neil’s Weapons of Math Destruction—that’s probably the most dangerous AI myth of all.
Update: ChatGPT (episode 3)
The AI writing tool, ChatGPT, continues to generate interesting coverage. (I wrote about it here).
A reader of Nick Cave’s Red Hand Files newsletter asked ChatGPT to write a song in the style of Nick Cave, and sent in the result. Here’s the first verse and chorus of that:
In the depths of the night, I hear a call
A voice that echoes, through the hall
It’s a siren’s song, that pulls me in
Takes me to a place, where I can’t begin
Chorus:
I am the sinner, I am the saint
I am the darkness, I am the light
I am the hunter, I am the prey
I am the devil, I am the savior
The thing is: Nick Cave responded to this artificial imitation. Here’s a shortish extract from his relatively long reply:
What ChatGPT is, in this instance, is replication as travesty. ChatGPT may be able to write a speech or an essay or a sermon or an obituary but it cannot create a genuine song. It could perhaps in time create a song that is, on the surface, indistinguishable from an original, but it will always be a replication, a kind of burlesque...
Writing a good song is not mimicry, or replication, or pastiche, it is the opposite. It is an act of self-murder that destroys all one has strived to produce in the past. It is those dangerous, heart-stopping departures that catapult the artist beyond the limits of what he or she recognises as their known self... This is what we humble humans can offer, that AI can only mimic, the transcendent journey of the artist that forever grapples with his or her own shortcomings.
It’s worth reading the whole thing.
(Thanks to Charles Arthur at The Overspill for the link).
Since it’s Friday, and in case you want an actual Nick Cave song rather than a copy, here he is singing ‘Galleon Ship’ during his ‘Idiot Prayer’ solo concert at Alexandra Palace.
j2t#417
If you are enjoying Just Two Things, please do send it on to a friend or colleague.