Welcome to Just Two Things, which I try to publish three days a week. Some links may also appear on my blog from time to time. Links to the main articles are in cross-heads as well as the story. A reminder that if you don’t see Just Two Things in your inbox, it might have been routed to your spam filter. Comments are open.
1: Putting limits on AI
Suddenly my feed is abuzz with news of attempts to regulate Big Tech’s use of artificial intelligence. The White House has released its ‘Blueprint for an AI Bill of Rights’, while the EU has gone a bit further than a blueprint. As MIT Technology Review reports, a new EU bill will allow citizens to sue technology companies for harm—if they can prove that an AI system harmed them.
As John Naughton explains in his technology column in the Observer, the EU Bill effectively complements a piece of sister legislation that is designed to prevent tech companies from releasing harmful AI:
The aim of these laws is to prevent tech companies from releasing dangerous systems, for example: algorithms that boost misinformation and target children with harmful content; facial recognition systems that are often discriminatory; predictive AI systems used to approve or reject loans or to guide local policing strategies and so on that are less accurate for minorities. In other words, technologies that are currently almost entirely unregulated.
Before I come back to that, it’s worth saying a little more about the White House Blueprint. It’s been produced by the Office of Science and Technology Policy, which is a Presidential advisory group. To my reading, it’s long on intention and short on method, but I don’t know enough about US policy-making processes to know whether this is part of a wider process.
(The cynic in me says the Democrats want to show that they plan to rein in Big Tech—one of the few issues that has support from both parties in the US Congress—while making sure that they continue to mop up tech company campaign contributions.)
And sure enough it comes with a list of caveats: yes, it
is intended to support the development of policies and practices that protect civil rights and promote democratic values in the building, deployment, and governance of automated systems.
But, no, it
is non-binding and does not constitute U.S. government policy. It does not supersede, modify, or direct an interpretation of any existing statute, regulation, policy, or international instrument. It does not constitute binding guidance for the public or Federal agencies and therefore does not require compliance with the principles described herein.
Anyway, there are five principles here, and they do all make sense:
You should be protected from unsafe or ineffective systems;
You should not face discrimination by algorithms and systems should be used and designed in an equitable way;
You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used;
You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you; and
You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.
So my view would be that’s OK as far as it goes, but it doesn’t really handle cases where tech companies gather information through forms of artificial intelligence and then use market power to their advantage against competitors. Maybe that’s being left to the Federal Trade Commission to fix.
(Image: Mural by Beastman, Spotlight Sydenham, in Christchurch, Canterbury. Via Rawpixel, CC0)
Also, from the language in the blueprint, it’s not clear whether this also protects employees and contractors (Uber drivers, Amazon warehouse workers and drivers, for example) or whether this is just a set of consumer principles.
At least the EU seems to be filling that gap between principles and enforcement, even if the Bill still has to work its way through the EU’s legislative system. The Technology Review article (behind a partial paywall) describes it as
part of Europe’s push to prevent AI developers from releasing dangerous systems... The new liability bill would give people and companies the right to sue for damages after being harmed by an AI system. The goal is to hold developers, producers, and users of the technologies accountable, and require them to explain how their AI systems were built and trained. Tech companies that fail to follow the rules risk EU-wide class actions.
Inevitably there’s dispute about the effects that this might have. Some user activists have pointed out that it doesn’t seem to take “indirect harms” into account. “A better version,” according to the non-profit Future of Life Institute, “would hold companies responsible when their actions cause harm without necessarily requiring fault, like the rules that already exist for cars or animals”.
A more serious issue might be that it’s down to consumers to prove harm:
“In a world of highly complex and obscure ‘black box’ AI systems, it will be practically impossible for the consumer to use the new rules,” (Ursula Pachl of the European Consumer Organisation) says. For example, she says, it will be extremely difficult to prove that racial discrimination against someone was due to the way a credit scoring system was set up.
Also—and this is a serious omission—the current bill doesn’t protect users from dangerous systems operated by public bodies. In the wake of the Dutch child benefits AI scandal, this is likely to change.
It’s not possible to suggest any kind of legislation or legal process that might constrain the power of the tech companies without hearing that innovation is being constrained, and surprise, surprise, that’s also happened here. (An umbrella group of tech organisations even manages to squeeze the word ‘chilling’ into a letter to the EU. Yeah, right.) Mathilde Adjutor, of the tech lobbying group CCIA, told Technology Review:
Under the new rules, “developers not only risk becoming liable for software bugs, but also for software’s potential impact on the mental health of users,” she says.
Of course, given what we know about the potential impact on the mental health of users, that might be a good thing. Just last week a British coroner, ruling on the death of the teenager Molly Russell, found that she
died from an act of self-harm while suffering from depression and the negative effects of online content.
This may be a first. As Charles Arthur wrote in his Social Warming newsletter,
if Molly Russell could be so influenced by what she saw, then there must be thousands, perhaps millions of other children out there who are also being shown depressing content, algorithmically amplifying their woes and reflecting them back to their users... the platforms making bad things worse through the indifferent application of algorithms working to maximise attention, and insufficient moderation.
In his Observer column, John Naughton is pretty impatient with the industry’s endless litany about how regulation reduces innovation.
It’s difficult to decide which of the two assertions made by the CCIA – that strict liability is “ill suited” to software or that “innovation” is the defining characteristic of the industry – is the more preposterous. For more than 50 years, the tech industry has been granted a latitude extended to no other industry, namely avoidance of legal liability for the innumerable deficiencies and vulnerabilities of its main product or the harm that those flaws cause.
And ‘innovation’ is one of those words, like ‘competitiveness’, that means nothing without context. The design writer John Thackara has observed repeatedly that half of product and service innovation is either unproductive, or actually harmful. If it takes a bit of regulation to get the tech industry to move a bit more slowly, and break fewer things, or people, that is a good thing for everyone else.
And I suspect that the tech industry’s real concern here is that it has got so careless in building these things, despite the regular concerns expressed by its staff and by researchers, that it can’t actually work out the harms its machine learning and AI programs are likely to cause.
2: The deep history of the human species
There’s a short piece on Quillette celebrating the research by the Swedish scientist Svante Pääbo, who has just won the Nobel Prize in Physiology or Medicine. It’s written by the magazine’s editorial board, and they conclude:
he hasn’t just benefited humankind. He’s illuminated it.
Pääbo is known for his work on Neanderthals, and most of the coverage highlighted this work, but his research stretches across Neanderthals, Homo sapiens, and Denisovans:
it’s important to note that his scientific contributions have shaped the entire field of paleogenetics, not just the study of Homo neanderthalensis. Indeed, his work has spanned all three known species of humans, one of which the world likely wouldn’t even know about but for Pääbo and his research team.
(Neanderthal skull, left; Homo sapiens skull, right. Image: Dr. Mike Baker, via Wikipedia)
His work in the 1980s included genetic analysis of Egyptian mummies and Neanderthal skeletons, but his research was ahead of the technology. It wasn’t until 2014 that he and his team were able to sequence a Neanderthal genome in enough detail to compare it to modern humans. This wasn’t a trivial problem:
In the process, they pioneered laboratory and computational solutions to a basic problem of paleontology: During the passage of tens of thousands of years, the DNA contained in bones typically dissolves into tiny fragments. Any scientist seeking to reconstruct an ancient genome must not only patch it together from countless isolated bits and pieces, but also apply analytical methods to filter out contaminating DNA from millennia worth of bone-colonizing fungus and microorganisms.
Before his work, some scientists thought that the Neanderthals had been an ancestor species of Homo sapiens. Pääbo’s work showed that they were different species that developed in parallel. However, when the two species met each other around 50,000 years ago, they were able to interbreed:
Outside of Africa, most human populations contain a small but substantial portion of Neanderthal DNA—including, notably, a specific variant on the third chromosome that’s associated with an elevated risk of developing severe COVID-19.
They also discovered a third human species, the Denisovans, named after the cave where they found the bone from which the DNA sequence was extracted. The Denisovans also lived at the same time and in the same places as Neanderthals and Homo sapiens, and their DNA also lives on in the modern species:
Just as Neanderthals account for up to four percent of human DNA in parts of Europe and Asia, as much as six percent of the DNA of populations in Papua New Guinea and other parts of Melanesia, Aboriginal Australian communities, and parts of Southeast Asia, is thought to be of Denisovan origin. By way of example: one Denisovan-descended gene increases the ability of modern Tibetans to survive in high-altitude environments.
It wasn’t inevitable, in other words, that Homo sapiens became the successful human species. There’s a parallel universe in which the Neanderthals, or the Denisovans, became the most successful human species. As Quillette’s Editorial Board say:
Under different circumstances, it might have been the Neanderthals or Denisovans who ultimately prospered while our line was extinguished. And in this alternate reality, it would be their great scientists—their version of Svante Pääbo—being feted for genetic discoveries relating to that ancient and mysterious hominin lineage known as Homo sapiens.
Seen via The Browser.
j2t#379
If you are enjoying Just Two Things, please do send it on to a friend or colleague.