Five little questions about automated decisions

In the early 90s, Umberto Eco published a collection of essays called How to Travel with a Salmon, a compilation of some of his newspaper columns. It contains two of my favorite pieces by him, ever. 

One is called On the Impossibility of Drawing a Map of the Empire at a Scale of 1:1. It spins out a speculative and ludicrous attempt to make a perfect map, inspired by Jorge Luis Borges’s extremely short story On Exactitude in Science. The conclusion (I’m oversimplifying) is that if you try to make what he calls a “faithful” map at a 1:1 scale, the map immediately becomes unfaithful because the people don’t inhabit the empire, they inhabit the map. And that empire no longer exists because it’s been replaced with a map.

It inspired my first attempt at a PhD, on mapping and datafication, and it’s an essay I still think about regularly. It’s very funny, and you should read it. This impossibility of representing the world “faithfully,” especially using digital data, also led, indirectly, to my current topic, because the question I started out with was, “If data has mass, then it’s material culture, and if it’s material culture, how does archaeology account for it?” which led to, “OK, then what kind of stuff is it?”

Still, almost 25 years later, if you get me started on maps, I will tell you that my favorites are any of the early 17th-century maps of Ireland made by Elizabethan cartographer Richard Bartlett. But I’ll come back to him. 

The other of my favorites from Eco’s collection is How Not to Use a Cellular Phone, where he notes that the truly powerful people around us aren't using cell phones (that’s what we called them 30 years ago) at all. He writes that "Rockefeller doesn't need a portable telephone; he has a room full of secretaries so efficient that, at the very worst, if his grandfather is dying, the chauffeur comes and whispers something in his ear." 

No, he says, "anyone who flaunts a portable phone as a symbol of power is, on the contrary, announcing to all and sundry his desperate, subaltern position, in which he is obliged to snap to attention."

Power grants analog privilege

My friend Airi recently shared (in her newsletter, which you should get) an article on “analog privilege,” an abbreviated version of a longer article by law lecturer and doctoral researcher Maroussia Lévesque that I’m going to read later this evening, as a treat. Analog privilege, she writes, “describes how people at the apex of the social order secure manual overrides from ill-fitting, mass-produced AI products and services,” and she goes on to describe how the number of people who can avoid these blunt-instrument systems is dwindling. These invasive technologies, she says, are climbing the social ladder. If you’ve mostly escaped pervasive surveillance (or think you have), your days may be numbered.

I’ve already gone into my frustrations with software and apps, and the way the analog is disappearing underneath a culture of what feels like a total digital transformation, despite digital/algorithmic/AI (whatever you want to call them) systems not being capable of treating people fairly—we’re supposed to say “yet” as if everyone harmed along the way is simply collateral damage. 

It’s increasingly difficult to feel like these tools are at all for our convenience, let alone for our benefit. We’ve been talking for at least a decade about how much data is collected about us, whether or not we can or want to consent to it, and despite the existence of data protection laws. As these things apparently “develop” and “progress” (scare quotes on purpose), it always seems like they head in the direction of more power for fewer people, a direction that we’re expected to treat as inevitable.  

A neighbor friend mentioned a week or so ago that he was able to hold out with a Nokia feature phone until last year, when it was no longer possible to park his car without about 10 different smartphone apps. Nokia has brought back the iconic 3210, but it’s still only viable for most people as a backup phone, at best. 

Beyond our personal devices, as Lévesque points out, in order to understand the way that these systems are harmful, we need to ask who can opt out. Who can sidestep, who can click “reject all”? Whose experience is a personal one, as opposed to the personalized one, created by a (usually clumsy) data dragnet? How can something that’s for convenience and efficiency involve so little consumer choice? 

Who is privileged enough to be assessed by a human? And not just any human, but one who has been encouraged, either explicitly or implicitly, to view them through a positive, or at least relatively neutral, lens, and who can shift their tone or re-explain things based on the user’s language, stress level, or access needs?

Injustice as a formality

I think sometimes about what I say was my daughter’s first piece of mail, even though it was technically addressed to me. When we brought her home from the hospital, we came into the apartment, met by our friend who had spent the week taking care of our dog, and making us dolmades and other Egyptian food. I carried my 4-day-old baby, who couldn’t know how terrified I was to be suddenly put in charge of this tiny human, especially during the peak of what we know now was only the first wave of COVID-19. 

I shuffled through the mail on the counter and found a letter from Migrationsverket (Swedish immigration). I’d first sent in a citizenship application almost 2 years earlier; that first application had been mistakenly rejected by an automated system. We’d been told they had only 60 human case workers for over 100,000 applications per year. We’d exhausted every legal and judicial process, but I was hoping this letter would be good news on top of my overwhelming happiness.

But this was sent on behalf of “baby girl Ruffino,” a tiny little person who was still too new to have a registered name or a belly button, and who weighed as much as a small chihuahua. It explained that, as a child born to a non-citizen (me), she didn’t have a right to be in the country, and I needed to apply for residency on her behalf. It also added that if the child had one Swedish parent, this demand would disappear automatically as soon as that parent was registered.  

It works this way: the hospital registers the birth, and this triggers a tax office task to assign a Swedish personnummer (similar to whatever your country uses as a unique numeric identifier for people), and that apparently pings Migrationsverket, which checks if the baby came out of an immigrant, and then responds to that parent with an automated form letter. It’s precisely this bluntness that is so cruel, and precisely what made so many people around me defend it as a perfectly innocent process—they could not understand why I was upset. After all, the letter said it was “just a formality” and didn’t really apply to my child once we registered her Swedish dad.

It’s hard to convey the terror I felt, partly on behalf of new birthing people who were living with controlling partners who used their immigration status against them. On behalf of the parents of NICU babies, who might not even receive the letter in time to meet its deadline. Or just anyone who would now have to put together a residency application when they should be able to focus on new parenthood. But also the terrifying reminder that I, as a foreigner, am always being watched and logged in ways that a Swedish-born ethnic Swede never is. 

I stood in my kitchen, holding this brand new baby, with her blackened little umbilical nub nestled just above my bloated midsection and its fresh, still painful c-section incision, petrified at this place I lived in, where, even on the happiest day of my life, it can always be time for “the system” to remind me that this life I’ve built is only mine at the discretion of people who treat threats as mere formalities. When I’m in a state of panic, I can’t even read comfortably in English, let alone in Swedish, the language in which these decisions are made and communicated. I had to have the threat read to me.

I thought about how, at Migrationsverket, there had been no willingness to spend money to continue the employment of humans whose job it was to process the applications that would grant qualifying citizens our basic human rights, but there was clearly money to build a system that automates threatening letters to newborn babies. This took place under the previous government, but it feels relevant to add that this year, Sweden is experiencing net emigration for the first time, and our Migration minister has framed this as a result of “the Government’s efforts.”  

But this is just a small example of an interaction with one of many systems that people around the world are subjected to, often with far more devastating effects. Automation (and back in 2020, we still called it “algorithmic decision making”) streamlines processes, sometimes in order to turn injustice into a passive process, and its implementation is always a series of deliberate choices. No blunt instrument is ever created in the name of fairness, and no automated process is ever innocent. This case was no exception; it was a pretty egregious example.

And my strongest opinion is probably that nobody should be allowed to express their opinion about automated systems until the most precious thing in their life has been in the hands of one that’s been built to treat them as the enemy. 

Five essential questions to ask about systems

When I give talks about design and power, sometimes I use the words of the late British MP Tony Benn, who had what he called the “five essential questions of democracy,” also sometimes known as his “five little questions.”

In his 2001 farewell speech to the British Parliament, he said,

“In the course of my life I have developed five little democratic questions. If one meets a powerful person — Adolf Hitler, Joe Stalin or Bill Gates — ask them five questions: ‘What power have you got? Where did you get it from? In whose interests do you exercise it? To whom are you accountable? And how can we get rid of you?’ If you cannot get rid of the people who govern you, you do not live in a democratic system.”

I like to use these questions as an analytical framework whenever I encounter, not just a powerful person, but an automated system of any kind. I like to invite others to do the same. Because while we make fun of the “I want to speak to the manager” folks, the greater danger is posed by systems that deliberately eliminate any manager that can either speak or be spoken to. We can ask these questions more explicitly, over and over, until we’re seen as extremely annoying, and still keep asking. We can be little points of friction.  

This framework can also enable us to see the difference between systems and services that bring people into processes and places they might not otherwise have access to—museum collections, virtual library borrowing, online meeting software, and tools that enable access and intercommunication that was not possible a few years ago—and those that remove people from processes and systems without an analog backup, which is what’s increasingly happening with some of our most essential interactions—customer support, telehealth apps with long triage questionnaires that demand energy when you’re already sick, and, of course, threat letters to babies born to immigrant parents.

Asking these questions allows us to draw distinctions between online-by-default (which can be great!) and online-only, which excludes by design. For example, my Swedish bank, SEB, no longer has actual branches (they have offices, where you can make an appointment, but you can’t do transactions). My Irish bank, AIB (help, I still can’t log in!), by contrast, backtracked on its decision to close its branch offices and end its relationship with physical post offices. It did so because its customers pushed back and explained just how much of a negative impact it would have—and the bank (I know, shocking) actually listened.

It’s clear that this is an outlier, and so many of our systems are quickly sliding away—by which I mean deliberately being slid—from anything representing democratic processes. 

Decisions made by the messy assemblage of people, places, political economies, and things we call a “system”, even if they only exist as potential threats—such as my letter from Migrationsverket—start to accumulate, and everyone loses power, even those not directly subjected to it. Because if someone who didn’t fall under the migration agency’s remit started asking questions, what answer would they get to question 5, “How can we get rid of you?” At the time of my letter, the government was controlled by the (allegedly) center-left Social Democrats. Whether or not those of us who don’t want to make deportations more efficient, or who want a rights-based set of migration policies, are in the majority (sadly, in Sweden, I think we’re a minority), we don’t even have meaningful political representation.

But I still want to believe there’s room for hope. And one thing that gives me that hope is how quickly the backlash seems to be coming against automated systems, whether we call them algorithms, AI, or pervasive surveillance. Earlier this year, the European Court of Human Rights banned the weakening of encryption, thankfully not swayed by the disingenuous “who will think of the children” argument.  

People are becoming aware of how every little interaction and behavior is being tracked and monetized, or collected with the potential—even if forever unrealized—to be weaponized against them, or someone they care about. There’s even a small rise in the sale and use of “dumbphones,” partly due to the feature phones being more affordable for low-income people, and partly from the perception that screen time is a public health crisis (this is overblown, but that’s a different topic).

The most privileged among us don’t have AI-powered personal butlers; they don’t have AI-powered anything. They have human staff. They don’t, as Eco wrote more than 30 years ago, need to flaunt a “portable phone as a symbol of power” because their ability to evade or sidestep automated decision systems is itself a symbol of an unusual degree of power, one that few people have access to.

Even without knowingly asking Benn’s questions, people really do seem to be questioning where these systems come from, who is overseeing them, how—or if—we can opt out of them and have a human intervene. Even without fully understanding the systems themselves, we can still see the impact of data and datafication on people, which lets us see datafication as what philosopher Timothy Morton calls a hyperobject: a type of object that we know exists, and that can’t be directly observed, but whose consequences we can often map out.

And another reason for hope is that the backlash has always been there. Before Eco’s humorous essay in 1994. Before the now-famous 1979 IBM slide, “A computer can never be held accountable. Therefore a computer must never make a management decision.”  

Understanding is not a prerequisite

When I was researching maps and mapping in Plantation Ireland (the 16th and 17th centuries) for the PhD I failed to finish, it was often noted that the Gaelic Irish population—who were being violently deprived of their land—didn’t use maps, which was sometimes explained through the implicit assumption that they didn’t understand them, or know how to use or read them. Maps were part of turning the landscape into controlled, legible, tidy surfaces for decisions that didn’t have to respect context, complexity, or human lives.

Many people could read, including in English, and understood what a map was; not using maps was as much of a choice as using them. People understood the consequences of data that translated a landscape of lived experience, and a very different power structure, into something that could be quantified and represented as a single, universalizing view on paper or vellum, even if they didn’t all, individually, understand it technically (though many probably did).

And now back to Richard Bartlett. He was a mapmaker to Queen Elizabeth I, and at the turn of the 17th century, he was sent to map the province of Ulster, in the northwest of the country. When he got to a part of what is now county Donegal, an area that had always been shown on maps as covered with trees because it was both heavily wooded and heavily defended by its Gaelic Irish residents, he was “allowed” to create his map. And then the locals chopped his head off. It’s reported that they didn’t want their land to be “known.” Whether or not that’s true, mapmakers started to travel with armed guards after that.  

The empty cartouches (those are the things that look like paper scrolls) on the maps are sometimes interpreted as a sign that he didn’t get a chance to finish his maps, but it was actually pretty common for mapmakers to leave something for later. I just think it’s kind of amusing, this image of a mapmaker being killed mid-drafting. I love these maps because the whole discourse around them represents the way that data-driven power has always met with resistance, and I sort of love the way knowledge of that resistance has been projected onto these maps.

This resistance to being mapped isn’t unique. I once read an account by a mapmaker walking through an area in the 1580s, who described being pelted with objects by people who knew what he was there for, even if they couldn’t read the map or understand the numbers in the land survey. The entire premise of Irish playwright Brian Friel’s play Translations is that 19th-century Irish people understood and resisted having their places and landscapes renamed in English terms because they knew what came next. 

I’ve read firsthand accounts across hundreds of years’ worth of Irish history, of frustrated colonial representatives and agents describing being mocked, lied to, undermined, and deliberately misled by people who were exercising the power they had to poison the data, add friction, or even offer violent resistance. And none of this is unique to Ireland, it’s just where my own knowledge has a little more depth. 

Writers during this phase of intense and violent colonial conquest explicitly wrote about the purpose of quantifying, measuring, and representing people and places. It was always a step toward what they saw as a “civilizing” process, and that was always violent, if not outright genocidal. 

It’s a useful reminder that people have always resisted being known more completely than they’re comfortable with. For hundreds of years, even without a concept of “democracy” as we think of it today, people have been asking or acting on questions that Tony Benn summed up: what power do you have, where did you get it, who is it for, and to whom are you accountable?

I also know that for those same hundreds of years, we haven’t been able to get to question five and get rid of them—hope is a choice, it has to be—but I want to believe we can be inspired by centuries, even millennia, of people being extremely annoying as praxis, individually and collectively. It’s as important to know about the explicit and intentional purpose that fundamentally defined the first “data” as it is to be aware that no group in history ever just passively accepted it.

What can we do about it?  

That little baby who got the threat letter from immigration is now four years old, and comfortably bilingual. She has her own belly button, which she enjoys showing off (it’s a great belly button!). My incision has healed. I finally got my citizenship after almost 3 years of waiting. I thought it would make me feel safer than it does. 

And I find myself increasingly frustrated when people talk about “capitalism” as the problem. Yes, a lot of what sucks in our lives is a first- or second-order effect of capitalism and the six or seven dudes who sit on top of our financial systems, but this framing makes us feel helpless, fatalistic, and like there’s just one big problem: the system. It’s not simply because of “capitalism” that we’re all told we’re addicted to our devices and having too much “screen time” by the same people who push products and services that encourage us to see offline activities, like using cash, as “dirty,” and who refuse to provide analog versions of essential activities like interacting with public services, riding public transit, or booking a medical appointment. It’s much more complicated than that.

There’s no one solution because “the system” is an abstraction made up of a messy assemblage of interrelated people, places, and policies. Maria Farrell and Robin Berjon wrote about this beautifully in their argument for “rewilding” the Internet. It needs to be solved with a huge, multi-pronged effort that includes policy changes, cost of living measures, livable wages, universal access to services, and rights-based politics that nobody can do alone. We can even do things like ask why we’re building software or an app instead of a website (websites are great!). We can host our own stuff, or do something in-between, like I do. I stopped writing on Medium and started my own blog (even though it’s still on Squarespace).  

Rather than repeating whatever the press release promises about future efficiencies (this always means “layoffs”), convenience, and frictionless experiences (this always means surrendering control), we can ask: what data have you got? Where did you get it from? In whose interests do you deploy it? To whom are you accountable? And how can we get rid of you? And if there’s no satisfying answer to that last one, we can confidently say that this service, especially in public services, does not belong in a democratic system. 

My personal belief is still that a better world shouldn’t require a business case, but some people’s idea of a better world is basically the one we have now, but worse. So, one way to pull at the threads of the system is to start building those business cases. Small ones, like lower customer satisfaction, churn, bad results for sustainability, ongoing loss of revenue, or big ones, like measuring the potential for a less data-hungry competitor to eat the lunch of a company that can’t stop monitoring and monetizing every interaction, in order to build what they believe could be a “faithful map” of a person or community. It’s not enough, but adding friction to processes, both on the product and user side, is a start.

What does technology look like if it does something other than concentrate power into the same few hands? I’d like to say I’m not sure, but one of the most incredible developments in human history is built on exactly that: the networking protocols we know as the Internet, with a capital “I.” We can’t ignore American hegemony, of course, but this specific protocol was the “winner” among a number of other options partly because it was so cooperative. One kind of nerd communication system beat out the others because it was more collaborative, more cooperative, and more open. All of the tech that we think of as innovation was built on a system that was designed to support nerd friendship.

When we focus on the tech, we get story after story of breakage and damage. When we focus on the people, cooperation, and collaboration, we find stories of resistance and friction, and we also find things like…the Internet. So one thing we can do is remind ourselves that we’re not alone, on the planet, or throughout time, and that we have agency. 

Anyone can tug at a thread or two, or create one. 
