We deserve better than an AI-powered future
It’s no secret that AI hype is one of my rage triggers. But before you mash out an angry email demanding I acknowledge that the way you use it is an exception, I’m not talking about automation and machine learning generally, where there are very good and useful applications. Also, you don’t need my validation.
I recently published the first version of my company’s AI policy, which is, more specifically, a policy around using LLMs to generate text and images. And it’s really a policy about how the fundamental principles of making things work for users and organizations don’t change.
That’s why it felt important to have a strong stance on where AI fits into what I do—because, mostly, it doesn’t.
There was a lot I didn’t include in the policy itself, which I tried to keep short. But I have a lot to say about why I approach any new technology with my critical sensors turned up to max, and why that’s better for users, for businesses, and, I believe, for the wider world.
Here are some of my concerns about generative AI, and the reasons it felt important to make my position known. Buckle up!
It’s not clear what AI is
I’ve lost count of the number of conversations I’ve had in the past year with people who are hearing they need to “find ways to use AI” in their work. They’re frustrated, not only because this is another cack-handed mandate from executives, but also because this never comes with an explanation of what their employers actually want them to do. “What is AI?” never gets an answer.
If you ask an actual technical researcher what AI is, they’ll probably answer your question with another question, about what you mean by “AI” because it’s not, in itself, a technology, or even a tool. If you ask the question to someone who posts a lot about “the future of work” on LinkedIn, well, let me know what they say—I’ve learned to filter them out of my brain.
As with “algorithms” in the mid-2010s, “Big Data” before that, “growth hacking,” and even “new media” in the early 2000s, most of the people using these terms can’t tell you what they mean. While you’re at it, ask 5 people to define “digital transformation,” and you’ll get 3 bemused responses and 2 buzzwordy answers from ChatGPT, trained on HBR clickbait and McKinsey reports.
But if I had to pick a definition I like, it’s derived from Kate Crawford’s Atlas of AI (which I enthusiastically recommend): AI is a marketing term for some types of machine learning that provide automated, computer-based decision making. The term “artificial intelligence” has been around a long time, and, as Emily Bender points out, it has always been a marketing term. In other words, AI started as a slippery term to get funders to open their wallets. And that’s what it still is.
Since the launch of ChatGPT, what people mean by “AI” most often is an LLM/generative tool, which is why I’m focusing on that here.
Again, you don’t need to send a condescending email. I promise, if you’re doing something useful with it, and you can describe what it is you’re doing and why, then I’m probably not talking about you. (I know that I’ll get a few he-mails, carefully mansplaining a definition of AI that I didn’t ask for, but I have been a woman on the Internet for 30 years as of this year, and I’m used to it.)
Whatever it is, almost nobody wants it
I just don’t know anyone who wants their work replaced by AI, who truly wants to be an LLM babysitter, or who wants to use a service where a machine or a robot has replaced something humans once did. Turns out, very few people do.
Google is suggesting use cases so bad that its ads get pulled simply because people find the idea pathetic. Because hey, guess what: most of us would rather read something with typos you made because you were in a hurry, or because you’re 8 years old, than something you didn’t write because you didn’t take the time to express yourself.
As my neighbor pointed out to me recently, even if it’s true that ChatGPT has the logical reasoning skills of a human 5-year-old (it doesn’t, because it can’t “reason”), why would he want his colleague to hand him a report written by a kindergartner, rather than by a human 40-year-old with decades of work experience?
Just like grocery stores replacing human cashiers with self-checkout, wellness startups secretly replacing human counseling with robots, or countless companies forcing every caller through what feels like a slightly chattier IVR, removing humans from a situation doesn’t usually improve it. Despite how flawed we are as fleshy bags of anxious water, it seems we still prefer each other over robots.
At the core, a good organization with a strong user experience puts its focus on people and relationships, both internal and external. It’s my job to help clients find the things that align business capabilities with customer or user needs, and “remove human interaction” is almost never in the middle of that Venn diagram. Except maybe if you have extremely gluttonous shareholders, in which case, I do my best to help you placate them without throwing your users or customers under the bus.
It (mostly) doesn’t work
The fact that AI tools don’t work reliably, and don’t do what they promise, should matter much more than it does. I believe we need to reframe the wishful thinking that if we just keep using it, it will get “better” (and who is harmed while we wait for that?), so that the makers and beneficiaries of these systems can be held accountable for their false promises. And then we can stop anthropomorphizing it with terms like “hallucination” and just start saying that it’s shitty.
Unlike with a Google search, you can’t figure out where the information came from, so it’s hard to find out whether it’s true, or who you should credit if it is. Whenever I hear an otherwise smart and thoughtful person say that they “ask ChatGPT” for things that require reliable, factual answers, I don’t know what to do with my hands.
Google’s AI Overviews suggested using glue to keep the cheese on pizza, and regurgitated Onion headlines as if they were fact. Even when they’re used in ways that don’t risk spreading misinformation or disinformation, generative AI tools usually hide their sources, making them difficult to fact-check without more searches and more effort.
They’re also security and privacy risks, they’re incredibly efficient and effective at creating and spreading misinformation, and companies are running out of training data, which is causing “model collapse.” It’s not getting better, it’s getting weirder, and worse, and more broken. It’s all just polluting the information supply, faster than ever. Generative AI doesn’t even make us more productive.
It’s shitty and it’s bullshit, and I don’t believe that, on the whole, the customers you want to have as your core market are “people who will enjoy stupid bullshit.” I don’t promise that everything I do will come out perfectly, but I won’t purposefully implement things that I already know are disrespectful to your users.
It’s not “getting better” like breaking in a pair of stiff leather shoes that will mold perfectly to your feet; it’s like finding out those shoes are made of cardboard, just as you hear a roll of thunder.
Claims of ownership aren’t settled
Not only are generative AI tools opaque about where alleged facts come from, they’re trained on data their makers have simply helped themselves to. The companies insist that everything available online should be considered fair use, but they’re getting sued—a lot (because they’re wrong). My emotional reaction is delight that they’re being sued so much, even though my rational brain reminds me that the only winners in copyright lawsuits are usually copyright lawyers and the huge companies they represent.
I’ve been handed half-baked projects that I could tell right away were plagiarized, and I’ve gone to great lengths to make sure those things were addressed before they became a problem. I’ve been plagiarized by peers, even by so-called friends. I’ve seen my old articles republished with and without my byline, sometimes on large platforms, and received nothing for it. Being plagiarized totally sucks, and I’m in the business of preventing it, not scaling it up.
I also don’t appreciate that, as a writer with a 20-year career (so far), including nearly 4 years as a weekly newspaper columnist, I’ve probably had my own work ingested by these companies, and there’s nothing I can do about it. Sam Altman drives a $5 million car, and I’m trying to figure out what I can sell because I need new glasses and dental work at the same time.
Besides, I think we should consider what happened to Altman’s own Y Combinator cohort-mate, Aaron Swartz, before accepting that the word “open” in OpenAI’s name is anything more than tacky branding.
It’s not going to stay cheap
I’m not very good at math, but it doesn’t take more than a basic understanding of arithmetic to see that as these AI companies ramp up, the costs will scale, too. That means computing power, energy bills, water bills, environmental fines (you know they’re coming), GDPR violations, those copyright lawsuits I mentioned and the companies’ own court costs for defending them, paying for chips whenever there’s a shortage or a slowdown, and what happens when worker mobilization succeeds (as it should), especially among the workers who train models and moderate content.
And then there’s the infrastructure. It takes about a year to build a data center, which usually requires steel, which is in short supply. That’s also bad news for the power companies, which need steel too, and did I mention that it takes between 5 and 15 years to upgrade power grid infrastructure? Even if AI-driven companies start investing in their own power generation (if that’s even possible), or the Altman-backed nuclear fusion company delivers the “breakthrough” he personally demands so he can expand his empire, all of this physical construction and manufacturing takes time, and it will end up being paid for by someone else; with VC-backed companies, you can count on that. And don’t get me started on how long, expensive, and labor-intensive it is to install new terrestrial or subsea fiber, which, by the way, also requires steel, factories, chips, and global logistics.
I’m not going to help a client build a thing or design a process that sits on top of something we can already see won’t remain affordable, reliable, or even basically usable, and that will make customers trust them less, all at a time when it’s more expensive for them to run a business, and for all of us to just afford to be alive.
AI fails on every sustainability front
From what I understand, the claims that every LLM prompt is like pouring out a bottle of water don’t come from a verifiable source, but we’ve known for a long time that data centers are power and water hogs, and that LLMs need a lot of computing power. By 2022, “the cloud” had a greater carbon footprint than the aviation industry, and that’s before the public launch of ChatGPT. Everything about computing is resource-heavy, and this is the most resource-heavy computing that’s ever been done. That’s before you get to the physical infrastructure demands (see above).
It hurts people: it destroys jobs (especially in the creative industries), forces precarious people into other, worse jobs, requires rare earth metals that are often mined in dangerous conditions by children, and does real societal harm. And for what? Just like building more highways makes people drive more, building more AI systems just makes people churn out more content nobody asked for, which then has to live on servers in “the cloud” that demand even more of all of the above.
The extractive nature of these models (human, financial, and planetary) fuels ongoing colonialism and cryptocolonialism, the consequences of which will continue to be borne by the people set to benefit the least from whatever flimsy financial promises the industry can make good on.
This is why I compare it to fast fashion: if this is what people have access to, we shouldn’t shame them or blame them. I don’t blame people who are leaning in to AI as they desperately cling to the only decent job they can get where they live.
I believe that most sustainability decisions need to be made (and enforced) at the regulatory and policy level, but that doesn’t mean I think individual decisions don’t matter. Either way, I don’t think we should be deliberately and enthusiastically moving away from the goal of creating equitable societies on a livable planet.
It’s only making a few people money, and they’re mostly bad
Apart from the makers of chips and hardware, the AI industry is not making money, which worries investors and makes it hard to tell which tools will still be around in 2 or 3 years.
Sam Altman is making money, Elon Musk is making money, and Marc Andreessen and Peter Thiel (whose name I really hope you know by now) are doing better than ever. There’s a small group of people, mostly men, who are connected to each other; to people with plans to do things like ethnically cleanse San Francisco; to the bad and deceptively named longtermism and Effective Altruism movements; and to the extremely creepy natalist movement. Many of them are openly backing fascism. Those are the people getting rich off of AI, not because it’s revolutionary, but because they’ve figured out how to make money off of anything and everything, at the expense of anyone and everyone.
So, considering that the promise of AI is mostly wishful thinking about a future where you don’t have a job, and that the people behind it have deeply and openly political motives and are reinvigorating a eugenics movement, the last thing I’m going to do as a consultant is promise you yet another fanciful shortcut to a magical future. It won’t be good for your business, it guarantees that you’ll never meet any sustainability goals, you might inadvertently contribute to global fascism, and your users will probably hate you, because they just want to interact with a person.
I’m a little worried about what happens when the bubble bursts because, as much as I enjoy watching the hype merchants backpedal, just like with every bubble that has ever been, the consequences will be borne by people who have already lost out because of the bubble itself.
And as the companies start to fall, they’ll consolidate into fewer and fewer hands, which, again, will be disproportionately in Silicon Valley, all of which is bad for the global Internet—moving us further from the distributed network of networks that it was supposed to be—and that’s bad for human beings everywhere.
This isn’t my first bubble
Yes, AI is different—it’s more pervasive and more destructive—but it’s another hype cycle (each of which provides plenty of evidence and lessons so we don’t repeat it, even though we always do), and there are decades of AI hype criticism to learn from. Even I’m not saying anything new; I’m just another person writing an overly long blog post filled with too many links to click on.
I’m also not a technical expert, but most of the smartest AI skeptics I know are highly technical people, including machine learning experts, computational linguists, and data scientists. They’re asking hard questions that focus on where we are now and how we got here, but they’re often drowned out by hype merchants promising us that a chatbot implementation will boost quarterly earnings the way a monorail was going to save the town, and that it will be the good one from the future, not the bad one that’s available right now.
I’m also thinking of the criti-hype about the existential risk posed by a superhuman AGI that, from my understanding, isn’t possible, won’t be possible for a long time (if ever), and certainly doesn’t have a use case. Whatever need generative AI might meet at some point, we need clean drinking water more. Moreover, the hype and the criti-hype are both distracting us from the problems we have right now, today.
I even found something I wrote over a decade ago, when it was all about “big data” and we were still on the eve of “algorithms,” that makes me feel like a broken record:
Imagine what the next genocide will look like with the technology we have today. Imagine if it were enabled by some of the prosaic demographic data you can get from an analytics dashboard or buy from a marketing company. Imagine it, and know that it will definitely happen…
Over the next few years, the biggest problems we’re going to face in tech are not those of engineering, hardware, software, or network speeds (5G is coming, and soon), but the way that ethical, legal, political, and even philosophical issues shape and are shaped by the technology we build and use.
Ten years ago, I was only half-right about 5G (as in, it is here, but it also hasn’t fulfilled its promise), but I’ve been writing about my frustrations with the credulity of the industry and the media for longer than that. I’m not a skeptic because I’m grumpy (I am, in fact, a delight), but because I love cool, nerdy stuff, and this is just about money and a race to the bottom.
So I’m pretty confident that when I say, “generative AI is mostly worthless slop that no one wants, and its days are numbered,” it will age a heck of a lot better than the human-made garbage (read: threats) like, “you won’t lose your job to AI, you’ll lose it to someone who knows how to use AI.” Especially when you’re more likely to lose it because executives don’t care if you’re good at your job; they just want to do mass layoffs to increase their share prices.
Whatever this version of AI is, some form of it will stick around and find its level. It will prove marginally useful to a lot of people, and extremely useful to a few, and will, like all the data-based hype cycles before it, require everyone to clean up their freaking data before it can deliver a single benefit, which is exactly the hard, boring, expensive work they’re trying to avoid by using magical hype solutions in the first place. And then we’ll get another, probably even more inane hype cycle in 18 months or so.
As I write this, Ed Zitron’s latest newsletter hit my inbox, and he’s shared a useful list of possible indicators that the bubble is soon to burst.
I believe that people want and deserve better
When I hear arguments about how “we can’t stop progress,” or “we might as well get on board,” what I hear is a bunch of people insisting that a better world is not possible, and I don’t like to listen to those people. They think I’m the party pooper, but they’re the ones resigning themselves to an amoral future that doesn’t sound like any fun.
Then there are the people who think this position is naive, or that people like me (and there are a lot of us, I know it) hate change (sometimes, yes! Because some kinds of change are bad!), or—my favorite—who accuse us of only wanting to protect our jobs, as if that’s somehow a bad thing. The Hollywood writers’ strike highlighted that, while almost everything we enjoy in this world was, at least to some extent, brought to us by writing, most writers are struggling. Writers and creative professions have always been devalued, and executives have always tried to find ways not to pay us, so I consider myself part of a long tradition.
Another bummer about the hype is the downplaying of really cool stuff that’s happening, like the use of LLMs to support Indigenous languages, the people around the world building community Internet networks, and the story of the Internet we could have had—and still could.
There’s awesome machine learning stuff, like Merlin Bird ID, an app from the Cornell Lab of Ornithology that helps you identify birds by their songs. There’s a citizen science project in the UK, where volunteers spend 4 weeks counting the butterflies they see, as a way of crowdsourcing biodiversity indicators. Someone used infinite scroll to make a website about the deep sea, which lives in my bookmarks bar so I can look at it whenever I’m sad.
More recently, developments in streaming technology and content delivery, underwater cameras, VR and computer vision for fairness, and Ilona Maher’s social media content mean everyone can enjoy watching more of the Olympics than ever before.
Workers, users, and businesses deserve better than to be expected to hand over power and control to a very small number of unbearably wealthy people, mostly men based in the US. I understand that people believe this type of position is “too political,” but it’s essential to describe and foreground the politics involved with the things we use. It might be political to describe their politics, but it would be bad business not to do it.
Besides, this criticism is no more and no less political than the political nature of the decisions that destroy jobs, wreck the planet, and steal labor, content, and drinking water, under the guise of “AI innovation.”
I believe in what Mariame Kaba says, that hope is a discipline, and that we can have a better tech industry, and a better world. And I know I’m not alone because so many of us loved and shared Maria Farrell and Robin Berjon’s call to “rewild the Internet” earlier this year.
I don’t think of myself as a party pooper, I just believe we can throw a much different, far better party.