
Stop Telling Me What I Want
This column is a reprint from Unwinnable Monthly #186. If you like what you see, grab the magazine for less than ten dollars, or subscribe and get all future magazines for half price.
———
What’s left when we’ve moved on.
———
I rolled my eyes when people said cultural conservatism is back. I thought tradwife and “old money rich” styles were just aesthetics, or maybe not just aesthetics, but I could think of many reasons besides conservatism that people would be attracted to them. For example, cottagecore style is comfortable and whimsical, and a lot of female influencers who aren’t conservative dress that way to promote a fairy princess vibe. “Quiet luxury” could be a response to the economic downturn that started when the pandemic began. Both fashions aspire to a life of ease and access; pick the urban or rural version, your choice.
That’s the issue with cultural expressions like fashion, I guess: you can think you’re expressing one simple thing but actually be subliminally expressing many others. Or, you can have plausible deniability for just wanting to look cute while also pushing reactionary ideas. We all know the blue sweater scene from The Devil Wears Prada, but it’s not just about the forces that make someone likely to pick up a piece; it’s also what they intend to subtly communicate while wearing it: a dog whistle, sometimes. But lately it seems they want everyone to know.
I was wrong, in other words. Not that I actually think conservatism is the dominant aesthetic of our time, I don’t. I think conservatism has been pushed to us in a million ways, often through manufactured consent. At the same time, I believe we’ve all been conditioned to accept conservative (and even, yes, fascist) ideas in our own lives in dozens of ways, such that even people who would describe themselves as liberal and caring about human rights act in ways that go against their values.
This issue of Unwinnable Monthly is about AI, and my fellow columnists are writing about finer details of its use. Instead, I want to write about the problem of rhetoric. Commercial generative AI posits that human creativity and simple tasks can be replaced by machines, so we have more free time. (To do what? Manual labor or waiting on hold with the robot that replaced your insurance rep, probably.) Meanwhile the “intelligence” behind AI is provided by workers in other countries paid pennies. Horrific environmental impacts, rotting your own brain. I want to say people are more than passingly familiar with these issues. And yet. Intelligent, caring people keep using AI. Some of them anyway. Why?
This is a question I have been pondering for a while now. The general public has had more than enough time to get informed about AI’s impacts. That story about foreign workers was published over two years ago. On a personal level, I share weekly how AI affects me as a writer and how it affects my artist friends. Yet, people I know still use it for emails and joke memes all the time. It feels as though if you’re not directly impacted, even if you’re not a true believer, no number of facts or appeals can convince you to just, please god, stop using it.
I can compare this argumentative issue to another one I get into sometimes, discussions about COVID-19 precautions. I have been making appeals to people I know to give a crap about COVID since 2022, when I got it, got quite sick and decided to better inform myself. But I only share my opinions in conversation when someone brings it up first. I also try to focus on benefits to the person I’m speaking to rather than criticizing behavior. It’s hard! I don’t always handle it well. And as far as I can tell, unfortunately, my impact has been very small.
Now, I find myself making almost the same argument about AI: it’s having a negative impact on our culture and many of our livelihoods, yet no one seems to care. We’re dismissing a huge amount of destruction and suffering because of convenience. Surely, it’s going to come back around to hurt us someday, right?
But it seems I used my goodwill points on talking to people about COVID. Now, when I talk about gen AI, I get an identical response: either eyes glazed over or, at best, yeah, it’s bad but what can you do? It’s kind of an inoffensive disinterest, but beneath it I can see a current of dismissal. If I’m a reliable source, they should be worried; they don’t want to worry, so I’m not reliable.
On a recent episode of the podcast Death Panel, the hosts discussed an NPR article from last year that blamed the author’s immunocompromised husband for making his family continue to wear masks. In that episode, Bea calls the response to learning the state doesn’t care about our health and deciding there’s nothing to be done about it “toxic nihilism.” This response is identical to the reply I get from people who have started habitually using AI. Here are some common talking points: AI has become so ubiquitous, we’ll all have to start using it. The environmental impacts are overblown. And people suffer every day; how is AI contributing to that more than anything else?
It’s that last one that really scares me, and gives the whole thing away. When we know about all the downsides AI presents, and use it anyway, we’re entering into a devil’s bargain where our actions result in some unquantifiable amount of human suffering, now or later. But it’s totally abstract. So, it’s easy to push it away: I’ll never meet the person who dies mining cobalt for my cell phone, or the person I infect with COVID, or the person who loses their house in a flood caused by climate change that my contribution to a 2% energy usage increase will bring about. And I’ll never meet the hundreds of thousands of artists whose work ChatGPT stole; or, I won’t know I did, anyway. So why should I imagine it?
This is fascistic logic that those of us in the US and the Global North (in general) have gotten really comfortable with. It’s impossible to completely eliminate your negative impact on other people if you live in the US, for example. It’s hard to even reduce it. But to concede before you even begin that trying is a lost cause? And to dismiss interventions that can be made quite easily (though not without sacrifice, it’s true) as purity tests?
People are challenged by someone saying no. It’s seen as holier-than-thou. When I insist on my right to say no, I hear that it’s complicated. Saying no is anti-intellectual or anti-nuance. Sure, sometimes. But after what amount of research, or what amount of horrible news or scientific studies or whatever, am I allowed to say no?
It turns out, never. Infinite nuance becomes another way of stalling for time. People with justified objections get trapped in needing to seem authoritative without lecturing, informative but not know-it-alls, informed on their opponents’ positions but not so informed that they look paranoid. At the end of the day, if people don’t want to listen to you, they won’t listen to you. There’s very little you can do when someone’s decided to tune you out.
And at the moment, I think we are suffering from a critical mass of tuning everything out. Toxic nihilism has us convinced that if we’re not doing the max, we might as well do nothing. And more than that, it’s right that we do nothing; we’re realists for doing nothing.
I never used to think of myself as a stubborn person. In fact, I thought the extent to which I could see both sides of an issue was a liability. Now I think it has good and bad aspects. But the past few years have made me incredibly stubborn. This unfortunately makes me a worse communicator. And when I’m talking to someone committed to the opposite of the position I hold, immovable object meets unstoppable force, etc. We have a tendency to tell the unstoppable force to be more realistic and chill out, not shame people, and not be a know-it-all. But from my perspective, more responsibility lies with the receiver of the message: someone shouldn’t have to be a perfect messenger for you to take their ideas seriously, even if it’s only for the duration of the conversation.
I prize my stubbornness now. I don’t want it to be taken away from me. So no, I don’t want AI on my computer or in my movies or videogames, I don’t want to call my AI insurance rep, I don’t want to use it for email, I don’t want to have to enter the AI torment nexus to buy toilet paper or get my medication. And I don’t want my work to be used to train AI, something that is happening to me anyway, right now.
But the AI industry wishes to make it impossible, or incredibly socially costly, to say no. For that reason alone, it’s worth exercising your right to do so. We all have free will and personal dignity, and insisting on our beliefs is worth something. I’m sick of needing to respond to an incomprehensible situation with careful words; from now on, I’m just saying no.
———
Emily Price is a freelance writer, digital editor, and PhD candidate in literature based in Brooklyn, New York. You can find her on Bluesky.