Feature Story
A glitchy view of colors being spread out on a TV screen.

Ethical AI

This is a feature story from Unwinnable Monthly #186. If you like what you see, grab the magazine for less than ten dollars, or subscribe and get all future magazines for half price.

———


I adore crime fiction. I love a detective, strung out and desperate, in an indecent world, trying to solve a crime for not enough money and that maybe no one really wants solved. Dingy offices. Half-empty bottles of brown liquid. I love it. I might name a dog after Dashiell Hammett.

What I can’t really countenance is true crime. Not anymore, really. Too much of it is gaze-heavy, leering rubbernecking that profits off of the suffering of victims and survivors. For all of the books, podcasts, TV series, etc. about real cases, what happens time and time again is that the real perpetrators become famous. Their songs are sung, sometimes over and over again. Netflix is happy to make specials dedicated to exploring their deeply troubled lives while paying minimal attention to those who suffered at their hands. Meat to the grinder.

I used to like true crime! I think there’s something meaningful in telling stories and centering survivors in order to prevent the celebrity murderer from ever taking root. But let’s be real, we don’t live in a world where that regularly, meaningfully, happens. Instead we get show after show about the same killers, the same miscarriages of justice, and every few years a new case comes along to spice things up.

But what if we could satisfy that taste with an endless stream of content that didn’t harm anyone? What if the content grist mill could be sated without any killers getting famous or any survivors being retraumatized? AI would be the perfect engine for that.

As might be clear by now, I don’t think particularly highly of the constant flow of true crime content that pours so easily from the tap. But what if that tap could be filled with completely sanitized content? The machine mind could be tuned for each person. If you want murders, it has them. If you want bad docs? Your prescription is filled! All without real human victims!

In some ways, it’s the perfect con. An infinite desire for content filled infinitely by a machine with no ability to profit.

I’ve been trying to find the “one ethical case” for generative AI for a while. In general, I’d say that generative AI is the existential threat to the creative arts. Not because it makes them outdated, but because all it can do is take what we do and turn it into a snake eating its own tail. It turns us into “mom and pop shops” that exist only on the fringes and are constantly under threat from the big box stores.

And the thing about big box stores isn’t only that they ran your local hardware store or your local toy store out of business. It’s that they soak up resources at an astronomical rate. They build warehouses designed to fail. They turn walkable municipalities into seas of parking spaces. They pay their workers as little as they can, all while putting as many people out of work locally as possible and moving the profits to distant places.

Thus, the thing about a machine taught to feed you killerless, victimless crimes is that it has to be trained. It has to be fed all sorts of stories with real killers and real victims, and it has to constantly iterate on those stories forever. Like an even stupider version of Virtuosity. But instead of making one killer, it makes an infinite stream of them, composed of an infinite stream of real killers killing an infinite stream of victims made of an infinite stream of real victims. Forever.

And that machine is not running on anything close to fully renewable sources. It’s taking land we might otherwise use for housing or gardening or a nice shop. Those data centers don’t create jobs; they just create waste heat, killing our planet. Not to mention that all of those stories being fed into the machine are stolen. The people who lived them did not consent to having their stories fed into this. And here we are again. In order to build a machine to feed this monstrous hunger, we just expand the harm.

The idea of an ethical use for generative AI starts with a faulty premise. There is not an ethical use for a product built on theft. The ends will always come with tainted means. A system built on that which was stolen will always be haunted by that original sin. Sure, we might get cool things from it. We might get a few funny pictures. But in the end, we’ll wake up one day and look around and realize that we sold the world a few memes and walked away with nothing.

———

David Shimomura is the editor in chief of Unwinnable. Follow him on Instagram and Bluesky.