Nonhuman Meditations
A ring-tailed lemur gazes off towards something out of frame in this nighttime photograph.

Distinguishing Nonhumans as Imagined by Humans

The cover art for Unwinnable Monthly #186 features a distorted painting of a man in a suit whose head is made entirely of warped hands and fingers – the kind of monstrosity generative AI would make.

This column is a reprint from Unwinnable Monthly #186. If you like what you see, grab the magazine for less than ten dollars, or subscribe and get all future magazines for half price.

———

Thoughts about being something else.

———

Now’s the time to share some definitions for this column. It’s been called “Nonhuman Meditations” – and wouldn’t that include all AI? Aren’t they all like robotic vampires?

No. No, they’re not.

When I put down “nonhuman,” I mean, for example, nonhuman primates – chimpanzees, ring-tailed lemurs, snow monkeys and so on. Humans are categorized as primates, but not all primates are human; hence the distinction. More broadly, nonhumans are living creatures that aren’t human, or beings with some level of sapience or sentience. That distinction can apply to many other beings, imagined or real.

I never mean “nonhuman” to include something as literally nonhuman as a real object – and real AI like ChatGPT and Midjourney can be considered objects. Generative AI technology isn’t really a new living mind as imagined in science fiction. ChatGPT and Midjourney don’t produce anything without direction, and never because they want to create.

This column’s approach to “nonhuman” also clearly includes the fictional. Imagined nonhumans are interesting to think about in this regard. They have a long history, from early myths and fairy tales to today’s pop culture, from a dragon to Pikachu. Granted, imagined characters aren’t literally alive either. But that literal detail can (and often should) be set aside most of the time, especially when contemplating a wider context. Imagined nonhumans are created by humans, who ultimately imbue their creations with pieces of themselves (or other people, other animals, etc.), even fragments of how they think or what they can imagine, and that gives them enough life. In contrast to generative AI, anthropomorphic object characters do fall under the “nonhuman” category for this column (for example, the main cast of The Brave Little Toaster). AI characters and robotic characters remain fair game, too (see NOS-4-A2). Whether imagined as made of flesh, fur, plastic, metal or some other state of matter, all of these nonhuman characters are more alive than anything AI technology can generate. And that technology is misused and abused in harvesting swaths of people’s work without their consent or compensation. ChatGPT and Midjourney have nothing without actual humans to extract from.

A Japanese macaque sits on a tree branch as snow falls softly around it.

Then how much do imagined nonhumans carry over from their human creators? Are fictional nonhumans just humans in disguise? Disparaging comments about stories with nonhuman characters sometimes complain that they could’ve just been about humans. Why bother making the characters look otherwise or presenting them as anything else? That sentiment raises another question: Is that really such a problem?

There’s nothing inherently wrong with fictional nonhumans that behave much like humans, no matter what some critics say. And it doesn’t have to be perceived as a waste, as those critics suggest. What about visual appeal? People can simply appreciate the nonhuman aesthetic. They may, in fact, enjoy what is essentially a human character depicted as distinctly nonhuman; they may like getting both elements, split between the interior and the exterior of a character. I like to look at something different while feeling something familiar, something I can understand on some level. I like how the nonhuman can always look fundamentally different from human existence, always offering something else to explore.

Questioning a nonhuman appearance, implying it’s lesser than the purely human figure in art and fiction, feels reminiscent of people wondering why anyone would animate a narrative that could’ve been live-action, especially if it seems “grounded” enough. (The connection between the two feels fitting, since the nonhuman is highlighted so often in animation, with concept and medium intertwined.) The animated hit Blue Eye Samurai illustrates the assumptive question and the rebellion against it, as co-creator Amber Noizumi shares in a behind-the-scenes featurette (at around the 7:46 mark):

“People have come to us saying, ‘This is a live-action story. Why animation?’ Why not animation? Why pick any medium for art? It has its own beauty and its own process. And I just feel so fortunate to get to be a part of it.”

The same could be said for any use of nonhuman characters (or other creative choices). Personal affinity can be a contributing factor. Tuca & Bertie creator Lisa Hanawalt described to Time how relating psychologically and emotionally to different birds influenced the show’s direction. But before sharing that, when first responding to the magazine’s question of what made birds the right fit for the main characters, Hanawalt started with this: “I just like looking at birds and thinking about them.”

There’s also just a lot of variety in imagining nonhumans. There’s (fortunately) no one way to do it. Some creators delve deeper into what would be unique to the nonhuman perspective. Even as Kung Fu Panda’s anthropomorphic Tigress stands upright on two legs and speaks, she can switch to running on all fours, and she keeps her growls. The titular deer Bambi keeps a natural body while most of his thoughts and personality become more anthropomorphic for readers of the book or audiences of Disney’s animated film adaptation. Pixar’s lively lamp Luxo Jr. skips the eyes, unlike the wide stare illustrator Kevan Atteberry amusingly slapped onto Microsoft’s old Clippy.

Tiffany Haddish and Ali Wong as titular Tuca & Bertie in the episode “Bird Mechanics.”

And so imagined nonhumans come alive with what their human creators imagine for them – pieces of themselves, of real creatures they know, of other sparks of inspiration funneled through what they perceive and can translate into a shareable form. In a way, fictional nonhumans inherit from the humans who craft them with care and attention.

Some real AI today extracts from humans instead, often against their will. What it generates can’t come alive the way nonhuman characters made by humans can. And there’s a limit to what it can give. With human creators, an audience’s experience with fiction and the art they engage with can have more going on. Audiences can gain even more by looking at the creative process behind what they read, watch, play or otherwise spend time with. They can learn about the human creators, even stories and anecdotes about what made them want to create in the first place. For example, Richard Adams started crafting Watership Down as a story to entertain his daughters during a long car ride (they were on their way to see Judi Dench perform in Twelfth Night). And audiences can see what influences inspired creators, finding more things to read, watch, play or engage with in some other manner. At around the 52:56 mark in episode one of “MOTHER,” She Wrote: An EarthBound Podcast, co-host Cat Blackard recognizes this promise of depth in human creativity, noting that “like any good piece of art, learning more about it and discovering where it came from is also part of the art.” Of course, audiences could focus only on the art and not the creator; that’s especially possible if the art in question is already vast on its own, if the audience’s imagination runs far and wide in response to it. Or they can tweak the ratio, giving most of their focus to the art with some attention toward behind-the-scenes information. How audiences engage with a work of art is always up to them, and their approach can always shift – but they should know that human creators made it happen.

Midjourney and ChatGPT can’t really go further. They have no creative insight to give. No in-depth thought or emotion went into what was generated, because no one actually alive directly made it. There’s no real behind-the-scenes story supporting multiple facets of a work to explore. (Why did you choose that color scheme? Why did you develop that character this way? AI won’t have any real answers.) Generative AI only outputs something based on a prompt. It can only trace back to the humans it extracted from, which just serves as another reminder of the disregard for the labor of human creatives.

The cover of a paperback version of Richard Adams' Watership Down shows a close-up on a rabbit sitting in a field of golden wheat.

There are humans resisting this, trying to support each other instead of treating each other as expendable. There’s also been tech on the opposite end of the spectrum, built with more regard for people. Glaze is a program created to protect art from unethical AI usage, first developed by scientists from the University of Chicago and now integrated with the art platform Cara (also made to oppose AI-generated work). Shawn Shan, lead for the UChicago project, was recognized by the MIT Technology Review as its Innovator of the Year in 2024 for his work in protecting artists from AI exploitation and extraction. That sort of recognition could indicate a rise in resistance against AI. For another example, animation news blog Cartoon Brew reported that the companies behind the animation software Moho (Wolfwalkers, Dawn of the Croods) and TVPaint (Genndy Tartakovsky’s Primal, Star Wars: Visions) currently say they will not incorporate generative AI tools. And in issue #250 of ImagineFX magazine, writer Tanya Combrinck wrote that, along with legal action and lobbying, public awareness and keeping the art community informed are other key priorities in resisting AI. Eva Toorenent, an illustrator and a representative for the European Guild for Artificial Intelligence Regulation, told Combrinck that more of the public seems to side with artists now, with people “more aware of AI’s enormous environmental impact, ethical concerns surrounding its use, and its other potential dangers, such as misinformation, fraud and deepfakes.”

All of this could point toward the negative perception of AI as part of a potential domino effect leading to more protections not only for artists, but also for writers, musicians, translators, voice actors and people in other affected fields. The human cost is concerning. On a different level, it’s also concerning for art and storytelling. Dragons have been imagined by humans for centuries. Who wants to see dragons and other nonhumans separated from their human creators? It’s vital to keep them entwined, to keep the human in nonhuman creation.

———

Alyssa Wejebe is a writer and editor specializing in the wide world of arts and entertainment. Her work has included proofreading manga, editing light novels, and writing pop culture journalism. You can find her on Bluesky and Mastodon under @alyssawejebe.