Do LLMs dream of cryptids?: Latent assemblies of emergence in the shadow of generative AI
Eleni Maragkou
At the FIBER symposium Latent Assemblies, which took place in December 2024, writer Eleni Maragkou responded to the question of how AI (in particular LLMs) can be seen as a tool for co-creation. Through various examples, the essay challenges academic hierarchies, embracing ambiguity and avoiding definitive answers. By treating the failure of these machines as welcome unpredictability rather than an opportunity for improvement, Eleni encourages us to remember that some things can’t be reduced to just an algorithm.
Beyond AI as phármakon
When thinking, speaking, or writing about a “novel” technology, namely (generative) AI, it is tempting to frame it as a “phármakon” — a cure or a poison, hovering between hype and doom. “The phármakon,” wrote philosopher Jacques Derrida, “is neither remedy nor poison, neither good nor evil, neither the inside nor the outside, neither speech nor writing”.
This does not make such technologies morally neutral. The very nomenclature around AI reproduces anthropocentrism and shapes our relationship with machines. [1] In Race After Technology, Ruha Benjamin draws attention to the conceptual and etymological links between our conception of the “robot” and notions of subordination, compulsory labour, and servitude. What Ruha articulates here is how socially constructed practices become inscribed in the technologies we use. Similarly, in the more optimistic scenario, AI is treated as a tool, a docile assistant, rather than a companion, a collaborator, or an accomplice. In the dystopian ones, the servant goes rogue, escaping human control and rebelling against its former masters.
Late capitalism and its technologies already operate beyond our grasp, undergirded by vast systems which we cannot access, comprehend, or influence. Like the seamless interfaces of our devices, which allow the layperson to use computational technologies without speaking the programmer’s language, the process through which a prompt generates an image remains hidden, residing behind a series of invisible operations.
“[T]he algorithmic process,” says Flavia Dzodan in her inaugural lecture as Lector of Algorithmic Cultures at the Sandberg Institute, “mirrors the colonising impulse — to classify, control, and make legible a complex, emotional, and often resistant world.” As the late Fredric Jameson argued, late capitalism represents a loss of temporal depth.
The “aura” of the artwork as described by Walter Benjamin has long dissipated. Algorithmic images are generated in an instant. Engaging with art no longer has to be a process of dialogue with a time period, a sociocultural context, or politics. No longer anchored by historical continuity and detached from futurity, we are stuck in an “eternal now”, in which efficiency, consumption, and continuous novelty are prioritised.
World Wide Wrongs: Cryptids, glitches, and never-mades
Media artist Rosa Menkman, who has incorporated glitch into her practice, notably in her work A Vernacular of File Formats, defines it as “a not yet defined break from a procedural flow, fostering a critical potential” (Menkman, p. 27). In her conceptualisation, “failure is a phenomenon to overcome, while a glitch is incorporated further into technological or interpretive processes”. The glitch interrupts the transmission process, made invisible by the smoothness of the interface, and thus reveals the underlying functions. “Interruptions like these,” writes Menkman, “are often perceived as disastrous, threatening and uncanny.” [2]
Often, when the frictionless algorithmic output is replaced by that which scares or confuses us, we get digital cryptids, or what Dzodan calls “never-mades”. These are “artefacts shaped through algorithmic processes [and computational interpretation] rather than human intent”. [3] In these uncanny abominations lies the ability of generative AI to generate feeling — even if that feeling is unease. These artefacts are chimeras, aberrations, not grounded in reality. In Dzodan’s words, “they inhabit a purely digital or potential space, existing at the boundary between the made and the unmade”. Furthermore, they challenge conceptions of reality and open up spaces for latent emergence in a medium that is otherwise seen as reproductive, i.e. not producing anything “new”.
One such creature is Crungus. In 2022, Twitch streamer Guy Kelly prompted the image-generating tool Craiyon with what he believed to be a totally bogus word: crungus. What the machine spat back — consistently, across repeated prompts — was a monstrous, humanoid figure with malevolent, distorted features. The creature looks like an orc from Lord of the Rings. Internet lore posits that Crungus might be a demon from another dimension, accidentally summoned by the algorithm, or a figment of the intelligent algorithm’s own imagination. [4] Could that be true?
Well, the arrangement of consonants in the name Crungus evokes the monstrous, swamp or grotto-dwelling creatures we encounter in fantasy. The name itself sounds like fungus or grungy. It might as well be Shrek. Outputs offered by AI may feel meaningful to us because they combine, recombine, and traverse through patterns, associations, or knowledge we already recognise. [5] I often miss the days when those algorithmic image generators were less trained. Their output was less legible, less precise, often more unsettling, and therefore more interesting. But the “better” the model, the more photorealistic and frictionless the output becomes — per Dzodan, “functionally useful but emotionally impoverished”. Even the anomalies it produces are wholly predictable.
Why should AI get to write poetry while we languish at work? Reflections on co-creation with AI
Large language models like GPT-3 are essentially autocomplete machines. As a writer, I quickly became interested in their potential to (re)produce human-like writing. These are models that can ostensibly be trained, sculpted in the image of their prompter. Yet, as it stands, the free version of ChatGPT has little to offer other than generic, creatively bankrupt, obvious turns of phrase. “Step into the vibrant tapestry of the blah”, says the machine, “delve into the interconnectedness of the whatever” — verbose, effervescent slop that both manages to adopt the slimiest qualities of purple prose and convey very little in terms of ideas.
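The “autocomplete” framing can be made concrete with a toy sketch. To be clear, this is not how GPT-3 works — it uses deep neural networks over subword tokens, not word counts — but a hypothetical bigram model captures the basic gesture: complete a prompt by repeatedly choosing the most frequent next word seen in the training text. The corpus and function names here are invented for illustration.

```python
from collections import defaultdict, Counter

# A deliberately tiny "training corpus" of slop-adjacent phrases.
corpus = ("step into the vibrant tapestry of the city "
          "step into the future of the machine").split()

# Record which words follow which: the entire "model" is these counts.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def complete(word, length=5):
    """Greedily extend a one-word prompt by picking, at each step,
    the most frequent continuation observed in the corpus."""
    out = [word]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:  # dead end: the word never appeared mid-corpus
            break
        out.append(Counter(options).most_common(1)[0][0])
    return " ".join(out)

print(complete("step"))  # step into the vibrant tapestry of
```

Everything the model “says” is a recombination of what it was fed, which is precisely why its completions sound familiar; scale the counts up to billions of parameters and subword statistics and the same logic yields the vibrant tapestries above.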
Writers began experimenting with generative AI before it hit critical mass. Published in 2021, Pharmako-AI was the first book co-written with GPT-3. Through a writing process that resembles musical improvisation, author K Allado-McDowell and OpenAI’s GPT-3 language model engage in a hallucinatory exchange that, according to its publisher, “offers a fractal poetics of AI and a glimpse into the future of literature.” In a review of the book for Bookforum, critic Dawn Chan muses: “What’s strange is that when GPT-3’s musings in Pharmako-AI leave you flummoxed, you don’t know who or what to blame; just as when its insights feel startling and wakeful, you don’t know who to thank.” Out of curiosity, I inspect the passages of the book written by the machine:
Art is often described as an escape, an avenue of understanding that can connect us to these deeper forces, to a wisdom beyond the normal limits of perception. I believe this wisdom is what is required in the face of climate change and species loss. This wisdom is the knowledge necessary to preserve the patterns of intelligence from which life emerges and thrives.
GPT-3, of course, doesn’t believe anything, because it is not a being with a belief system, a moral code, or a conscience. Judging by this excerpt, it can mimic those who do with the characteristic eloquence of a first-year university student trying to reach the word count of their essay. Of course, 2021 was eons ago, and, as Elon Musk’s lackeys keep reminding us on the app formerly known as Twitter, the technology is evolving so rapidly that everyone with a humanities degree could be rendered obsolete within the next year. Yet even virtual influencers, who are very much curated by real people, did not manage to replace “real” celebrities. Authenticity, no matter how fraught or calibrated, is still desired.
In its claims to maximise productivity and efficiency and reduce the more tedious and time-consuming aspects of writing (its labour), AI strips its user of the experience of writing as a vital exploration of urgency, as a process of nurturing ideas rather than generating them. In an essay on AI writing tools, writer Rayne Fisher-Quann describes the latter as an investment “into the potentiality of your experience with almost no promise of tangible reward at all, which is something like being alive”. Writing is a lot like walking, in that sense: a means of getting somewhere that might not be the quickest, the most direct, or the most comfortable, but one that leads to unexpected, serendipitous, mind-expanding encounters.
Embracing ambivalence
When it was recently revealed that AI had been deployed in the Academy Award-nominated film The Brutalist to correct parts of the lead actors’ Hungarian accents in some scenes, cinephiles decried its use, arguing that it was an unethical distortion of human creativity that cast doubt on the authenticity of the performance. [6] The idea that (generative) AI produces unoriginal or inauthentic work is often cited as the main critique against the technology. Is it possible to approach AI, in the words of independent researcher and musician Portrait XO, as a “collaborative sparring partner” [7], rather than a shortcut, a productivity enhancer, or a cheat?
As with generative AI, much of human creation is “not original” either; there is no pure parthenogenesis. Such a concept ignores the lineages of those who came before us, the nodes of collaboration and kinship that inform our practice in the present day, the millions of ways in which ideas travel and bloom. Artistic movements and literary or musical genres do not spawn overnight through the singular genius of a singular visionary. Creativity and knowledge production emerge out of interconnected networks of relations that span across time and space. Whether the output is “authentic” is, in any case, beside the point: Is photography always authentic? Is a camera always telling the truth?
The problem arises when corporations make that decision on our behalf, pilfering human creativity, feelings, and ideas and converting them into data points. The problem lies in the massive amounts of energy, billions of dollars, and underpaid labour of actual human beings, mostly from the Global South, poured into technologies that, as Dr. Ramon Amaro has said, merely improve our emails. [8]
Our collective crisis of imagination is exacerbated by the increased depreciation and defunding of the humanities in the service of purely vocational, STEM-focused, degree-pumping education. Can we imagine environmental, open-source, and relational ways of engaging with creation that refuse the extractivism of tech companies? What do those look like? Is the dumbing down of our intellectual and creative faculties the price we are willing to pay for increased efficiency?
The fetishisation of efficiency in AI distills the colonial, bureaucratic drive towards optimisation and the purification of all ambivalence. In her notes on methodology, Dzodan writes that she “aims to collapse [knowledge, theories, and ideas] entirely, hoping that something will emerge in their place”. She is not interested in predefining those outcomes, preferring instead to view them as “living entities”, shifting, morphing, and growing.
In its way, this essay attempts to resist academic and conventional hierarchies of legitimacy. Preferring to embrace ambiguity and remaining endlessly indebted to those it cites, it is uninterested in bestowing conclusive answers. Knowledge is not static. Mistakes are an everyday part of reality. Serendipity, chance, and even error leave room for emergence. Each time the machine fails to produce the desired output is not an opportunity for further improvement, but a chance to revel in the unpredictability of things. It is a reminder that some things can never be put in the black box of the algorithm.
Sources
[1] Giselle Beiguelman, Artificial Intelligence as Phármakon: Algorithmic Art between Remedy and Poison
[2] Menkman, p. 30.
[3] Dzodan.
[4] Since Craiyon version 2 was introduced in 2023, the prompt “Crungus” no longer produces the same imaginary creature.
[5] Not unlike “real” cryptids.
[6] “AI” here was used specifically to mean generative AI, obscuring the way artificial intelligence has long been used in cinema in various forms.
[7] I encountered her work, alongside that of other artists, researchers, and thinkers, during Latent Assemblies, a symposium on co-creation with AI, presented by FIBER and the Artificial Times MA programme at Sandberg.
[8] From the FIBER 2024 context programme. Each ChatGPT query uses an estimated 2.9 Wh of energy; for reference, a single Google search requires about 0.3 Wh. The moderation of ChatGPT also requires labour — human labour, performed by underpaid workers in the Global South.
Eleni Maragkou is a Greek writer, editor, and researcher, thinking often and writing about our relationship to the media that shape our everyday lives. Sometimes she lectures about this topic too. She holds a Research MA in new media and digital culture from the University of Amsterdam.