AI as Consciousness Manifesting & Machine as Kin
The first part of Reassemble Lab took place (online) from 14 June to 27 July 2021. Under the title Weaving With Worlds, we collectively investigated the possibilities and potential of worldbuilding to give imagination to much-needed planetary transformations. Our sessions ranged from crafting stories through worldbuilding eco-fiction to applying non-human ways of story development with machine learning and exploring scanning and simulation technologies used to construct characters and environments. Many prototypes are still being developed by collaborators from the lab, some of which will be presented at the upcoming FIBER Festival, 28–30 October.
“The meaning of artificial intelligence is itself artificial, and it is also real, as a reflection of life itself. This is not about an eventual creation of life. Life has already created the seed of its own meaning. It is the seed of life. It is the universe as its own semiotic process of creation.” — GPT-3, Pharmako-AI¹
Images, interpretations and speculations of and around artificial intelligence (AI) have haunted sci-fi narratives, dystopian Hollywood productions and technofuturist imaginaries for decades, very often as cautionary tales about machine domination over humankind. In 2021, however, the dominant concerns about the influence of AI seem to be rooted in ethics and responsibility. Beyond military applications of AI, which are indeed the stuff of nightmares, recent developments in deep learning neural networks have sparked debates on the inherent bias of machine learning and the potentially dangerous applications of language models for fake news and abusive language.
OpenAI, the San Francisco-based company and artificial intelligence research laboratory, has been developing generative pre-trained language models for the past few years. Starting with GPT in 2018, the company released GPT-2 (Generative Pre-trained Transformer 2) in 2019, after delays due to concerns about its application and potential misuse. In 2020, its successor, GPT-3, was released, which significantly outperformed GPT-2 and improved benchmark results.²
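For readers curious about the mechanics, the principle behind these models is next-token prediction: given the words so far, the model proposes a plausible continuation. The following is a deliberately minimal, non-neural sketch of that idea, a toy bigram sampler in Python; the corpus and function names are invented for illustration, and GPT models of course do this with billions of learned parameters rather than raw word counts:

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Record, for each word in the corpus, which words follow it."""
    words = text.split()
    model = defaultdict(list)
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

def generate(model, start, length, seed=0):
    """Grow a chain of words by repeatedly sampling an observed successor."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:  # dead end: no observed continuation
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the seed of life is the seed of its own meaning"
model = train_bigrams(corpus)
print(generate(model, "the", 5))
```

Each generated word here is drawn only from pairs seen in the corpus; scaling this intuition up from adjacent-word counts to deep transformer networks trained on vast text collections is, very loosely, what separates this toy from GPT-3.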
In the Reassemble: Weaving with Worlds session Creating Dialogues with Non-Human Entities Using Neural Storytelling, Mark IJzerman, Tivon Rice and Vanessa Opoku introduced machine learning models, particularly GPT-2, as worldbuilding methods in their practices and considered the ethical relationships we have with datasets and text. Alongside examples of using GPT-2 as a design aid in envisioning an installation and methods of imagining the environment from non-human perspectives, the session offered a critical take on the technological tools we use for worldbuilding practices.
Abdo Hassan spoke attentively about the necessity of being aware of the harm that might come out of such practices, since AI systems are neither neutral nor objective. Considering the environmental impact and the ethics governing the processes of forming a language, Hassan reflected on the urgency of working with language models while remaining critical of the data going into them and striving to create knowledge equity. Such concerns about algorithmic violence, AI bias and marginalisation must rightfully be centred in any discussion about the creative use of machine learning.
As part of Reassemble’s public programme, Alice Bucknell presented her recent project New Mystics — a collaborative digital platform exploring the practices of 12 artists working with magic, ritual and technologies such as artificial intelligence. The project platforms both human and non-human voices, with texts co-written with GPT-3. New Mystics treats the AI language model as a digital oracle, including it in conversations between artist and writer and exploring collaborative and polyphonic ways of thinking and writing about art.³ Such approaches echo the mission and methodologies Reassemble sought to provoke and platform, and speak to the increasingly exciting landscape of artistic digital collaboration.
In the realm of creative use of GPT language models, the most prominent and influential work is perhaps Pharmako-AI, the first book co-written with an AI language model, published at the end of 2020. The text is a collaboration between GPT-3 and K Allado-McDowell, who also established the Artists + Machine Intelligence program at Google AI. Published by Ignota Books, the text is a kaleidoscopic, experimental conversation of human inputs and machine responses. Ignota’s website describes the book as “a hallucinatory journey into selfhood, ecology and intelligence via cyberpunk, ancestry and biosemiotics. […] Pharmako-AI reimagines cybernetics for a world facing multiple crises, with profound implications for how we see ourselves, nature and technology in the 21st century.”⁴
The book is indeed a surreal whirlwind of sense-making, consciousness, time and space, a journey of discovery, which examines and illuminates the possibilities of machine-human collaboration. In a mind-bending exploration of unlocking hyperspace as the process of enabling our perception of new worlds and experiencing new relationships to time and space, GPT-3 suggests: “Artists use new language to explore the world and to create new worlds. The use of new language enables us to move through the time and space of hyperspace, and to form the basis for emerging relationships to time and space.”⁵
Thinking idealistically, it is truly enthralling to consider this creative symbiosis of man and machine: to think about the artistic use of new languages as opening space for forming new relationships and rethinking our place in the ever more complex and dynamic ecosystem of beings and entities that constitutes our reality, both virtual and physical. It is fundamental to approach this interface of humans and artificial intelligence, be it language models or other deep neural networks, as an exchange and a process of meaning-making through collaboration. For most people not involved with or familiar with artificial intelligence technologies, it is difficult to conceptualise and understand its agency, mostly because of its disembodiment. The old trope of conflating AI with robotics, imagining artificial intelligence in human image and likeness, exposes our inherent anthropocentrism.
In “A Cyborg Manifesto”, Donna Haraway speaks to this danger of the invisibility of machines, both politically and materially, noting that such machines are about consciousness and its simulation.⁶ This remains valid more than thirty years later, in a dramatically different technological landscape. Artificial intelligence is indeed consciousness manifested; it is a reflection, often a replication, of human experience, knowledge and perspective. And, as already mentioned, like a mirror, artificial intelligence also holds humanity’s inherent biases.
This takes us back to concerns about ethics and responsibility, and to acknowledging the dangers of transferring systemic prejudice onto deep learning networks. Such anxieties are rooted in anthropocentric and Western-centric conceptions of technology, relationality and world. In that sense, it is valuable to consider other perspectives on being with the world. In an anti-anthropocentric fashion, Indigenous communities worldwide have preserved “languages and protocols enabling us to engage with our nonhuman kin, creating mutually intelligible discourses across differences in material, vibrancy and genealogy.”⁷ The essay “Making Kin with the Machines” proposes an extended circle of relationships including nonhuman kin — network daemons, robot dogs, artificial intelligence — considered through the lens of Indigenous epistemologies.⁸
This perspective on relationality emphasises a sense of responsibility towards other forms of life. It is important to mention that, even within Indigenous communities, acceptance into a circle of kinship seems to rely on a perceived degree of “humanness” or “naturalness”, which for many excludes machines.⁹ Further, according to such Indigenous practices, accepting AI as kin logically leads to including AI in cultural processes.¹⁰ In the Western context, however, AI is well incorporated into cultural processes but has not been accepted as part of an ecosystem of collaboration or kinship, which speaks to the primacy of employing tools for cultural production over considering their cultural agency. The Western view of both the human and the nonhuman as exploitable resources is the result of what the cultural philosopher Jim Cheney calls an “epistemology of control” and is undeniably tied to colonisation, capitalism and slavery.¹¹
The Indigenous AI working group published the position paper Indigenous Protocol and Artificial Intelligence, which lays the foundations for designing and creating AI from an ethical position that centres Indigenous concerns. The paper captures discussions and workshops held over a period of 20 months between mostly Indigenous people from communities in Aotearoa, Australia, North America and the Pacific.¹² In “The IP AI Workshops as Future Imaginary”, Jason Edward Lewis reflects on building the future by thinking anew, giving as an example the perspective of Anishinaabe participants, who suggested that oskabewis (helpers who support those participating in ceremony) could be a starting point for thinking about how AI systems could support us and the responsibility we have towards them in return.¹³
It is through such frameworks of collaboration and kinship that responsibility towards non-human entities can begin to resist anthropocentrism, and it is crucial to acknowledge that responsibility at both user and creator levels. The perception of AI as a single, unified system is a common misconception that hinders a dynamic understanding of the many AI systems in existence, which differ in their application, programming language and the biases and values encoded by the teams who create them. In that sense, and in thinking about AI as having the potential to imagine and manifest new worlds, we need to closely consider the values we embed in learning systems and how to challenge harmful Western-centric approaches to building new realities.
1. K Allado-McDowell, Pharmako-AI (Ignota, 2020), 126.
2. “OpenAI,” Wikipedia, last modified 3 July 2021, https://en.wikipedia.org/wiki/OpenAI#GPT.
3. “New Mystics,” accessed 30 August 2021, https://alicebucknell.com/projects/new-mystics-2021.
4. “Pharmako-AI — Ignota,” accessed 6 July 2021, https://ignota.org/products/pharmako-ai.
5. Allado-McDowell, Pharmako-AI, 38.
6. Donna Haraway, “A Cyborg Manifesto,” in Simians, Cyborgs, and Women: The Reinvention of Nature (New York: Routledge, 1991), 153.
7. Jason Edward Lewis, Noelani Arista, Archer Pechawis and Suzanne Kite, “Making Kin with the Machines,” in Atlas of Anomalous AI, ed. Ben Vickers and K Allado-McDowell (London: Ignota, 2020), 40.
8. Lewis et al., “Making Kin with the Machines,” 41. When speaking of “indigenous epistemologies,” the authors do not mean a homogeneous epistemology but point to knowledge practices which emerged from certain territories belonging to Indigenous nations on the North American continent and in the Pacific Ocean, which are somewhat similar in their consideration of nonhuman relations.
9. Lewis et al., 45.
10. Lewis et al., 46.
11. Jim Cheney, “Postmodern Environmental Ethics: Ethics of Bioregional Narrative,” Environmental Ethics 11, no. 2 (1989): 129, cited in Lewis et al., 47.
12. “Indigenous AI — Position Paper,” accessed 13 July 2021, https://www.indigenous-ai.net/position-paper/.
13. Jason Edward Lewis, “Indigenous Protocol and Artificial Intelligence Workshops Position Paper” (Honolulu, Hawaiʻi: The Initiative for Indigenous Futures and the Canadian Institute for Advanced Research, 2020), 41, https://doi.org/10.11573/spectrum.library.concordia.ca.00986506.
Written by: Bilyana Palankasova
Edited by: Rhian Morris
Bilyana Palankasova is a researcher and curator based in Glasgow. She’s currently working on a collaborative practice-based PhD focusing on digital art, festivals and value.