De-coding the Cloud: In Conversation with Abdo Hassan

FIBER
9 min read · Sep 1, 2021


The unique opportunity for resistance presented by our algorithmic realities

The first part of Reassemble Lab took place (online) from 14 June to 27 July 2021. Under the title Weaving With Worlds, we collectively investigated the possibilities and potential of worldbuilding to give imagination to much-needed planetary transformations. Our sessions ranged from crafting stories through worldbuilding eco-fiction to applying non-human ways of story development with machine learning and exploring the scanning and simulation technologies used to construct characters and environments. Many prototypes are still being developed by collaborators from the lab, some of which will be presented at the upcoming FIBER Festival, October 28–30.

Abdelrahman Hassan is a data science practitioner, activist and poet. His multifaceted practice revolves around bridging critical theory with the critical practice of data: that is, how we can facilitate a disruptive and critical data imaginary that allows for an equitable and power-aware worldbuilding practice. This interview was conducted during FIBER’s Reassemble Lab Part 1: Weaving With Worlds, where Abdo led a session on ‘Algorithmic Inequality’, developing a notion of decoloniality in data. Along with the participants, he investigated questions such as: How do we move away from exploitative notions of data collection and worldbuilding? How do we become more aware of our positionality as creators? And how can we be more proactively critical of our algorithmic creations? Equipped with literature and a multi-lens perspective on sustainability and decoloniality, he ran a workshop inviting participants to deconstruct and reconstruct their notions of what worldbuilding algorithms can do.

We spoke with Abdo about the unique opportunity for resistance presented by our algorithmic realities, the line between harm and bias in relation to algorithmic violence, and the dissonance necessary for ecological and personal healing to take place.

Rhian Morris: In reference to your workshop ‘Algorithmic Inequality’, why is learning about algorithmic violence important with regard to worldbuilding, and what can an artist/maker do with this knowledge?

Screenshot from Abdo’s workshop, ‘Algorithmic Inequality’, presented during Weaving With Worlds lab.

Abdelrahman Hassan: Algorithmic violence is a tricky subject. Algorithms often work to mechanize violence, producing it in unseen ways. The violence that algorithmic systems unleash is often undiagnosed, hiding behind a facade of objectivity. Take, for example, an algorithmic system that uses a neural network to generate captions for a picture. Although seemingly innocent, we must consider the environmental toll that (re)training and running such an algorithm takes. We should also consider the human labor that often sits early in the pipeline, labeling the datasets. As creators, we should work to identify, expose and treat the bias carried forward from training. Many image and facial recognition algorithms have proven harmful post-deployment because they discriminate against women and people of color, and they often reproduce social stereotypes and harmful associations.
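
To make that environmental toll concrete, here is a back-of-the-envelope sketch in the spirit of Strubell et al.’s energy accounting for NLP models. Every constant below is an illustrative assumption, not a measurement of any real training run:

```python
# Back-of-the-envelope CO2 estimate for (re)training a model.
# All numbers are illustrative assumptions, not measurements.

GPU_POWER_KW = 0.3       # assumed average draw per GPU (300 W)
NUM_GPUS = 8             # assumed size of the training cluster
TRAINING_HOURS = 168     # assumed one week of training
PUE = 1.5                # assumed datacenter power usage effectiveness
KG_CO2_PER_KWH = 0.475   # assumed grid carbon intensity (rough global average)

energy_kwh = GPU_POWER_KW * NUM_GPUS * TRAINING_HOURS * PUE
co2_kg = energy_kwh * KG_CO2_PER_KWH

print(f"Estimated energy: {energy_kwh:.0f} kWh")
print(f"Estimated emissions: {co2_kg:.0f} kg CO2e")
```

Retraining such a system regularly multiplies these figures accordingly, which is why the ‘seemingly innocent’ captioning model is anything but free.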

When we world-build, we have to realize that technology does not operate in a vacuum. We also have to realize that although bias is inevitable, harm is avoidable. An artist should therefore seek to interrogate these harms in the placement of their work.

Furthermore, we need to always take the decolonial option when building new worlds. Algorithmic systems assume roles in social settings. Often, a system takes on the paternalistic role of the decision maker, the undisputed wizard or the submissive assistant. As creators, we need to envision newer, less Eurocentric roles for our systems. What if algorithmic systems could be storytellers, facilitators, challengers of power or simply empathetic listeners?

RM: The ways in which technological developments shape a multitude of different futures can be liberating for some and dehumanizing for others. They are forces of extraction, exclusion and division, while they simultaneously offer the possibility to give form and imagination to new, necessary realities. What possibilities are there for artistic practices to adapt their use, or even reassemble them, to accommodate representation and inclusion?

AH: That’s a great point. Algorithmic realities and futures are not all doom and gloom; I adopt a vantage point of radical optimism. At their barest, the technologies we adopt are tools. Given the right framing, awareness and critical literacy, they can be a driving force for positive change. The possibilities in this sense are numerous. Data-driven and algorithmic systems can help us identify and better diagnose already existing biases. I often reference Mimi Onuoha’s project surrounding Missing Datasets and Caroline Sinders’ project concerning feminist datasets. Both are excellent examples of using data to ask the question of what is missing. Silicon Valley has embedded a solutionist view of technology within us; in many cases, however, tech can be used to explore the problem-space rather than the solution-space. This requires a mindset change where we don’t take the output of these world-building technologies at face value.

Mimi Onuoha ‘Missing Datasets’ (image courtesy of artist)
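
In that spirit of asking what is missing, here is a minimal sketch of a representation audit: compare a dataset’s observed group shares against a reference population and flag the gaps. The groups, counts and 80% threshold are hypothetical, for illustration only:

```python
# Minimal sketch: surface under-represented groups in a dataset by
# comparing observed shares against a reference population.
# Groups, counts and the 0.8 threshold are hypothetical.

dataset_counts = {"group_a": 720, "group_b": 190, "group_c": 90}
reference_share = {"group_a": 0.50, "group_b": 0.30, "group_c": 0.20}

total = sum(dataset_counts.values())
for group, count in dataset_counts.items():
    observed = count / total
    expected = reference_share[group]
    ratio = observed / expected
    flag = "UNDER-REPRESENTED" if ratio < 0.8 else "ok"
    print(f"{group}: observed {observed:.0%}, expected {expected:.0%} -> {flag}")
```

The point is less the arithmetic than the habit: treating the gap between a dataset and the world it claims to describe as a question worth asking.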

Beyond diagnostics, I believe algorithmic systems can serve a more productive function. We can use algorithms to amplify unheard voices, make alternative mappings and open avenues for cross-cultural solidarity that were not possible before. In fact, given that the challenges we now face are collective, this large-scale, algorithmically prompted collaboration can be seen as an essential salvation. Take the example of the ecological emergency: without transnational collaboration, our efforts are futile, and without a hyper-literacy about the impacts of the systems we use, they are equally futile. Algorithms can help us evaluate the solutions currently in place, incentivize collaborations and help us build new ecological imaginaries. With tools such as the Website Carbon Calculator, we not only expose the harm done by our creative processes, but invoke a community-in-the-loop approach to alleviating harm and rebuilding inclusive and sustainable processes.
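
To get a feel for the kind of estimate such a tool makes, here is a rough sketch based on the per-gigabyte figures of the Sustainable Web Design model (which, to our understanding, underpins the Website Carbon Calculator). Both constants are assumptions that change as the model is revised:

```python
# Rough per-page-view CO2 estimate in the spirit of the Website Carbon
# Calculator. Constants follow published Sustainable Web Design figures;
# treat them as assumptions, not authoritative values.

KWH_PER_GB = 0.81      # assumed energy per GB transferred
G_CO2_PER_KWH = 442    # assumed global grid carbon intensity

def grams_co2_per_view(page_weight_mb: float) -> float:
    """Estimate grams of CO2e emitted by one page view."""
    gb = page_weight_mb / 1024
    return gb * KWH_PER_GB * G_CO2_PER_KWH

print(f"{grams_co2_per_view(2.0):.2f} g CO2e per view of a 2 MB page")
```

Multiplied across thousands of visitors, even a fraction of a gram per view adds up, which is what makes the page weight of a creative project a design decision rather than an afterthought.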

RM: You were also present in Session #3, where we introduced the lab participants to GPT-2 and voiced concerns about the artistic use of this technology. What does GPT reveal about our anthropocentric ways of being in the world, most notably the underlying unproductivity of our cultural, social and political biases?

AH: GPT, in all its variations, is a magnificent tool. I’m still in awe of the possibilities it can produce, and of the engineering feat it entails. The problem often lies in our narrative around it. GPT, or any other transformer system, will be neither our saviour nor our captor. On the one hand, there is a discourse that this powerful tool will somehow broaden our horizons with its generative powers, creating new artworks and revolutionizing human-machine interaction. On the other hand, there is a discourse rooted in anxiety that transformer systems will replace human functions, taking our jobs and disrupting the natural auras of authenticity. I find both views extreme. They both miss the fact that GPT, much like other generative models, is in essence a parrot: it learns and reproduces patterns of whatever data it is given. Such systems are also largely unsustainable ecologically. That is why I find such narratives around GPT unproductive. Again, I find the discourse favoring solutions before fully understanding the problems.
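
The ‘parrot’ point is easy to see in practice. Here is a minimal sketch using the Hugging Face transformers library, one common way to run GPT-2; the prompt and generation settings are illustrative, not taken from the lab session:

```python
# Minimal sketch: GPT-2 continues whatever statistical patterns it
# absorbed from its training data -- a parrot, not an oracle.
# Requires: pip install transformers torch

from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # make the sampled continuations reproducible

prompt = "The cloud is"  # illustrative prompt
for out in generator(prompt, max_length=40, num_return_sequences=2):
    print(out["generated_text"])
    print("---")
```

Whatever associations dominate the training corpus will dominate the continuations, which is exactly where the stereotypes discussed above re-enter the picture.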

The energy needed to train a GPT-2 model is equivalent to the lifetime emissions of five cars, or to driving to the moon and back. Slide from Abdo’s presentation during Session #3 of Weaving With Worlds.

RM: If language and AI are self referential systems, how can we hack these systems to contribute to a deeper understanding of our shared environment?

AH: We can only hack these systems if we understand the vectors of power they operate along. Orwellian thought has taught us that language and its governance are a function of power. Before we deploy AI systems, we have to first assess where the power is, what language is used to maintain it, whom it harms and whom it favors, and what needs to shift in order to achieve an equitable future. Our collective understanding of what a ‘cloud’ is, for example, is shrouded in mysticism. What we must do is de-code the cloud, linguistically and in practice, to unveil the environmental harm that such a structure entails.

Screenshot from Abdo’s workshop, ‘Algorithmic Inequality’, presented during Weaving With Worlds lab.

RM: Where does the line between harm and bias lie in terms of algorithmic violence?

AH: Bias is inherent to the way we think and the way digital, data-driven systems work. A machine needs to mechanize some sort of bias in order to work. Neural networks and AI try precisely to make generalizations out of the data points they are fed; this is exactly what makes them powerful. It is our role as critical thinkers to assess these generalizations. Are they exclusive? Can they cause harm? Can they dispossess a population or further marginalize someone based on race, class, ability or gender identity? Often, harm is caused by an emergent bias that is not visible while the system is being developed. An example is the cropping bias in Twitter’s image algorithm, which, with usage, learned to favor images of lighter-skinned males. Here, we must be open to acknowledging that even with the purest intentions, systems can grow to generate harm. A possible approach is to be responsive to this emergent harm, to listen to those harmed, and to make adjustments. The conversation and the pipeline for harm alleviation should be inclusive and participatory. It is paramount that we involve the end user in the design, deployment and governance of algorithmic systems.
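
One simple, widely used post-deployment check, sketched below with hypothetical numbers, is to compare outcome rates across groups and apply the ‘four-fifths’ disparate impact rule. This illustrates the general technique only; it is not Twitter’s actual cropping evaluation:

```python
# Minimal post-deployment audit sketch: compare how often a system
# favors items from each group. Groups, counts and the 80% threshold
# (the "four-fifths rule") are illustrative assumptions; this is not
# Twitter's actual evaluation method.

outcomes = {
    "lighter_skinned": {"selected": 84, "total": 100},
    "darker_skinned":  {"selected": 61, "total": 100},
}

rates = {g: v["selected"] / v["total"] for g, v in outcomes.items()}
reference = max(rates.values())

for group, rate in rates.items():
    ratio = rate / reference
    verdict = "potential disparate impact" if ratio < 0.8 else "within threshold"
    print(f"{group}: rate {rate:.0%}, ratio {ratio:.2f} -> {verdict}")
```

Checks like this only surface emergent harm; the responsive, participatory adjustment Abdo describes is what actually addresses it.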

Slide from Abdo’s presentation during the Algorithmic Violence workshop

RM: What can we learn or perceive differently when we create or interact with fictional and virtual worlds?

AH: We can learn that science fiction often becomes science fact. The technologies we use today, and the manner in which we use them, once resided in the imagination of a day-dreaming pioneer. It is the marriage of imagination and power that often creates reality. Hence, when we build a collective imagination, we have to be careful not to create one that reproduces the misfortunes of our time. I often find that speculative projects can be exclusionary in nature: using inaccessible language, marginalizing the indigenous perspective, or being actively dismissive of dissonant voices. In the process of building a new home, no one should be left out.


RM: And finally, we are interested to hear what space uncertainty and doubt hold in your process, and whether you think they can be a tool for ecological awareness.

AH: Uncertainty and doubt are central to my process. I personally do not know all the answers; there is no recipe for algorithmic emancipation. In fact, I’m constantly learning and unlearning what I know about algorithms. During an algorithmic mapping project I conducted with Future Based, I learned that critical engagement means embracing uncertainty and allowing new perspectives to emerge. Ecological and personal healing both require dissonance, and require allowing discomfort to take place. Meaningful conversations around power tend to be uncomfortable, because we have to assess our own positionality and our contribution. To a certain extent, we are all complicit in maintaining an unsustainable status quo. Our algorithmic realities, however, provide a unique opportunity for resistance. We are all as much producers of data as we are consumers of it. While cultivating critical data literacy, we can also cultivate critical data practice. By merging literacy and practice, modes of doing such as those this lab enables can form a collective ecological ensemble that turns our playful inventions into critical interventions.

Atlas of Algorithmic (in)equality, image courtesy of Abdo Hassan

Interview conducted and edited by: Rhian Morris
