
Arshin Adib-Moghaddam is Professor in Global Thought and Comparative Philosophies at SOAS University of London, where he received his Chair as one of the youngest academics in his field. A distinguished scholar and world-renowned public intellectual, Adib-Moghaddam has held several honorary positions, including as a Senior Member of Hughes Hall, University of Cambridge, where he completed his MPhil and PhD, and as the first Jarvis Doctorow Junior Research Fellow at St Edmund Hall, University of Oxford. In addition, Adib-Moghaddam is the inaugural Co-Director of the SOAS Centre for AI Futures. His newest book, “The Myth of Good AI: A Manifesto for Critical Artificial Intelligence”, has just been published, launching his new book series “AI Futures” with Manchester University Press. The book challenges dominant narratives about artificial intelligence from a global thought perspective and calls for justice-oriented, globally inclusive approaches to technology.
Where do you see the most exciting research/debates happening in your field?
As ever, the main struggle is fought between the guardians of privileged knowledge and the many who demand better science, one that speaks to greater social and political accountability, emancipation, and justice. In academia and beyond, such approaches are prevalent in the critical or decolonial movement, broadly conceived. The latter is composed of intellectuals, activists, and laypeople who promote knowledge that is inclusive, as non-discriminatory as possible, and certainly truer than the canon. They are rewriting and restacking the human archives as we speak. So, this struggle has real-world implications. It is not only fought in the universities. President Trump, among many right-wing politicians, campaigned on pushing back against critical or progressive knowledge. As we conduct this interview, colleagues in the United States are being purged from their positions, as departments in critical theory, gender studies, critical race studies, etc., are closed. Essentially, culture itself is being dismantled in Trump’s United States, and the reverberations can be felt everywhere.
The main contention, then, is to continue the battle for better science, as it makes for better education and more just societies. Decolonial theory and practice have been recognized as a threat precisely because they equip anyone with the concepts and understanding to be truly emancipated citizens, which is so crucial for a functioning and inclusive society. So, it is this clash between truth through critical thinking and power exercised through force that I would identify as the most exciting struggle, with all its visible political and ideological manifestations.
How has the way you understand the world changed over time, and what (or who) prompted the most significant shifts in your thinking?
I have always been very eclectic in my reading. I would move from Ibn-Sina (Avicenna), who wrote about freedom and happiness in 11th-century Persia, to Michel Foucault, who was a pivotal figure in the counter-cultural movement of the 1970s. My German background exposed me to the Frankfurt School and doyens of critical thinking such as Herbert Marcuse, Theodor Adorno, or Max Horkheimer. From that intellectual geography, I repeatedly ventured into ideas prominent in West Asia and beyond: for instance, Iranian intellectuals such as Jalal al-e Ahmad or Ali Shariati, or political philosophers such as Mohammad Arkoun. From there, the Global South was very near: the hybrid and incredibly empowering approaches put forward by giants such as Anibal Quijano, Edward Said, or Hamid Dabashi continue to feed into my thinking, as they had this incredible talent to see through some of the deception surrounding us. I believe this never-ending mental training regimen, largely non-disciplinary, is the reason why I chose my profession and my current title at SOAS, which never really existed before.
First and foremost, this ongoing mental training exercise, a daily routine of reading and reflection in libraries since early adulthood, created a suspicion towards any man-made system and a penchant for making such systems more responsive to humans. I believe if there is one common thread to my research, it is this criticism of totalising power systems. My mind can’t function in totalities. It automatically breaks things down to their particles to try to show how they are assembled, mostly in order to govern/manipulate/suppress us. I don’t think any of us can be satisfied with the political systems we live in. They are all cruel, unjust, and even sadistic in one way or another. So, I write about some of that.
You refer to “posthuman warfare” as a concept. How close are we to that reality, and what are its ethical implications?
Post-humanity is a scary eventuality. It refers to a future, postulated to be a couple of decades away, in which humans are subordinated to machines. This is already happening, certainly symbolically, when we must prove to our computers that we are human by solving puzzles or giving away personal information. These are the first visible inroads into our sovereignty as humans. Isn’t it ironic that we are made to prove to machines that we are human? They are already making us visible in a way that we can’t make them. So, we have entered a trans-human state. We are not entirely in charge of our fate anymore, as we are made to outsource our human faculties into machines. Thus, the invention of Artificial Intelligence, for the first time in human history, does threaten to do away with our individuality, with our agency, our ability to shelter our privacy, etc. This technology is so incredibly intrusive and opaque that it is already everywhere. To resist further movement towards a post-human condition, we need to educate ourselves about the repercussions of this AI technology and render it useful for our human security and community-led civil society initiatives. Emancipative AI can be a great source of hope for human security if it is employed from the bottom up as a means of turning the tables on the tech-giants, much in the same way as Facebook and Twitter were conducive to major mass movements such as the Arab Spring in 2011. Techno-resistance is entirely possible, then.
You argue that Enlightenment values laid a biased foundation for modern AI. How do these philosophical roots contribute to algorithmic discrimination today?
The Enlightenment created a science-frenzy that was incredibly potent for European societies and indeed the world. Of course, amazingly useful inventions emanating from a mind like Marie Curie’s had nothing to do with some of the insidious ideas that were cultivated in the laboratories of the social sciences that I refer to in the book. Beyond those inventions for the betterment of the human condition, the Enlightenment period created a very particular privilege for heterosexual, socially elevated white men: it gave them a (pseudo)scientific justification to rule. There emerged a pseudo-science that rendered their claim to power natural, inevitable, ordained by god and nature. This is the essence of racism as science. This is very different from prejudice or bias. The Enlightenment established racism as a science, i.e., an institutionalized hierarchization of humanity from the civilized white men at the top to the savages positioned below. It was this racism as a system of social stratification and institutionalized power that made it possible for European empires to govern their subject peoples in the cruel and sadistic manner they did. The point in my work on AI is that residues of these types of racism can be discerned in the AI systems that we use daily. This bad data skews mortgage decisions, job applications, etc. Therefore, this ongoing discrimination is wrecking lives on a daily basis.
Why do you believe Western notions of ethics are insufficient for governing AI, and what should a globally rooted ethical AI framework look like? You say, “Where there is power, there is always resistance.” What does meaningful resistance look like in the age of AI?
So-called “Western” notions of ethics are products of global struggles themselves. I make this argument via a close reading of ethics in chapter 1 of the book. Just because so-called “western” philosophy claimed universal ethics for itself by cleansing everyone else from the archives during the Enlightenment does not mean that there is such a thing as a distinctly “western” notion. The brilliant social anthropologist at Cambridge, Jack Goody, rightly called that systematic destruction of global thought a “theft of history.” So, the modern universities and their disciplines were created within an incredibly racist and misogynistic context. But that does not mean, of course, that ethics does not carry a global heritage. So, the point that I make is that we have to approach the issue from a global-thought perspective by acknowledging synergies and commonalities that refrain from creating new hierarchies of knowledge. The East is in the West, and the West is in the East. Global thought reveals a globality of ethics that would inform truly “good” AI systems that function for our human security, exactly because that global approach is rather more inclusive, capturing the blind spots that centred knowledge systems have hidden. So, a globally rooted AI ethic starts with the acknowledgment that all knowledge is at the same time global and local. In this emphasis on synergies, in this global thought exercise, an ethical AI framework that emerges from the grassroots can be transmuted into something incredibly emancipating. This is why we need AI community centres in every town and village. The digital world makes such connections from the local to the global entirely possible, and in many ways, this is already happening. The book touches upon some successful examples. It also shows how we can survive the AI age with daily routines and self-improvement exercises.
How can decolonial philosophy help us “debug” machine ethics, as you suggest in the “Debugging Machine Ethics” chapter?
As indicated, decolonial philosophy is a necessary precursor to Global Thought. The old structures of untruth need to be disassembled in order to erect better scientific structures on top of them. In this spirit, the “Debugging Machine Ethics” chapter seeks to unsettle the dominant narratives that underpin contemporary advancements in machine ethics, situating them within a philosophical framework attuned to the plurality of global epistemologies. The chapter introduces a robust case for the emergence of ‘critical AI studies’ — a field that resists the hegemony of European Enlightenment paradigms by tracing the genealogy of emancipatory ideas beyond Eurocentric confines. In doing so, “critical AI studies” illuminates how motifs traditionally associated with so-called Western Renaissance and Enlightenment thought find their antecedents within the rich, transregional intellectual currents of other ideational systems, for instance, the al-Hikma intellectual tradition. Rooted in the Persian-Muslim milieu, this tradition itself is a confluence of Arab, Indian, Hellenic, ancient Roman, North African, and Zoroastrian philosophies — an intricate rhizome of global thought that challenges linear, singular histories. Similar analogies can be added from anywhere else in the world, and I touch upon examples from South America, Africa, and Asia, too. In summary, the chapter draws upon recent developments in the social sciences and embraces a critical, decentred approach to AI ethics. As such, it calls for what can be referred to as epistemic humility that recognizes and incorporates the multiplicity of knowledge traditions shaping our understanding of artificial intelligence.
You highlight that AI systems replicate racism, ageism, ableism, and misogyny. How should tech companies address the systemic roots of these issues? How can ordinary citizens reclaim agency in a world increasingly coded by invisible algorithms?
First and foremost, we have to make them do so. So, the process of change needs to be driven from the bottom up, as indicated, from communities to the tech-giants and the government. As discussed throughout the book, this is already happening. For instance, international human rights lawyers are looking into privacy legislation to safeguard our personal data. Other forms of techno-resistance include so-called data-pollution tactics and hacktivism. The tech-giants have a monopoly over AI technology, even more so than governments. Breaking up this monopoly requires a global, community-based effort spearheaded by the silent majority.
What is the most important advice you could give to young scholars of International Relations?
Identify what is beautiful in your life and what makes other people happy. Use the negativity, the envy, the struggles, to fuel your passion for knowledge. Start your journey into scholarship from that commitment to yourself and others, and never forget it. If you manage to stay centred in that mental space, everything else will follow naturally.