Hybridity and Humility: What of the Human in Posthuman Security?

This is an excerpt from Reflections on the Posthuman in International Relations, an E-IR Edited Collection. Available now on Amazon (UK, USA, Canada, Germany, France), in all good book stores, and via a free PDF download.


Thinking about ‘posthuman security’ is no easy task. To begin with, it requires a clear notion of what we mean by ‘posthuman’. There are various projects underway to understand what this term can or should signal, and what it ought to comprise. To bring a broadened understanding of ‘security’ into the mix complicates matters further. In this essay, I argue that a focus on the relation of the human to new technologies of war and security provides one way in which IR can fruitfully engage with contemporary ideas of posthumanism.

For Audra Mitchell (2014) and others, ‘posthuman security’ serves as a broad umbrella term, under which various non-anthropocentric approaches to thinking about security can be gathered. Rather than viewing security as a purely human good or enterprise, ‘posthuman’ thinking instead stresses the cornucopia of non-human and technological entities that shape our political ecology and, in turn, condition our notions of security and ethics. For Mitchell (2014), this process comprises machines, ecosystems, networks, non-human animals, and ‘complex assemblages thereof’. This sounds clear enough, but it is here that things begin to get complicated.

First, what exactly is ‘post’ about the posthuman? Often lumped together under the category of ‘posthumanism’ are ideas of transhumanism, anti-humanism, post-anthropocentrism, and speculative posthumanism (see for instance David Roden 2015).[1] Each variant has different implications for how we think ‘security’ and ‘ethics’ after, or indeed beyond, the human. Furthermore, one must ask whether it is even possible to use concepts of security, ethics, and politics after, or beyond, the human. These concepts are not only deeply entwined with social constructs; they are also fundamentally human constructs. To think ‘after-the-human’ may well then render ‘security’ or ‘ethics’ entirely obsolete as concepts, or at the very least deprive them of a clear referent. And as if this were not enough to wrap one’s head around, we may further need to clarify whether it is post-humanity or post-humanness we aim to understand when we strive to think beyond the human. It appears, then, that the posthuman turn in security studies risks raising more questions than it helps to answer at this stage. A clarification of how these terms are used in the literature is thus necessary.

To date, the most clearly defined strands of posthumanist discourse are ‘critical posthumanism’ and ‘transhumanism’, as elaborated in the work of Donna Haraway, Neil Badmington, Ray Kurzweil, and Nick Bostrom, among others. Both discourses, although very different in their approach and focus, posit a distinctly modern transformation through which human life has become more deeply enmeshed in science and technology than ever before. In this biologically informed techno-scientific context, human and machine have become isomorphic. The two are fused in both functional and philosophical terms, with technologies shaping human subjectivity as much as human subjectivities shape technology. The question of technology has thus, as Arthur Kroker (2014) puts it, become a question of the human. The question of the human, however, looks decidedly different when viewed through modern techno-scientific logics of functionality and performance. From this perspective, which promotes homogeneity, reproduction, replacement and prophylaxis as a means for the ‘technological purification of bodies,’ the human appears in ever more degraded terms, as a failure to live up to the promises of technology itself (Baudrillard 1993, 68). The greater the technological augmentation and alterations of life, and the more we invest in technological prostheses and substitutes for it, the greater becomes the necessity for humans to submit to the superiority of the artificial proxy, which carries within it a technologically informed ordering principle that remakes society in its own image. Thus, as contemporary life becomes ever more digitally mediated and technologically enhanced, the human appears more and more as a weak link in the human-machine chain – inadequate at best, ‘an infantile malady of a technological apparatus’ at worst (Baudrillard 2016, 20).

The interplay between man and machine has, of course, a long history that can easily be conceived of in posthuman terms. This history suggests, in contrast to the rash of hysterical pronouncements about the novelty of our times, that we have always already been technologically enhanced and conditioned. I would concur with this point, but see it as doing little to undermine the importance of thinking the human-machine question anew, especially in light of the rapid proliferation and deployment of new technologies related to the waging of wars and the ordering and securing of populations and bodies. In such a context, it is necessary to identify and indeed challenge our submission to technological authority in social and political domains. If we take seriously that new technologies (as artefacts and practices) constitute ‘hegemonic political values and beliefs’ (Ansorge 2016, 14), then we ought first to question the rationales that are given for such technologies. In particular, we must puncture the pervasive ideology of progress, which presents these new technologies as simply the motor behind the movement toward ever-greater levels of autonomy and artificial intelligence. The rapid movement towards greater autonomy in military affairs entails another process: the reconfiguration of new machine-humans for a transformed ethos in the administration of war. It is therefore crucial, at least from a critical perspective, to get a handle on the kind of machine-human subjectivities our new ways of war and security are producing.

In a fervent drive for progress, scientists and roboticists work feverishly to replace what we have hitherto known and understood as human life with bigger, better, bolder robot versions of what life ought to be – fully acknowledging, if not embracing, the possibility of rendering humans increasingly obsolete. Machines are designed to outpace human capabilities, while old-fashioned human organisms cannot progress at an equal rate and will, eventually, ‘clearly face extinction’ (Singer 2009, 415). Fears about this trajectory are being voiced by elites and experts of all stripes. Technology tycoon Elon Musk, for example, has recently issued a dire warning about the dangers of rapidly advancing Artificial Intelligence (AI) and the prospect of killer robots capable of ‘deleting humans like spam’ (in Anderson 2014). Musk is not alone in his cautious assessment. Nick Bostrom (2015), in a recent UN briefing, echoes such sentiments when he warns that AI may well pose the greatest existential risk to humanity today, if current developments are any indication of what is likely to come in the future. A group of 3,037 AI/robotics researchers has signed an open letter calling for a ban on autonomous weapons; the letter was signed by a further 17,376 endorsers, among them Stephen Hawking, Elon Musk, Steve Wozniak and Daniel C. Dennett. Other science and technology icons, like Bill Gates, have joined the chorus too, seeing new combinations of AI and advanced robotics as a grave source of insecurity going forward.

Statements like these betray not only a certain fatalism on the part of humans who have, in fact, invented, designed, and realised said autonomous machines; they also pose the question of whether the advancement of technology can indeed still be considered a human activity, or whether technology itself has moved into a sphere beyond human control and comprehension. Humans, as conceived by transhumanist discourses, are involved in a conscious process of perpetually overcoming themselves through technology. For transhumanists, the human is ‘a work-in-progress,’ perpetually striving toward perfection in a process of techno-scientifically facilitated evolution that promises to leave behind the ‘half-baked beginning[s]’ of contemporary humanity (Bostrom 2005, 4). Transhumanism, however, is – as David Roden (2015, 13-14) points out – underwritten by a drive to improve human life. It is, he notes, a fundamentally normative position, whereby the freedom to self-design through technology is affirmed as an extension of human freedom. Transhumanism ‘is thus an ethical claim to the effect that technological enhancement of human capacities is a desirable aim’ (Roden 2015, 9). However, the pursuit of transhumanism through AI, the ‘NBIC’ suite of technologies (which comprises nanotechnology, biotechnology, information technology and the cognitive sciences), or networked computing technologies more generally does not guarantee a privileged place for humans in the historical future. Rather, the ongoing metamorphosis of human and machine threatens ‘an explosion of artificial intelligence that would leave humans cognitively redundant’ (Roden 2015, 21). In such a scenario, the normative position of transhumanism necessarily collapses into a speculative view on the posthuman, wherein both the shape of the historical future and the place of the human within it become open questions. Indeed, in the future world there may be no place for the human at all.

This perhaps unintended move toward a speculative technological future harbours a paradox. On the one hand, the conception of science and technology as improving or outmoding the human is an inherently human construct and project – it is neither determined nor initiated by a non-human entity which demands or elicits submission based on its philosophical autonomy; rather, it is through human thought and imagination that this context emerges in the first place. The human is thus always already somehow immanent in the technological post-human. Yet at the same time, it is the overcoming, at the risk of outmoding, of human cognition and functionality that forms the basic wager of speculative posthumanism.[2] Thus, while the posthuman future will be a product of human enterprise, it will also be a future in which the unaugmented human appears more and more as flawed, error-prone, and fallible. Contemporary techno-enthusiasm therefore carries within it the seeds of our anxiety, shame, and potential obsolescence as ‘mere humans’.

This new hierarchical positioning of the human vis-à-vis technology represents a shift in both. Put simply, the ‘creator’ of machines accepts a position of inferiority in relation to his or her creations (be these robots, cyborgs, bionic limbs, health apps, or GPS systems, to give just a few examples). This surrender relies on an assumed techno-authority of produced ‘life’ on the one hand, and an acceptance of inferiority – as an excess of the human’s desire to ‘surpass man’, to become machine – on the other. The inherently fallible and flawed human can never fully meet the standards of functionality and perfection that are the mandate for the machines they create. And it is precisely within this hybridity of being deity (producer) and mortal (un-produced human) that an unresolved tension resides. Heidegger’s student and Hannah Arendt’s first husband, Günther Anders, gave much thought to this. His work extensively grapples with the condition that characterises the switch from creator to creatum, and he diagnoses this distinctly modern condition as one of ‘Promethean Shame.’ It is the very technologisation of our being that gives rise to this shame, which implies a shamefulness about not-being-machine, encapsulating both awe at the superior qualities of machine existence, and admiration for the flawless perfection with which machines promise to perform specific roles or tasks. In this condition, human worth and moral standards are measured against the parameters of rational and flawlessly functioning machines, producing a normed environment into which the human cannot fully fit. To overcome this shame, Anders argues, humans began to enhance their biological capacities, striving to make themselves more and more like machines.

The concept of shame is significant not simply as ‘overt shame’ – which is akin to a ‘feeling experienced by a child when it is in some way humiliated by another person’ (Giddens 2003, 65) – but also as an instantiation of being exposed as insufficient, flawed, or erroneous. This latter form of shame is concerned with ‘the body in relation to the mechanisms of self-identity’ (Giddens 2003, 67), and is intrinsically bound up with the modern human-technology complex. To compensate for, adapt to, and fit into a technologised environment, humans seek to become machines through technological enhancement, not merely to better themselves, but also to meet the quasi-moral mandate of becoming a rational and progressive product: ever-better, ever-faster, ever-smarter, superseding the limited corporeality of the human, and eventually the human self. This mandate clearly adheres to a capitalist logic, shaping subjectivities in line with a drive toward expansion and productivity. It is, however, a fundamentally technological drive insofar as functionality per se, rather than expansion or productivity, is the measure of all. Nowhere is this more starkly exemplified than in current relations between human soldiers and unmanned military technology.

Consider, for example, military roboticist Ronald Arkin’s conviction that the human is the weakest link in the kill chain. Such a logos – which is derived from the efficient and functional character of technology as such – suggests that the messy problems of war and conflict can be worked away through the abstract reasoning of machines. Arkin (2015), one of the most vocal advocates of producing ‘ethical’ lethal robots by introducing an ‘ethical governor’ into the technology, inadvertently encapsulates both aspects of techno-authority perfectly when he asks: ‘Is it not our responsibility as scientists to look for effective ways to reduce man’s inhumanity to man through technology?’ For Arkin, the lethal robot is able to make a more ethical decision than the human, simply by being programmed to use a pathway for decision-making based on abstracted laws of war and armed conflict. The human, in her flawed physiological and mental capacity, is thus to be governed by the (at least potential) perfection of a machine authority.
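To make concrete what is at stake in coding ethics in this way, consider a deliberately minimal sketch of what an ‘ethical governor’ amounts to in computational terms (in Python; the names, rules, and thresholds below are hypothetical illustrations of the general idea, not Arkin’s actual architecture): the ‘ethics’ of the system reduces to a set of constraints, fixed in advance, that are checked before an engagement is authorised.

```python
# A purely illustrative sketch of an 'ethical governor' as a rule filter.
# All names, rules, and thresholds are hypothetical; this is not Arkin's
# actual system, only a minimal rendering of the general idea.
from dataclasses import dataclass


@dataclass
class Engagement:
    target_is_combatant: bool      # coded proxy for 'distinction'
    near_protected_object: bool    # e.g. a hospital or school
    expected_collateral_harm: int  # coded proxy for 'proportionality'


MAX_COLLATERAL_HARM = 0  # a threshold fixed in advance, in code


def ethical_governor(e: Engagement) -> bool:
    """Authorise an engagement only if every coded constraint is met."""
    if not e.target_is_combatant:
        return False
    if e.near_protected_object:
        return False
    if e.expected_collateral_harm > MAX_COLLATERAL_HARM:
        return False
    return True  # 'ethics' exhausted by constraint satisfaction


print(ethical_governor(Engagement(True, False, 0)))  # True
```

What the sketch makes visible is that such a system can only ever ask whether a given engagement satisfies its coded constraints; the prior question of whether engaging at all is ethical has been settled before the programme runs.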

This is by no means a mere brainchild of outsider techno-enthusiasm – quite the contrary: the US Department of Defense (DoD) is an active solicitor of increasingly intelligent machines that, one day soon, will be able to ‘select and engage targets without further intervention by a human operator,’ and will possess the reasoning capacity needed to ‘assess situations and make recommendations or decisions,’ including, most likely, kill decisions (Zacharias 2015). Consider, for example, the DoD Autonomy Roadmap, presented in March 2015, which sets out the agenda for greater levels of machine intelligence and learning (MPRI), as well as rational goals for human-machine interactions and collaboration (HASIC), and a concept of ‘Calibrated Trust’ intended to create for the human an understanding of ‘what the [machine] agent is doing and why’ (Bornstein 2015). With the drive for greater technology autonomy comes the apparent desire for greater technology authority. ‘Human-autonomy teaming’ is a partnership in which the human, at least ostensibly, still decides when and how to invoke technology’s autonomy (Endsley 2015). Whether this is possible or even realistic in contexts such as those where humans lack the sensory capabilities or computing powers required for the task they are undertaking is very much a question that needs asking, particularly as the spectre of intelligent machine autonomy, equipped with superior sensors, agility, and reasoning capacities, looms on a not-too-distant horizon. The hierarchies of authority over the most morally challenging of decisions, such as killing in war, are likely to shift toward a pure techno-logos.

Leaving the heated debate about Lethal Autonomous Weapons Systems (LAWS) – or Killer Robots – aside for a moment, this logic is testament to the Promethean Shame identified by Anders half a century earlier. In his writings, Anders astutely realised the ethical implications of such a shift in hierarchical standing. As Christopher Müller notes in his discussion of Anders’ work, the contemporary world is one in which machines are ‘taking care’ of both functional problems and fundamentally existential questions; ‘[i]t is hence the motive connotations of taking care to relieve of worry, responsibility and moral effort that are of significance here’ (Müller 2015). It is in such a shift toward a techno-authority that ethical responsibility is removed from the human realm and conceived of instead in techno-scientific terms. Ethics as a technical matter ‘mimes scientific analysis; both are based on sound facts and hypothesis testing; both are technical practices’ (Haraway 1997, 109). Arkin’s argument for the inclusion of an ethics component in military robots is paradigmatic. In his understanding of ethics, Arkin (2015) frames the logical coding of robotic machines as ethically superior to the human – indeed, he calls the module an ‘ethical governor.’ The rationale underpinning this position takes for granted a number of things. One is that the human can, and indeed must, be measured against the technology to assess her functional performance. Another is that the characteristics of those who pose a risk to security can clearly be ascertained and acted on within this techno-logos.

And finally, it is assumed that this reasoning is rational and consistent, and therefore moral. Together these assumptions turn ethics into a task of identifying and eliminating persons of risk as humanely as possible, and with as few human lives on the line as technology permits. The underlying question, then, shifts from whether it is ethical to kill to whether technological systems would do the killing better than humans. If it has been determined by algorithmic calculation, for example, that all military-aged males in a certain geographic region, displaying certain suspicious patterns of life behaviour, pose a potential security risk, then the ethical task at hand is to kill better and more humanely. The ‘ethical’ dimension of a kill decision is thereby engineered into a technological system, so that the actual moment of a real ethical decision is always already pre-empted and thereby eliminated. Where ethics is abstracted and coded, it leaves us with little scope to challenge the ethicality of the context within which the ethical programme unfolds. And where ethics is coded, it curbs the ethical responsibility of the individual subject. I address this problem of a scientifically informed rationale of ethics as a matter of technology elsewhere (see for instance Schwarz 2015). What I would like to stress here, though, are the possible futures associated with this trajectory.

The speculative nature of posthumanism requires that we creatively imagine how traditional concepts of humanity, such as ethics or security, might be comprehensively affected and altered by technology. In other words, the rise (and fall) of homo technologicus requires that we address the question concerning technology imaginatively. A challenge in modern thinking about technology was, and still is, the apparent gap between the technologies we produce and our imagination regarding the uses to which this technology is put. Here, I return to Günther Anders. For Anders (1972), this is a gap between product and mind, between the production (Herstellung) of technology and our imagination (Vorstellung) regarding the consequences of its use. Letting this gap go unaddressed produces space for a technological authority to emerge, wherein ethical questions are cast in increasingly technical terms. This has potentially devastating implications for ethics as such. As Anders notes, the discrepancy between Herstellung and Vorstellung signifies that we no longer know what we do. This, in turn, takes us to the very limits of our responsibility, for ‘to “assume responsibility” is nothing other than to admit to one’s deed, the effects of which one had conceived (vorgestellt) in advance’ (Anders 1972, 73-74). And what becomes of ethics when we can no longer claim any responsibility?

Notes

[1] Roden engages with the various discourses on ‘posthumanism’ today, going to great lengths to highlight the differences between various kinds of ideas that are attached to the term. While these are relevant in the wider context of this article, I lack the space to engage with them in full here.

[2] At this point we can clarify the differences between humanism, transhumanism, posthumanism, and the posthuman. By ‘humanism’ I mean those discourses and projects that take some fixed idea of the human as their natural centre, and by ‘transhumanism’ I mean those that grapple with or aim at an active technical alteration of the human as such. I reserve the term ‘posthumanism’ for those discourses that seek to think a future in which the technical alteration of the human has given rise to new forms of life that can no longer properly be called human. The ‘posthuman’ is a name for these new, unknown forms of life.

References

Anders, Günther. 1972. Endzeit und Zeitende. München: C.H. Beck.

Anderson, Lessley. 2014. “Elon Musk: A Machine Tasked with Getting Rid of Spam Could End Humanity.” Vanity Fair. Online at: http://www.vanityfair.com/news/tech/2014/10/elon-musk-artificial-intelligence-fear/ [Accessed 15 November 2016]

Ansorge, Josef. 2016. Identify and Sort: How Digital Power Changed World Politics. London: C. Hurst and Co. Ltd.

Arkin, Ronald. 2015. “Lethal Autonomous Weapons Systems and the Plight of the Noncombatant”, Presentation to the CCW Meeting of Experts on Lethal Autonomous Weapons in Geneva, 13-16 May 2015. Online at: http://www.unog.ch/80256EDD006B8954/(httpAssets)/FD01CB0025020DDFC1257CD70060EA38/$file/Arkin_LAWS_technical_2014.pdf/ [Accessed 23 November 2015]

Baudrillard, Jean. 2016. Why Hasn’t Everything Already Disappeared? Translated by Chris Turner. London: Seagull Books.

Baudrillard, Jean. 1993. The Transparency of Evil: Essays on Extreme Phenomena. Translated by James Benedict. London: Verso.

Bornstein, Jon. 2015. “US Department of Defense Autonomy Roadmap: Autonomy Community of Interest”, presented to the NDIA Annual Science and Engineering Technology Conference, 24-26 March 2015. Online at: http://www.defenseinnovationmarketplace.mil/resources/AutonomyCOI_NDIA_Briefing20150319.pdf [Accessed 6 January 2017].

Bostrom, Nick. 2015. “Briefing on Existential Risk for the UN Interregional Crime and Justice Research Institute,” 7 October 2015. Online at: http://webtv.un.org/watch/chemical-biological-radiological-and-nuclear-cbrn-national-action-plans-rising-to-the-challenges-of-international-security-and-the-emergence-of-artificial-intelligence/4542739995001 [Accessed 4 January 2017]

Bostrom, Nick. 2005. “Transhumanist Values.” Journal of Philosophical Research 30 (Supplement): 3-14.

Endsley, Mica R. 2015. “Autonomous Horizons: Systems Autonomy in the Air Force – A Path to the Future,” United States Air Force Office of the Chief Scientist, AF/ST TR 15-01, June 2015. Online at: http://www.af.mil/Portals/1/documents/SECAF/AutonomousHorizons.pdf?timestamp=1435068339702/ [Accessed 25 November 2015]

Giddens, Anthony. 2003. Modernity and Self-Identity: Self and Society in the Late Modern Age. Cambridge: Polity.

Haraway, Donna. 1997. Modest_Witness@Second_Millennium.FemaleMan©_Meets_OncoMouse™. London: Routledge.

Kroker, Arthur. 2014. Exits to the Posthuman Future. Cambridge: Polity.

Mitchell, Audra. 2014. “Dispatches from the Robot Wars; Or, What is Posthuman Security?” The Disorder of Things, 24 July 2014. Online at: http://thedisorderofthings.com/2014/07/24/dispatches-from-the-robot-wars-or-what-is-posthuman-security/ [Accessed 15 November 2016]

Müller, Christopher. 2015. “We are Born Obsolete: Günther Anders’s (Post)humanism”, Critical Posthumanism [Website], January 2015. Online at: http://criticalposthumanism.net/?page_id=433/ [Accessed 15 November 2016]

Roden, David. 2015. Posthuman Life: Philosophy at the Edge of the Human. Abingdon: Routledge.

Schwarz, Elke. 2015. “Prescription Drones: On the Techno-Biopolitical Regimes of Contemporary ‘Ethical Killing’”, Security Dialogue 41 (1): 59-75.

Singer, Peter. 2009. Wired for War: The Robotics Revolution and Conflict in the 21st Century. London: Penguin.

Zacharias, Greg. 2015. “Advancing the Science and Acceptance of Autonomy for Future Defense Systems” Presentation to the House Armed Services Subcommittee Hearing on Emerging Threats and Capabilities, 19 November 2015. Online at: http://www.airforcemag.com/testimony/Documents/2015/November 2015/111915zacharias.pdf/  [Accessed 15 November 2016]
