Interview – Raluca Csernatoni

Dr. Raluca Csernatoni is a Research Fellow at Carnegie Europe in Brussels, Belgium, where she specialises in European security and defence, with a focus on emerging and disruptive technologies. At Carnegie, she is a Team Leader and Senior Research Expert on new technologies for the EU-funded project EU Cyber Diplomacy Initiative – EU Cyber Direct (EUCD), and led Carnegie Europe’s research project on ‘The EU’s Techno-Politics of AI’, supported by the McGovern AI Grant. Csernatoni is currently also a Professor on European security and defence, focusing on new technologies, at the Brussels School of Governance (BSoG) and its Centre for Security, Diplomacy and Strategy (CSDS) at Vrije Universiteit Brussel (VUB), Belgium. At the CSDS, she is a Senior Research Expert on digital technologies for the EU-funded project ‘Indo-Pacific-European Hub for Digital Partnerships: Trusted Digital Technologies for Sustainable Well-Being’ (INPACE). She is also a Co-Leader of the ‘Governance of Emerging Technology’ Research Group at the Centre on Security and Crisis Governance (CRITIC) at the Royal Military College Saint-Jean, Canada. Her academic articles have appeared in peer-reviewed journals such as Minds and Machines: Journal for Artificial Intelligence, Philosophy and Cognitive Science, Global Policy, Geopolitics, European Foreign Affairs Review, European Security, Critical Military Studies, Global Affairs, and European View. Her co-edited book, Emerging Security Technologies and EU Governance: Actors, Practices and Processes, was published in the Routledge Studies in Conflict, Security and Technology series in 2020.

Where do you see the most exciting research/debates happening in your field?

I believe that the most vibrant work currently sits at the interdisciplinary intersections of International Relations (IR), Critical Security Studies, Critical Technology Theory, and Science and Technology Studies (STS). Here, scholars are unpacking how code, data, and algorithmic infrastructures reorganise global power dynamics, reshape war and peace, recast security logics, and blur the civilian-military divide. What is more, this debate no longer stops at what we would call “high politics”. Rather, it interrogates supply chains, semiconductor chokepoints, colonial data extraction, human-machine interactions, ethical concerns, and the embodied labour that sustains “cloud” and AI warfare.

Some emerging strands I am currently interested in are digital militarism, feminist post-humanism, and decolonial AI. These cross-pollinate with classic IR concerns about sovereignty and order, but also require new methods that mix ethnography, critical discourse analysis, and efforts to open the “black box” of technologies. Increasingly, IR researchers work in multi-disciplinary labs with engineers, lawyers, and artists, building speculative prototypes to surface the politics baked into technological design and innovation.

This interdisciplinary and praxis-oriented turn challenges IR’s traditional state-centric gaze, foregrounding how Big Tech actors, private platforms, standards bodies, and hacker collectives can shape geopolitical and security realities. In short, the most exciting debates happen wherever disciplinary boundaries are deliberately questioned to study the co-production of technology, security, imaginaries, and world politics.

How has the way you understand the world changed over time, and what (or who) prompted the most significant shifts in your thinking?

It is hard to identify such influences – my love for Science Fiction would be one of them – as my thinking is constantly shaped by inspiring new ideas. More recently, I have been revisiting certain works that are reorienting some of my understanding of world politics. First, engagement with Günther Anders and Byung-Chul Han is displacing instrumental readings of technology by revealing its formative effects on subjectivity, temporality, and political imagination. Anders’s reflections on the “Promethean gap”, namely humanity’s inability to morally apprehend the scale and speed of its technical creations, illuminate contemporary debates on autonomous weapons and algorithmic governance. Han’s notion of “psychopolitics” shows how digital capitalism internalises surveillance, turning individuals into entrepreneurial data-labourers whose self-exploitation lubricates planetary computation. Second, re-reading Donna Haraway’s work on cyborg ontology unsettles the human-machine divide, allowing me to theorise emerging and disruptive technologies as hybrid socio-technical assemblages rather than discrete capabilities. In short, Anders warns that automation and nuclear-era speed compress political imagination, leaving humanity existentially “too late” for the machines it builds – think, for instance, of how this resonates with current discussions about the advent of “superintelligence”. Haraway’s cyborg, a hybrid of organism and machine, offers a vocabulary for interrogating post-human or “more-than-human” AI agency and the gendered, militarised substrates of innovation. Taken together, these authors have shifted my attention toward an ontology in which technology, power, and knowledge are co-produced. My research now spotlights the material cultures of AI systems, chips, and digital infrastructures, tracing how they mediate geopolitical order, security, and epistemic authority. My chief concern today is how emerging and disruptive technologies reorder social contracts, epistemic authority, and the conduct of war – questions that require grappling with human-machine co-evolution as a central analytic problem, rather than treating technology as a mere strategic “capability”.

In your view, does the concept of European strategic autonomy clarify or complicate the EU’s defence agenda, especially considering its broad scope, which includes technology, digital policy, regulation, and data sovereignty?

Strategic autonomy began as a defence-industrial slogan but has metastasised into a floating signifier stretching across digital regulation, high-tech innovation, supply-chain policy, economic security, and data governance – this is what I argue in my article “The EU’s hegemonic imaginaries: from European strategic autonomy in defence to technological sovereignty”, published in the journal European Security. This flexibility clarifies goals – such as reducing dependence on US platforms and building more resilient tech stacks. But it also complicates the means to reach those goals, since many states, international bodies, and EU institutions now claim the term with varying interpretations, leading to overlapping funding instruments and mixed signals for the (defence) industry, business, and tech sectors. As I highlight in the article, the concept’s hegemonic power lies in stitching “low-politics” digital and tech issues to “high-politics” security and defence debates, thereby legitimising expansive EU intervention in various policy fields. Yet, without clear metrics, autonomy risks becoming a rhetorical umbrella obscuring trade-offs between openness, competitiveness, and values.

Overall, strategic autonomy simultaneously illuminates and obscures the Union’s defence trajectory. From a conceptual angle, it clarifies purpose, anchoring the EU’s post-Lisbon drive for global security actorness in a telos of reduced dependence on U.S. security guarantees – especially useful in the current context of the second Trump administration. In terms of empirics, however, its elasticity, stretching across industrial sovereignty, digital innovation, economic security, and data governance, complicates policy and capability planning, allowing divergent member state or institutional preferences to persist. Hence, its breadth, now also spanning munitions stockpiles, chips, cloud, and even narrative control, creates overlapping initiatives and diffuse accountability. So, I would argue that the concept functions best as a mobilising myth: it legitimises investment and cooperation without predetermining concrete choices. But myths expire if not anchored to verifiable milestones. Unless the EU translates autonomy into measurable progress, the term risks breeding complacency rather than capability and eroding political trust over time.

How might regulatory reforms — such as removing the stigma against dual-use investments and streamlining procurement — unlock private and institutional capital for defence?

In my view, money flows into defence when three things line up: clarity, speed, and scale. In terms of clarity, Brussels has identified which “dual-use” projects, from trusted AI chips and secure cloud services to counter-drone sensors, fit comfortably inside responsible investment rules. Thus, by ostensibly removing ethical question marks, the EU is signalling that pension funds and other commercial players can invest in defence without fearing headline risk or reputational costs.

An updated taxonomy guidance now classifies dual-use and certain defensive capabilities as “socially sustainable”, thus enabling pension and sovereign-wealth funds to allocate capital without breaching stewardship mandates. When it comes to speed, the European Commission’s Defence Readiness Omnibus initiative acknowledges that the Union’s current regulatory framework, designed for peacetime, must be adapted to enable rapid capability development and deployment. Hence, shorter paperwork cycles mean firms wait weeks, not years, between pitching an idea and signing a contract. That tighter timeline makes the sector far more attractive to venture investors and commercial civilian tech firms, who cannot leave cash idle forever. Regarding scale, the new Security Action for Europe (SAFE) instrument, together with the common shopping list in the “ReArm Europe” plan, turns scattered national purchases into predictable, multi-year demand. Taken together, these reforms recast defence from a reputational hazard into a mainstream asset class. They also show that Brussels is willing to share responsibility with markets, a shift that could accelerate Europe’s path toward genuine strategic autonomy. Yet key questions remain. Do fast-track procedures that bypass the European Parliament trade democratic oversight for speed? Will subsidising dual-use industries deepen European cohesion or revive intra-EU industrial rivalry? And does steering private money toward arms production shift the Union from its “civilian power” self-image toward a more traditional great power posture?

Your report, Charting the Geopolitics and European Governance of Artificial Intelligence, emphasises how narratives of AI power and disruption shape technopolitical realities. Can you elaborate on how these narratives become self-fulfilling prophecies, and what alternative narratives might lead to more balanced AI governance approaches?

Indeed, discourses surrounding AI technologies are highly performative and hyped, depicting AI as an existential threat, a technological silver bullet, or a decisive geostrategic asset in the current great power race – framings that legitimate extraordinary research subsidies, accelerated procurement pathways, huge investments in data centres and compute power, as well as permissive data-access regimes. These interventions legitimise the hegemonic power of Big Tech, attract speculative capital, inflate firm valuations, and furnish empirical “evidence” that the technology is radically transforming both peace and war. Thus, in the report, I argue that such narratives pre-structure investment, regulation, and public expectation, locking in innovation trajectories that privilege certain actors over others. Conversely, alternative imaginaries could interrupt this path-dependence. Conceptualising AI as a situated socio-technical system would better foreground the power dynamics spurring the current AI race, illustrating the labour conditions, environmental costs, extractivist exploitation, and epistemic violence underpinning AI systems.

Current narratives cast the “AI race” as a contest among the U.S., China, and, to a lesser degree, the EU. This framing marginalises the Global South, obscuring how data extraction, cloud-region geopolitics, and compute concentration reproduce colonial hierarchies. Examples such as biometric surveillance rollouts in Kenya and content-moderation outsourcing in the Philippines highlight who bears the social and ecological costs of “frontier” innovation. The alternative would be to encourage governance models that valorise transparency, energy efficiency, and open and modular design. By shifting the discursive centre of gravity from supremacy to public goods alignment, states and international organisations like the EU could cultivate an innovation ecosystem in which civic accountability, human rights, democratic values, and distributed oversight constitute the markers of success, not the latest shiny AI model supposedly getting us closer to “superintelligence”. Indeed, storytelling functions as a stealth industrial policy and a form of tech lobbying, shaping the material and ideational configurations of future AI systems.

The report also touches on Ukraine as an ‘AI war lab.’ What are the long-term implications of this real-world testing of military AI technologies for global governance frameworks and international humanitarian law?

Ukraine has indeed become a live-fire laboratory where battlefield algorithms evolve at wartime speed, reshaping three pillars of global governance. First, rapid iteration encourages a deploy-first, debate-later ethos that potentially outruns international humanitarian law. For instance, Prof. Marijn Hoijtink observes that contemporary conflict is defined by “prototype warfare”, in which experimental systems are fielded before they stabilise. Autonomous loitering drones and AI target-recognition tools used in Ukraine therefore risk setting a precedent: once a prototype “works” or is “battle-tested”, it becomes difficult to prohibit later. Second, the current fighting erases clear lines between soldier and contractor, or between military and civil society. Civil society now procures or produces drones in Ukraine, and private firms steward much of the code, data, and cloud infrastructure, so responsibility for unlawful harm is diffused. States, vendors, and even freelance engineers could be asked to provide combat-damage information, yet I would argue that no common duty binds them. Third, the war produces vast proprietary datasets that will feed exportable conflict-as-a-service platforms. Without rules on pre-deployment safety checks, post-strike audits, and public incident reporting, these systems will spread faster than norms on meaningful human control. Hence, to prevent opacity and corporate gatekeeping from becoming standard, policymakers must embed binding review clauses in arms-transfer licences and extend humanitarian law to cover algorithm design, training-data provenance, and continuous oversight, turning this so-called experimental advantage into accountable practice.

Yet current oversight of military AI is a patchwork of soft-law instruments rather than binding treaty rules. The UN Convention on Certain Conventional Weapons hosts a Group of Governmental Experts on lethal autonomous weapons, but after almost a decade, it has produced only non-binding guiding principles. NATO, the Organisation for Economic Co-operation and Development, and the U.S. Department of Defense have issued ethics and responsible military AI frameworks, yet each leaves compliance to national discretion. The EU’s AI Act covers dual-use systems only indirectly, explicitly carving out military AI from its purview. A notable recent forum is the Responsible AI in the Military Domain (REAIM) summit, launched in The Hague in 2023, which has endorsed a non-binding “blueprint” urging risk assessments, meaningful human control, and strict export standards for AI-enabled weapons. The summit process also stood up an independent Global Commission (the “GC-REAIM” advisory council) tasked with drafting oversight metrics and reporting annually on national practice. These initiatives add welcome multi-stakeholder energy, yet they remain fragmented, voluntary, and lack enforcement teeth.

What were the primary drivers behind the recent deregulatory shifts in the EU’s AI policy, and how have these shifts affected the EU’s ability to maintain its ethical standards?

The evolution of the EU’s AI Act from a precautionary framework into an instrument of industrial policy reflects competitiveness anxieties in Brussels: facing sluggish growth and a widening innovation gap with the U.S. and China, officials reframed the Act from a human-centric, “trustworthy AI” precautionary shield into an industrial policy lever. Once one accepts the “Silicon Valley myth” that AI models promise to make the world more prosperous, more efficient, fairer, and more humane, calls to remove “burdensome” AI regulation can indeed appear legitimate. Yet this dominant narrative, sustained by the perceived superior expertise of private-sector tech actors, portrays AI as purely technical innovation and minimises its problematic socio-technical implications, from digital divides and exploitative labour practices to ideological bias and legal breaches.

In my recent Carnegie Europe report, “The EU’s AI Power Play: Between Deregulation and Innovation”, I argue that various forces drove Brussels’ deregulatory turn on the AI Act. As I already mentioned, competitiveness anxiety has made officials fear that strict rules would widen Europe’s funding, talent, and infrastructure gap with U.S. and Chinese firms, as the Draghi Report warned. Moreover, sustained lobbying from France and Big Tech has increasingly recast AI safeguards as innovation threats, securing carve-outs for national security and foundation models. Last but not least, a growing European strategic autonomy narrative around home-grown AI solutions has sold the recent deregulatory flexibility as vital for cutting reliance on foreign platforms and mitigating critical tech dependencies. The result is diluted ex-ante risk controls, the cancellation of the AI liability directive, and heavier reliance on voluntary codes, all of which I believe erode the Union’s human-centric brand of “trustworthy AI” and the AI Act’s so-called “Brussels effect” as a global norm-setter.

With the appointment of Henna Virkkunen as the European Commission’s executive vice president for tech sovereignty, security, and democracy, what benchmarks or indicators would you use to assess the success of this new leadership role?

I do believe that Henna Virkkunen’s new role sends a loud message: Brussels is done treating tech as a side issue and now sees it as core to Europe’s security, economy, and democracy. Putting one vice president in charge of all three shows that the EU wants to set its own standards, grow local champions, and reduce its reliance on American or Chinese tech, from chips to cloud services.

Still, as I argue elsewhere, one high-profile hire will not fix everything. Virkkunen inherits big projects like the Chips Act, AI Factories, and the secure 5G rollout, while spearheading new initiatives like the Quantum Europe Strategy, but these need fresh energy, money, and tighter cross-policy institutional coordination. Her real test is turning the buzz phrase of “tech sovereignty” into something you can measure: more pooled funding, shared supply-chain rules, and a single European line on thorny files such as AI governance. She also has to walk a fine line. Unlike her predecessor Thierry Breton, famous for a blitz of new digital and tech-related regulatory “Acts”, Virkkunen has hinted she prefers a lighter touch that leaves room for start-ups and industry to breathe. So, her appointment matters, but follow-through matters more. If she can streamline policies, steer real money into strategic tech, and keep the bureaucracy from overreaching, the EU edges closer to genuine autonomy. If not, “sovereignty” risks staying a slogan. Success will depend on hard numbers, industry buy-in, and steady political backing that goes well beyond the name on the office door. One more issue: as previously mentioned, the EU’s recent tilt toward looser tech rules could clash hard with her democracy portfolio. Hence, Virkkunen will have to prove that “move fast and innovate” on tech can coexist with “protect democratic values and human rights”.

How can the EU ensure that its AI governance framework remains adaptable to future technological developments and unforeseen risks?

We should be reminded that technology and society co-produce one another, and any rules must therefore evolve along both lines. The EU should thus embed reflexive, participatory loops into the AI Act and institutionalise forms of anticipatory governance, building on interdisciplinary expertise as much as possible. A permanent Foresight and Futures Board, mixing historians of technology, sociologists, security analysts, lawyers, artists, scientists, civil society, and engineers, could run red-team exercises, scenario workshops, and “wild-card” stress tests on nascent AI architectures. One thing to avoid is keeping these reflections only in rarefied expert circles. There should be more public debate about the impact of dual-use AI systems; hence, there is a dire need to complement this top-down scanning with bottom-up inputs, such as structured public debates, sustained engagement with unions, civil society, and educational institutions, as well as more ethnographic field reports from AI deployment sites. I believe that only by embedding these iterative, multi-perspective, forward-looking mechanisms can the EU turn nasty surprises from crises into useful regulatory data.

What is the most important advice you could give to young scholars of International Relations?

I have a couple of ideas, but take them with a grain of salt, as they are strictly informed by my own experiences. First, try to treat technology as a constitutive force in today’s world politics, not merely a backdrop. The next generation of power struggles will be fought through semiconductor chokepoints, cloud standards, and algorithmic infrastructures rather than only at embassies or summits. So learn to “speak tech”: spend enough time to grasp how AI, quantum technologies, advanced chips, biotechnologies, and data flows, among others, reorganise knowledge, sovereignty, economy, labour, and security. At the same time, do keep the critical instincts of IR. Ask who benefits, who is excluded, and how new technological artefacts embed old hierarchies or new power dynamics.

Finally, practise more engaged scholarship. IR’s future lies in collaborative, interdisciplinary, and praxis-oriented projects where academics, activists, artists, policymakers, and practitioners co-design both research questions and policy interventions. Publishing in journals is indeed necessary, as the “publish or perish” saying still goes, but co-creating ethical audits, policy and diplomatic outreach, or public-facing explainers is equally valuable. One last point: young IR scholars should recognise that ubiquitous AI models challenge core cognitive and social norms in scholarship. I fear that this foreshadows a new political economy of knowledge production, where algorithmic assistance coexists with human-machine-mediated artisanal analysis and shapes what counts as authoritative expertise.

Editorial Credit(s)

Ridipt Singh
