
According to Dan Kotliar, technological advancements are accompanied by a certain degree of hype, or hyperbolic discourse. The internet, for example, was accompanied by a democratization hype, with scholars and pundits arguing that it would enable new forms of democratic participation. Social media was associated with a revolutionary hype which suggested that it would help topple tyrants, despots and dictators who could no longer exert total control over public opinion. The Arab Spring protests helped bolster this hype. Over the past two years, the world has been in the grip of an “AI hype”. Journalists, tech moguls and academics have all stated that AI will radically change daily life, impacting numerous professions while altering how knowledge is produced, how art is created, how citizens are policed, how policy is formulated and how human relationships are formed. AI doctors will replace physicians, AI bots will displace psychologists, AI coders will replace tech employees, while AI agents will replace lawyers and legislators. These predictions all suggest that AI is fundamentally different from previous technological advancements. The “AI moment” is an evolutionary one, as humanity is about to evolve into a new state of AI-enhanced existence.
Hypes can be both positive and negative. The advent of the internet was also accompanied by concerns about disparities between rich and poor, or between those who could afford an internet connection and those who would be left out of the new digital town square. The same is true of AI, with some warning that AIs could become so advanced that they “go rogue”, ignore their programming and unleash unparalleled catastrophes such as nuclear wars. What is most noteworthy about technological hypes is that they shape state policies. Hypes are visions of the future. They are cognitive roadmaps that define a set of possible futures. Yet these visions of the future are limiting, as they prevent policy makers from using their imagination or leveraging new technologies in new and original ways. Instead, policy makers come to view technologies through the narrow prism of several dominant hypes. Presently, states and policy makers seem to view AI through four dominant hypes: Bloom, Boom, Gloom and Doom.
The “Bloom” hype is inherently positive, suggesting that AI will increase our quality of life. AIs will facilitate personalized medicine, remote care and telemedicine, enabling states to offer citizens the best care possible. AI-based tutors will create personalized learning curricula, allowing students to reach their full potential. AIs will help to predict and rapidly respond to crises and emergencies while reducing the cost of services, such as energy bills lowered thanks to smart homes. The AI state will be smarter, cleaner and more efficient. It is this hype that shapes policy makers’ decisions to integrate AIs into various state systems such as healthcare, education and even diplomacy.
One example is the proposed integration of AI into Ministries of Foreign Affairs (MFAs). Already in June of 2023, the US Advisory Commission on Public Diplomacy published a report outlining how AIs could undertake routine diplomatic functions such as authoring drafts of press statements, generating press reports, producing content tailored to different audiences and using bots to deal with consular requests. In all these cases, the main benefit is reducing diplomats’ workload and allowing them to be more efficient and expedient. Similarly, in May of 2024 the Ukrainian MFA unveiled an AI-generated spokesperson with the explicit goal of “saving time and resources”, as the AI spokesperson could respond to events or announce policy shifts faster than any human spokesperson. According to reports by the World Economic Forum, AIs are already being tested and deployed in other domains such as healthcare systems. The report states that AIs are used to improve diagnostics, shorten ambulance response times, reduce administration costs and predict health complications. Here too, AIs are utilized with the goal of improving state services while cutting costs and increasing efficiency. This is the very essence of the “Bloom” hype.
The “Boom” hype is also inherently positive and suggests that AI will lead to a financial boom, generating new sources of revenue, new occupations, new industries and new skilled laborers. The “Boom” hype likens the AI revolution to the agricultural or industrial revolutions, which forever changed the global economy, destroying some forms of labor but creating many new ones. It is the “Boom” hype that leads policy makers and states to invest in local AI industries, to urge local tech moguls to enter the AI marketplace, and to offer incentives and financial support for AI-based research and development while viewing local AI industries as an economic priority.
A 2024 study published in the Harvard Business Review examined the impact of Generative AI on labor markets, concluding that its “impact on online labor markets is already becoming discernible, suggesting potential shifts in long-term labor market dynamics that could bring both challenges and opportunities”. Another report, from the International Labor Organization (ILO), predicts that AI will replace 10% of the global workforce. Moreover, according to Stanford University’s 2025 AI Index Report, the past year has seen a sharp increase in both private and public investments in AI-based technologies. More and more governments are dedicating substantial resources towards developing local AI industries, including Canada, which pledged $2.4 billion to develop a robust national AI ecosystem, China, which invested $47.5 billion in a national semiconductor fund, as well as France (€109 billion), India ($1.25 billion) and Saudi Arabia ($100 billion). A recent report by the United Nations Industrial Development Organization (UNIDO) stated that developing countries “must prioritize and pursue the establishment of robust AI ecosystems” to narrow the gap with more developed states. These all demonstrate that both global policy makers (e.g., the ILO and UNIDO) and national ones view AI as a game-changer for national economies, one that will generate substantial growth and create new financial opportunities in the coming years, a view emblematic of the “Boom” hype.
The “Gloom” hype is inherently negative and views AI through the prism of present-day challenges. Indeed, policy makers and states are concerned by the potential use of AI to spread disinformation and conspiracy theories, to drive political polarization, to diminish societal resilience and, ultimately, to destroy democracies. This hype assumes that the future will simply be a more dystopian version of the present, with bots generating unlimited amounts of highly believable disinformation, limiting people’s ability to make sense of the world and leading to widespread political crises. The “Gloom” hype leads to the securitization of AI and the belief that AI capabilities are a national security issue. Once subsumed by the logic of securitization, AI becomes yet another aspect of state rivalry, with states looking to maximize their AI capabilities while preventing others from doing the same.
In 2024, Time Magazine’s cover featured ChatGPT, stating that the “AI Arms Race is changing everything”. According to Time Magazine, the corporate AI Arms Race may have devastating effects far greater than those of social media, including a “gutted” news business, rises in misinformation and disinformation, bots cannibalizing and warping content from news sites and a “skyrocketing teen mental-health crisis”. The very comparison between the societal risks of AI and those of social media is demonstrative of the “Gloom” hype, as AI is viewed through present-day challenges and concerns.
The term “AI Arms Race” also manifests the securitization of AI, indicating that states like the US and China view AI development as one more area in which they compete for dominance. A report by the MIT Technology Review clearly states that US policy makers drove an agenda centered on “winning” the AI race against China. According to the report, “The AI competition is increasingly being framed within narrow national security terms, as a zero-sum game, and influenced by assumptions that a future war between the US and China, centered on Taiwan, is inevitable”, and that AI will play a crucial role in that war. Similarly, in a 2025 discussion held by the Council on Foreign Relations, various experts discussed the weaponization of AI, the integration of AI into existing weapons systems and the importance of maintaining an AI-weapons edge over other countries. Even the corporate AI Arms Race is being shaped by the national AI Arms Race, with American AI companies pledging to help America “beat China”.
The “Doom” hype is even more negative and suggests that AI may spell the doom of humanity. The “Doom” hype is the one adopted by all those calling for a moratorium on AI development, and by those arguing that AIs must not be trusted with managing sensitive technologies such as autonomous weapons or nuclear arsenals. This hype is rooted in an existential fear. It is the “Doom” hype that shapes many states’ regulatory positions and that drives policy makers to seek AI regulation, first at the national level and then at the international level. Moreover, it is the “Doom” hype which prompts policy makers to view AI as an agentic actor. That is, AI is not viewed as just another technology, similar to a toaster. Rather, policy makers view AI as a human-like actor endowed with intention, intelligence, needs and concerns. What follows is the belief that AI will be human-like, preoccupied with its own survival at the expense of others.
A 2022 survey of AI researchers found that nearly half believed there was a realistic chance that AI could lead to catastrophic risks such as human extinction. In 2025, AI experts warned that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”. That same year, AI experts from MIT also warned that AI systems may soon reach human intelligence and “clash” with humans or “oppose” attempts to control them. The use of terms such as “clash” and “oppose” is emblematic of an agentic view of AI, in which this technology becomes a purposeful actor capable of unleashing doomsday weapons against humanity. Experts calling for a moratorium on AI development have quoted Geoffrey Hinton, an AI expert and Nobel laureate, who said that the chance of mass extinction is as high as 50%.
The “Doom” scenario has also shaped the actions of policy makers. In February of 2025, the Korean MFA announced the launch of an AI Diplomacy Division responsible for negotiating AI regulations with allies and states across the world while ensuring that Korea remains at the forefront of global AI regulation. A 2024 treaty signed by the Council of Europe, the US, the UK and other countries introduced joint regulation to ensure the “responsible use of AI, with a focus on safeguarding human rights, democracy, and the rule of law”. The treaty called on participants to identify, assess and provide remedies for AI systems that may threaten human freedoms or humanity itself. The 2025 EU AI Act introduced a new classification of AI-based risks, labeling some of these “unacceptable” and banning their development within the EU, including AI systems that could monitor and surveil humans.
However, hypes are just visions of possible futures. They are predictions based on limited information and driven by extreme emotions of hope or fear. Hypes stifle creativity, as policy makers and states become unlikely to imagine a broad set of alternative futures, some of them better and others worse. Hypes take root among policy makers as they are circulated globally through news, movies, TV shows and popular fiction. But hypes hide a simple truth: that technology’s impact on society can rarely be predicted.
Few people assumed that the printing press would play a decisive role in the formation of the nation state, that the factory assembly line would lead to Communist revolution, or that social media would fracture politics. The problem is that hypes often act as self-fulfilling prophecies. For example, the “Gloom” hype may come true as the growing securitization of AI leads to AI wars, with nations conquering territory to secure the supply of minerals necessary for the production of computer chips. Hypes of the future may thus shape the future.
Escaping the “hype” trap is no easy task, yet it will be crucial if states and societies are to fully leverage the potential of AI and mitigate its potential risks. The way to increase creative thinking among policy makers is to facilitate engagement with diverse stakeholders who imagine a wide range of possible futures and whose thinking is not limited by dominant hypes. Some nations may pursue such a policy through their Tech Embassies to Silicon Valley. The remit of these Embassies is to manage relations with tech giants and tech moguls and to pursue tech-related policies such as combating disinformation and hate speech. Yet this remit may be expanded when it comes to AI. Policy makers in Tech Embassies may engage with a host of actors, ranging from tech CEOs and angel investors to individuals developing the next wave of technologies in R&D units, as well as civil society groups, activists, NGOs, academics, futurists, artists and more. Through these engagements, new possible futures may emerge, new applications for AI may become clearer, and states may then pursue those visions of the future that best align with their interests, their needs and their hopes.