AI-Enabled Decisions: Processes, Not Binaries

Current trends point to the increasing integration of artificial intelligence (AI) into a range of military practices. Some suggest this integration has the prospect of altering how wars are fought (Horowitz and Kahn 2021; Payne 2021). Under this framing, scholars have begun to address the implications of AI’s assimilation into war and international affairs with specific respect to strategic relationships (Johnson 2020), organizational changes (Horowitz 2018, 38–39), weapon systems (Boulanin and Verbruggen 2017), and military decision-making practices (Goldfarb and Lindsay 2022). This work is particularly relevant in the context of the United States. The establishment of the Joint Artificial Intelligence Center, the more recent creation of the Office of the Chief Digital and Artificial Intelligence Officer, and ambitions to incorporate AI into military command practices and weapon systems are all signs of how AI may reshape aspects of the U.S. defense apparatus.

These trends, however, are controversial, as recent efforts to constrain the use of military AI and lethal autonomous weapons systems through international coordination and advocacy from non-governmental organizations have shown. Common refrains amid this debate are structured around notions of how much control a human has over decisions. In the case of the United States, the Department of Defense’s (DoD) directive on autonomous weapons is somewhat ambiguous, calling for ‘appropriate levels of human judgment over the use of force’ (“Department of Defense Directive 3000.09” 2017). A 2021 Congressional Research Service report on the directive noted that it was in fact designed to leave ‘flexibility’ on what counts as appropriate judgement based on the context or the weapon system (“International Discussions Concerning Lethal Autonomous Weapon Systems” 2021). This desired flexibility means there is currently no explicit DoD ban on AI systems making use-of-force decisions. In fact, the United States remains opposed to legally binding constraints in international fora (Barnes 2021).

Deliberations concerning the proper amount of human control over weapon systems are important but can distract from other ways AI-enabled technologies will likely alter broader decision practices in advanced militaries. This is especially the case if decisions are portrayed as singular events. The central point here is that decisions are not simply binary moments composed of the time before the decision and the time after it. Decisions are the outputs of processes. Indeed, this is recognized in concepts such as the ‘Military Decision-Making Process’ and the ‘Rapid Decision-Making and Synchronization Process’ discussed in United States military doctrinal publications. If AI-enabled systems are involved in these types of processes, they are likely to shape outputs. Put more simply, if a decision process includes AI-enabled systems, its outputs will be shaped by the programming and design of those systems. A crude analogy: if a dinner recipe calls for chilli powder instead of nutmeg, the output will be different. Each element of the cooking process matters for the eventual combination of flavors that arrives at the dinner table. Translated back into military terms, if AI systems are included in decision processes, significant elements of human control may already have been ceded by changing the ‘recipe’ of how a decision occurs. It is not just about autonomy in the sense of deciding whether or not to apply force. Further, as others have pointed out, there is a continuum between decisions made by AI-enabled systems and decisions that remain exclusively in the domain of humans (Dewees, Umphres, and Tung 2021). A ‘decision’ is unlikely to remain exclusively under the purview of either.
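To make the ‘recipe’ point concrete, consider a deliberately toy sketch in Python (all option names and scores are hypothetical, and no real military system is being described). The human still chooses from the final list, but which option appears at the top depends on how an upstream component filters and ranks the data – the decision output is shaped before any ‘moment of decision’ arrives.

```python
# Toy illustration: a 'decision' as the output of a pipeline. The human
# makes the final call, but the candidate options they see depend on how
# upstream components filter and rank the data.

def decision_pipeline(options, filter_fn, rank_fn):
    """Return candidate courses of action for a human to choose from."""
    relevant = [o for o in options if filter_fn(o)]
    return sorted(relevant, key=rank_fn, reverse=True)

# Entirely hypothetical options and scores.
options = [
    {"option": "hold position",    "confidence": 0.7, "speed": 0.2},
    {"option": "reposition north", "confidence": 0.5, "speed": 0.9},
    {"option": "request support",  "confidence": 0.6, "speed": 0.5},
]

# Two 'recipes': one ranks by model confidence, the other by how quickly
# an option can be executed. Same data, same human at the end, but a
# different recommendation lands at the top of the list.
by_confidence = decision_pipeline(options, lambda o: o["confidence"] > 0.4,
                                  lambda o: o["confidence"])
by_speed = decision_pipeline(options, lambda o: o["confidence"] > 0.4,
                             lambda o: o["speed"])

print(by_confidence[0]["option"])  # -> 'hold position'
print(by_speed[0]["option"])       # -> 'reposition north'
```

The point of the sketch is not the code itself but the dependency it makes visible: change the ranking function (the ‘chilli powder’), and the human at the end of the pipeline confronts a different set of leading options without any formal transfer of decision authority.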

This issue is central for assessing how AI might shape security affairs, even outside the most salient debates pertaining to lethal autonomous weapon systems. An important example here is military command and control. In the context of the United States, this history is longer than many may appreciate. The DoD has been interested in incorporating AI and automated data processing into command practices since at least the 1960s (Belden et al. 1961). Research at the Advanced Research Projects Agency’s Information Processing Techniques Office is a central, but not singular, illustration (Waldrop 2018, 219). In the decades since, U.S. defense personnel have been involved in wide-ranging efforts to test the applicability of AI-enabled systems for missile defense, decision heuristics, event prediction, wargaming, and even the capability of offering up courses of action for commanders during battle. For example, the Defense Advanced Research Projects Agency’s decade-long Strategic Computing Initiative, which began in the 1980s, explicitly aimed to develop AI-enabled battle management systems, among other technologies, that could process combat data and help commanders make sense of complex situations (“Strategic Computing” 1983).

Currently, efforts to bring to fruition what the DoD calls Joint All-Domain Command and Control envision similar data processing and decision support roles for AI systems. In fact, some in the U.S. military suggest that AI-enabled technologies will be crucial for obtaining ‘decision advantage’ in the complex battlespace of modern war. For instance, Brigadier General Rob Parker and Commander John Stuckey, both part of the Joint All-Domain Command and Control effort, argue that AI is a key factor in the DoD’s effort to develop the technological capabilities necessary to ‘seize, maintain, and protect [U.S.] information and decision advantage’ (Parker and Stuckey 2021). AI-enabled methods of data processing, management, prediction, and recommendation of courses of action are highly technical and operate more behind the scenes than the visceral image of weapon systems autonomously applying lethal force; indeed, advocacy groups have explicitly relied on such imagery in their campaigns against ‘killer robots’ (Campaign to Stop Killer Robots 2021). But this does not mean these quieter functions are unimportant, nor that they cannot reshape warfighting practices in meaningful ways that substantively affect the application of force.

If the focus is solely on AI decisions as a discrete ‘event’, in which a person either has an acceptable measure of control and judgement or does not, it may inadvertently obscure analysis of broader security-related decision practices. This pertains to two important circumstances. The first concerns the possible effects of the well-known issues with AI-enabled systems related to bias, interpretability, accountability, opacity, brittleness, and the like. If such issues are structured into decision processes, they will affect the eventual output. The second concerns the moral and ethical notion that humans should be making decisions regarding the application of force in war. If a decision is conceptualized as a discrete event, with human agency as fundamental at the critical moment of that decision, it abstracts away from the changes in socio-technical arrangements that are core elements of decisions conceived of as processes.

Consider what is referred to as a ‘decision point’ in military command parlance. Decision points, discussed in Army and Marine Corps doctrinal publications, are anticipated moments during an operation at which a commander is expected to make a decision. According to Army Doctrinal Publication 5-0, ‘a decision point is a point in space and time when the commander or staff anticipates making a key decision concerning a specific course of action’ (“ADP 5-0: The Operations Process” 2019, 2–6). These crucial junctures are commonly delineated during the planning of an operation and are important during execution. Further, due to the perceived need for speedy decisions, specific courses of action are usually listed for decision points based on a certain set of parameters. Events occurring in real time are then analyzed, assessed, and compared with the courses of action a commander may decide to take. In the case of the Marine Corps and the Army, decision points are included within what is called a Decision Support Matrix (or its more detailed version, the Synchronization Matrix). These decision support tools are essentially spreadsheets that indicate important events, assets, or areas of interest and collate them into a logical representation. If events on the ground meet certain criteria, the relevant command decisions are already built into the operational plan. Yet, during operations, keeping track of ongoing events is hectic. Information and intelligence arrive rapidly from a wide range of human sources and electronic sensors. The complicated nature of contemporary war is bound to offer up surprises and, as is no new phenomenon, opposing forces are frequently engaged in acts of deception (Whaley 2007). Accordingly, producing accurate, contemporaneous assessments that reflect when an operation is approaching a decision point is not an easy task. Furthermore, some scholars of command practice have noted the possible inflexibility of decision points: while useful for standardizing decision-making procedures, they may have the unintended consequence of structuring in decision pathologies (King 2019, 402).
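The logic of such a matrix can be sketched, very schematically, as a lookup from trigger conditions to pre-planned courses of action. The Python sketch below uses entirely hypothetical decision points, triggers, and courses of action; actual Decision Support Matrices are doctrinal planning documents, not software. It also illustrates the inflexibility critics point to: a situation the planners did not anticipate matches no row, and the matrix is silent.

```python
# Toy sketch of a Decision Support Matrix (hypothetical criteria and
# courses of action). Each row pairs a decision point with a trigger
# condition and a pre-planned course of action.

decision_support_matrix = [
    # (decision point, trigger condition, pre-planned course of action)
    ("DP1", lambda s: s["enemy_contact"] and s["bridge_intact"],
     "Commit reserve to seize the bridge"),
    ("DP2", lambda s: s["enemy_contact"] and not s["bridge_intact"],
     "Shift main effort to the southern crossing"),
]

def check_decision_points(situation):
    """Return the decision points whose trigger conditions are met."""
    return [(dp, coa) for dp, cond, coa in decision_support_matrix
            if cond(situation)]

# A situation the planners anticipated: DP1 triggers its course of action.
print(check_decision_points({"enemy_contact": True, "bridge_intact": True}))
# -> [('DP1', 'Commit reserve to seize the bridge')]

# A situation the planners did not anticipate (no enemy contact at all):
# no row matches, so the matrix offers nothing.
print(check_decision_points({"enemy_contact": False, "bridge_intact": True}))
# -> []
```

An AI-enabled system slotted into this scheme would sit upstream, assessing noisy sensor and intelligence feeds to estimate whether a trigger condition has been met – which is precisely where issues of bias, opacity, and brittleness would enter the decision process.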

Apparent here is a fundamental tension related to the possible integration of AI into command decisions. AI is seen by many in the U.S. military as a way to analyze data at ‘machine speed’ and to obtain ‘decision advantage’ over enemy forces. Incorporating AI systems into command practice around decision points, in the form of ‘human-machine teams’, thus seems a logical path to take. If a commander can know faster and more accurately that a decision point is approaching, and then make that decision more quickly than an adversary can react, they may be able to gain an edge. This is the premise of military research in the United States that focuses on AI for command decision purposes (cf. AI-related research sponsored by “Army Futures Command” n.d.). However, considering the well-known issues with AI systems discussed above, as well as criticisms that decision points and Decision Support Matrices can lead to inflexible decision processes, there is cause for concern about the quality of decision outputs, particularly under conditions in which military forces treat decision speed as a fundamental component of effective military operations.

None of this should be seen as an outright rejection of the DoD’s intentions. Wanting to make the best decision to achieve a mission’s goals, based on available information, certainly makes sense. In fact, because the stakes of war are so high and the human costs so real, endeavoring to make the best decisions possible under conditions of uncertainty is a praiseworthy goal. There are also, of course, strategic considerations related to the possible advantages of AI-enabled militaries. The point here, however, is that what may appear as the mundane backroom or technical stuff of ‘data processing’ and ‘decision support’ can reshape decision outputs, thus edging decisions during conflict towards further delegation away from humans. Relatedly, it is also worth considering the relationship between political objectives and AI-enabled command decision outputs. If AI systems are involved in the operational planning and data analysis functions important for decision making, how sure can military personnel be that a political objective will be properly translated into the code that comprises an AI algorithm? This is particularly relevant where contexts change rapidly and political objectives shift over the course of combat. Such integration can also lock in how technologies are incorporated into applications of military force, making it especially hard to imagine turning back the clock. The ways in which data and information are processed and analyzed may not be flashy, but they are fundamental to how modern organizations – including military ones – make decisions.

Debates related to the degree of human control over AI-enabled war will remain important for shaping warfighting practices in the coming decades. In these debates, observers should hesitate to treat decisions that are part of AI-enabled data processing, battle management, or decision support as comprising only the singular moment of ‘the command decision’. Further, analysis, both moral and strategic, should endeavor to look beyond whether the human remains in the prime position of the decision loop. In this light, statements included in a Group of Governmental Experts report suggesting that ‘human responsibility on the use of weapon systems must be retained since accountability cannot be transferred to machines’, although praiseworthy, become more complex to realize (Gjorgjinski 2021, 13). While this report refers to weapon systems, and not necessarily command as a practice, it is still worth asking at exactly what point in these complex human-machine decision processes responsibility and accountability are fully realizable, identifiable, or regulatable. These are crucial questions, but they go beyond notions of whether a human is ‘in the loop’, ‘out of the loop’, or ‘on the loop’.

As scholars in the field of science and technology studies have long pointed out, technology does not appear in the world only for humans to then decide what to do about it, good or evil (Winner 1977). It is integrated into social systems; it helps to shape what is imaginable and possible. This is not to be technologically deterministic, but to note the important and recursive ways in which technologies both shape and are shaped by humans. Furthermore, as others have noted (Goldfarb and Lindsay 2022, 48), it is to underscore that AI is likely to make conflict even more complex along a range of dimensions, including command practices. Reflecting on these consequences helps to further realize the implications of current debates and the ways in which AI, if it is integrated to the extent that military organizations expect, may shift military practices in substantive ways.

References

“ADP 5-0: The Operations Process.” 2019. Doctrinal Publication. United States Department of the Army. https://armypubs.army.mil/epubs/DR_pubs/DR_a/ARN18126-ADP_5-0-000-WEB-3.pdf.

“Army Futures Command.” n.d. Accessed October 22, 2021. https://armyfuturescommand.com/convergence/.

Barnes, Adam. 2021. “US Official Rejects Plea to Ban ‘Killer Robots.’” The Hill, December 3, 2021. https://thehill.com/changing-america/enrichment/arts-culture/584219-us-official-rejects-plea-to-ban-killer-robots.

Belden, Thomas G., Robert Bosak, William L. Chadwell, Lee S. Christie, John P. Haverty, E.J. McCluskey Jr., Robert H. Scherer, and Warren Torgerson. 1961. “Computers in Command and Control.” Technical Report 61–12. Institute for Defense Analyses, Research and Engineering Support Division. https://apps.dtic.mil/sti/pdfs/AD0271997.pdf.

Boulanin, Vincent, and Maaike Verbruggen. 2017. “Mapping the Development of Autonomy in Weapon Systems.” Solna, Sweden: Stockholm International Peace Research Institute. https://www.sipri.org/sites/default/files/2017-11/siprireport_mapping_the_development_of_autonomy_in_weapon_systems_1117_1.pdf.

Campaign to Stop Killer Robots. 2021. This Is Real Life, Not Science Fiction. https://www.youtube.com/watch?v=vABTmRXEQLw.

“Department of Defense Directive 3000.09.” 2017. U.S. Department of Defense. https://irp.fas.org/doddir/dod/d3000_09.pdf.

Dewees, Brad, Chris Umphres, and Maddy Tung. 2021. “Machine Learning and Life-and-Death Decisions on the Battlefield.” War on the Rocks. January 11, 2021. https://warontherocks.com/2021/01/machine-learning-and-life-and-death-decisions-on-the-battlefield/.

Gjorgjinski, Ljupco. 2021. “Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapon Systems: Chairperson’s Summary.” United Nations Convention on Certain Conventional Weapons. https://documents.unoda.org/wp-content/uploads/2020/07/CCW_GGE1_2020_WP_7-ADVANCE.pdf.

Goldfarb, Avi, and Jon R. Lindsay. 2022. “Prediction and Judgment: Why Artificial Intelligence Increases the Importance of Humans in War.” International Security 46 (3): 7–50. https://doi.org/10.1162/isec_a_00425.

Horowitz, Michael C. 2018. “Artificial Intelligence, International Competition, and the Balance of Power.” Texas National Security Review 1 (3): 1–22.

Horowitz, Michael C., and Lauren Kahn. 2021. “Leading in Artificial Intelligence through Confidence Building Measures.” The Washington Quarterly 44 (4): 91–106.

“International Discussions Concerning Lethal Autonomous Weapon Systems.” 2021. Congressional Research Service.

Johnson, James. 2020. “Delegating Strategic Decision-Making to Machines: Dr. Strangelove Redux?” Journal of Strategic Studies, April, 1–39. https://doi.org/10.1080/01402390.2020.1759038.

King, Anthony. 2019. Command: The Twenty-First-Century General. Cambridge: Cambridge University Press.

Parker, Brig Gen Rob, and Cmdr John Stuckey. 2021. “US Military Tech Leads: Achieving All-Domain Decision Advantage through JADC2.” Defense News. December 6, 2021. https://www.defensenews.com/outlook/2021/12/06/us-military-tech-leads-achieving-all-domain-decision-advantage-through-jadc2/.

Payne, Kenneth. 2021. I, Warbot: The Dawn of Artificially Intelligent Conflict. London: Hurst Publishers.

“Strategic Computing.” 1983. Defense Advanced Research Projects Agency. Internet Archive. https://archive.org/details/DTIC_ADA141982/page/n1/mode/2up?q=%22strategic+computing%22.

Waldrop, M. Mitchell. 2018. The Dream Machine. San Francisco, CA: Stripe Press.

Whaley, Barton. 2007. Stratagem: Deception and Surprise in War. Norwood, MA: Artech House.

Winner, Langdon. 1977. Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought. MIT Press.
