China and the USA employ different strategies to put their AI-driven military dominance on display. Matter-of-fact tech policies and national strategies alternate with messages of national superiority. This section focuses on this particular realm of political communication and employs a comparative analysis of both countries, dissecting how LAWS as AI imaginaries are employed as geopolitical signifiers of national particularities. It analyses them in terms of the military doctrines and AI imaginaries they promote (“Military doctrines, autonomous weapons and AI imaginaries” section) and the definitions of autonomous weapons they establish (“Technological definitions and normative understandings of AWS” section), both of which serve particular goals in political communication.
Military doctrines, autonomous weapons and AI imaginaries
Foreign geopolitics is embedded in military doctrines, which serve as signalling landmarks for military forces, the reallocation of strategic resources and technological developments. The empirical material at hand offers layers of analysis hinting at national SIs that place AWS in broader frameworks. These frameworks inform the populace, allies and adversaries about national aspirations, while presenting military self-assurance as a tool to look into a nationally desired future (see “Approaching autonomous weapons embedded in sociotechnical imaginaries” section). Here, AWS act as an empty and hence flexible signifier, a proxy for a society that exhibits different national idealisations of social life, statehood and geopolitical orders.
Military doctrine: The United States of America
In January 2015, the Pentagon published its Third Offset Strategy [US.PosP2]. Here, the current capabilities and operational readiness of the US armed forces are evaluated in order to defend the position of the USA as a hegemon in a multipolar world order. The claimed military “technological overmatch” [ibid.], on which the USA’s clout and pioneering role since the Second World War are based, is perceived as eroding. The Pentagon warns in a worried tone: “our perceived inability to achieve a power projection over-match (...) clearly undermine [sic], we think, our ability to deter potential adversaries. And we simply cannot allow that to happen” [ibid.].
The more recently published “Department of Defense Artificial Intelligence Strategy” [US.PosP5] specifies this concern with AI as its reference point. Specific claims are already made in the subtitle of the paper: “Harnessing AI to Advance Our Security and Prosperity”. AI should act as “smart software” [US.PosP5, p 5] within autonomous physical systems and take over tasks that normally require human intelligence. US research policy especially targets spending on autonomy in weapon systems, which is regarded as the most promising area for advancements in attack and defence capabilities, enabling new trajectories in operational areas and tactical options. This is specified with reference to current advancements in ML: “ML is a rapidly growing field within AI that has massive potential to advance unmanned systems in a variety of areas, including C2 [command and control], navigation, perception (sensor intelligence and sensor fusion), obstacle detection and avoidance, swarm behavior and tactics, and human interaction”.
Given that such ML processes depend on large amounts of training data, the DoD announced its Data Strategy [US.PosP11], couched in a claim of geopolitical superiority: “As DoD shifts to managing its data as a critical part of its overall mission, it gains distinct, strategic advantages over competitors and adversaries alike” (p 8). In the same vein, and under the perceived threat of being outrivalled, “the DoD Digital Modernization Strategy” [US.PosP7] lets any potential adversaries know: “Innovation is a key element of future readiness. It is essential to preserving and expanding the US military competitive advantage in the face of near-peer competition and asymmetric threats” [US.PosP7, p 14]. Here, autonomous systems carry the salvific promise of technological progress, which is supposed to secure the geopolitical needs of the USA.
With specific regard to LAWS, the US Congress made clear: “Contrary to a number of news reports, U.S. policy does not prohibit the development or employment of LAWS. Although the USA does not currently have LAWS in its inventory, some senior military and defense leaders have stated that the USA may be compelled to develop LAWS in the future if potential US adversaries choose to do so” [US.PosP12, p 1].
Remarkably, the USA republished the very same Congress Paper in November 2021, with just a minor but decisive alteration: it changed “potential U.S. adversaries” into “U.S. competitors” [US.PosP14]. While it remains unmentioned (and presumably deliberately so) who is meant by both the “senior military and defence leaders” and the so-named “U.S. competitors”, this minor change hints at a subtle but carefully orchestrated strategic tightening of rhetoric, sending out the message that the USA acknowledges a worsening of the geopolitical situation with regard to AWS development. In reaction, the USA continues to weaken its own standards for operator control over AWS in the most recent 2022 Congress Paper (as of May 2022), reframing human judgement: “Human judgement [sic!] over the use of force does not require manual human “control” of the weapon system, as is often reported, but instead requires broader human involvement in decisions about how, when, where and why the weapon will be employed” [US.PosP16]. Certainly, this rhetorical “broadening” of the US position lowers the threshold for employing AWS in combat, ever more distancing the operator from the machine.
This stands in stark contrast to the US position in earlier rounds of the CCW process; here, the USA not only claims that advancements in military AI are a geopolitical necessity but also portrays LAWS as desirable from a civilian standpoint, identifying humanitarian benefits: “The potential for these technologies to save lives in armed conflict warrants close consideration” [US.CCW3, p 1]. The USA lists prospective benefits in reducing civilian casualties, such as increasing commanders’ awareness of civilians and civilian objects, striking military objectives more accurately and with less risk of collateral damage, or providing greater standoff distance from enemy formations [US.CCW3]. Bluntly, the USA tries to portray LAWS as not only in accordance with but beneficial to International Humanitarian Law and its principles of proportionality, distinction and the prohibition of indiscriminate effects (see also “Technological definition: United States of America” section). While such assertions are highly debatable and have been rejected by many [1, 5, 7, 8], they do shed a very positive light on military technological progress, equating it with humanitarian progress.
In a Congress paper on AWS published in December 2021, these humanitarian benefits are mentioned once more, but only very briefly, while a sharpening of the rhetoric is clearly noticeable. The paper also summarises the CCW positions of Russia and China, implicitly clarifying who is meant by “U.S. competitors” (see above). China is accused, albeit only indirectly, by invoking the observation that “some analysts have argued that China is maintaining “strategic ambiguity” about its position on LAWS” [US.PosP15, p 2]. This is the first time the USA overtly expresses in a position paper that it understands the AWS negotiations as a political power play rather than as serving the aim of finding a unanimously agreed regulation.
In sum, the USA claims a prerogative as the dominant and legitimate geopolitical player in a multipolar world order, one that is under external threat. The ability to defend military supremacy against lurking rivals is portrayed as depending on the level of technological development of the armed forces, exemplified by LAWS. The US claim to hegemonic leadership can only be secured by maintaining technological superiority.
Military doctrine: China
The doctrinal situation in China is more complex and ambivalent. In 2003, the Chinese Communist Party (CCP) and the People’s Liberation Army (PLA) announced the concept of the “Three Warfares”, a military guideline for enforcing Chinese geopolitical interests that has been systematically embedded in the PLA’s military doctrine in recent years [52]. This concept promotes the objective of framing key strategic arenas of foreign policy in one’s favour, so that kinetic (physical military) interventions appear irrational to opponents. This framing, also known as “information warfare” [53], insinuates that international conflicts are decided less by armies carrying off the victory than by the media narratives that gain the upper hand in interpreting events.
The concept of the “Three Warfares” has been discussed by numerous authors [52,53,54,55,56] and encompasses the following dimensions: so-called psychological warfare aims to influence or disrupt an opponent’s ability to make decisions. This includes practices that deter, shock or demoralise competitors. Media warfare, on the other hand, aims at influencing and manipulating national and international public opinion in order to generate support for China’s military interventions. This entails constant and insistent media exposure, which aims to shape the perceptions and attitudes of the domestic or enemy population. The third dimension is a legal one (“lawfare”): creative distortions and omissions, conceptual vagueness and loopholes in regulations and international legal conventions serve the purpose of expanding one’s own operational possibilities while simultaneously thwarting opponents in their scope of action. This instrumentalisation of the legal framework should be understood as a means of a “rule by law not rule of law” [54].
The strategic orientation of the “Three Warfares” also reflects a concession to the current military and geopolitical supremacy of the USA. While the USA claims its global leadership with rhetorical boldness, China sketches a military SI of an “underdog”, focussing on tactics of asymmetric warfare. This enables it to avoid direct military confrontation on all fronts and to deploy a policy of “shashoujian” (杀手锏), best translated as a “trump-card” approach [57,58,59]. Instead of competing with the USA in all strategic arenas, this doctrine pursues a selective approach, fostering military technology that “the enemy is most fearful of”, accompanied by the call that “this is what we should be developing” [60].
However, in recent strategy papers, China has presented itself more confidently. As with the US, AI now plays a crucial role as a “cutting-edge” technology in China’s foreign policy aspirations [61,62,63,64,65].
The AlphaGo win over professional Go player Lee Sedol in 2016, which received a lot of media attention in China (280 million live viewers), was dubbed by some authors a Chinese “Sputnik moment” [66, 67], hence a wake-up call, which may well have contributed to the massive increase in tech-industry and research spending. Certainly, with the 2017 “New Generation Artificial Intelligence Development Plan”, the CCP also embraces these bold AI ambitions rhetorically by emphasising the need to “grasp firmly the strategic initiative of international competition during the new stage of artificial intelligence development [and] create new competitive advantage” [CH.PosP4, p 2]. The CCP decisively calls for a technological superiority designed “to build China’s first-mover advantage in the development of AI” [CH.PosP4, p 1].
This new confidence and these ambitions are nevertheless paired with a multilateralist, appeasing and peacekeeping positioning [CH.PosP9]. China claims full sovereignty and strict non-interference in questions of national interest and security. This relates to, among other things, the one-China unification principle (e.g. directed at Taiwan: “China must be and will be reunited”) or territorial claims (e.g. “safeguard China’s maritime rights and interests”). Beyond this sphere of national interest, the CCP pictures a military SI of a global hegemon without expansionist aggression (“Never Seeking Hegemony, Expansion or Spheres of Influence”). Sources of instability are located elsewhere, namely in local “separatism” and foreign aspirations, with “order [...] undermined by growing hegemonism, power politics, unilateralism and constant regional conflicts and wars”. At the same time, the USA is blamed directly for posing a threat to “global strategic stability” [CH.PosP9].
In sum, China’s military SI depicts a global player that has caught up with its rivals at a military level. The CCP adjusts its doctrines and strategies pragmatically, moving from an underdog position to that of an assertive hegemon, clearly articulating its geopolitical claims and the means to achieve them. As in the USA, military doctrines are clearly linked to modernist narratives of technological progress, incorporating intelligent weaponry such as AWS as a means to outrival competitors. The technological race for supremacy in this key strategic technology is perceived as open, with China claiming legitimate ambitions.
Technological definitions and normative understandings of AWS
The USA and China have published national strategy papers as well as position papers at the CCW that are of a technical nature, aiming to define AWS. These documents have to be read against the backdrop of the larger SIs introduced above (“Approaching autonomous weapons embedded in sociotechnical imaginaries” section), which motivate and legitimate the state’s strategic interpretative flexibility in creating and promoting AWS definitions. Hence, these documents not only inform which understanding—and technological variation—of autonomous weapon systems is to be prioritised, but further raise the question of what greater ends these specific interpretations serve. For example, in much the same way as the US American definitions of AWS, the Chinese “lawfare objectives” keep a backdoor open for developing automated weapons that escape the poor attributions of autonomy found in the AWS documents, leaving many military applications legally and politically unaffected. A closer look at the national AWS definitions in the following sections will illuminate this issue.
Technological definition: United States of America
The DoD Directive 2012/2017 [US.PosP1, emphasis added] provides seemingly unequivocal definitions:
“Autonomous weapon system. A weapon system that, once activated, can select and engage targets without further intervention by a human operator. This includes human-supervised autonomous weapon systems that are designed to allow human operators to override operation of the weapon system, but can select and engage targets without further human input after activation.”
“Semi-autonomous weapon system. A weapon system that, once activated, is intended to only engage individual targets or specific target groups that have been selected by a human operator.”
A first problem with the US definition arises with the role of the human operator as a defining criterion for autonomy. As discussed in “Definitions focusing on the degree of human control over supposedly autonomous systems” section, conceptually, the USA advocates a relational approach to autonomy, linking it to the human presence. But the essential question of what an autonomous system comprises cannot simply be addressed by determining whether a human is in the loop or not. The degree of human intervention may tell us how to use such weaponry, but it does not help much in defining what it is. As Crootof clarifies: “If a weapon system has the capacity to independently select and engage targets, whether there is a human supervisor or whether it is operated in a semi-autonomous mode is a question of usage—and thus regulation—and not of autonomy” [11]. Very powerful weapons can be controlled by an operator and restrained such that their fire power (e.g. operational speed, fire range or power of devastation) is actually rarely fully in use. But from this observation, we can hardly deduce that we have arrived at the very essence of what the weaponry actually is and what it is capable of. While the role of human intervention in AWS is a much-needed ethical and political debate (though not one without pitfalls, as various authors have discussed regarding “meaningful human control” [24, 68,69,70,71,72]), treating it as a defining characteristic of AWS only raises further confusion.
More problematically, making a definition of AWS dependent on human intervention creates new loopholes for escaping effective legal regulation. The fundamental problem with the DoD definition stems from the fact that its standards for autonomy are simply very low—indeed, it does not do justice to the term autonomy at all. The definition does not engage with the complexity of the term or clarify what is really meant by autonomy. Should autonomy rather be understood as self-sufficiency, or as self-directedness, and hence as independence from outside control [73] (see “Technical definitions of autonomy and autonomous weapons systems” section)? Also, as problematised above, operation under pure autonomy, as the DoD document suggests, is a myth, since any technical device is influenced by external factors such as technical infrastructure, terrain, etc.
In essence, the DoD reduces the term autonomy to a process of automation: any (non-)trivial system—whether mechanical or algorithm-based—that, once activated, automatically processes tasks (hence, without further human intervention) and interacts with an environment would meet this criterion. Following the US reasoning, it is extremely hard to differentiate between advanced and very rudimentary mechanical or algorithmic systems, as literally any of them can be reduced to processes of automation. Thus, reducing autonomy to a process of automation introduces the notion of a continuum, making a clear differentiation among weaponry ubiquitously labelled “intelligent” impossible and the distinction between full and mere semi-autonomy ever more complicated (cf. “Definitions focusing on the degree of human control over supposedly autonomous systems” section).
Take, for example, radar detection systems, which have been in use for decades and are capable of identifying, selecting and targeting enemy objects without the need for human intervention. The only difference between such systems and AWS would be the capability of automatically engaging these targets. But weapon systems that fulfil this additional criterion have existed for years already, perhaps the best example being the Phalanx system [74], which has been in use since the 1980s and hardly raised any regulatory concern back then [75]—especially not from the US side.
Problematically, the DoD definition cannot account for military advancements in fire power or complex machine behaviour, such as adaptation enabled through new data processing capabilities in machine learning—leading to a myriad of new problems such as unpredictability [76, 77] or opacity [78, 79] of machine behaviour, which are connected to safety, incomprehensibility and accountability issues well known from the civil AI regulatory debate. These phenomena in turn raise the fundamental question of whether deploying LAWS violates the Geneva Conventions and IHL. If machine behaviour becomes ever more unpredictable, opaque and complex, it is debatable whether the Geneva principles of IHL (distinction, proportionality and accountability, including towards persons hors de combat) can be met at all [80,81,82].
The USA has never claimed to refrain from developing LAWS; in fact, it has even cherished their advantages (see “The United States of America” section [US.CCW3]) and, as discussed above, warns adversaries that it may “develop LAWS in the future if US competitors choose to do so” [US.PosP15]. This statement is, if one takes the DoD definition as a reference, strictly speaking, false. As discussed in relation to the Phalanx system, the USA has used LAWS in the past already and still does so today [US.PosP12] [83, 84].
In conclusion, the DoD definition has the problematic effect of levelling so many weapon systems into one category that the critical advancements in weapon capabilities now underway cannot be accounted for (making compliance with the Geneva principles more challenging). With such a vague and all-encompassing definition, effective legal regulation becomes ever more complicated, ensuring that national advances in the development of LAWS are not impeded.
Technological definition: China
China’s contributions to the discussions at the CCW are rather limited, but they serve well to illustrate China’s ambivalent stance on AWS, echoing its international normative positioning (as introduced in “Military doctrine: China” section). Their ambiguity helps to keep a strategic backdoor of optionality open. In the 2017 CCW negotiations, China adopted a positive stance on international regulation, favouring preventive arms control: “The international community should follow the concept of universal security on the basis of existing international law, carry out preventive diplomacy, check the trend of an arms race in the high-tech field and maintain international peace and stability” (12th December 2017, p 5). This is in accordance with the multilateralist stance voiced in the country’s general AI policy trajectory (“Actively participate in global governance of AI (...), Deepen international cooperation in AI laws and regulations, international rules (...) and jointly cope with global challenges” [CH.PosP4, p 25] [85]).
Such a preventive regulatory stance was viewed more critically in 2018. Here, China states that “(...) the impact of emerging technologies deserve objective, impartial and full discussion. Until such discussions have been done, there should not be any pre-set premises or prejudged outcome, which may impede the development of AI technology” [CH.CCW2, p 2]. This rather innovation- and military-friendly policy reveals clear reservations about a precautionary principle that would regulate LAWS restrictively and prevent an AI arms race. The ambivalence seems even more striking when looking at the Chinese LAWS definition presented at the CCW:
Definition [CH.CCW2, p 1, enumeration added by authors for better overview]
According to the Chinese view, “LAWS should include but not be limited to the following 5 basic characteristics”:
(1) Lethality, “which means sufficient pay load (charge) and for means to be lethal”;
(2) Autonomy, “which means absence of human intervention and control during the entire process of executing a task”;
(3) Impossibility for termination, “meaning that once started there is no way to terminate the device”;
(4) Indiscriminate effect, “meaning that the device will execute the task of killing and aiming regardless of conditions, scenarios and targets”;
(5) Evolution, “meaning that through interaction with the environment the device can learn autonomously, expand its functions and capabilities in a way exceeding human expectations”.
Conceptually, these LAWS criteria display a pick-and-mix approach: the first states the obvious; the second shows strong similarity to the US definition (with its discussed pitfalls); the fourth signals compliance with the Geneva principles of IHL; and the fifth hyperbolises, picking the fancy term “evolution” (hence borrowing imagery from the biological domain and maybe even evoking fantasies of an organic, autopoietic and reproductive machinery creating awe by exceeding human capabilities) to label adaptation in machine learning processes.
The real crux lies in the third of these criteria, which hypothesises that, once started, there is no way to terminate the device. In essence, this scenario describes a universally destructive and frankly absurd idea. Machines are not perpetual motion machines but rely heavily on infrastructure, supervision, context, etc.—so, clearly, machine self-sufficiency is a myth (see “Technical definitions of autonomy and autonomous weapons systems” section). Strictly speaking, these criteria depict sensational doomsday fiction, once more demonstrating the hybridity of the entire AWS discourse, in which realpolitik, imagination, possibility and fiction are conflated [86] (“Approaching autonomous weapons embedded in sociotechnical imaginaries” section).
It is exactly these unrealistic criteria for autonomous weapons that maintain the idea of promoting seemingly less dangerous—merely “automatic”—weapon systems, undermining national and international legislative efforts. Where the US definition sets the benchmark for AWS too low, the Chinese definition sets it too high, rendering the existence of AWS near science fiction. Hence, demands to ban AWS following these criteria can largely be understood as a political gesture of purely symbolic value. Implicitly, the development of autonomous and semi-autonomous weapon systems is not only tolerated but by definition appears as a legitimate course of action. This perfectly voices the objectives laid out in so-called asymmetric lawfare (see “Military doctrines, autonomous weapons and AI imaginaries” section): the legally vague, even bland criteria applied in the description and definition of LAWS have the intended effect of not curtailing one’s own political scope of action.
In conclusion, both countries oppose a complete ban on AWS, and with the definitions they promote at the CCW, they certainly leave a backdoor open for further development and use.