
“Autonomous weapons” as a geopolitical signifier in a national power play: analysing AI imaginaries in Chinese and US military policies


Abstract

“Autonomous weapon systems” (AWS) have been subject to intense discussions for years. Numerous political, academic and legal actors are debating their consequences, with many calling for strict regulation or even a global ban. Surprisingly, it often remains unclear which technologies the term AWS refers to and also in what sense these systems can be characterised as autonomous at all. Despite being feared by many, weapons that are completely self-governing and beyond human control are more of a conceptual possibility than an actual military reality.

As will be argued, the conflicting interpretations of AWS are largely the result of the diverse meanings that are constructed in political discourses. These interpretations convert specific understandings of AI into strategic assets and consequently hinder the establishment of common ethical standards and legal regulations. In particular, this article looks at the publicly available military AI strategies and position papers by China and the USA. It analyses how AWS technologies, understood as evoking sociotechnical imaginaries, are politicised to serve particular national interests.

The article presents the current theoretical debate, which has sought to find a functional definition of AWS that is sufficiently unambiguous for regulatory or military contexts. Approaching AWS as a phenomenon that is embedded in a particular sociotechnical imaginary, however, flags up the ways in which nation states portray themselves as part of a global AI race, competing over economic, military and geopolitical advantages. Nation states do not just enforce their geopolitical ambitions through a fierce realpolitik rhetoric but also play around with ambiguities in definitions. This especially holds true for China and the USA, since they are regarded and regard themselves as hegemonic antagonists, presenting competing self-conceptions that are apparent in their histories, political doctrines and identities. The way they showcase their AI-driven military prowess indicates an ambivalent rhetoric of legal sobriety, tech-regulation and aggressive national dominance. AWS take on the role of signifiers that are employed to foster political legitimacy or to spark deliberate confusion and deterrence.

Introduction

The development of so-called autonomous weapon systems (AWS) has been the subject of intense discussions for years. Numerous political, academic and legal institutions and actors are debating the consequences and risks that may arise with these technologies, in particular their ethical, social and political implications, and many have called for strict regulation or even a global ban [1,2,3].

In these public debates, the attribute “lethal” is sometimes added to the term AWS, underlining the potential severity of the consequences this technology entails. Surprisingly, and despite the urgent need to deal with “Lethal Autonomous Weapon Systems” (LAWS), it is often unclear which technologies the term (L)AWS primarily refers to. The associated definitions describe a range of phenomena, from landmines to combat drones, from close-in weapon systems (CIWS) to humanoid robot soldiers or purely virtual cyber weapons. Besides this terminological ambiguity, it is inherently unclear in what sense or to what degree these systems can be characterised as “autonomous” at all. Even though the development of automatic or semi-autonomous capabilities is generally advancing, fully autonomous weapons that are completely beyond human control—which is the reason why they are feared by many—largely represent a conceptual possibility at present rather than an actual military reality (“Technical definitions of autonomy and autonomous weapon systems” section).

While the current debate around the possibility and functionality of AWS is certainly not a novel phenomenon but one that has also been highly influenced by fictional works of the past [4], it has regained prominence in recent decades with technological advancements in artificial intelligence (AI), especially with accelerating machine learning (ML) data processing capabilities. Civil society initiatives [5, 6], scientists [7, 8] and political bodies have raised political concerns about emerging “intelligent” and “autonomous” weapon systems with lethal capabilities that go beyond human control. As much as the debate has been guided by the agendas of different stakeholders pursuing (de-)regulation, the discourse around AWS has developed alongside other genres such as doomsday stories in journalism, Hollywood cinema or science-fiction literature, which exploit the idea of looming “killer robots”. Besides promoting a certain idea of what AWS are and what they are capable of, these genres also intensify the political debate by adding a high degree of urgency.

As will be argued, the conflicting interpretations of AWS are largely the result of diverse meanings that are constructed in political discourses. They convert specific understandings of AI into strategic assets and, as a political consequence, hinder the establishment of common international ethical standards and legal regulations. Hence, the perspective we present not only reveals AWS to be powerful signifiers of political culture but also shows how they are instruments employed to foster political legitimacy or to spark deliberate confusion and deterrence between rival states.

In particular, this article looks at the publicly available military AI strategies and position papers of China and the USA and, informed by sociotechnical imaginaries [9, 10], analyses how this technology is politicised to serve particular national roles and interests. The ways these two nations showcase their AI-driven military prowess send out unmistakable messages about national dominance and a desired geopolitical order. They make obvious the ways in which nation states portray themselves as part of a global AI race, competing over economic, military and political advantages. This holds especially true for China and the USA, since they are regarded, and regard themselves, not only as international hegemons but also as antagonists, promoting competing self-conceptions that are apparent in their histories, political doctrines and identities.

In turn, the analytical focus on these hegemonic powers will inform European debates on AWS, since these discussions are far from representing one unified stance. Identifying the similarities and differences between China and the USA makes it possible to recognise prototypical patterns, which at the same time puts the multitude of different AWS positions among European nations into a larger global perspective. The analysis explicitly focuses on military strategy documents in an effort to complete the picture formed by national AI aspirations and more general public discourses. This subdomain of AWS imaginaries was chosen specifically because it brings to the fore the deliberate meanings voiced by military actors as part of political communication.

The article first dissects the current academic debate regarding a definition of AWS that would be sufficiently unambiguous for regulatory or military contexts; key issues in this debate have been concepts such as “autonomy”, “degree of human control” or a “functional understanding of AWS” (“The challenges of defining autonomous weapon systems” section). It is the meaning of these AWS-related concepts that, among other dimensions, constitutes the reference point in the geopolitical arena between the USA and China. They not only provide information about technical details but can be utilised to fulfil specific functions in asserting national interests. In order to approach and analyse AWS from this realpolitik perspective, we introduce the concept of the “sociotechnical imaginary” (SI) as the theoretical frame (“Approaching autonomous weapons embedded in sociotechnical imaginaries” section). We then showcase the empirical material (“Methodology” section), consisting of position papers taken from the debate at the United Nations (UN) Convention on Certain Conventional Weapons (CCW) and standpoint papers published by the executive ministries of both nations. The analysis sections portray AWS as geopolitical signifiers and approach the strategies as a form of political communication that is pursued as part of military AI imaginaries (“Military doctrines, autonomous weapons and AI imaginaries” section). AWS are a central element of the goals both nations pursue in the realm of geopolitical communication. Differing definitions and normative understandings of AWS are deliberately employed to serve national interests and, consequently, make it more difficult to reach a UN regulatory consensus (“Technological definitions and normative understandings of AWS” section).

The challenges of defining autonomous weapon systems

The different approaches to defining AWS constitute an arena of competing interpretations of what the technology is capable of and, above all, which reference points to consider in order to regulate specific capabilities. While the current debates on autonomous weapon systems mainly focus on regulatory questions, military simulation games or political and tactical scenarios, the power of interpretation over what AWS are and what capabilities they comprise remains contested. These questions are neither simply a problem of engineering nor purely conceptual in nature; they also borrow from the realm of fiction. It is essential to acknowledge that the prerogative of shaping the meaning of the technology creates both semantic and political dominance—and states take advantage of this opportunity.

In order to narrow down a comprehensible understanding, three different approaches can be roughly distinguished: The first focuses on the attribute “autonomous”, which evokes a wide array of traditional associations with the concept of autonomy; the second approach takes into account different degrees of human control over the automated processes and in doing so addresses questions of human/machine interaction. While it is obvious that both definitional approaches are directly interwoven—in a complementary fashion even, since the more autonomous the machines are, the less human control can be exercised—they still refer to distinct conceptual meanings and traditions. The third and most recent strategy promotes a primarily functional understanding of AWS that focuses on actual capabilities and seeks to transcend essentialist definitions that are more concerned with the innate conceptual qualities of the technology.

Technical definitions of autonomy and autonomous weapon systems

One possible way of defining the concept of autonomy is to treat it as a technically determining and distinguishing feature; indeed, this already seems self-evident from the attribute “autonomous” alone. In this sense, an “autonomous” weapon system is one that, “based on conclusions derived from gathered information and preprogrammed constraints, is capable of independently selecting and engaging targets” [11]. While automated systems are merely “triggered”, such systems, in this understanding, can independently “select” and “engage” different targets based on case-specific information.

The concept of autonomy is widely used in philosophy, psychology, human cognition and other disciplines and carries (often contested and contradictory) meanings that range from anthropocentric understandings to political contexts or aesthetics [12,13,14]. It has become a quite commonplace term in AI discourses, where it commonly evokes clear associations with characteristics such as independence, intelligence, self-governance, the ability to learn and adapt (e.g. orientation in unknown, unstructured and dynamic environments) or the execution of self-determined decisions. Its ubiquitous use, however, which also shapes non-expert debates on AI, has contributed to the erosion of its semantic qualities.

Even when one narrows down the concept to a more specific technical sense, ambiguities persist. Bradshaw et al. emphasise that there are two different understandings of autonomy in the context of machines: “In the first sense, it denotes self-sufficiency—the capability of an entity to take care of itself. The second sense refers to the quality of self-directedness or freedom from outside control. [...] It should be evident that independence from outside control does not entail the self-sufficiency of an autonomous machine. Nor do a machine’s autonomous capabilities guarantee that it will be allowed to operate in a self-directed manner. In fact, human-machine systems involve a dynamic balance of self-sufficiency and self-directedness”. At the same time, since no entity can be seen as completely independent of its environment, the term autonomous system would in a strict sense even count as a “misnomer” [15].

Furthermore, the different interpretations of machine autonomy in the context of AWS are usually embedded in either optimistic or dystopian discourses, which in turn firmly shape the understandings of autonomy as well, in particular the sense of “what autonomous machines can and cannot do” [16]. It is exactly this interpretative openness that makes AWS an important reference point in the politico-strategic interactions of rivalling states, which are continuously struggling over a clear definition. A consensus on what can be regarded as an autonomous weapon is seen as a first step towards the legally binding regulation of these technologies.

These semantic issues are discussed at the regular (annual or biannual) meetings of the state parties to the protocols of the CCW, which was adopted in 1980 (cf. “Methodology” section) [17]. Politically, the terminological ambivalence and polysemy opens the door for disagreement at the CCW on how to define “autonomy” (cf. “Technological definitions and normative understandings of AWS” section). This, as a direct consequence, has also led to the failure to regulate autonomous weapons [18]. Paradoxically, even a common terminology can make the discourse on AWS more complicated, “when the terms involved lack consistent interpretations”. The often metaphorical use of “autonomy” and its ambiguities creates uncertainty when military robots are treated as black boxes. Only when the human decision-making processes in the design, production and programming of autonomous machines are understood can questions of agency and responsibility be discussed intelligibly [19].

This is why solely looking for ways to define AWS in terms of the concept of autonomy cannot be sufficient, as the label “autonomous” evokes a whole spectrum of meanings that nonetheless does not present us with finite categorical distinctions. Even the more precise term of so-called technical autonomy refers to a continuum, a point made obvious by the need to employ auxiliary vocabulary such as “semi-autonomous”. In short, the term “autonomous” alone—even when defined technologically and hence relatively unequivocally as the “capabilities” of AWS—is not enough to grasp the complexity of these systems, since a weapon must also be understood through the ways it presents itself in manifold contexts.

Definitions focusing on the degree of human control over supposedly autonomous systems

Another approach to defining AWS involves determining the degree of human control over a weapon system that remains unaffected despite a higher degree of automation. In particular, it was the notion of in, on and out of the loop—emphatically used not in the sense of an inherent technical property, but in relation to human agency—that gained prominence in the debate. “In-the-loop” refers to control directly executed by humans (an action must be initiated), “on-the-loop” refers to systems whose actions can be prevented or aborted by human intervention, and, finally, “out-of-the-loop” is the term commonly used for systems that no longer require human control but whose processes are, most of the time, nonetheless still monitored by human agents.

According to this approach, weapon systems are to be called autonomous if they reduce the possibility of human intervention to a minimum, up to the point where they no longer require or even allow human control at all. It reflects a relational understanding of autonomous weapons in terms of the possibility of human intervention and agency and hence can be seen as part of a broader model conceptualising human/machine relationships.

In practice though, the focus on a relational understanding of agency and automation still comes with terminological challenges. One of these challenges refers to the vague distinction between automation and autonomy. As Sauer notes: “After all, automatic systems, targeting humans at borders or automatically firing back at the source of incoming munitions, already raise questions relevant to the autonomy debate” [20]. Similarly, defining the degree of human control as a continuum is at best a measurement metric, as the complex interactions cannot always be clearly attributed to either the human or the machine [21]. Further complicating this approach, this distinction says little about the “autonomy” of the system itself, but at best classifies the possibilities for curtailing it [11]. In other words, even a weapon system that could be called autonomous in a technical sense (cf. “Technical definitions of autonomy and autonomous weapon systems” section) can easily fall short of these expectations and functional properties if it is deliberately limited and curtailed in a context that is controlled by humans (see “Technological definitions and normative understandings of AWS” section for a detailed analysis of the terminology used in US national strategy papers regarding AWS). The questions remain whether it makes sense to regard such a system as “autonomous” and whether the attribute conveys a useful meaning at all. As Ekelhof comments, “any consensus among states, academia, NGOs, and other commentators involved in diplomatic efforts under the auspices of the CCW ... seems to be grounded in the idea that all weapons should be subject to ‘meaningful human control’ (or a similar standard). This intuitively appealing concept immediately gained traction, although at a familiar legal-political cost: nobody knows what the concept actually means in practice” [22] (see also “Technological definition: United States of America” section).

Functional approaches to what “autonomous weapon systems” can and cannot do

The terminological vagueness partly explains more recent endeavours to find a functional definition of AWS. As we will see, however, these task-specific approaches rearrange and combine the conceptual and relational understandings discussed above and engender problems of their own, even though they attempt to break AWS down to actual functionalities in practical settings.

The most common route to a functional understanding of autonomous weapons at present is a task-based focus on “selecting” and “engaging” a target, which reframes the above definitions but puts stronger emphasis on what these functions comprise and entail in specific practical settings. The US Department of Defense (DoD) has defined an AWS as a “weapon system that, once activated, can select and engage targets without further intervention by a human operator. This includes human-supervised autonomous weapon systems that are designed to allow human operators to override operation of the weapon system, but can select and engage targets without further human input after activation” [US.PosP1] (see “AWS as geopolitical signifiers: strategies in political communication in China and the USA” section for a detailed analysis). This approach is gaining traction and political acceptance. The International Committee of the Red Cross (ICRC) defines AWS as “any weapon system with autonomy in the critical functions of target selection and target engagement”, that is, a weapon system that can select (i.e. detect and identify) and attack (i.e. use force against, neutralise, damage or destroy) targets without human intervention [23]. Commentators have emphasised that the “adoption of the ICRC’s definition—or one like it—” is “strongly advisable”, paired with a call for a “concerted response by the international community” to the continued development of these kinds of weapons [24].

Ekelhof notes that the “main focus within this definition lies on the so-called critical functions of target selection and attack and the absence or lack of human intervention in relation to the system’s autonomy” [25]. Both target selection (sometimes meaning the mere distinction between combatants and non-combatants, sometimes referring to larger planning processes) and attack (raising the questions of what constitutes an individual attack or when exactly it starts and ends), in the end, bear their own ambiguities, albeit in a less obvious manner [26].

Even efforts to define AWS by focusing on specific tasks fail to establish a common ground that would clearly distinguish them from previous weapon systems while at the same time meeting the expectation of unambiguously pinpointing their functionality. Both “autonomy” and “meaningful human control” are volatile signifiers. The same, however, applies to the automated tasks that are interpreted as constitutive of autonomous weapons, since these tasks are embedded in military practices, infrastructures and concrete situations that eventually determine the effects and degrees of autonomy. In other words, the contexts produce the conditions under which the agency of an autonomous weapon is determined.

Hopes that a functional, task-oriented definition of AWS (specifically singling out target selection and engagement) would neatly solve the ambiguity problem are bound to be disappointed. Even the more precise terminology is subject to political discourses, in which different actors deliberately utilise diverging meanings, interpretations and definitions to pursue particular political and geostrategic interests. This picture is complicated even further by voices from outside the political realm, which claim that current AWS technologies are not sophisticated enough to reasonably draw conclusions regarding their practical, legal or ethical consequences [27].

Both the conceptual and the task-centric approaches lead into a semantic recursion, as in all cases—irrespective of the level of theoretical abstraction—the necessity to agree on a static meaning of the terms cannot be met. One important issue usually neglected in these debates is the challenge of translating these terms back and forth between languages that are situated in vastly differing terminological and conceptual traditions (Bächle TC, Champion SC: Autonomous weapon systems. Journalistic discourses in China, forthcoming). These cultural differences manifest themselves in larger imaginaries, promoting specific expectations, hopes and fears around new technologies. They are promoted by fictional texts but also by public discourses. For AWS, the attribute “lethal” is a case in point: with the addition of the L in LAWS, the term comes to emphasise that these technologies are in line with expectations associated with so-called killer robots, evoking specific cultural images. These images foreground the potential harm that is associated with autonomous weapons outside of human control, extending to fears of the looming destruction of all humanity. The following section addresses the role of the larger sociotechnical imaginaries that shape and determine the ways in which AWS become meaningful technologies.

Approaching autonomous weapons embedded in sociotechnical imaginaries

Continuously re-semanticising AWS or bluntly denying the mere possibility of a reasonable discourse on them and their effects are two strategies used to drag out the efforts to find effective regulation. At the same time, AWS are only one of the many fields that shape the AI race between state actors and are rhetorically embedded in larger sociotechnical imaginations that are actively politicised. This becomes especially apparent when we look at the two self-proclaimed superpowers, China and the USA, both of which are striving for global dominance. In both instances, the national discourses around AWS act as signifiers that reveal projections of social, cultural and institutional imaginations. Arguably, these discourses function not only as meaningful narratives but also as effective instruments of geopolitical power (e.g. with the intention of deterrence) to enforce specific interests grounded in realpolitik.

The contradictory and contested meanings that are associated with and at the same time constitutive of AWS are embedded in larger narrative structures that in this article are regarded as an expression of vivid “sociotechnical imaginaries” [10]. In a well-known and influential understanding, Jasanoff defines sociotechnical imaginaries as “collectively held, institutionally stabilised, and publicly performed visions of desirable futures, animated by shared understandings of forms of social life and social order attainable through, and supportive of, advances in science and technology” [28]. In the continuation of this definition, the “desired futures” are juxtaposed with the “shared fears of harms that might be incurred through invention and innovation”. These imaginings between utopia and dystopia perfectly align with the discursive positions guiding the debates on AWS.

A vast body of research in the wake of Jasanoff’s initial coining of the concept has shown that imaginaries powerfully set boundaries to our futures, “shaping terrains of choices, and thereby actions” [29]. The diversification in approaches and research objects associated with the concept shows that SIs must always be understood as an open, contested and dynamic field influenced by a multitude of discursive arenas and players [10, 29, 30]. For example, AWS imaginaries are often influenced by popular culture, fiction or images used in journalism and inspired by more general assumptions about AI (Bächle TC, Bareis J, Ernst C (eds): The realities of autonomous weapons, forthcoming). The utopian and dystopian frames of reference for AI portray it as a kind of superintelligence with the potential to exceed (human) biology and unleash beneficial effects [31] (e.g. see the Chinese employment of “evolution” in “Technological definitions and normative understandings of AWS” section in the context of AWS), while the rise of technological agency poses grave ethical challenges [32]. AI can be seen as “a key sociotechnical institution of the twenty-first century” with state actors playing a pivotal role in shaping the images in which it is portrayed [33]. AI is strongly associated with specific meanings—and myths—about technological futures [34].

Sociotechnical imaginaries mediate between the contested realms of fact and fiction and “allow actors to move beyond inherited thought patterns and categories and into an as if-world different from the present reality” [35]. This also applies to AWS and the foregrounding of science-fiction-inspired technologies such as robots, which are promoted on the basis that they will play a vital part in future warfare [36, 37]. Today’s “military-entertainment complex” [38] is increasingly blurring the lines between the realities of war and its representation in popular culture (such as war games, which include tactics or threat scenarios). Drones, for example, have become emblematic of a type of warfare that is mediated, remote, networked, decentred and de-personalised. The particular “aesthetics” of drone images is represented in the arts, literature and film, and in this form, they also enter the public discourse, reifying a particular visual aesthetics of war [39]. This is a continuation of a type of consumable war that is televised, providing live images to the home viewer [40], a type of mediated war whose most recent iterations focus on cyberwars or the “weaponisation of social media” [41].

Paradoxically, it is exactly in this context of uncertainty—in which reality, imagination, possibility and fiction are conflated—that AWS become highly momentous, in particular when political or military decision-making comes to be based on potential or virtual scenarios [42, 43]. The debates around autonomous weapons usually focus on their legal, political or ethical ramifications, and this work is (at least in part) also based on such potential or virtual scenarios [44]. An ethical problem thus contributes to constructing, disseminating and maintaining a specific understanding of “(lethal) autonomous weapons” in popular culture, politics, journalism or research [45, 46]. Ethical debates are a major arena for imagining AWS, controversially situated between positions arguing that warfare could even become more “humane” (by more effectively adhering to international law and respecting human rights) when the actual acts of war are left to machines [3, 5] and the voices of AI and robotics researchers warning of dire consequences [7].

When AWS are approached as part of the AI imaginations deliberately promoted by nation states, it becomes obvious how countries actively portray themselves as part of a global technology race, competing over economic, military and geopolitical advantages. These AWS meanings are part of larger narratives of national identity, interwoven with specific ideologies and ideas of military self-assurance and pride, which in turn are utilised for the communicative goal of deterring political adversaries.

Comparing the USA and China in this regard is particularly fruitful and demonstrative, as they not only locate themselves in the geopolitical arena as rivals with their own interests, but also fundamentally oppose each other in their self-portrayal. This spans from guiding principles in state doctrine, political systems or general canons of values to the origin myths of these nations, representing competing self-conceptions that are apparent in their diverging histories and political identities.

Schematically, the USA’s hunger for greatness, exceptionalism and aspiration to take the role of a global hegemon contrasts with China’s confidently proclaimed ideal of a harmonised and stable society. AI is in both cases regarded as a means to realise these socio-political ideals, with supremacy achieved by technological prowess being a shared theme for both. The conceptual ambiguity of autonomous weapon systems makes their representation and interpretations a flexible tool in political communication. AWS can be seen as a proxy for the respective understanding of the world by China and the USA, a form of national self-assurance through technology.

Methodology

In this paper, we focus on the AWS strategies of China and the USA. Obviously, this selection of countries is not exhaustive, but as discussed above, it captures the overtly competing, even antagonistic stances rooted in the ideological, institutional and historical narratives of the two nations. These differences become particularly apparent in the military guidelines for reaching their respective ambitions. Both China and the USA position themselves as global leaders and articulate their geopolitical interests in the AI race, be it in the form of “hard” or “soft” power. Notwithstanding the position of these two states, the striving for military advantage and the global regulation of AWS involve many other nations, especially Russia, Israel, South Korea, the UK, Australia, Germany and France. These countries also host companies that are leaders in robotic military innovation, and their governments actively engage in or are confronted with geopolitical tensions and conflicts.

As discussed above (“Approaching autonomous weapons embedded in sociotechnical imaginaries” section), sociotechnical imaginaries encompass broad concepts such as social order and nationhood. For this reason, the empirical material we refer to in the analysis necessarily reflects only a fraction of the multitude of cultural texts that fuel particular meanings of AWS. Our objective is to focus specifically on those imaginations around AWS promoted in state military contexts, and hence we draw on two main discursive arenas: Firstly, the negotiation process at the CCW represents the international regulatory forum of the UN, with talks taking place in Geneva since April 2013 [47]. Here, the USA and China have issued multiple position papers via the Group of Governmental Experts (GGE) on LAWS regarding the ongoing negotiations. They give their stance on definitional issues, the role of technical features and human intervention, with a view to reaching a final, unanimously agreed upon UN protocol. The negotiations are still ongoing in 2022 and have been characterised by tedious definitional struggles and gridlocks in the past. In a joint effort, Germany and France have proposed to conclude the CCW negotiations with a legally non-binding declaration [48], trying to mediate between two groups of countries that either strictly oppose a ban or call for effective and binding regulation [49]. On the recommendation of the 2019 GGE on LAWS, eleven guiding principles were adopted by the 2019 Meeting of the High Contracting Parties to the CCW. In 2021–2022, the CCW is aiming to convert these voluntary principles into a “normative and operational framework” [50], but given that CCW decision-making requires consensus, it is estimated that “the probability of this forum producing a framework with unanimous agreement is very low” [51].

Secondly, we refer to position papers, directives, guidelines or decrees addressing AWS that have been published by ministries, executives, higher secretaries or party assemblies of both nations and are publicly accessible. National standpoints on tech policy are not limited to one condensed official document or even one type of medium alone. Documents that receive the status of a strategy paper vary in medium and form of presentation, being themselves subject to differing political cultures. Clearly, China and the USA have different institutional traditions in announcing political agendas, owing to opposing governmental systems and doctrines, e.g. CCP party rule in China vs. executive presidency in the USA. Further, these tech policy documents are not set in stone but are subject to substantive updates, adjustments or even radical dismissals and reorientations in light of new states of affairs in global politics, changes of ruling governments or the implementation of new doctrines. In sum, the empirical body (Table 1) comprises all relevant CCW standpoint papers of the USA and China published since the start of the negotiations in 2013 and incorporates governmental documents addressing AWS (or, synonymously, the military use of AI) since the year 2011, when the USA, as the first government to do so, published a comprehensive DoD directive on autonomy in weapon systems (introduced in “Functional approaches to what ‘autonomous weapon systems’ can and cannot do” section).

Table 1 Overview of CCW standpoint papers and governmental documents concerning LAWS published by the USA and China, 2011–2022

As a typology, the position papers offer various levels of analysis. First and foremost, the documents stemming from these two discursive arenas provide technical and definitional details on LAWS, showing many similarities to the academic debate (“The challenges of defining autonomous weapon systems” section). But beyond that, these position papers contain additional modes and layers of political communication. On the one hand, they act as self-assurances in the assessment of the current national security situation in the world and each nation’s own position in it. On the other hand, these documents can be instrumentalised to serve realpolitik interests. They set orientation points and geopolitical goals, identify threats and forge counter-strategies. Both countries are well aware of the signalling power of these documents for past, existing or emerging partners and adversaries. Further, apparently technical documents can offer strategic opportunities to escape definitive LAWS regulation, or they can be used to deliberately provide a breeding ground for ongoing confusion over the object of regulation (see also “Technological definitions and normative understandings of AWS” section below).

AWS as geopolitical signifiers: strategies in political communication in China and the USA

China and the USA employ different strategies to put their AI-driven military dominance on display. Matter-of-fact tech policies and national strategies alternate with messages of national superiority. This section focuses on this particular realm of political communication and employs a comparative analysis of both countries, dissecting how LAWS as AI imaginaries are employed as geopolitical signifiers of national particularities. It analyses them in terms of the military doctrines and AI imaginaries they promote (“Military doctrines, autonomous weapons and AI imaginaries” section) and the definitions of autonomous weapons they establish (“Technological definitions and normative understandings of AWS” section), both of which cater to certain goals in political communication.

Military doctrines, autonomous weapons and AI imaginaries

Foreign geopolitics is embedded in military doctrines, which serve as signalling landmarks for military forces, the reallocation of strategic resources and technological developments. The empirical material at hand offers layers of analysis hinting at national SIs that place AWS in broader frameworks. These frameworks inform the populace, allies and adversaries about national aspirations, while presenting military self-assurance as a tool for looking into a nationally desired future (see “Approaching autonomous weapons embedded in sociotechnical imaginaries” section). Here, AWS act as an empty and hence flexible signifier, a proxy onto which different national idealisations of social life, statehood and geopolitical orders are projected.

Military doctrine: The United States of America

In January 2015, the Pentagon published its Third Offset Strategy [US.PosP2]. Here, the current capabilities and operational readiness of the US armed forces are evaluated in order to defend the position of the USA as a hegemon in a multipolar world order. The claimed military “technological overmatch” [ibid.], on which the USA’s clout and pioneering role since the Second World War is based, is perceived as eroding. The Pentagon warns in a worrisome tone: “our perceived inability to achieve a power projection over-match (...) clearly undermine [sic], we think, our ability to deter potential adversaries. And we simply cannot allow that to happen” [ibid.].

The more recently published “Department of Defense Artificial Intelligence Strategy” [US.PosP5] specifies this concern with AI as a reference point. Specific claims are already made in the subtitle of the paper: “Harnessing AI to Advance Our Security and Prosperity”. AI should act as “smart software” [US.PosP5, p 5] within autonomous physical systems and take over tasks that normally require human intelligence. US research policy especially targets spending on autonomy in weapon systems, which is regarded as the most promising area for advancements in attack and defence capabilities, enabling new trajectories in operational areas and tactical options. This is specified with reference to current advancements in ML: “ML is a rapidly growing field within AI that has massive potential to advance unmanned systems in a variety of areas, including C2 [command and control], navigation, perception (sensor intelligence and sensor fusion), obstacle detection and avoidance, swarm behavior and tactics, and human interaction”.

Given that such ML processes depend on large amounts of training data, the DoD announced its Data Strategy [US.PosP11], couched in a claim of geopolitical superiority: “As DoD shifts to managing its data as a critical part of its overall mission, it gains distinct, strategic advantages over competitors and adversaries alike” (p 8). In the same vein, and under the perceived threat of being outrivalled, the “DoD Digital Modernization Strategy” [US.PosP7] lets any potential adversaries know: “Innovation is a key element of future readiness. It is essential to preserving and expanding the US military competitive advantage in the face of near-peer competition and asymmetric threats” [US.PosP7, p 14]. Here, autonomous systems act as a promise of salvation through technological progress, which is supposed to secure the geopolitical needs of the USA.

With specific regard to LAWS, the US Congress made clear: “Contrary to a number of news reports, U.S. policy does not prohibit the development or employment of LAWS. Although the USA does not currently have LAWS in its inventory, some senior military and defense leaders have stated that the USA may be compelled to develop LAWS in the future if potential US adversaries choose to do so” [US.PosP12, p 1].

Remarkably, the USA republished the very same Congress paper in November 2021 with just a minor but decisive alteration: it changed “potential U.S. adversaries” into “U.S. competitors” [US.PosP14]. While it remains unmentioned (and presumably deliberately so) who is meant by both the “senior military and defense leaders” and the so-named “U.S. competitors”, this minor change hints at a subtle but carefully orchestrated strategic tightening of rhetoric, sending out the message that the USA acknowledges a worsening of the geopolitical situation with regard to AWS development. In reaction, the USA continues to weaken its own standards for operator control over AWS in the most recent 2022 Congress paper (as of May 2022), reframing human judgement: “Human judgement [sic!] over the use of force does not require manual human ‘control’ of the weapon system, as is often reported, but instead requires broader human involvement in decisions about how, when, where and why the weapon will be employed” [US.PosP16]. Certainly, this rhetorical “broadening” of the US position lowers the threshold for employing AWS in combat, ever more distancing the operator from the machine.

This stands in stark contrast to the US position in earlier rounds of the CCW process; here, the USA not only claims that advancements in military AI are a geopolitical necessity but also portrays LAWS as desirable from a civilian standpoint, identifying humanitarian benefits: “The potential for these technologies to save lives in armed conflict warrants close consideration” [US.CCW3, p 1]. The USA lists prospective benefits in reducing civilian casualties, such as increasing commanders’ awareness of civilians and civilian objects, striking military objectives more accurately and with less risk of collateral damage, or providing greater standoff distance from enemy formations [US.CCW3]. Bluntly, the USA tries to portray LAWS as being not only in accordance with but beneficial to International Humanitarian Law (IHL) and its principles of proportionality, distinction and indiscriminate effect (see also “Technological definition: United States of America” section). While such assertions are highly debatable and have been rejected by many [1, 5, 7, 8], they do shed a very positive light on military technological progress, equating it with humanitarian progress.

In a Congress paper on AWS published in December 2021, these humanitarian benefits are once more mentioned, but only very briefly, while a sharpening of the rhetoric is clearly noticeable. The paper also summarises the CCW positions of Russia and China, implicitly clarifying who is meant by “U.S. competitors” (see above). China, even if only indirectly, is accused through the invocation that “some analysts have argued that China is maintaining ‘strategic ambiguity’ about its position on LAWS” [US.PosP15, p 2]. This is the first time the USA overtly expresses in a position paper that it understands the AWS negotiations as a political power play rather than as serving the aim of finding a unanimously agreed upon regulatory agreement.

In sum, the USA claims the prerogative of the dominant and legitimate geopolitical player in a multipolar world order, one that is under external threat. The ability to defend military supremacy against lurking rivals is portrayed as standing in a dependent relationship with the level of technological development of the armed forces, specified with LAWS. The US claim to hegemonic leadership, in this account, can only be secured through maintaining technological superiority.

Military doctrine: China

The doctrinal situation in China is more complex and ambivalent. In 2003, the Chinese Communist Party (CCP) and the People’s Liberation Army (PLA) announced the concept of the “Three Warfares”, a military guideline for enforcing Chinese geopolitical interests that has been systematically embedded in the PLA’s military doctrine in recent years [52]. This concept promotes the objective of framing key strategic arenas of foreign policy in one’s favour, so that kinetic (physical military) interventions appear irrational to opponents. This framing, also known as “information warfare” [53], insinuates that international conflicts are decided less by the armies that carry off the victory than by the media narratives that gain the upper hand in interpreting the events.

The concept of the “Three Warfares” has been discussed by numerous authors [52,53,54,55,56] and encompasses the following dimensions: So-called psychological warfare aims to influence or disrupt an opponent’s ability to make decisions. This includes practices that deter, shock or demoralise competitors. Media warfare, on the other hand, aims at influencing and manipulating national and international public opinion in order to generate support for China’s military interventions. This entails constant and insistent media exposure, which aims to influence the perceptions and attitudes of the domestic or enemy population. The third dimension focuses on the legal sphere (“lawfare”). Creative distortions and omissions, conceptual vagueness and loopholes in regulations and international legal conventions serve the purpose of expanding one’s own operational possibilities while simultaneously thwarting opponents in their scope of action. This instrumentalisation of the legal framework should be understood as a means of “rule by law not rule of law” [54].

The strategic orientation of the “Three Warfares” also reflects a concession to the current military and geopolitical supremacy of the USA. While the USA claims its global leadership with rhetorical boldness, China sketches the military SI of an “underdog”, focussing on tactics of asymmetric warfare. This enables it to avoid direct military confrontation on all fronts and to deploy a policy of “shashoujian” (杀手锏), best translated as a “trump-card” approach [57,58,59]. Instead of competing with the USA in all strategic arenas, this doctrine takes a selective approach, fostering the military technology that “the enemy is most fearful of”, following the call that “this is what we should be developing” [60].

However, in recent strategy papers, China has presented itself more confidently. As in the USA, AI now plays a crucial role as a “cutting-edge” technology in China’s foreign policy aspirations [61,62,63,64,65].

The AlphaGo win over professional Go player Lee Sedol in 2016, which received a lot of media attention in China (280 million live viewers), was termed a Chinese “Sputnik moment” by some authors [66, 67]—a wake-up call that may well have contributed to the massive increase in spending on the tech industry and research. Certainly, with the 2017 “New Generation Artificial Intelligence Development Plan”, the CCP also embraces these bold AI ambitions rhetorically by emphasising the need to “grasp firmly the strategic initiative of international competition during the new stage of artificial intelligence development [and] create new competitive advantage” [CH.PosP4, p 2]. The CCP decisively calls for technological superiority in order “to build China’s first-mover advantage in the development of AI” [CH.PosP4, p 1].

Such new confidence and ambitions are nonetheless paired with a multilateralist appeasement and peacekeeping positioning [CH.PosP9]. China claims full sovereignty and strict non-interference in questions of national interest and security. This relates to, among other things, the one-China unification principle (e.g. directed at Taiwan: “China must be and will be reunited”) or territorial claims (e.g. “safeguard China’s maritime rights and interests”). Beyond this sphere of national interest, the CCP pictures the military SI of a global hegemon without expansive aggressions (“Never Seeking Hegemony, Expansion or Spheres of Influence”). Sources of instability are located elsewhere, namely in local “separatism” and foreign aspirations, with the international “order [...] undermined by growing hegemonism, power politics, unilateralism and constant regional conflicts and wars”. At the same time, the USA is blamed directly for posing a threat to “global strategic stability” [CH.PosP9].

In sum, China’s military SI depicts a global player that has caught up with its rivals at the military level. The CCP adjusts its doctrines and strategies pragmatically, moving from an underdog position to that of an assertive hegemon, clearly articulating geopolitical claims and the means to realise them. Military doctrines are clearly linked, as in the USA, to modernist narratives of technological progress, incorporating intelligent weaponry such as AWS as a means to outrival competitors. The technological race for supremacy in this key strategic technology is perceived as open, with China claiming legitimate ambitions.

Technological definitions and normative understandings of AWS

The USA and China have published national strategy papers as well as position papers at the CCW that are of a technical nature, aiming to define AWS. These documents have to be read against the backdrop of the larger SIs introduced above (“Approaching autonomous weapons embedded in sociotechnical imaginaries” section), which motivate and legitimate each state’s strategic interpretative flexibility in creating and promoting AWS definitions. Hence, these documents not only indicate which understanding—and technological variation—of autonomous weapon systems is to be prioritised but also raise the question of what greater ends these specific interpretations serve. For example, in much the same way as the US definitions of AWS, the Chinese “lawfare objectives” keep a backdoor open for developing automated weapons that escape the poor attributions of autonomy found in the AWS documents, with many military applications remaining legally and politically unaffected. A closer look at the national AWS definitions in the following sections will illuminate this issue.

Technological definition: United States of America

The DoD Directive 2012/2017 [US.PosP1, emphasis added] provides seemingly unequivocal definitions:

“Autonomous weapon system. A weapon system that, once activated, can select and engage targets without further intervention by a human operator. This includes human-supervised autonomous weapon systems that are designed to allow human operators to override operation of the weapon system, but can select and engage targets without further human input after activation.

(...)

Semi-autonomous weapon system. A weapon system that, once activated, is intended to only engage individual targets or specific target groups that have been selected by a human operator.”

A first problem with the US definition arises with the role of the human operator as a defining criterion for autonomy. As discussed in the “Definitions focusing on the degree of human control over supposedly autonomous systems” section, the USA conceptually advocates a relational approach to autonomy, linking it to the human presence. But the essential question of what an autonomous system comprises cannot simply be addressed by determining whether a human is in the loop or not. The degree of human intervention may tell us how such weaponry is to be used, but it does not help much in defining what it is. As Crootof clarifies: “If a weapon system has the capacity to independently select and engage targets, whether there is a human supervisor or whether it is operated in a semi-autonomous mode is a question of usage—and thus regulation—and not of autonomy” [11]. Very powerful weapons can be controlled by an operator and restrained such that their fire power (e.g. operational speed, fire range or power of devastation) is rarely fully in use. But from this observation, we can hardly deduce that we have arrived at the very essence of what the weaponry actually is and what it is capable of. While the role of human intervention in AWS is the subject of a much-needed ethical and political debate—albeit one not without pitfalls, as various authors have discussed with regard to “meaningful human control” [24, 68,69,70,71,72]—it simultaneously creates further confusion if it is regarded as an appropriate characteristic for defining AWS.

More problematically, making the definition of AWS dependent on human intervention creates new loopholes for escaping effective legal regulation. The fundamental problem with the DoD definition stems from the fact that its standards for autonomy are simply very low—in fact, it does not do justice to the term autonomy at all. The definition does not engage with the complexity of the term or clarify what is really meant by autonomy. Should autonomy be understood as self-sufficiency, or as self-directedness, and hence as independence from outside control [73] (see “Technical definitions of autonomy and autonomous weapon systems” section)? Also, as problematised above, operation under pure autonomy, as the DoD document suggests, is a myth, as any technical device is influenced by external factors such as technical infrastructure, terrain etc.

In essence, the DoD reduces the term autonomy to a process of automation: Any (non-)trivial system—either mechanical or algorithm-based—that, once activated, automatically processes tasks (hence, without further human intervention) and interacts with an environment would meet this criterion. Following the US reasoning, it is extremely hard to differentiate between advanced and very rudimentary mechanical or algorithmic systems, as literally any of them can be reduced to processes of automation. Thus, reducing autonomy to a process of automation introduces the notion of a continuum, making a clear differentiation among weaponry ubiquitously labelled “intelligent” impossible and the distinction between full and semi-autonomy ever more complicated (cf. “Definitions focusing on the degree of human control over supposedly autonomous systems” section).

Take, for example, the case of radar detection systems, which have been in use for decades and which are capable of identifying, selecting and targeting enemy objects without the necessity for human intervention. The only difference between such systems and AWS would be the capability of automatically engaging with these targets. But weapon systems that fulfil this additional criterion have existed for years already, perhaps the best example being the Phalanx system [74], which has been in use since the 1980s and hardly raised any regulatory concern back then [75]—especially not from the US side.

Problematically, the DoD definition cannot account for military advancements in fire power or complex machine behaviour, such as the adaptation enabled by new data processing capabilities in machine learning—leading to a myriad of new problems such as the unpredictability [76, 77] or opacity [78, 79] of machine behaviour, which are connected to the safety, incomprehensibility and accountability issues well known from the civil AI regulatory debate. These phenomena in turn raise the fundamental question of whether deploying LAWS violates the Geneva Conventions of IHL. If machine behaviour becomes ever more unpredictable, opaque and complex, it is debatable whether the Geneva principles of distinction, proportionality and accountability regarding those hors de combat can be met at all [80,81,82].

The USA has never claimed to refrain from developing LAWS; in fact, it has even extolled their advantages (see “The United States of America” section [US.CCW3]) and, as discussed above, warns adversaries that it will “develop LAWS in the future if US competitors choose to do so” [US.PosP15]. This statement is, if one takes the DoD definition as a reference, strictly speaking false. As discussed in relation to the Phalanx system, the USA has already used LAWS in the past and still does so todayFootnote 15 [US.PosP12] [83, 84].

In sum, the DoD definition has the problematic effect of subsuming so many weapon systems under one category that critical advancements in weapon capabilities now underway cannot be accounted for (making compliance with the Geneva principles more challenging). With such a vague and all-encompassing definition, effective legal regulation becomes ever more complicated, ensuring that national advances in the development of LAWS are not impeded.

Technological definition: China

China’s contributions to the discussions at the CCW are rather limited, but they serve well to illustrate China’s ambivalent stance on AWS, echoing its international normative positioning (as introduced in the “Military Doctrine: China” section). Their ambiguity helps to keep a strategic backdoor open, preserving optionality. In the 2017 CCW negotiations, China adopted a positive stance on international regulation, favouring preventive arms control: “The international community should follow the concept of universal security on the basis of existing international law, carry out preventive diplomacy, check the trend of an arms race in the high-tech field and maintain international peace and stability” (12th December 2017, p 5). This is in accordance with the multilateralist stance voiced in the country’s general AI policy trajectory (“Actively participate in global governance of AI (...), Deepen international cooperation in AI laws and regulations, international rules (...) and jointly cope with global challenges” [CH.PosP4, p 25] [85]).

This preventive regulatory stance was qualified more critically in 2018, when China stated that “(...) the impact of emerging technologies deserve objective, impartial and full discussion. Until such discussions have been done, there should not be any pre-set premises or prejudged outcome, which may impede the development of AI technology” [CH.CCW2, p 2]. This rather innovation- and military-friendly policy reveals clear reservations about a precautionary principle that would regulate LAWS restrictively and prevent an AI arms race. The ambivalence appears even more striking when looking at the Chinese LAWS definition presented at the CCW:

Definition [CH.CCW2, p 1, enumeration added by authors for better overview]

According to the Chinese view, “LAWS should include but not be limited to the following 5 basic characteristics”: (1) Lethality, “which means sufficient pay load (charge) and for means to be lethal”; (2) Autonomy, “which means absence of human intervention and control during the entire process of executing a task”; (3) Impossibility for termination, “meaning that once started there is no way to terminate the device”; (4) Indiscriminate effect, “meaning that the device will execute the task of killing and aiming regardless of conditions, scenarios and targets”; (5) Evolution, “meaning that through interaction with the environment the device can learn autonomously, expand its functions and capabilities in a way exceeding human expectations”.

Conceptually, these LAWS criteria display a pick-and-mix approach: the first states the obvious; the second shows strong similarity to the US definition (with the pitfalls discussed above); the fourth signals compliance with the Geneva principles of IHL; and the fifth hyperbolises, picking the fancy term “evolution” (borrowing imagery from the biological domain and perhaps even evoking fantasies of an organic, autopoietic, self-reproducing machinery that creates awe by exceeding human capabilities) to label adaptation in machine learning processes.

The real crux lies in the third criterion, which hypothesises that, once started, there is no way to terminate the device. In essence, this scenario describes a universally destructive and frankly absurd idea. Machines are not perpetual motion machines but rely heavily on infrastructure, supervision, context, etc.—so, clearly, machine self-sufficiency is a myth (see “Technical definitions of autonomy and autonomous weapons systems” section). Strictly speaking, these criteria depict sensational doomsday fiction, once more demonstrating the hybridity of the entire AWS discourse, in which realpolitik, imagination, possibility and fiction are conflated [86]Footnote 16 (“Approaching autonomous weapons embedded in sociotechnical imaginaries” section).

It is exactly these unrealistic criteria for autonomous weapons that sustain the idea of promoting seemingly less dangerous—merely “automatic”—weapon systems, undermining national and international legislative efforts. Where the US definition sets the benchmark for AWS too low, the Chinese definition sets it too high, rendering such systems near science fiction. Demands to ban AWS following these criteria can therefore largely be understood as a political gesture of purely symbolic value. Implicitly, the development of autonomous and semi-autonomous weapon systems is not only tolerated but by definition appears as a legitimate course of action. This perfectly reflects the objectives laid out in so-called asymmetric lawfare (see “Military doctrines, autonomous weapons and AI imaginaries” section): the legally vague, even vacuous criteria applied in the description and definition of LAWS have the intended effect of not curtailing one’s own political scope of action.

In conclusion, both countries oppose a complete ban on AWS, and with the definitions they promote at the CCW, they certainly leave a backdoor open for further development and use.

Conclusion

This paper reveals the ways in which (lethal) autonomous weapon systems (AWS) are used as flexible reference objects in political communication. It shows how the USA and China embed AWS in their military doctrines and uncovers the idealisations of geopolitical orders that underpin them. The analysis navigates between different theoretical disciplines in order to deconstruct these national quests, which are interpreted as competing sociotechnical imaginaries (SIs). Both nations employ semantic manoeuvres in the realm of LAWS to advance their military interests. The chosen approach—considering AWS as geopolitical signifiers of national particularities—reveals both similarities and differences. This is hardly a surprise, since SIs are strategically deployed as part of political communication: only by making the motifs mutually decipherable while at the same time stressing differences can both sides ensure an intelligible back and forth in communication.

The main objective shared by both sides is to cater to certain goals in political communication. In particular, the two nations use the term AWS as a semantic means of deterrence in hybrid warfare. More recent political developments illustrate an escalating rhetoric that also points to the function of military technology as a semantic vessel. On the US side, subtle terminological changes (such as substituting “potential U.S. adversaries” for “U.S. competitors”) have been accompanied by an increasingly transparent and conscious unmasking of the CCW negotiations as an arena of rhetorical contest. The worsening international security situation has motivated the USA to lower its standards of human control over AWS, which makes the employment of AWS more likely. Such endeavours undermine international humanitarian efforts to establish binding and supranational rules to regulate AWS. On the Chinese side, the doctrines of overt lawfare and media warfare have been obvious since the PLA’s announcement in 2003. Recently, this self-portrayal has painted the picture of a transformation from an “AI underdog” into an assertive hegemon by means of AI superiority.

In another conspicuous similarity, the military doctrines of both countries are clearly linked to narratives of technological progress, with the USA and China emphasising that intelligent weaponry can be used to safeguard their respective geopolitical goals (especially regarding disputed territories and spheres of influence). AI technologies are tied to overt efforts to gain legitimacy for military technology advancements and aggressive military strivings. Technological superiority is elevated to a sublime status and portrayed as indispensable for securing national orders in a perceived arena of fierce international competition (the AI weapons race). The emphasis on national resilience to defend military hegemony (USA), or to catch up and achieve pole position (China), brings to the fore larger national imaginaries that articulate idealisations of world orders and their respective value foundations. AWS, informed by SIs and especially in the broader context of AI, articulate visions of national pride that are sought in technological advancement and achievement, even if at times they are hidden behind the smokescreen of international collaboration.

Major differences are apparent in the linguistic manoeuvres by which the USA and China pursue their goals. The US military definitions of AWS—which also serve as a conceptual blueprint for many other institutions and organisations—operate on a conceptual continuum, mainly reducing autonomous qualities to processes of automation. Taken together with the relational understanding of autonomous systems (which always necessarily involves human agency), this effectively creates a hybrid understanding of automatic and/or autonomous (weapon) systems. This blurring makes it all the more challenging to find legal parameters for the regulation of AWS. As an effect of this indeterminacy, national ambitions with regard to the development of novel weapon technologies remain unaffected: the lack of clarity allows for a historical perspective, focused on functions such as target selection and engagement, that draws a continuous line from CIWS to today’s more elaborate systems. Innovative technological features, which include machine learning operations and therefore enable unprecedented adaptive qualities and unpredictable behaviour, remain largely unaccounted for in the US definition of AWS.

The understanding of AWS promoted by China at the CCW has intentionally fostered a definitional ambiguity that helps to keep the strategic backdoor for the development of “intelligent” weapons open, despite publicly displayed efforts to curtail their development and use. This is achieved on the one hand by taking an ambivalent stance towards preventive measures against novel technologies and on the other by promoting a wildly contradictory and bizarrely unrealistic understanding of AWS. It is the latter in particular that helps to legitimise the use of automatic weapons, which are indirectly portrayed as the much less worrisome technology.

On an international level, the semantic ambiguities of both states, which employ value-laden concepts such as machine autonomy and (human) control in the context of AWS, are deliberately exploited to subvert efforts at effective regulation. Effectively, both nations are undermining global efforts to prevent an AI weapons race—even while simultaneously promoting a rhetoric of appeasement and collaboration. If autonomy is understood as a relational quality that is always interwoven with external factors, the difference between autonomous and “only automatic” systems is blurred. Novel military technologies then appear fully legitimate, presented as a mere continuation of the weapon systems of the past, which did not spark much controversy back then. If, on the other hand, autonomy and autonomous systems are defined as entities that operate completely independently of external factors such as infrastructure, energy supply, human oversight or decisions, the portrayal of AWS crosses the boundary into the realm of the conceptually impossible. Regulating AWS becomes a vain endeavour, since these technologies do not exist. It is exactly this paradoxical double bind that undermines much-needed international regulation and ensures that states can continue developing highly automatic and destructive weaponry.

The European actors have not contributed to an effective regulation of LAWS either. Neither Germany nor France, as powerful EU nations, is listed by the Campaign to Stop Killer Robots among the countries calling for a prohibition of fully autonomous weapons, even though both are active in the CCW process [87]. Their efforts towards a voluntary regulatory framework may appear less affirmative than the positions of countries that strictly oppose a ban on LAWS, but this just seems to be another manoeuvre to circumvent tight regulation. The USA has happily exploited the German and French initiative as a model for “alternative approaches to manage LAWS” and is now advertising its own nonbinding Code of Conduct to “help States promote responsible behaviour and compliance with international law” [US.PosP15]. Effectively, these declarations should be understood as a fig-leaf strategy that mobilises a more humane rhetoric while seeking legitimacy for a soft approach to LAWS regulation.

From a theoretical and analytical standpoint, a multidisciplinary lens is pivotal for making sense of the complex interdependence of conceptual frameworks, technological applications and performative rhetoric. This lens also significantly sharpens our understanding of how these elements contribute to the present and future development of weapons technologies and the meanings attributed to them. It has the potential to inspire much-needed research on the different political, legal and cultural (semio)spheres to further illuminate the functions and effects of AWS embedded in SIs.

When such momentous technologies are at issue, it is of paramount importance to defend the valence of concepts such as autonomy, accountability and responsibility. It is imperative to prevent these values from being watered down as a consequence of power plays in the political arena.

Availability of data and materials

Not applicable.


Notes

  1. Cf. “The challenges of defining autonomous weapon systems” section for more details on the attribute “lethal”.

  2. See e.g. the debate at the CCW discussed below, “AWS as geopolitical signifiers: strategies in political communication in China and the United States of America” section.

  3. See “Methodology” section and the conclusion for details on the French and German initiatives at the CCW, as they take an important role in the UN discussions on regulating AWS.

  4. The long version reads as follows: The Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May be Deemed to be Excessively Injurious or to Have Indiscriminate Effects.

  5. As Ekelhof continues to point out: “This linguistic indeterminacy has not withheld States from claiming consensus on a number of fundamental points; in fact, it may even have facilitated the development of these two consensus claims: (1) International law applies to autonomous weapons, and (2) some form of human involvement is necessary to ensure the lawful use of autonomous weapons. This may seem a notable achievement, but the linguistic indeterminacies that exist in this context inevitably turn these professed commonalities amongst High Contracting Parties into empty—or at least weakened—claims of consensus. This raises the question [...]: what do these claims actually mean?” [88]

  6. This, of course, does not trivialise the questions of human agency (as necessary fail safe) or human responsibility (that must not be delegated to machines).

  7. The term “autonomous/autonomy” and with it the term “autonomous weapon” does not have a direct equivalent in Mandarin (Bächle TC, Champion SC: Autonomous weapon systems. Journalistic discourses in China, forthcoming).

  8. Ekelhof recounts that autonomous weapons “were first discussed in the Human Rights Council in 2013 under the name “Lethal Autonomous Robotics” and later that year the topic (referred to as “Lethal Autonomous Weapons Systems”) was placed on the United Nations Convention on Certain Conventional Weapons’ (CCW) agenda for the year 2014” [89]. Despite the meaning that is (probably deliberately) communicated with the use of “lethal” as an attribute, “the military has long applied the word “lethality” to anything that could make weapons more effective, not just the weapons themselves but also to training, methods, intel support systems and more” [90].

  9. For individual analyses of sociotechnical imaginaries see Jasanoff and Kim [10], for case studies regarding the interconnectedness of knowledge production, technologies and social order see e.g. Hilgartner et al. [91].

  10. The definition continues as follows: “It goes without saying that imaginations of desirable and desired futures correlate, tacitly or explicitly, with the obverse—shared fears of harms that might be incurred through invention and innovation, or of course the failure to innovate. The interplay between positive and negative imaginings—between utopia and dystopia—is a connecting theme throughout this volume” [28].

  11. During the completion of this paper in February 2022, these negotiations were still ongoing.

  12. Chinese papers are especially difficult to access. Moreover, the authors do not speak Chinese, so we limited ourselves to official documents available in appropriate translation (hence, papers deliberately directed at allies and adversaries, which suits the analytical agenda of this paper well).

  13. Given the definition of LAWS, the USA’s claim of not possessing any LAWS is highly debatable. This will be discussed further in the “Military Doctrine: China” section, looking at technical LAWS definitions.

  14. “Close-in Weapon Systems (…) designed to engage anti-ship cruise missiles and fixed-wing aircraft at short range. Like other close-in weapon systems, Phalanx provides ships with a terminal defense against anti-ship missiles that have penetrated other fleet defenses. (…) Unlike many other CIWS, which have separate, independent systems, Phalanx combines search, detection, threat evaluation, acquisition, track, firing, target destruction, kill assessment and cease fire into a single mounting” [74].

  15. For example, the so-called fire-and-forget weaponry, such as the LRASM stealth anti-ship cruise missile in the US arsenal, which can travel around 500 nautical miles before hitting a target. But the DoD directive [US.PosP1] and the Congressional Research Service in its report to the US Congress label such weapon types merely “semi-autonomous”, justified by humans doing the target selection through “autonomous functions” [US.PosP12]. Such labelling clashes with the view of many other experts in the field, who categorise these weapons as autonomous [69, 75].

  16. The German Delegation went even further into the science fiction genre, bluntly alleging: “Having the ability to learn and develop self-awareness constitutes an indispensable attribute to be used to define individual functions or weapon systems as autonomous” [86].

Abbreviations

AI: Artificial intelligence

AWS: Autonomous weapon systems

CCP: Chinese Communist Party

CCW: Convention on Certain Conventional Weapons

CIWS: Close-in weapon systems

DoD: Department of Defense

GGE: Group of Governmental Experts

ICRAC: International Committee for Robot Arms Control

ICRC: International Committee of the Red Cross

IHL: International Humanitarian Law

LAWS: Lethal autonomous weapon systems

ML: Machine learning

NGOs: Non-governmental organisations

SI: Sociotechnical imaginary

UN: United Nations

US: United States of America

References

  1. Bhuta N, Beck S, Geiß R, Liu H-Y, Kreß C (eds) (2016) Autonomous weapons systems: law, ethics, policy. Cambridge University Press, Cambridge


  2. Krishnan A (2009) Killer robots: legality and ethicality of autonomous weapons. Ashgate Publishing, Burlington


  3. Scharre P (2018) Army of none: autonomous weapons and the future of war. W. W. Norton & Company, New York


  4. Ernst C (2019) Beyond meaningful human control? – interfaces und die imagination menschlicher Kontrolle in der zeitgenössischen Diskussion um autonome Waffensysteme (AWS). In: Thimm C, Bächle TC (eds) Die Maschine: Freund oder Feind? Springer VS, Wiesbaden. https://doi.org/10.1007/978-3-658-22954-2_12


  5. Article36. https://article36.org. Accessed 14 Sept 2021.

  6. Campaign to Stop Killer Robots. https://www.stopkillerrobots.org. Accessed 14 Sept 2021.

  7. Future of Life Institute (2015) Autonomous weapons. An Open Letter from AI & Robotics Researchers. https://futureoflife.org/open-letter-autonomous-weapons. Accessed 14 Sept 2021.


  8. International Committee for Robot Arms Control (ICRAC). https://www.icrac.net. Accessed 14 Sept 2021.

  9. Jasanoff S (2015) Future imperfect: science, technology, and the imaginations of modernity. In: Jasanoff S, Kim SH (eds) Dreamscapes of Modernity: Sociotechnical Imaginaries and the Fabrication of Power. University of Chicago Press, Chicago/London, pp 1–33.

  10. Jasanoff S, Kim SH (eds) (2015) Dreamscapes of modernity: sociotechnical imaginaries and the fabrication of power. University of Chicago Press, Chicago/London


  11. Crootof R (2015) [2014] The killer robots are here: legal and policy implications. Cardozo Law Review 36:1837–1915, at pp 1854–1862


  12. Christman J (2018) Autonomy in moral and political philosophy. In: The Stanford Encyclopedia of Philosophy (Spring 2018 Edition). Center for the Study of Language and Information (CSLI). Stanford University. https://plato.stanford.edu/archives/spr2018/entries/autonomy-moral/. Accessed 14 Sept 2021

  13. Khurana T (2013) Paradoxes of autonomy: on the dialectics of Freedom and normativity. Symposium 17(1):50–74. https://doi.org/10.5840/symposium20131714


  14. Rebentisch J (2012) Aesthetics of installation art. Sternberg Press, London

  15. Bradshaw J, Hoffman R, Woods D, Johnson M (2013) The seven deadly myths of “Autonomous Systems”. IEEE Intelligent Systems 28:54–61, at pp 2–3

  16. Ekelhof MAC (2019) The distributed conduct of war: reframing debates on autonomous weapons, human control and legal compliance in targeting. Dissertation, Vrije Universiteit Amsterdam, p 59

  17. United Nations (2021) Background on LAWS in the CCW. https://www.un.org/disarmament/the-convention-on-certain-conventional-weapons/background-on-laws-in-the-ccw/. Accessed 30 June 2021


  18. Lang J, van Munster R, Schott RM (2018) Failure to define killer robots means failure to regulate them. States disagree on definition of lethal autonomous weapons, DIIS Policy Brief. https://www.diis.dk/en/research/failure-to-define-killer-robots-means-failure-to-regulate-them. Accessed 14 Sept 2021


  19. Noorman M, Johnson DG (2014) Negotiating autonomy and responsibility in military robots. Ethics Inform Technol 16(1):51–62. https://doi.org/10.1007/s10676-013-9335-0


  20. Sauer F (2016) Stopping ‘killer robots’: why now is the time to ban autonomous weapons systems. Arms Control Today 46(8) https://www.armscontrol.org/act/2016-09/features/stopping-%E2%80%98killer-robots%E2%80%99-why-now-time-ban-autonomous-weapons-systems. Accessed 14 Sept 2021

  21. Schaub G, Kristoffersen JW (2017) In, on, or out of the loop? Denmark and Autonomous Weapon Systems. In: Centre for Military Studies’ policy research. Centre for Military Studies. University of Copenhagen, Copenhagen https://cms.polsci.ku.dk/publikationer/in-on-or-out-of-the-loop/In_On_or_Out_of_the_Loop.pdf. Accessed 14 Sept 2021

  22. Ekelhof MAC (2019) The distributed conduct of war: reframing debates on autonomous weapons, human control and legal compliance in targeting. Dissertation, Vrije Universiteit Amsterdam p 67

  23. International Committee of the Red Cross (2016) Autonomous Weapon Systems, Implications of increasing autonomy in the critical functions of weapons. Expert meeting, Versoix, Switzerland, p 8

  24. Böll Foundation (2018) Autonomy in Weapon Systems. The military application of artificial intelligence as a litmus test for Germany’s new foreign and security policy, vol 49. Böll Foundation Publication Series on Democracy, Berlin, pp 20–21

  25. Ekelhof MAC (2019) The distributed conduct of war: reframing debates on autonomous weapons, human control and legal compliance in targeting. Dissertation, Vrije Universiteit Amsterdam p 70

  26. Ekelhof MAC (2019) The distributed conduct of war: reframing debates on autonomous weapons, human control and legal compliance in targeting. Dissertation, Vrije Universiteit Amsterdam pp 74-76

  27. Reeves S, Johnson W (2014) Autonomous weapons: are you sure these are killer robots? Can we talk about it? Army Lawyer 1:25–31. https://ssrn.com/abstract=2427923

  28. Jasanoff S (2015) Future imperfect: science, technology, and the imaginations of modernity. In: Jasanoff S, Kim SH (eds) Dreamscapes of Modernity: Sociotechnical Imaginaries and the Fabrication of Power. University of Chicago Press, Chicago/London, p 4

  29. Sismondo S (2020) Sociotechnical imaginaries: an accidental themed issue. Soc Stud Sci 50(4):505–507. https://doi.org/10.1177/0306312720944753

  30. Mager A, Katzenbach C (2021) Future imaginaries in the making and governing of digital technology: multiple, contested, commodified. New Media Soc 23(2):223–236. https://doi.org/10.1177/1461444820929321

  31. Kurzweil R (2005) The singularity is near. Viking Books, New York

  32. Bostrom N (2014) Superintelligence. Paths, dangers, strategies. Oxford University Press, Oxford

  33. Bareis J, Katzenbach C (2021) Talking AI into being: the narratives and imaginaries of national AI strategies and their performative politics. Sci Technol Hum Values. https://doi.org/10.1177/01622439211030007

  34. Natale S, Ballatore A (2017) Imagining the thinking machine: technological myths and the rise of artificial intelligence. Convergence 26(1):3–18. https://doi.org/10.1177/1354856517715164

  35. Beckert J (2016) Imagined futures: fictional expectations and capitalist dynamics. Harvard University Press, Cambridge, p 173

  36. Franklin HB (2008) War stars. The Superweapon and the American Imagination. University of Massachusetts Press, Amherst

  37. Singer PW (2010) Wired for War. The robotics revolution and conflict in the twenty-first century. Penguin Books, New York

  38. Lenoir T, Caldwell L (2018) The military-entertainment complex. Harvard University Press, Cambridge

  39. Maurer K, Graae AI (2021) Drone imaginaries: the power of remote vision. Manchester University Press, Manchester

  40. Baudrillard J (1995) The gulf war did not take place. Indiana University Press, Bloomington

  41. Singer PW, Brooking ET (2018) Likewar. The weaponization of social media. Eamon Dolan/Houghton Mifflin Harcourt, Boston

  42. Cummings ML (2018) Artificial intelligence and the future of warfare. In: Chatham House Report. Royal Institute of International Affairs, London, pp 7–18. https://euagenda.eu/upload/publications/untitled-209846-ea.pdf. Accessed 14 Sept 2021

  43. Newton MA (2015) Back to the future: reflections on the advent of autonomous weapons systems. Case Western Reserve J Int Law 47(1):5–23

  44. Coeckelbergh M (2011) From killer machines to doctrines and swarms, or why ethics of military robotics is not (necessarily) about robots. Philos Technol 24(3):269–278

  45. Bhuta N, Beck S, Geiß R (2016) Present futures: concluding reflections and open questions on autonomous weapons systems. In: Bhuta N, Beck S, Geiß R, Liu H-Y, Kreß C (eds) Autonomous Weapons Systems. Law, ethics, policy. Cambridge University Press, Cambridge, pp 347–374

  46. Geiß R (ed) (2017) Lethal autonomous weapons systems: technology, definition, ethics, law & security. Federal Foreign Office, Berlin

  47. Reaching Critical Will. https://reachingcriticalwill.org/disarmament-fora/ccw. Accessed 14 Sept 2021

  48. Group of Governmental Experts of the High Contracting Parties (2017) For consideration by the Group of Governmental Experts on Lethal Autonomous Weapons Systems (LAWS). Submitted by France and Germany, Geneva. https://reachingcriticalwill.org/images/documents/Disarmament-fora/ccw/2017/gge/documents/WP4.pdf. Accessed 22 Feb 2022.

  49. Delcker J (2018) France, Germany under fire for failing to back ‘killer robots’ ban. Politico, Brussels

  50. Group of Governmental Experts of the High Contracting Parties (2019) Report of the 2019 session of the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems. Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects, Geneva. https://documents.unoda.org/wp-content/uploads/2020/09/1919338E.pdf. Accessed 14 Sept 2021.

  51. Lethal AWS. Global Debate: what are countries doing about this issue? https://autonomousweapons.org/global-debate/. Accessed 14 Sept 2021

  52. Kania EB (2016) The PLA’s latest strategic thinking on the three warfares. China Brief 16(13):10–14. https://jamestown.org/program/the-plas-latest-strategic-thinking-on-the-three-warfares/. Accessed 15 May 2020

  53. Timothy AW (2012) Brief on China’s three warfares. In: Delex Special Report-3. Delex Consulting, Studies and Analysis (CSA), Delex Systems, p 4. http://www.delex.com/data/files/Three%20Warfares.pdf. Accessed 14 Sept 2021

  54. Halper S (2013) China: the three warfares. Prepared for Andrew Marshall, Director of the Office of Net Assessment, Office of the Secretary of Defence. https://cryptome.org/2014/06/prc-three-wars.pdf. Accessed 14 Sept 2021

  55. Jackson L (2015) Revisions of Reality. The three warfares—China’s new way of war. In: Beyond Propaganda. Information at War: From China’s Three Warfares to NATO’s Narratives. The Legatum Institute, London, pp 5–15. https://li.com/wp-content/uploads/2015/09/information-at-war-from-china-s-three-warfares-to-nato-s-narratives-pdf.pdf. Accessed 14 Sept 2021

  56. Lee S (2014) China’s ‘three warfares’: origins, applications, and organizations. J Strat Stud 37(2):198–221. https://doi.org/10.1080/01402390.2013.870071

  57. Allen G (2019) Understanding China’s AI Strategy. Clues to Chinese strategic thinking on artificial intelligence and national security. In: Center for a New American Security https://www.cnas.org/publications/reports/understanding-chinas-ai-strategy. Accessed 13 Mar 2019

  58. Bruzdzinski JE (2004) Demystifying Shashoujian: “China’s Assassin’s Mace” concept. In: Scobell A, Wortzel L (eds) Civil-military change in China: elites, institutes, and ideas after the 16th Party Congress. Diane Publishing Co, Darby, pp 309–364

  59. Kania EB (2020) “AI weapons” in China’s military innovation. In: Global China. The Brookings Institution https://www.brookings.edu/wp-content/uploads/2020/04/FP_20200427_ai_weapons_kania_v2.pdf. Accessed 15 May 2020.

  60. Cheung TM, Mahnken T, Seligsohn D, Pollpeter K, Anderson E, Yang F (2016) Planning for innovation: understanding China’s plans for technological, energy, industrial, and defense development, Report prepared for the US-China Economic and Security Review Commission, Washington DC, 28 July 2016. Citation of CMC Chairman Jiang Zemin, p 26

  61. Future of Life Institute (2018) AI policy - China. https://futureoflife.org/ai-policy-china/. Accessed 14 Sept 2021

  62. Horowitz MC (2018) Artificial intelligence, international competition, and the balance of power. Texas Natl Secur Rev 1(3):37–57. https://doi.org/10.15781/T2639KP49

  63. Horowitz MC, Allen GC, Kania EB, Scharre P (2018) Strategic competition in an era of artificial intelligence. In: Center for a New American Security’s series on Artificial Intelligence and International Security. Center for a New American Security. https://www.cnas.org/publications/reports/strategic-competition-in-an-era-of-artificial-intelligence. Accessed 14 Sept 2021

  64. Katzenbach C, Bareis J (2018) Global AI race: states aiming for the top. https://www.hiig.de/en/global-ai-race-nations-aiming-for-the-top/. Accessed 15 June 2019.

  65. Roberts H, Cowls J, Morley J, Taddeo M, Wang V, Floridi L (2021) The Chinese approach to artificial intelligence: an analysis of policy, ethics, and regulation. AI Society 36:59–77. https://doi.org/10.1007/s00146-020-00992-2

  66. Kania EB (2017) AlphaGo and beyond: the Chinese military looks to future “intelligentized” warfare. https://www.lawfareblog.com/alphago-and-beyond-chinese-military-looks-future-intelligentized-warfare. Accessed 22 Feb 2022

  67. Lee K-F (2018) AI superpowers: China, silicon valley, and the new world order. Houghton Mifflin Harcourt, Boston, New York

  68. Crootof R (2016) A meaningful floor for “Meaningful Human Control”. Temp Int'l Comp LJ 30:53–62

  69. Altmann J (2019) Autonomous weapon systems – dangers and need for an international prohibition. In: Benzmüller C, Stuckenschmidt H (eds) KI 2019: Advances in Artificial Intelligence. Joint German/Austrian Conference on Artificial Intelligence, Kassel, September 2019, Lecture Notes in Computer Science, vol, 11793. Springer, Cham, pp 1–17. https://doi.org/10.1007/978-3-030-30179-8_1

  70. Amoroso D, Tamburrini G (2020) Autonomous weapons systems and meaningful human control: ethical and legal issues. Curr Robot Rep 1:187–194. https://doi.org/10.1007/s43154-020-00024-3

  71. Chengeta T (2017) Defining the emerging notion of meaningful human control in weapon systems. J Int Law Politics 49(3):833–890

  72. International Committee for Robot Arms Control (2019) What makes human control over weapons systems ‘meaningful’? Working paper submitted to the Group of Governmental Experts on lethal autonomous weapons of the. CCW, Geneva

  73. Bradshaw J, Hoffman R, Woods D, Johnson M (2013) The seven deadly myths of “Autonomous Systems”. IEEE Intelligent Systems 28:54–61, at p 5

  74. NavWeaps. 20 mm Phalanx Close-in Weapon System (CIWS). Accessed 14 Sept 2021

  75. Sauer F (2020) Stepping back from the brink: why multilateral regulation of autonomy in weapons systems is difficult, yet imperative and feasible. Int Rev Red Cross 102(913):235–259. https://doi.org/10.1017/S1816383120000466

  76. European Commission (2020) Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and Robotics. European Commission, Brussels

  77. Kowert W (2017) The foreseeability of human–artificial intelligence interactions. Texas Law Review 96(1):181–204

  78. Brkan M, Bonnet G (2020) Legal and technical feasibility of the GDPR’s quest for explanation of algorithmic decisions: of black boxes, white boxes and fata morganas. Eur J Risk Regul 11(1):18–50. https://doi.org/10.1017/err.2020.10

  79. Burrell J (2016) How the machine ‘thinks’: understanding opacity in machine learning algorithms. Big Data Soc 3(1). https://doi.org/10.1177/2053951715622512

  80. Boulanin V, Bruun L, Goussac N (2021) Autonomous weapon systems and international humanitarian law. In: Identifying limits and the required type and degree of human–machine interaction. SIPRI Publications. https://sipri.org/sites/default/files/2021-06/2106_aws_and_ihl.pdf. Accessed 13 Sept 2021

  81. Sassòli M (2014) Autonomous weapons and international humanitarian law: advantages, open technical questions and legal issues to be clarified. Int Law Studies 90(1):308–340

  82. Schmitt MN (2013) Autonomous weapon systems and international humanitarian law: a reply to the critics. Harvard Natl Sec J 4:1–37

  83. Department of the Navy (2019) Department of Defence Fiscal Year (FY) 2020 budget estimates. In: Justification Book Volume 1 of 1, Weapons Procurement. Navy. https://www.secnav.navy.mil/fmc/fmb/Documents/20pres/WPN_Book.pdf. Accessed 14 Sept 2021

  84. Vavasseur X (2021) Lockheed Martin progressing towards LRASM integration on F-35. NavalNews. https://www.navalnews.com/naval-news/2021/01/lockheed-martin-progressing-towards-lrasm-integration-on-f-35/. Accessed 14 Sept 2021

  85. Kania EB (2018) China’s strategic ambiguity and shifting approach to lethal autonomous weapons systems. https://www.lawfareblog.com/chinas-strategic-ambiguity-and-shifting-approach-lethalautonomous-weapons-systems. Accessed 17 Sept 2021

  86. Permanent Representation of the Federal Republic of Germany to the Conference on Disarmament in Geneva (2018) Statement delivered by Germany on Working Definition of LAWS/“Definition of Systems under Consideration”, Convention on prohibitions or restrictions on the use of certain conventional weapons which may be deemed to be excessively injurious or to have indiscriminate effects, Geneva, p 2. https://reachingcriticalwill.org/images/documents/Disarmament-fora/ccw/2018/gge/statements/9April_Germany.pdf. Accessed 14 Sept 2021

  87. Campaign to Stop Killer Robots (2020) Country views on killer robots. https://www.stopkillerrobots.org/wp-content/uploads/2020/05/KRC_CountryViews_7July2020.pdf. Accessed 22 Feb 2022

  88. Ekelhof MAC (2019) The distributed conduct of war: reframing debates on autonomous weapons, human control and legal compliance in targeting. Dissertation, Vrije Universiteit Amsterdam, p 60

  89. Ekelhof MAC (2019) The distributed conduct of war: reframing debates on autonomous weapons, human control and legal compliance in targeting. Dissertation, Vrije Universiteit Amsterdam, p 16

  90. Ekelhof MAC (2019) The distributed conduct of war: reframing debates on autonomous weapons, human control and legal compliance in targeting. Dissertation, Vrije Universiteit Amsterdam, p 17, fn 15

  91. Hilgartner S, Miller CA, Hagendijk R (eds) (2015) Science and democracy: making knowledge and making power in the biosciences and beyond. Routledge, New York/Abingdon


Acknowledgements

The authors acknowledge support by the KIT-Publication Fund of the Karlsruhe Institute of Technology. We want to thank Steven Mark Champion for his help in preparing the manuscript.

Funding

Open Access funding enabled and organized by Projekt DEAL.

Author information

Contributions

TCB and JB contributed to all parts of the article. The authors read and approved the final manuscript.

Corresponding authors

Correspondence to Thomas Christian Bächle or Jascha Bareis.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

The original online version of this article was revised: the authors reported some major and minor editing and formatting errors.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Bächle, T.C., Bareis, J. “Autonomous weapons” as a geopolitical signifier in a national power play: analysing AI imaginaries in Chinese and US military policies. Eur J Futures Res 10, 20 (2022). https://doi.org/10.1186/s40309-022-00202-w

