
Michigan-IpSh-Aff-Indiana-Octas

Indiana---Octas---Disclosure
1AC
Autonomous War---1AC
The advantage is AUTONOMOUS WAR:
The military is racing to integrate fully autonomous weapons---a desire to maximize
efficiency drives unaccountable systems that disregard legal norms.
Giacomello ’22 [Giampiero; January 3; Associate Professor of Political Science, University of Bologna;
A Journal of Social Justice, “The War of ‘Intelligent’ Machines May Be Inevitable,” vol. 33 no. 2]
Broadly speaking, we
need to explore three main themes: first, the superiority of machine over human in speed,
focus, and so forth; second, “if there is a link, there is a way,” that is, if a link exists to allow communication between the man in
command and the machine, there is also a way to “hack into the system” and penetrate the machine; and third, the questionable morality of
autocracies when they have to choose between strategic imperatives or human life (especially if it is the lives of enemies). These three
reasons combined explain why a war of fully autonomous machines may soon become inevitable .
There is no disagreement that, for specific tasks, computers are
faster than humans. The original mission of computers was
indeed to calculate faster than humans; it was necessary to rapidly decrypt Nazi Germany’s Enigma transmissions so that the Battle of the
Atlantic and other major engagements were won in World War II. Once that was achieved, the rush to “speed up” as many functions as possible
began in earnest, in civilian and military spheres alike. Progressively, machines have become insuperable in regard to speed of execution, to the
point that, observing early applications of robots in combat, The New York Times’ John Markoff concluded that “the
speed of combat is
quickly becoming too fast for human decision makers." This statement was true even before quantum computing
made its contribution to speeding up calculations in the future. As speed on the battlefield has always been at a
premium, the "fastest shooter in town" will set the standard and the
others, adversaries
and allies, will have to adapt to even faster reaction times or risk being left out of the
competition altogether. Humans' slow reaction times will again stand in the way. Even under human control, AI-powered combat robots
are already showing their sway on the battlefield (albeit under the controlled conditions of military exercises) and in the sky, as reported by
specialized media. The current situation with hypersonic missiles is instructive, because they are too fast and maneuverable to be
intercepted—"game changers," as they are called. Their speed will further reduce the time to assess whether or
not a "perceived" missile launch is the real thing, a bug, or an error. Yet, everybody wants them despite their
potential for "destabilization."
Undoubtedly, the scenario of intelligent robots roaming the battlefield, like in the Terminator saga, and replacing human soldiers completely is
far away in the future (if ever), thanks to the Artificial General Intelligence (AGI) limit. Humans may be slower and more limited in computing
power compared to intelligent machines, but they can perform an amazing variety of tasks and, on top of that, think in abstract terms. No
machine can replicate the flexibility and range of action of human intelligence, and this condition will hold true for the foreseeable future.
A somewhat divergent conclusion is reached by Chris Demchak: that the default of AGI could be total human annihilation. But this is only an
apparent divergence.
The fact that today’s intelligent machines cannot be as flexible and “multitasking” as human beings (lacking AGI) may end up lulling decisionmakers into believing that, after all, human soldiers can never be entirely replaced, which would give the former a false sense of security (and
control) about the “rise of machines” on the battlefield. The grim conclusion, however, is that “ killer
robots ” do not need to be
equipped with AGI ; they are not required to shoot a weapon and paint a portrait and think of parallel universes .
Rather, they
are built to be mono-functional . In other words, they have to be able to perform relatively few tasks (or
even just one), but quickly and with no distraction. And here lies the main reason why machines will eventually replace
humans on the battlefield. The following scenario may not be true now, but scholars and the public should be thinking 10 or 20 years in the
future.
To begin with, to retain remote command and control of such machines, human commanders will have to have access points (a bit like the
“ports” in a computer); it may be through secure Wi-Fi, radio, or cable, provided that it does not hamper the efficiency of the machine. The
general rule in hacking, however, is “where there is a link, there is a way,” that is, if something is connected, it can be hacked. Nothing can
prevent hacking one hundred percent, but sealing the mission and orders in a secure core can certainly reduce hostile access. Once the mission
is accomplished, the machine can receive new instructions (possibly via a USB) and so on, but during the mission it would be secluded from the
outside.
Of course, the machine's instructions and mission can include ethical Rules of Engagement (RoE) and be subject to the Laws of War (jus in
bello)—lines and lines of code educating the machine on who is a legitimate target and who is not, on why a soldier with a weapon can be hit—
but a target can still drop their weapon instantly, becoming a noncombatant protected by the Geneva Convention, and can also then retrieve it,
becoming a target again, and so on. And how do we explain to AI (with more lines of code) that a kid with a cellular phone may be a legitimate
target if she/he is reporting the position of enemy troops, but that it’s also possible that she/he is simply playing with an old, useless phone, or
something quite similar to a phone? And on and on.
No matter how careful and skilled coders are, each additional line of code increases the probability of bugs and errors; not by accident,
“elegance” among coders means to do more with fewer lines. It may be preferable to limit autonomy and maintain remote control, so the
machine can check back with higher command. That would, however, bring back the "where there is a link, there is a way" problem. Or,
alternatively, a government
may decide that all its military machines can do without all those lines explaining the
“ fine print ” of jus in bello (and make war an institutionalized affair among sovereign states) and should instead just be
given the simple mission of “ achieving the objective .” If the objective is in the enemy’s land, among the enemy’s people,
who cares if there is less “ fine print ” and machines cause unnecessary destruction, as long as the target is
achieved?
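To see why each jus in bello exception translates into more code, consider a minimal illustrative sketch in Python. This is ours, not Giacomello's, and every field name (is_armed, surrendered, holding_phone, and so on) is a hypothetical stand-in for sensor data:

from dataclasses import dataclass

@dataclass
class Contact:
    # Hypothetical sensor-derived fields; none of them is reliably knowable.
    is_armed: bool
    surrendered: bool                 # dropped the weapon an instant ago?
    is_child: bool
    holding_phone: bool
    reporting_troop_positions: bool   # direct participation in hostilities?

def is_lawful_target(c: Contact) -> bool:
    # Each clause stands for one more "line of code educating the machine."
    if c.surrendered:
        return False   # protected the moment the weapon is dropped
    if c.is_armed:
        return True    # armed combatant
    if c.holding_phone and c.reporting_troop_positions:
        return True    # but a child playing with an old phone can hit this rule too
    if c.is_child:
        return False   # never reached for the child-with-a-phone case above
    return False

Every additional rule narrows one error while opening another, and the inputs the rules depend on are exactly what sensors cannot verify, which is the card's point about bugs growing with each added line.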
Yes, (correctly so) this issue is anathema to a liberal democratic government. That way of looking at warfare elicits revulsion in the United
States, Europe and other advanced democracies, with their human rights foundations and, above all, their paramount belief in the respect of
human life, whether of their citizens or of their adversaries. But this attitude does not hold true for everybody. These beliefs may be
unimportant for those governments whose ideology or political system tells them that respect for individual life is not that big of a deal in the
great scheme of things. How many governments, who routinely mistreat and disregard the individual rights of their own citizens, can be
expected to pay attention to or care for the lives and rights of their enemies?
Military efficiency would clearly benefit from
fully sealed, autonomous machines operating in enemy territory so that political objectives are more quickly
achieved.
And then, the great dilemma: what will democratic governments do? Deploy more careful and controlled machines that are slower, more
discerning, and perhaps more hackable, risking being outmatched by the enemy’s machines and losing the war, or match what the enemy
does? As the Chinese government, an actor not particularly known for its consideration and respect for its citizens (especially if they are
minorities), has determined that China will be the world’s leader in AI, perhaps it is indeed time to pose such questions.
If push comes to shove, mature democracies have shown their willingness to match the enemy's
escalation, as the United States and the United Kingdom did in World War II when they strategically bombed German and Japanese
cities night after night to the point of no-return using nuclear weapons. Granted, such choices were debated and rationalized with the need to
demolish the enemies’ economy and industry (civilians were unavoidable collateral damage), but democracies have never been entirely
comfortable with such choices.
No sane person could disagree with Phil Torres that an AI arms race would be profoundly foolish , as it could
compromise the entire future of humanity . Unfortunately, humankind, by the same rational logic, should have never invented
nuclear weapons. But it did. Democracies confronted with strategic necessities (that is, defeating the Axis) and operational necessities (that is,
overcoming Axis military forces) at one point in time chose to overrule future common sense.
Necessity to prevail against one’s
opponent has always been a compelling logic , at some point in time.
Checks are non-existent---humans in the military loop are structurally flawed.
Johnson ’22 [James; 2022 edition; PhD, Politics and International Relations from the University of
Leicester, Lecturer in Strategic Studies in the Department of Politics & International Relations at the
University of Aberdeen; Journal of Strategic Studies, "Delegating strategic decision-making to machines:
Dr. Strangelove Redux?” vol. 45 no. 3]
AI strategic decision-making magic 8-ball?
AI systems’ recent success in several highly complex strategic games has demonstrated insightful traits that have potentially
significant implications for future military -strategic decision-making.21 In 2016, for example, DeepMind’s AlphaGo system
defeated the professional Go master, Lee Sedol. In one game, the AI player reportedly surprised Sedol with a strategic move that 'no human
would ever do.’22 Three years later, DeepMind’s AlphaStar system defeated one of the world’s leading e-sports gamers at Starcraft II – a
complex multiplayer game that takes place in real-time and in a vast action space with multiple interacting entities – and devised and executed
complex strategies in ways a human player would be unlikely to do.23 In short,
existing rule-based machine learning algorithms
would likely be sufficient to automate C2 processes further.
AI systems might undermine states’ confidence in their second-strike capabilities, and potentially, affect the ability
of defense planners to control the outbreak, manage the escalation, and terminate warfare. The central fear of alarmists
focuses on two related concerns. First, the potentially existential consequences of AI surpassing human intelligence – i.e., dystopian
Terminator's Skynet-like prophetic imagery. Second, the possible dangers caused by machines absent human empathy (or
other theory-of-the-mind emotional attributes) relentlessly optimizing pre-set goals – or self-motivated future iterations that pursue their own
– with unexpected and unintentional outcomes – or Dr. Strangelove's doomsday machine comparisons.24
Human commanders supported by AI, functioning at higher speeds, and compressed decision-making
timeframes might, therefore, increasingly impede the ability – or the Clausewitzian ‘genius’ – of commanders to shape
the action and
reaction cycles produced by AI-augmented autonomous weapon systems. For now, there is general
agreement among nuclear-armed states that even if technological developments allow, decision-making that directly impacts the nuclear
command and control should not be pre-delegated to machines – not least because of the explainability, transparency, and unpredictability
problems associated with machine-learning algorithms.25
Psychologists have demonstrated that humans are slow to trust the information derived from algorithms (e.g., radar data and facial
recognition software), but as the reliability
of the information improves so the propensity to trust machines
increases – even in cases where evidence emerges that suggests a machine’s judgment is incorrect .26 The
tendency of humans to use automation (i.e., automated decision support aids) as a heuristic replacement for vigilant
information seeking, cross-checking, and adequate processing supervision, is known as 'automation bias.' Despite humans'
inherent distrust of machine-generated information, once AI demonstrates an apparent capacity to engage and interact in complex military
situations (i.e., war-gaming) at a human (or superhuman) level, defense planners would likely become more predisposed to
view decisions generated by AI algorithms as analogous to (or even superior to) those of humans – even if these decisions lacked a
sufficiently compelling 'human' rationale or fuzzy 'machine' logic.27 Human psychology research has found that people are predisposed to do
harm to others if ordered to do so by an authority figure.28 As AI-enabled decision-making tools are introduced into militaries,
human operators may begin to view these systems as agents of authority (i.e., more intelligent and more authoritative than humans),
and thus be more inclined to follow their recommendations blindly, even in the face of information that indicates they would be wiser not to.
This predisposition will likely be influenced, and possibly expedited, by human bias, cognitive weaknesses (notably decision-making
heuristics), assumptions, and the innate anthropomorphic tendencies of human psychology.29 Experts have long
recognized the epistemological and metaphysical confusion that can arise from mistakenly conflating human and
machine intelligence, especially when used in safety-critical, high-risk domains such as the nuclear enterprise.30 Further,
studies have demonstrated that humans are predisposed to treat machines (i.e., automated decision support aids) that share
task-orientated responsibilities as 'team members,' and in many cases exhibit similar in-group favoritism as humans do with one
another.31
Contrary to conventional wisdom, having a
human in the loop in decision-making tasks does not appear to alleviate
automation bias.32 Instead, human-machine collaboration in monitoring and sharing responsibility for decision-making can lead to
similar psychological effects that occur when humans share responsibilities with other humans, whereby 'social loafing' arises – the
tendency of humans to seek ways to reduce their own effort when working redundantly within a group than when
they work individually on a task.33 A reduction in human effort and vigilance caused by these tendencies could
increase the risk of unforced error and accidents.34 In addition, a reliance on the decisions of automation in complex and high-intensity situations can
make humans less attentive to – or more likely to dismiss – contradictory information, and more predisposed to
use automation as a heuristic replacement (or short-cut) for information seeking.35
Global controls fall through---a new class of autonomous weapons will bring back
inter-state war.
Giacomello ’22 [Giampiero; January 3; Associate Professor of Political Science, University of Bologna;
A Journal of Social Justice, “The War of ‘Intelligent’ Machines May Be Inevitable,” vol. 33 no. 2]
Lessons learned from the past of arms control show a mixed, complex future. Banning entire classes of weapons
does not work any better than outlawing "war" as an instrument of policy in international affairs, as the Briand-Kellogg Pact
tried to do after World War I. Chemical weapons were considered inhuman even before their widespread use, and they
were (somehow) limited only after what happened in WWI. Nuclear weapons, despite their clear annihilating power, were
limited in certain classes and/or areas, but never entirely. Battleships were limited/controlled when they began to lose their primacy
as symbols of naval power. Today, many governments have abandoned the production and/or deployment of anti-personnel mines and cluster
munitions. As for the latter, however, important countries like the United States, Russia or Israel recognize that they still have some military
value and thus declined to renounce their use. All in all, pre-emptive banning of a whole type of weapons will not work,
and given that autonomous weapons are not as deadly (clearly) as nuclear weapons, more likely there will have
to be "some" use, just like chemical weapons, before there will be better, more controlled agreements. However, it will likely get worse before
it gets better.
Strategic logic will not help either. Strategy is about combining available resources to confront an adversary and overcome its will
to oppose. While it is true that the “ small
wars ” of the early 2000s will not go away, as many territories and areas will not jump to
modernity but will remain in the Middle Ages, “ near-peer ”,
multidomain (land, sea, air, space and cyberspace)
competition or plain old state-vs-state conflicts may enjoy a comeback of sorts. In the latter instance, with near-peers,
every little advantage in machine speed and lethality will count to prevail and survive. At least, this is what
history has shown to happen, albeit the trends depicted here are probabilistic, not deterministic (they would not be "trends" otherwise). All in
all, maintaining the
human on the loop, let alone in the loop, may turn into a strategic (and deadly) disadvantage.
Or strategically illogical . In the end the loop is indeed too small for the both of them.
In a multidomain, electronic battlefield, electronic countermeasures will be everywhere; "white noise" (what
results from widespread electronic interdiction) will further augment the already confused situation in battle. If possible, the "fog
of war" (typical battlefield uncertainty about data and information) will in fact increase rather than disappear, as some may think.
Electromagnetic pulse (EMP) weapons are already being deployed to "kill" the enemy's drones and machines, not to mention the
enticement to use a high-altitude nuclear burst for the EMP to destroy transistors on a large scale. In such an environment,
autonomous killing machines will be armored and protected, but it would be impossible to shield and strengthen
everything electronic. It would make sense to exploit one of the oldest and most used defaults of computers, namely, "if no new information
can be received, go back to the last known instruction". Given the choice, all militaries in the world would ask that the "last"
known instruction in any combat system were "accomplish the mission, no matter what". Bury that deep
into the core of those autonomous machines, and they would go on fighting, even after all humankind has long been
gone and forgotten.
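The "last known instruction" default the card describes can be pictured with a minimal sketch. It is ours, not the author's, and the class and function names are hypothetical:

class C2Link:
    def poll(self):
        # Returns a fresh order from higher command, or None if the link is
        # jammed, severed, or burned out by EMP.
        return None  # stand-in for a denied electromagnetic environment

def decision_tick(link: C2Link, last_known_instruction: str) -> str:
    order = link.poll()
    if order is not None:
        return order                 # supervised case: a human can still amend the mission
    return last_known_instruction    # degraded case: keep executing what was buried in the core

instruction = "accomplish the mission, no matter what"
for _ in range(3):                   # the link never comes back
    instruction = decision_tick(C2Link(), instruction)
print(instruction)                   # still the sealed-in order, with no human able to amend it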
That alters state incentives, encouraging aggressive behavior.
Maas ’19 [Matthijs; August; PhD Fellow, Centre for International Law, Conflict and Crisis, University of
Copenhagen; Melbourne Journal of International Law, “International Law Does Not Compute: Artificial
Intelligence and the Development, Displacement or Destruction of the Global Legal Order,” vol. 20]
Finally, there is a ‘hard’ version of the argument that AI
international level, technological change can alter
will drive legal destruction . This is grounded in the idea that, especially at the
core conditions or operational assumptions, not just of specific international laws or
provisions, but in the scaffolding of entire legal frameworks. This relates to a more general point: as Remco Zwetsloot and Allan Dafoe have
pointed out, when we examine risks from AI, we implicitly or explicitly bucket problems as coming from either ‘accident’ or ‘misuse’.153
However, they argue that this dichotomy should be expanded to also take stock of a ‘structural perspective’.154 Rather than just examining
how new technology can afford agents with new capabilities — that is, new opportunities for (mis)use — this perspective asks us to consider
how the introduction of
AI systems may unwittingly shape the environment and incentives (the ‘ structure ’) in which
decision-makers operate.155 As an example, they refer to the prominent historical interpretation of the origin of the First World War
as at least partly deriving from the specific operational or logistical features of the contemporary European railroad system — features such as
tight mobilisation schedules, which promoted or required rapid, all-or-nothing mass mobilisation decisions over more muted moves and which
therefore, paradoxically, reduced states’ manoeuvre room and pitched the dominos of general war.156 In a like manner, certain use
of AI
could ‘unintentionally’ and structurally shift states’ incentives — possibly creating overlap between offensive
and defensive actions, thus driving security dilemmas ; creating greater uncertainty or space for
misunderstanding ; or generally making the inter-state dynamic appear more like a winner-take-all dynamic — in
ways that create
opportunity for conflict, escalation and crisis .157
As such, the ‘hard’ argument for legal destruction holds that the deployment of AI capabilities may lead to a relative decline of the global legal
system, as the capabilities afforded by these AI systems gradually shift the environment,
incentives , or even values of key states.
For instance, AI systems might strengthen the efficacy of more authoritarian states vis-a-vis more liberal ones,158
accelerate the current
trend towards state unilateralism, or feed into the perceived ‘ backlash ’ against international law and
multilateralism. One rationale here is that whatever benefits a state believed it previously secured through engagement in, or compliance
with, international law (eg, security, domestic legitimacy, soft power or cooperation), if it now perceives (whether or not correctly) that it might
secure these goals unilaterally through application of AI, this may erode the broader legitimacy and regulatory capacity of international law. For
instance, governments might be tempted (and, perhaps, warranted) to believe that, in the near-term future, they might be able to achieve
internal security through AI surveillance capabilities, domestic legitimacy through computational propaganda (rather than through public
adherence to human rights norms) or global soft power through predictive modelling of other states’ negotiation strategies (rather than
reciprocal engagement and compromise). Such prospects
are particularly frightening given that the powerful states — on
whose (at times fickle) acquiescence much of the operation of, for instance, UN bodies, might currently depend — are also
leaders in
developing such AI capabilities.
All this is not to say that the prospect of unilateral AI power is the only force eroding international law’s multilateralist ‘hardware’ (institutions)
or ‘software’ (norms), nor that it is a decisive force or even that its effects might be irresistible or irreversible. However, in so far as we are
seeing an erosion of the foundations of international law, AI may speed up that decline — with all that this entails.
IV CONCLUSION
Does international law compute? How could ‘globally disruptive’ AI affect the institutions, instruments and concepts of the global legal order? I
have discussed ways in which applications of AI may drive legal development, disruption or displacement within the system of international
law. Specifically, I have argued that while many of the
international law system through
challenges raised by AI could, in principle, be accommodated in the
legal development , features of the technology suggest that it will, in practice, be destructive to
certain areas or instruments of international law. This ensures that there appears to be a large risk of practical erosion of certain international
law structures as a result of practical and political difficulties introduced by AI systems.
Speed alone warps decision making, ratcheting up escalatory pressures.
Johnson ’22 [James; 2022 edition; PhD, Politics and International Relations from the University of
Leicester, Lecturer in Strategic Studies in the Department of Politics & International Relations at the
University of Aberdeen; Journal of Strategic Studies, "Delegating strategic decision-making to machines:
Dr. Strangelove Redux?” vol. 45 no. 3]
In the post-Cold War era, the emergence of nuclear multipolarity has created multifaceted escalation pathways to a
nuclear confrontation involving nine nuclear-armed states, compared to the Cold War dyad.6 Sophisticated NC3 networks
interact with nuclear deterrence through several key vectors:7 (1) early warning satellites, sensors, and radars (e.g., to detect incoming missile
launches); (2) gathering, aggregating, processing, and communicating intelligence for C2 planning (i.e., to send and receive secure and reliable
orders and status reports between civilian and military leaders)8; (3) missile defense systems as a critical component of nuclear deterrence and
warfighting postures; and (4) monitoring, testing, and assessing the security and reliability of sensor technology, data, and communications
channels, and weapon launch and platforms, used in the context of NC3.9
NC3 systems supply critical linkages between states' nuclear forces and their leadership, ensuring
decision-makers have the requisite information and time needed to command and control (C2) nuclear forces. In short,
NC3 systems are a vital pillar of the states’ deterrence and communications, to ensure robust and reliable command and control over nuclear
weapons under all conditions – and can have a significant impact on how wars are fought, managed, and terminated.10 Because of the pivotal
nature of these systems to the nuclear enterprise, superior systems would likely outweigh asymmetries in arsenal sizes – and thus put an
adversary with less capable systems and more missiles at a disadvantage.
Nuclear security experts have cataloged a long list of computer errors, unstable components, early warning radar faults, lack of knowledge
about adversaries' capabilities and modus operandi (especially missile defense systems), and human mistakes that led to nuclear accidents and
demonstrated the limitations and potential for malicious interference of inherently vulnerable NC3 systems.11 The risks and trade-offs
inherent in NC3 systems since the Cold War era, reflecting the complex social, emotional, heuristic, and cognitive evolution
of human agents making decisions amid uncertainty, will likely be amplified by the inexorable and ubiquitous
complexity, uncertainty, and unpredictability that AI introduces. Consider, in particular, the military concept of 'mission command.'12 This
concept of 'mission command' holds that commanders’ strategic-psychology (or Clausewitz’s military ‘genius’) depends on the intuition,
flexibility, and empathy of subordinates to implement the spirit of commander’s intentions – especially in the context of uncertainty and incomplete information associated with modern warfare.13
AI-augmented systems operating at machine speed and reacting to situations in ways that may surpass humans'
comprehension, might challenge the ‘genius’ of commanders and heuristics in strategic decision-making and raise broader issues
about escalation control and the start of a slippery slope towards the abandonment of human moral
responsibility .14 That is, the uncertainties and unintended outcomes of machines interpreting human intentions, and making
autonomous strategic decisions, in fundamentally non-human ways. A
central risk posed by AI may not be the generation of
bias, or decisions based on AI fuzzy logic, but rather the temptation to act with confidence and certainty in response to
situations that would be better managed with caution and prudence.15
Independently, autonomous systems make mediation impossible.
Leys ’20 [Nathan; 2020; JD, Yale Law School; Yale Journal of International Law, “Autonomous Weapon
Systems, International Crises, and Anticipatory Self-Defense,” vol. 45 no. 2]
D. The “Battle of New Orleans” Problem: AWS and Disaggregated Command-and-Control 66
AWS ’ ability to fight when disconnected from their handlers is both a feature and a bug, at least when hostilities were once
ongoing but have since ceased. Conceptually, this problem is not new. On January 8, 1815, British forces attacked American troops under the
command of Andrew Jackson during the Battle of New Orleans.67 The clash occurred during peacetime; unbeknownst to the combatants, the
United States and the United Kingdom had already signed the Treaty of Ghent, ending the War of 1812.68 Of course, Andrew Jackson did not
have access to a satellite phone.69 The last two centuries have seen dramatic improvements in the “command-and-control” ( C2 ) structures
on which modern military commanders rely to collect information from, and relay orders to, troops in the field. But the modern
communications networks on
which the United States, its allies, and its peer/near-peer competitors rely may well be targeted
early on in a conflict.70 AWS will be especially valuable in degraded C2 environments; unlike Predator drones, for
example, their ability
to fight does not depend on the quality or even existence of a satellite uplink to an Air
Force base in Nevada.71 In addition, autonomy may reduce the risk of hacking 72 and the strain on C2 networks even in
the best of times.73
DARPA’s Collaborative Operations in Denied Environments (CODE) program illustrates concretely how AWS might function when they cannot
contact their human operators.74 CODE’s purpose is “to design sophisticated software that will allow groups of drones to work in closely
coordinated teams, even in places where the enemy has been able to deny American forces access to GPS and other satellite-based
communications."75 Although CODE's purpose is not explicitly to develop fully autonomous weapons, it does seek to leverage
autonomy to reduce the need for direct human control of unmanned systems.76 The strategic and technical goals of
CODE are a short step from realizing AWS that can function in denied environments.77
Now imagine if an AWS, severed from its C2 network by accident, attack, or design,78 were forced to decide whether to engage a nearby
target.79 For example, MK 60 CAPTOR (encapsulated torpedo) mines “detect and classify submarines and release a modified torpedo” to attack
enemy targets.80 If such an autonomous torpedo launcher, stationed in a crucial shipping lane during a conflict
and cut off from C2 before the declaration of a ceasefire, picked up an adversary's warship bearing down on it, such
a weapon might—like Andrew Jackson’s forces at New Orleans—decide to attack under the mistaken assumption that
hostilities were ongoing . Such an attack might well scuttle peace talks and erase the credibility of one
party’s promise to hold its fire .
That causes nuclear escalation from misstated objectives, pressures during crisis, AND
AI mismanagement.
Vold ’21 [Karina; 2021; philosopher of cognitive science and artificial intelligence & an assistant
professor at the University of Toronto's Institute for the History and Philosophy of Science and
Technology; Daniel Harris; retired lawyer and Foreign Service Officer at the US Department of State;
"How Does Artificial Intelligence Pose an Existential Risk?" Oxford Handbook of Digital Ethics, p. 1-34]
4.1 AI Race Dynamics: Corner-cutting Safety
An AI race between powerful actors could have an adverse effect on AI safety , a subfield aimed at finding technical
solutions to building “advanced AI systems that
are safe and beneficial ” (Dafoe, 2018, 25; Cave & Ó hÉigeartaigh, 2018; Bostrom,
2017; Armstrong et al., 2016; Bostrom, 2014). Dafoe (2018, 43), for example, argues that it is plausible that such a race would provide
strong incentives for researchers to trade-off safety in order to increase the chances of gaining a relative
advantage over a competitor.21 In Bostrom’s (2017) view, competitive races would disincentivize two options for a frontrunner: (a)
slowing down or pausing the development of an AI system and (b) implementing safety-related performance handicapping. Both, he argues,
have worrying consequences for AI safety.
(a) Bostrom (2017, 5) considers a case in which a solution to the control problem (C1) is dependent upon the components of an AI system to
which it will be applied, such that it is only possible to invent or install a necessary control mechanism after the system has been developed to a
significantly high degree. He contends that, in situations like these, it is vital that a team is able to pause further development until the required
safety work can be performed (ibid). Yet, if implementing these controls requires a substantial amount of additional time and resources, then in
a tight competitive race dynamic, any team that decides to initiate this safety work would likely surrender its lead to a competitor who forgoes
doing so (ibid). If competitors don’t reach an agreement on safety standards, then it is possible that a “risk-race to the bottom” could arise,
driving each team to take increasing risks by investing minimally in safety (Bostrom, 2014, 247).
(b) Bostrom (2017, 5-6) also considers possible scenarios in which the “mechanisms needed to make an AI safe reduces the AI’s effectiveness”.
These include cases in which a safe AI would run at a considerably slower speed than an unsafe one, or those in which implementing a safety
mechanism necessitates the curtailing of an AI’s capabilities (ibid). If the AI race were to confer large strategic and economic benefits to
frontrunners, then teams would be disincentivized from implementing these sorts of safety mechanisms. The same, however, does not
necessarily hold true of less competitive race dynamics; that is, ones in which a competitor has a significant lead over others (ibid). Under these
conditions, it is conceivable that there could be enough of a time advantage that frontrunners could unilaterally apply performance
handicapping safety measures without relinquishing their lead (ibid).
It is relatively uncontroversial to suggest that reducing investment in AI safety could lead to a host of associated dangers. Improper safety
precautions could produce all kinds of unintended harms from misstated objectives or from
specification gaming , for example. They could also lead to a higher prevalence of AI system vulnerabilities
which are
intentionally exploited by malicious actors for destructive ends, as in the case of adversarial examples (see
Brundage et al., 2018). But does AI safety corner-cutting reach the threshold of an Xrisk? Certainly not directly, but there are at least some
circumstances under which it would do so indirectly. Recall that Chalmers (2010) argues there could
be defeaters that obstruct
the self-amplifying capabilities of an advanced AI, which could in turn forestall the occurrence of an
intelligence explosion. Scenario (a) above made the case that a competitive AI race would disincentivize researchers
from investing in developing safety precautions aimed at preventing an intelligence explosion (e.g., motivational
defeaters). Thus, in cases in which an AI race is centred on the development of artificial general
intelligence, a seed AI with the capacity to self-improve, or even an advanced narrow AI (as per §3.1), a competitive race dynamic
could pose an indirect Xrisk insofar as it contributes to a set of conditions that elevate the risk of a
control problem occurring (Bostrom, 2014, 246; 2017, 5).
4.2 AI Race Dynamics: Conflict Between AI Competitors
The mere narrative of an AI race could also, under certain conditions, increase the risk of military conflict between competing
groups. Cave & Ó hÉigeartaigh (2018) argue that AI race narratives which frame the future trajectory of AI development in terms of
technological advantage could “increase the risk of competition in AI causing real conflict (overt or covert)”. The militarized language typical of
race dynamics may encourage competitors to view each other “as threats or even enemies” (ibid, 3).22 If a government believes that an
adversary is pursuing a strategic advantage in AI that could result in their technological dominance, then this alone could provide a motivating
reason to use aggression against the adversary (ibid; Bostrom, 2014). An AI race narrative could thus lead
to crisis escalation
between states. However, the resulting conflict, should it arise, need not directly involve AI systems. And it's an open question whether said
conflict would meet the Xrisk threshold. Under conditions where it does (perhaps
nuclear war ), the contributions of AI as a technology
would at best be indirect.
4.3 Global Disruption: Destabilization of Nuclear Deterrents
Another type of crisis escalation associated with AI is the potential destabilizing impact the technology could have on global strategic
stability;23 in particular, its capacity to destabilize
nuclear deterrence strategies (Giest & Lohn, 2018; Rickli, 2019; Sauer, 2019;
Groll, 2018; Zwetsloot & Dafoe, 2019). In general, deterrence relies both on states possessing secure second-strike capabilities (Zwetsloot &
Dafoe, 2019) and, at the same time, on a state's inability to locate, with certainty, an adversary’s nuclear second-strike forces (Rickli, 2019). This
could change, however, with advances in AI (ibid). For example, AI-enabled surveillance and reconnaissance systems, unmanned underwater
vehicles, and data analysis could allow a state to both closely track and destroy an adversary’s previously hidden nuclear-powered ballistic
missile submarines (Zwetsloot & Dafoe, 2019). If their second-strike nuclear capabilities
were to become vulnerable to a
first strike, then a pre-emptive nuclear strike would, in theory, become a viable strategy under certain scenarios (Giest & Lohn, 2018).
In Zwetsloot & Dafoe's (2019) view, "the fear that nuclear systems could be insecure would, in turn, create pressures for states—
including defensively motivated ones—to pre-emptively escalate during a crisis". What is perhaps most alarming is that the
aforementioned AI systems need not actually exist to have a destabilizing impact on nuclear deterrence (Rickli, 2019; Groll, 2018; Giest & Lohn,
2018). As Rickli (2019, 95) points out, “[b]y its very nature, nuclear deterrence is highly psychological and relies on the perception of the
adversary’s capabilities and intentions”. Thus, the “simple
misperception of the adversary’s AI capabilities is
destabilizing in itself" (ibid). This potential for AI to destabilize nuclear deterrence represents yet another kind of indirect global
catastrophic, and perhaps even existential , risk insofar as the destabilization could contribute to nuclear conflict escalation.
5. Weaponization of AI
Much like the more recent set of growing concerns around an AI arms race, there have also been growing concerns around the weaponization
of AI. We use “weaponization” to encompass many possible scenarios, from malicious actors or a malicious AI itself, to the use of fully
autonomous lethal weapons. And we will discuss each of these possibilities in turn. In §5.1 we discuss malicious actors and in §5.2 we discuss
lethal autonomous weapons. We have combined this diverse range of scenarios for two reasons. First, while the previous Xrisk scenarios
discussed (CPAX and an AI race) could emerge without malicious intentions from anyone involved (e.g., engineers or governments), the
scenarios we discuss here do for the most part assume some kind of malicious intent on the part of some actor. They are what Zwetsloot &
Dafoe (2019) call a misuse risk. Second, the threats we discuss here are not particularly unique to AI, unlike those in previous sections. The
control problem, for example, is distinctive of AI as a technology, in the sense that the problem did not exist before we began building
intelligent systems. On the other hand, many technologies can be weaponized. In this respect, AI is no different. It is because AI is potentially so
powerful that its misuse in a complex and high impact environment, such as warfare, could pose an Xrisk.
5.1 Malicious Actors
In discussing CPAX, we focused on accidental risk scenarios—where no one involved wants to bring about harm, but the
mere act of building an advanced AI system creates an Xrisk . But AI could also be deliberately misused.
These can include things like exploiting software vulnerabilities, for example, through automated hacking or
adversarial examples; generating political discord or misinformation with synthetic media; or
initiating physical attacks using
drones or automated weapons (see Brundage et al., 2018). For these scenarios to reach the threshold of Xrisk (in terms of ‘scope’),
however, a beyond catastrophic amount of damage would have to be done. Perhaps one instructs an AI system to suck
up all the oxygen in the air , to launch all the nuclear weapons in a nation’s arsenal, or to invent a deadly
airborne biological virus. Or perhaps a lone actor is able to use AI to hack critical infrastructures , including some
that manage large-scale projects, such as the satellites that orbit Earth. It does not take much creativity to drum up a scenario in which an AI
system, if put in the wrong hands , could pose an Xrisk . But the Xrisk posed by AI in these cases is likely to be indirect—where AI is
just one link in the causal chain, perhaps even a distal one. This involvement of malicious actors is one of the more common concerns around
the weaponization of AI. Automated systems that have war- fighting capacities or that are in anyway linked to nuclear missile systems could
become likely targets of malicious actors aiming to cause widespread harm. This threat is serious, but the theoretical nature of the threat is
straightforward relative to those posed in CPAX, for example.
One further novel outcome of AI would be if the system itself malfunctions. Any technology can malfunction, and in the case of an AI system
that had control over real-world weapons systems the consequences of a malfunction could be severe (see Robillard, this volume). We’ll discuss
this potential scenario a bit more in the next section. A final related possibility here would be for the AI to itself turn malicious. This would be
unlike any other technology in the past. But since AI is a kind of intelligent agent, there is this possibility. Cotton-Barratt et al. (2020), for
example, describe a hypothetical scenario in which an intelligence explosion produces a powerful AI that wipes out human beings in order to
pre-empt any interference with its own objectives. They describe this as a direct Xrisk (by contrast, we described CPAX scenarios as indirect),
presumably because they describe the AI as deliberately wiping out humanity. However, if the system has agency in a meaningful sense, such
that it is making these kinds of deliberate malicious decisions, then this seems to assume it has something akin to consciousness or strong
intentionality. In general, we are far from developing anything like artificial consciousness. This is not to say that these scenarios should be
dismissed altogether, but many experts agree that there are serious challenges confronting the possibility of AI possessing these cognitive
capacities (e.g., Searle, 1980; Koch and Tononi, 2017; Koch, 2019; Dehaene et al., 2017).
5.2 Lethal Autonomous Weapons
One other form of weaponization of AI that is sometimes discussed as a potential source of Xrisk are lethal autonomous weapons systems
(LAWS). LAWS include systems that can locate, select, and engage targets without any human intervention (Roff,
2014; Russell, 2015; Robillard, this volume). Much of the debate around the ethics of LAWS has focused on whether their use would violate
human dignity (Lim, 2019; Rosert & Sauer, 2019; Sharkey, 2019), whether they could leave critical responsibility gaps in warfare (Sparrow, 2007;
Robillard, this volume), or whether they could undermine the principles of just war theory, such as noncombatant immunity (Roff, 2014), for
example. These concerns, among others, have led many to call for a ban on their use (FLI, 2017). They are certainly very serious and
more near term (as some LAWS already exist) than the speculative scenarios discussed in CPAX. But do LAWS really present an Xrisk? It seems
that if they do, they do so indirectly. Consider two possible scenarios.
(a) One concern around LAWS is that they will
ease the cost of engaging in war, making it more likely that
tensions between rival states rise to military engagement . In this case, LAWS would be used as an
instrument to carry out the ends of some malicious actor . This is because, for now, humans continue to play a significant
role in directing the behaviour of LAWS, though it is likely that we will see a steady increase in the autonomy of future systems (Brundage et al.,
2018). Now, it could be that this kind of warfare leads to Xrisks, but this would require a causal chain that includes political disruption, perhaps
failing states, and widespread mass murder. None of these scenarios are impossible, of course, and they present serious risks. But we have tried
to focus this chapter on Xrisks that are novel to AI as a technology and, even though we view the risks of LAWS as extremely important, they
ultimately present similar kinds of risks as nuclear weapons do. To the extent that LAWS have a destabilizing impact on norms and practices in
warfare, for example, we think that scenarios similar to those discussed in §4.3 are possible—LAWS
might escalate an ongoing
crisis, or moreover, the mere perception that an adversary has LAWS might escalate a crisis.
Nuclear war intensifies physical and psychic suffering.
ICRC ’18 [International Committee of the Red Cross; August 7; Humanitarian institution based in
Geneva, Switzerland and funded by voluntary donations; ICRC, “Nuclear weapons - an intolerable threat
to humanity,” https://www.icrc.org/en/nuclear-weapons-a-threat-to-humanity]
The most terrifying weapon ever invented
Nuclear weapons are the most terrifying weapon ever invented: no weapon is more destructive; no
weapon causes such unspeakable human suffering; and there is no way to control how far the
radioactive fallout will spread or how long the effects will last.
A nuclear bomb detonated in a city would immediately kill tens of thousands of people, and tens of thousands more
would suffer
horrific injuries and later die from radiation exposure .
In addition to the immense short-term loss of life, a nuclear war could cause long-term damage to our
planet. It could severely disrupt the earth's ecosystem and reduce global temperatures, resulting in
food shortages around the world.
Think nuclear weapons will never be used again? Think again.
The very existence of nuclear
weapons is a threat to future generations, and indeed to the survival of humanity .
What's more, given the current regional and international tensions, the
risk of nuclear weapons being used is the highest
it's been since the Cold War. Nuclear-armed States are modernizing their arsenals, and their command and
control systems are becoming more vulnerable to cyber attacks. There is plenty of cause for alarm about the danger
we all face.
No adequate humanitarian response
What would humanitarian organizations do in the event of a nuclear attack? The
hard truth is that no State or organization could
deal with the catastrophic consequences of a nuclear bomb.
Even absent nuke war, unaccountable military AI unleashes drone swarms---clarifying
liability is critical.
Sharan ’21 [Yair, Elizabeth Florescu, and Ted J. Gordon; April 27; senior associate researcher in BeginSaadat Center for strategic studies in Bar Ilan University; Tripping Points on the Roads to Outwit Terror,
“Artificial Intelligence and Autonomous Weapons,” p. 49—60]
Maven is a Yiddish word meaning expert, a sage and scholar, an egghead with a practical bent, and conveying at least a hint of wisdom with a
hint of a smile; in other words, it's good to be called a maven. But the word was also shorthand for a US
Department of Defense project designed to introduce artificial intelligence to battlefield decision-making, ultimately using
machine learning and algorithmic systems to make life and death calls on lethal attacks. The more formal name of the project was "Algorithmic
Warfare Cross-Function Team (AWCFT)"; it was designed to use machine learning to help identify likely terrorist targets
from drone-captured images and sensor data, and while the official storyline placed humans in the loop, most people saw the
possibility
of closing the loop and allowing the Maven-equipped drones to launch their explosive weapons if the algorithms
showed high levels of confidence about the nature of the potential targets.
The project has adopted an unusual logo, shown in Fig. 7.1, as its official seal.
<<<FIGURE OMITTED>>>
What an
incredible lapse of judgment to show images of smiling and happy robots designed to find and probably to
kill the enemy and nearby collateral, unlucky people! It is probably a significant error in judgment likely to produce more
opposition than support and in incredibly bad taste to say the least. The motto in the logo "officium nostrum est adiuvare" translates to "Our
job is to help.” But help whom and under what circumstances? [23].
Perhaps it
was inevitable that one of the first uses anticipated for artificial intelligence was closing the loop in
weapons. The IFF systems of World War II were designed to identify friends or foes, the modern electronic version of the medieval “who
goes there” challenge. In those early electronic warfare systems (1940), a radar signal from an airplane, ship or submarine triggered a receiver
in the challenged craft and the coded return identified it as a friend or foe. A human then interpreted the blips on an oscilloscope or other
visual display and took appropriate action. With IFF systems becoming much more sophisticated and with the advent of artificial intelligence
that is increasingly viewed as being reliable, it is no wonder that autonomous systems that do not depend on human decision-making are being
considered. This takes the onus away from humans and allocates decisions to algorithms that produce unambiguous “yes” or “no” answers. The
systems can’t waffle and their human operators cannot be assigned the blame if anything goes wrong.
The authoritative Bureau of Investigative Journalism [4] keeps score on the number of drone strikes in Afghanistan, Pakistan, Somalia, and
Yemen; they estimate that between 2014 and 2018 almost 6,000 strikes were launched killing between 8,000 and 12,000 people of whom
1,000–2,000 were civilians.
As of today, the loop has not been closed: the current USA military anti-terror drone system is a mix of intelligence on the ground obtained
through traditional means of espionage, drones capable of launching Hellfire missiles with precise accuracy against designated targets, pilots of
the remote-control drones located in Florida, command and control personnel to review the “capture or kill command,” troops on the ground if
the decision is capture rather than kill. Nothing is autonomous, out of caution to avoid killing the wrong person, injuring civilians, or a hundred
other errors and mistakes [30].
Two
troubling directions for the evolution of such weapons are miniaturization and swarming. Hobby drones are
already tiny, they can take off from a forefinger, and are capable of carrying payloads—cameras and automated guidance systems and other
devices. The US Army has awarded a 2.6 million-dollar contract to FLIR Corporation, a manufacturer of thermal imaging cameras located in
Wilsonville, Oregon to provide an advanced version of their Black Hornet drone. France and Australia have used versions of the drone for
almost a decade. The company describes the drone as including two UAV sensors, a controller, and display. They describe the system on
their web site as “ equip (ing) the non-specialist dismounted soldier with immediate
covert situational awareness”
[10]. The soldiers using the quadcopter drone launch it to scan the area; it returns to them when the reconnaissance is complete. Industry
sources, reporting on the contract said: “The Black Hornet Personal Reconnaissance System is the world’s smallest combat-proven nano-drone,
according to the company. The US Army has ordered the next-generation Black Hornet 3, which weighs 32 g and packs navigation capabilities
for use in areas outside of GPS coverage. The drone, which has advanced image processing from earlier versions can fly a distance of two
kilometers at more than 21 km an hour and carries a thermal micro camera” [1].
Now to the other troubling vector of change: swarming. In 2017, a Youtube video called Slaughterbots [25] was released by the Future of
Life Institute, an organization that warns about the possibility of autonomous weapons.Footnote2 The film was extremely
realistic, simulating a TED talk, and showed a Hercules aircraft dropping what appeared to be
thousands of small drones
programmed to remain a short distance away from each other, behave as a swarm , and using facial recognition or other means
of identification, to
hunt and attack specific people or groups. In the video, the drones carried a small explosive shaped charge
and when they found their targets, exploded to kill them. It was a masterful production, extremely realistic. It went viral with over 2 million
views [22]. Readers who want to watch the film (recommended) should go to https://www.youtube.com/watch?v=9CO6M2HsoIA; it is a
frightening film in which weapons have been given the authority to make the decision to kill.
Perhaps in response to such concerns, the US Air Force has developed an anti-swarm weapon called Thor (not to be confused with the Thor
Intermediate Range Ballistic Missile (IRBM) [20] developed by the Air Force in the late 1950s). The new Thor is an electronic counter-measure
directed energy defensive weapon, a relatively portable microwave system designed to neutralize short-range swarms of autonomous drones
[18]. Similar systems are being developed for use at longer ranges. Even though the systems are relatively portable, the question is: where will
they be when they are needed?
How much time do we have before autonomous weapons are an everyday reality? Not much. USA Secretary of Defense
Mark Esper, speaking at a public conference on uses of artificial intelligence in 2019 said: “The
National Defense Strategy (of the USA)
prioritizes China first and Russia second as we transition into this era of great power competition. Beijing has made it abundantly
clear that it intends to be the world leader in AI by 2030.”
“President Xi has said that China must, quote, “ensure that our country marches in the front ranks when it comes to theoretical research and
this important area of AI and occupies the high ground in critical and core AI technologies.”
“For instance, improvements in AI enable more capable and cost-effective autonomous vehicles. The Chinese People’s Liberation Army is
moving aggressively to deploy them across many warfighting domains. While the U.S. faces a mighty task in transitioning the world’s most
advanced military to new AI-enabled systems, China believes it can leapfrog our current technology and go straight to the next generation.”
“In addition to developing conventional systems, for example, Beijing is investing in low cost, long-range, autonomous and unmanned
submarines, which it believes can be a cost-effective counter to American naval power. As we speak, the Chinese government is already
exporting some of the most advanced military aerial drones to the Middle East, as it prepares to export its next generation stealth UAVs when
those come online. In addition, Chinese weapons manufacturers are selling drones advertised as capable of full autonomy, including the ability
to conduct lethal targeted strikes” [9].
Reality overtakes speculation about autonomous swarms . A report issued following Secretary Esper’s speech
makes it clear that the USA and several other countries are far along in the process of fielding such weapons
and they appear to be so powerful that one aspect of the debate surrounding them is whether or not they should be
classed as weapons of mass destruction. As is the case that definitions of terrorism are scattered, so are definitions of WMD. This lack
of agreed-to definition means that it is not clear whether the drone swarms fall under existing international treaties and agreements.Footnote3
Development and testing proceed, nevertheless.
One
swarm weapon is called Armed, Fully-Autonomous Drone Swarm, or AFADS; this is the type of weapon described as a
“slaughterbot” in the video referenced earlier in this chapter. In a small-business solicitation, the
Department of Defense sought
contractors who would develop, “… missile launched payload consisting of multiple quad-copters. The missile
will release the quad-copter payload during flight, after which the quad-copters must decelerate to a velocity suitable for deployment
(unfolding), identify potential targets, maneuver to and land on the target, and detonate onboard Explosively Formed Penetrator (EFP)
munition(s). Potential targets include tank and large caliber gun barrels, fuel storage barrels, vehicle roofs, and ammunition storage sites. The
ultimate goal is to produce a missile deployable, long range UAS swarm that can deliver small EFPs to a variety of
targets” [29].
Zak Kallenborn, a senior consultant at the US Air Force Center for Strategic Deterrence Studies, in a report that considered if swarms qualified
as WMDs, had this to say: “Attack drones carry weapons payloads. Sensing drones carry sensors to identify and track potential targets or
threats. Communications drones ensure stable communication links within the swarm and with the command system. Dummy drones may
absorb incoming adversary fire, generate false signatures, or simply make the swarm appear larger.” The composition of a heterogeneous
swarm could be modified to meet the needs of a particular mission or operational environment. The capability to swap in new drones has been
demonstrated on a small scale. In the future, providing a
drone swarm to an operational commander could be akin
to supplying a box of Legos . “Here are your component parts. Assemble them into what you need …”
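To make the modular composition described above concrete, here is a minimal, purely illustrative sketch; the class, role names, mission labels, and counts are hypothetical and are not drawn from Kallenborn's report.

```python
# Purely illustrative sketch of a heterogeneous swarm "parts list" of the kind
# the report describes (attack, sensing, communications, and dummy drones).
# All names and counts below are hypothetical.
from dataclasses import dataclass


@dataclass
class SwarmComposition:
    attack: int = 0    # drones carrying weapons payloads
    sensing: int = 0   # drones carrying sensors to identify and track targets
    comms: int = 0     # drones maintaining links within the swarm
    dummy: int = 0     # drones that absorb fire or make the swarm look larger

    def total(self) -> int:
        return self.attack + self.sensing + self.comms + self.dummy


def compose_for_mission(mission: str) -> SwarmComposition:
    """Return a hypothetical composition tailored to a mission profile."""
    if mission == "area_denial":
        return SwarmComposition(attack=40, sensing=20, comms=10, dummy=30)
    if mission == "reconnaissance":
        return SwarmComposition(sensing=60, comms=20, dummy=20)
    return SwarmComposition()


if __name__ == "__main__":
    swarm = compose_for_mission("area_denial")
    print(swarm, "total =", swarm.total())  # total = 100
```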
“Drone
swarms could be effective in assassination attempts due to the ability to overwhelm defenses.
However, the lack of stealth means they are likely to only appeal to actors unconcerned with (or desire) their role being known. In some
circumstances, drone swarms could function
as anti-access, area-denial (A2/AD) weapons” [16].
<<<TEXT CONDENSED NONE OMITTED>>>
If terrorists can master the technologies involved (target recognition and inter-drone communications) it appears that drone swarms could be useful to terrorists, particularly in missions designed to attack special protected targets
and even wipe out political or military leaders of a nation, as anticipated in the video Slaughterbots. To sum up, one can see that AI-enabled autonomous weapons are potentially a future technology that might be available to
terrorists and increase their capability to pose a significant threat to strategic targets, some of them unreachable before. However, they can also help—as they are already—fight terrorism by facilitating reconnaissance,
communication, and targeted attacks with reduced collateral damage. As both AI and autonomous weapons are evolving, so are the related opportunities and threats. 7.3 Tripping Points Artificial Intelligence and machine
autonomy represent the main tripping points in this chapter. Although promising, these technologies have their dark side, including opening new opportunities to terrorists. Putting aside for the moment the larger questions of
whether AI will ultimately “take charge” of civilization, some tripping points are lurking in the uses of algorithms and their applications on the one hand, and the emergence of some new counter-terror steps on the other hand.
These include: Algorithms that decide on human matters are emotionless and probably contain hidden biases. These technologies (AI and automated decision-making) may be cooperative or antagonistic toward the human society.
AI can help identify terrorists and prevent their activities, but controlling terrorists’ access to AI has to be addressed. AI-enabled technologies highly impact the development of new systems of values. They could lead to a more
democratic and/or safer society, but also to a dictatorial, controlled world order with a handful of companies and/or governments having monopoly over the algorithms, databases, and their uses. This could seriously increase the
spread and power of terrorism. The forces leading toward autonomous weapons, tiny drones and drone swarms are strong and the decision to deploy or use such weapons will be a significant tripping point. Autonomous weapons
can help deter or prevent terrorism, but can also serve terrorists in accomplishing their objectives. Autonomous weapons might be used as political assassins, seeking terrorists and their leaders, or, if used by terrorists, for killing
legitimate politicians or other persons deemed to be enemies. As the developments of AI and autonomous weapons are accelerating, so are intensifying the debates over the ethical aspects associated with them. Most countries
and several international organizations, corporations, think tanks and academia are conducting studies and dialogues on the ethical use of AI. New Zealand became the first country to set standards for government use of
algorithms [26], the European Commission put forward a European approach to Artificial Intelligence and Robotics [8], while the UN started dialogues for setting guiding principle for AI development [15, 27]. The USA National
Security Commission on Artificial Intelligence (NSCAI) is an independent federal commission that assesses the implications of AI and related technologies developments worldwide, to support USA policy making and national
security [21]. While autonomous weapons development continues at ever increasing speed [11], efforts to ban them altogether are also increasing worldwide. Organizations like the Ban Lethal Autonomous Weapons
(autonomousweapons.org) and the International Committee for Robot Arms Control (icrac.net) lead worldwide efforts for slowing down the development and deployment of such weapons unless their ethical and safe use is
secured—which would be extremely difficult.
<<<PARAGRAPH BREAKS START>>>
The UN
covers the emerging technologies in the area of lethal autonomous weapons systems (LAWS) under the Convention on Certain
Conventional Weapons (CCW). In 2016 an open-ended Group of Governmental Experts was established and since then, several meetings
have been held. In principle, LAWS fall under the same international laws, particularly the International Humanitarian Law as other weapons
[28]. They apply to State as well as non-State actors.
However, while guidelines and protocols can help reduce
proliferation, enforcement and prosecution remain a tripping point. Who would be accountable if an
autonomous weapon commits a war crime? Will it be the programmer, the army, the manufacturer, the
entity that launched it? Whether it’s about terrorist or counter-terrorism organizations, these questions remain valid and have
yet to be addressed.
Drone swarms guarantee WMD-level attacks---accountability is critical to contain
escalation.
Kallenborn ’21 [Zachary; April; research affiliate with the Unconventional Weapons and Technology
Division of the National Consortium for the Study of Terrorism and Responses to Terrorism; Bulletin of
the Atomic Scientists, “Meet the future weapon of mass destruction, the drone swarm,”
https://thebulletin.org/2021/04/meet-the-future-weapon-of-mass-destruction-the-drone-swarm/]
Armed, fully-autonomous drone swarms are future weapons of mass destruction. While they are unlikely to achieve the
scale of harm as the Tsar Bomba, the famous Soviet hydrogen bomb, or most other major nuclear weapons, swarms could cause the
same level of destruction, death, and injury as the nuclear weapons used in Nagasaki and Hiroshima—that is tens-of-thousands of
deaths. This is because drone swarms combine two properties unique to traditional weapons of mass destruction: mass harm
and a lack of control to ensure the weapons do not harm civilians.
Countries are already putting together very large groupings of drones. In India’s recent Army Day Parade, the government demonstrated what it claimed is a true drone swarm of 75 drones and expressed the intent to scale the
swarm to more than 1,000 units. The US Naval Postgraduate School is also exploring the potential for swarms of one million drones operating at sea, under sea, and in the air. To hit Nagasaki levels of potential harm, a drone swarm
would only need 39,000 armed drones, and perhaps fewer if the drones had explosives capable of harming multiple people. That might seem like a lot, but China already holds a Guinness World Record for flying 3,051 preprogrammed drones at once.
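As a back-of-the-envelope illustration of the arithmetic in the passage above, the sketch below reproduces the roughly 39,000-drone figure; the assumption of one (or three) fatalities per armed drone is an editor assumption for illustration, not a figure from the article.

```python
# Back-of-the-envelope check of the passage's arithmetic. The 39,000 target
# figure comes from the passage; the fatalities-per-drone values are assumed
# for illustration only.
target_deaths = 39_000

for fatalities_per_drone in (1, 3):
    drones_needed = target_deaths / fatalities_per_drone
    print(f"{fatalities_per_drone} per drone -> {drones_needed:,.0f} drones")
# 1 per drone -> 39,000 drones
# 3 per drone -> 13,000 drones
```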
Experts in national security and artificial intelligence debate whether a single autonomous weapon could ever be capable of adequately discriminating between civilian and military targets, let alone thousands or tens of thousands
of drones. Noel Sharkey, for example, an AI expert at the University of Sheffield, believes that in certain narrow contexts, such a weapon might be able to make that distinction within 50 years. Georgia Tech roboticist Ronald Arkin,
meanwhile, believes lethal autonomous weapons may one day prove better at reducing civilian casualties and property damage than humans, but that day hasn’t come yet. Artificial intelligences cannot yet manage the
complexities of the battlefield.
Drone swarms worsen the risks posed by a lethal autonomous weapon. Even if the risk of a well-designed, tested, and validated autonomous weapon hitting an incorrect target were just 0.1 percent, that would still imply a
substantial risk when multiplied across thousands of drones. As military AI expert Paul Scharre rightly noted, the frequency of autonomous weapons’ deployment and use matters, too; a weapon used frequently has more
opportunity to err. And, as countries rush to develop these weapons, they may not always develop well-designed, tested, or validated machines.
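The scale argument in this paragraph can be made explicit with a short expected-value calculation; the 0.1 percent error rate is the figure hypothesized above, while the swarm size and number of deployments are assumed here for illustration.

```python
# Expected-value illustration of the point above: a tiny per-drone error rate
# still implies many expected wrong engagements at swarm scale. The swarm size
# and deployment count are assumptions for illustration.
error_rate = 0.001      # the hypothesized 0.1% chance a drone hits a wrong target
swarm_size = 10_000     # assumed swarm size
deployments = 20        # assumed number of uses; frequency of use matters

expected_per_use = error_rate * swarm_size
print(expected_per_use)                          # 10.0 wrong engagements per use
print(expected_per_use * deployments)            # 200.0 across all deployments

# Chance that at least one drone errs in a single deployment:
print(round(1 - (1 - error_rate) ** swarm_size, 6))  # ~0.999955
```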
Drone communication means an error in one drone may propagate across the whole swarm. Drone swarms also risk so-called “emergent error.” Emergent behavior, a term for the complex collective behavior that results from the
behavior of the individual units, is a powerful advantage of swarms, allowing behaviors like self-healing in which the swarm reforms to accommodate the loss of a drone. But emergent behavior also means inaccurate information
shared from each drone may lead to collective mistakes.
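A toy model of the emergent-error mechanism described here, assuming drones repeatedly average the estimates their neighbors share; the numbers and the averaging rule are illustrative assumptions, not a model from the article.

```python
# Toy model of "emergent error": one drone's bad shared estimate pulls every
# drone's estimate away from the true value once the swarm averages shared
# information. The update rule and numbers are illustrative assumptions.
true_value = 0.0
estimates = [true_value] * 19 + [10.0]   # 19 accurate drones, 1 badly wrong drone

for _ in range(50):                      # simple all-to-all averaging rounds
    mean = sum(estimates) / len(estimates)
    estimates = [0.5 * e + 0.5 * mean for e in estimates]

print(round(sum(estimates) / len(estimates), 3))  # 0.5 -- the whole swarm is biased
```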
The proliferation of swarms will reverberate throughout the global community, as the proliferation of military drones already has echoed. In the conflict between Armenia and Azerbaijan, Azeri drones proved decisive. Open-source
intelligence indicates Azeri drones devastated the Armenian military, destroying 144 tanks, 35 infantry-fighting vehicles, 19 armored personnel carriers, and 310 trucks. The Armenians surrendered quickly, and the Armenian people
were so upset they assaulted their speaker of parliament.
Drone swarms will likely be extremely useful for carrying out mass casualty attacks . They may be useful as strategic
deterrence weapons for states without nuclear weapons and as
assassination weapons for terrorists. In fact, would-be assassins
launched two drones against President Nicolas Maduro in Venezuela in 2018. Although he escaped, the attack helps illustrate the
potential of drone swarms. If the assassins launched 30 drones instead, the outcome may have been different.
Drone swarms are also likely to be highly effective delivery systems for chemical and biological weapons
through integrated environmental sensors and mixed arms tactics (e.g. combining conventional and
chemical weapons in a single swarm), worsening already fraying norms against the use of these weapons.
Even if drone swarm risks to civilians are reduced, drone swarm error creates escalation risks. What happens if the
swarm accidentally kills soldiers in a military not involved in the conflict?
Plan---1AC
The United States should vest legal duties in artificial intelligence that significantly
restrict its use in military systems.
Solvency---1AC
Finally, SOLVENCY:
The plan vests duties in autonomous AI systems that restrict their use---that shuts
down dangerous systems and forces responsible development.
Gevers ’15 [Aaron; Spring; JD at Rutgers University (2014), Assistant General Counsel of the Viking
Group; Rutgers Journal of Law & Public Policy, “Is Johnny Five Alive or Did It Short Circuit: Can and
Should an Artificially Intelligent Machine Be Held Accountable in War or Is It Merely a Weapon?” vol. 12,
no. 3]
V. WHO THEN IS LIABLE FOR JOHNNY FIVE?
These legal standards and statutes affect Johnny Five and his situation in two ways: first, in determining whether Johnny Five himself would be
subject to the UCMJ or the Rome Statute, and second, in discerning whether a captain, as Johnny Five's commander, would be responsible for
his actions.
A. DO THE LAWS APPLY TO JOHNNY FIVE?
The UCMJ is broad in its jurisdiction over the armed services both in statute and in common law. Article 2
of the UCMJ states, inter alia, persons subject to the UCMJ include "members of a regular component of the armed forces,"139 "other persons
lawfully called or ordered into or to duty in or training for in, the armed forces,"140 "[i]n time of declared war or a contingency operation,
persons serving with or accompanying an armed force in the field,"141 or "persons serving with, employed by, or accompanying the armed
forces."142 Furthermore, Article 3 states the military has the jurisdiction to try personnel who are or were at the time of the act in question a
status to be subject to the UCMJ. In other words, the UCMJ's jurisdiction extends to members of the armed forces or other persons
encompassed by Article 2 at the time the act in question took place.143
Essentially, the
UCMJ applies to any person within or accompanying the armed forces. Johnny Five might think
he is able to get away scot-free since he is not necessarily a person , but that is not the case. While the UCMJ does not
expound upon the meaning of "person", the United States Code in its very first provision certainly does. It provides "[i]n determining the
meaning of any Act of Congress, unless the context indicates otherwise . . . the words 'person' and 'whoever' include corporations, companies,
associations, firms, partnerships, societies, and joint stock companies, as well as individuals."144 It would be no different to give the same
rights to an AI being as those conferred on corporations; both AI persons and corporations would be legal fictions.
Johnny Five can't be said to be any less of a person than a corporation. In fact, because he is an individual
with cognitive and communicative abilities, he is more so a person than any corporation . At the very least, if
a corporation can be considered a person and is defined as such per the United States Code, with nothing else
to the contrary in the UCMJ, he
should be subject to the articles of the UCMJ and the jurisdiction of military tribunals.145
Likewise, Johnny
Five should be considered a person in any other criminal proceeding domestically or
internationally because the meaning of person has long been understood to include inanimate
objects . While "person" is typically understood to mean an actual human being, a legal person is anything that is subject to rights and
duties.146 So long as an inanimate object is the subject of a legal right, the will of a human is attributed to it in order for the right to be
exercised.147 Surprisingly, this
is not a new theory . Legal proceedings against inanimate objects have been in existence since ancient
Greece and in fact continue until this day, albeit infrequently. In Greece, proceedings against inanimate objects were almost commonplace.148
Such objects were often referred to as deodands and, in England as late as 1842, these items were forfeited to the church or Crown.149 In fact,
anything that had killed a man, such as an errant locomotive, was liable to be forfeited.150 For killing those people in our scenario, Johnny
Five then would be liable for those deaths and subject to forfeit under those rules - rules which have been around
for thousands of years and should go undisturbed or, at the very least, provide example for the discussion of how to treat
Johnny Five in a
potential war crime scenario.
This conception of liability of inanimate objects is not one solely of the old world or other countries, but has been a
staple of domestic U.S. law. There were instances in the Plymouth and Massachusetts colonies where a
gun or even a boat would be forfeited for causing the death of a man.151 Indeed, this notion of liability of objects
has had the most discussion in maritime and admiralty law.152 Oliver Wendell Holmes, Jr., in his treatise on the common law, has stated that in
maritime collisions, the owner of the ship is not to blame, nor necessarily is the captain, but rather all liability is to be placed upon the vessel freeing the owner from all personal liability.153 Chief Justice Marshall even stated outright that proceedings in maritime law are not against the
owner, but against the vessel for offenses committed by the vessel.154 Like a vessel, Johnny Five would often be referred to with a gender "he" for Johnny Five is no different than "she" for a sea faring ship. This attribution of gender is something unique to maritime ships and is a
likely reason for this unique, if not strange, rule of liability against vessels.155 If something as simple as gender can lead to legal person status,
surely something with gender, speech, movement, logic, and appearance similar to human persons should be treated in at least the same
respect.
Footnote 154:
154 United States v. The Little Charles, 26 F. Cas. 979, 982 (C.C.D. Va. 1818) (No. 15,612); see also United States v. Brig Malek Adhel, 43 U.S. (2
How.) 210, 234 (1844). It should be noted that the purpose of this rule had foundation in the idea that on the high seas, the vessel may be the
only way to secure a judgment against the at-fault party since the vessel is often of international origin. Indeed, this was a form of in rem
jurisdiction and the likely genesis for the rule. See HOLMES, supra note 153, at 27-28. Nonetheless, the law is still relevant here since the AI, like
the vessel, presents an equal conundrum in determining its owner or commander - if it even has one - or who to hold responsible generally, a
problem demonstrated by this very paper. Moreover, such an
AI could easily be used at sea and could, for all intents and
purposes, technically be considered a vessel in which these laws discussed by Holmes, Marshall, and others would apply
directly .
Footnote 154 ends and the article continues:
It is no stretch to relate Johnny Five to a vessel, just as it is no stretch to relate him to a corporation .
Both corporations and vessels display substantially larger differences from the traditional human person than Johnny Five would, yet they are
held liable and in the case of corporations, afforded certain constitutional rights.156 Johnny Five would be able to think, speak, move, listen,
make decisions, and take action all on his own. He would be tasked with the legal right to kill, capture, or otherwise deter an enemy combatant.
If a legal person is anything that is subject to legal rights and duties, then because Johnny Five is tasked with not only the legal right to kill, but
also the duty not to kill in certain situations, it only follows that he is a legal person. He
should, like vessels and corporations
before him, be considered a person for purposes of the UCMJ, Rome Statute, and any other i nternational
law he may meet. 157 Inanimate AI objects such as Johnny Five should most assuredly be legal persons .
B. OUR SCENARIO: WHO IS RESPONSIBLE?
Accepting that Johnny Five is a person under the UCMJ and other international laws, Johnny
Five would be liable for his actions. And
in my scenario, because
the Captain did nothing to stop Johnny Five and instead paused out of shock, the Captain
too would likely be liable , provided the Captain failed to act for a sufficient period. Moreover, if the Captain did nothing and
this happened again, the Captain would be even more likely to be held responsible, as the Captain had
knowledge that Johnny Five had certain proclivities to violate the rules of war.
This result is admittedly hard to swallow. After all, if Johnny Five is held liable, what good is it to actually punish him? Putting an expensive piece
of machinery, albeit a thinking, speaking, and moving piece of machinery, behind bars seems ineffective. The deprivation of freedom and time
may not mean the same to Johnny Five as it would to a human actor. True, he can think and understand he is being punished, and potentially
even feel sorry for his actions, but what does twenty years in prison mean to a machine that may not have a life span? Similarly, if the point of
punishment is to be a deterrent to others, does putting Johnny Five behind bars truly deter other AIs in combat from doing the same thing?
Granted, these are questions potentially already posed by the retributive criminal justice system as a whole, but perhaps ones that may be
more easily quelled in the instance of a machine as opposed to a human.
Perhaps the simple solution is to shut him down and repurpose or reconfigure him. However, does one run into significant hurdles when they
do this to something that is, for all intents and purposes, human but for the organic component? Though we may develop bonds or affection
towards our AI counterparts as if they were our brothers, the ultimate reality that they are machines will never fade. No matter how similar in
appearance they become, no matter how identical they are in mannerisms and speech, or how friendly they may be, the notion that they are
naught but metal and plastic will never truly be overcome.158 Ultimately, punishment
will simply have to be crafted to
Johnny Five and will likely entail reconfiguration or decommissioning .
Regardless, procedures
such as court martials and military commissions or tribunals can and should still be
employed. They can be employed for the reasons mentioned above, that is, an AI could qualify as a "person" and therefore
be subject to the UCMJ and
other courts . They should be employed because an AI who can think and feel should be
afforded the same rights as their human counterparts, at least in terms of due process. It would be easy for a group of soldiers to commit a war
crime, blame an AI, and have the AI simply shut down while they escape scot-free. For the very necessity of ascertaining the facts of any
situation, proceedings should be held. That these proceedings ultimately end in different punishments have little effect on their necessity.159
Setting aside the rationale behind holding a robot liable and applying punishment, let us look at why both Johnny Five and the Captain may be held responsible. First, Johnny Five's actions are reminiscent of the My Lai incident and
are per se in violation of the rules of war. Namely, Johnny Five's actions violate three of the four principles of jus in bello: military necessity, proportionality, and distinction. The fourth principle of jus in bello, humanity, is not
triggered by this because Johnny Five was not calculated to cause unnecessary suffering, nor had his actions prior to this instance resulted in unnecessary suffering. However, if this instance continued or if a ban was suggested on
AI after this instance, one could assert a violation of the humanity principle, citing this incident as reason enough.
There was no military necessity, and there is never any military necessity, in willfully killing unarmed noncombatants. There was no concrete and direct military advantage calculated by the murder of these innocent civilians. And of
course, there was no distinction made here; attacks directed solely at civilians are in direct conflict with the rule of distinction since the only distinction made was to kill civilians. Thus, we can be sure Johnny Five's actions were in
violation of the rules of war, just as the actions of Lieutenant Calley and his men were in My Lai.
But how alike is My Lai to this incident? A stark contrast to My Lai is that the Captain did not order Johnny Five to murder those people as Lt. Calley ordered his men to kill the villagers of My Lai. However, like Lt. Calley, the Captain
did nothing to stop the slaughter of a whole village.160 Looking to the Yamashita and Chavez standards, so long as the Captain knew or should have known of Johnny Five's actions, he can and will be held liable. Here, he knew by
watching through the scope what Johnny Five was doing and, like Yamashita himself, he did not take reasonable or necessary steps to prevent the murders - or rather to prevent them from continuing to occur after he became
aware. Similarly, under the Article 77 and the Medina standard, the Captain had actual knowledge and would be liable. The same result occurs under the Rome Statute, albeit by a moderately different analysis, as he both had
knowledge and, it can be argued by willful inaction, consciously disregarded the massacre that was taking place. Looking next to the High Command case, we may run into a bit of a kerfuffle. If a commander is not responsible but
for cases where there is a personal dereliction on his part, does the Captain's failure to act create responsibility for Johnny Five? It most certainly does. After all, this exact scenario is almost perfectly fit into the High Command
court's decision. The Captain's inaction - depending throughout this analysis on precisely how long Johnny Five went on killing, that is minutes as opposed to mere seconds certainly amounts to a personal dereliction, and is
tantamount to criminal negligence. He had actual knowledge of what was occurring and failed to do anything.
If, however, we were to utilize the civil doctrine of respondeat superior, not only is our Captain potentially
liable, but so is the United States as a whole, barring of course some sovereign immunity. Because the U.S. decided to
employ the AI in combat, the deaths were ultimately a result of their negligent utilization of this
technology, and so they should be made to pay reparations , much like they ultimately did in the Iran Air Flight 655
incident.161
Nonetheless, our Captain is stuck holding the proverbial smoking gun here on one level or another and will be punished for an error a hulking
bit of metal committed. This
is an unfortunate, but ultimately correct, result under the current regime of command
responsibility .
VI. A PROPOSAL FOR LIABILITY OF JOHNNY FIVE AND HIS COMMANDERS
True, these outcomes would be the same if Johnny Five was human, but that is entirely the point. An AI with exactly the same faculties and characteristics as a human, but merely inorganic sinews, still acts, decides, and exists as a
human does. Still, this seems an awfully strange outcome; it does not sit right. Perhaps instead we should look to other parties to hold liable in addition to, or in place of, the poor Captain.
The manufacturer of Johnny Five is one party to look to for responsibility. Along a sort of products liability theory, comingled with command responsibility, we can ascertain that the manufacturer was apparently negligent in its
creation, programming, wiring, or other technique used to make Johnny Five alive. But while this seems an obvious conclusion when you consider Johnny Five as a machine, it becomes a much more difficult conclusion to draw
when you realize he can see, think, act, speak, and move just like a person can. In that instance, the situation seems less like returning a faulty washing machine to its manufacturer and more similar to punishing the mother for sins
of her son. If the manufacturer is creating something that is essentially an inorganic human, how do we hold them responsible for the acts of this now artificially independent being? It may be that we consider the AI much like an
adolescent child and as an agent of the manufacturer. Perhaps in this instance it provides incentive for the creator to construct Johnny Five with the utmost care and, in a limitation on Johnny Five's free will, hardwire him with
specific directions. The trouble is when you hardwire those directions in him, there is almost always going to be a situation you cannot account for which circumvents the hardwiring, or perhaps one that allows a maligned soldier to
order Johnny Five to slaughter the villagers of My Lai. The question is, can this be corrected? And are we amenable to limiting something's free will?
What about Johnny Five as solely a weapon? If Johnny Five was an M16, and the Captain was the soldier pulling the trigger, the Captain and not the weapon is the one responsible for what damage that M16 does. But Johnny Five is
not a mere weapon. If he were a mindless automaton programmed to respond to any order, he, perhaps, could be just a weapon. Instead, he has the ability to think and decide like a human does. And like humans, every so often
the wiring is not quite right. It is true that in this scenario, the Captain could have intervened with an order - he is still at fault for not at least trying to stop the atrocity. If this situation were different, though, as where the Captain
sends out an entire independent platoon of Johnny Fives and they commit a My Lai sort of atrocity, can we say he pulled the trigger there? Surely not. And he is surely not liable under any regime of command responsibility, barring
some previous event that occurred with these particular AI robots. He would not have actual or constructive knowledge of their actions unless he gave the specific order to wipe out the entire village, which for our purposes is not
the case. It is the perfect crime. Yes, the robots themselves could be decommissioned, destroyed, confined, or otherwise, but no human actor is responsible. This too seems an unfitting result; the death of 100 civilians and not one
human person to blame is unsettling.
Foremost, it is undoubted in my mind that the robots either the lone Johnny Five or platoon Johnny Five - should all be put out of service, an option which, while potentially harsh when you consider the bonds that may be created
with an inorganic human, is the only sure way to prevent them from doing the same thing again. The death penalty for robots is hardly a capital offense, regardless of the humanity of the machine. A machine that cannot "die" just
as it cannot be "born," should not be allowed the same trepidation in enacting an end of life sentence as a human would. This seems the only logical outcome for the machine actor.
It seems the only answer here then is respondeat superior, or rather a slightly modified version. If respondeat superior translates to "let the
master answer," then our version for Johnny Five shall be "let
the creator answer and be careful ." For policy concerns, the
best bet to ensure Johnny Five is created with the utmost care and properly constructed is to place
the burden for ensuring these machines do not commit war crimes with the manufacturers. While we can
punish the derelict commanders and dispose of or repurpose the miscreant machines, the reparations should be paid not by the
country utilizing the machines, but rather the creators.
This puts significant pressure on the manufacturers and
designers to create the soundest ethical machines for not only financial but also publicity purposes.
No manufacturer would want to be known as the one who creates baby killers or women slaughterers .
Ergo, in this version of respondeat superior, the creator is responsible for the actions of his creations as opposed to the employer responsible
for the actions of his employees.
This doctrine, to be accompanied by the command responsibility and normal criminal liability imposed on the actor, provides for
complete criminal and civil remedies . Most importantly though, it solves the issue of deterrence. If
decommissioning, imprisoning, or repurposing an AI does not deter others from acting, by nipping
the problem in the bud we can deter other manufacturers from producing equally malfeasant AI.
Legal personality is necessary to patch up liability gaps---it’s a prerequisite to holding
any ‘nearest person’ accountable.
Krupiy ’18 [Tetyana; 2018; researcher at Tilburg University, postdoctorate fellow at McGill University;
Georgetown Journal of International Law, “Regulating A Game Changer: Using a Distributed Approach to
Develop an Accountability Framework for Lethal Autonomous Weapons,” vol. 50]
B. Challenges to Assigning Accountability
There are challenges to imputing a war crime caused by a LAWS under existing criminal law and international
frameworks . There exist three bases for ascribing responsibility to an individual in criminal law. These are: (1)
the existence of a causal link between the act of an individual and the outcome,114 (2) a legal duty to act or to abstain from an
act,115 and (3) a moral duty to act or to abstain from an act.116 The weaker the element of causality between the conduct of an individual and
the wrongful outcome, the harder it will be to justify imposing criminal responsibility on an individual on any of the three bases. Traditionally,
criminal law has been concerned with blameworthy conduct where an individual made a conscious decision to carry out or to contribute to a
wrongful act.117
Broadly, international criminal law requires that "for the accused to be criminally culpable his [or her] conduct must ...have contributed to, or
have had an effect on, the commission of the crime."118 In the absence of a sufficiently close link between the act and the outcome, it is
difficult to argue that the individual had the necessary mental element to carry out the act in question.119 In the context of LAWSs,
it is
challenging to trace the link between the conduct of a particular programmer, corporate employee, or
government official and the international crime of a LAWS.120 There may be a lack of sufficient proximity
between the politicians' decision to bring into effect a law regulating LAWSs and the developer's
decision relating to a particular LAWS design . The hypothetical legislation discussed here sets a range of parameters that specifies
reliability and design criteria for a LAWS. Since the reliability of the LAWS is linked to its architecture, the flawed design of a particular LAWS
creates an opportunity for a LAWS to exe- cute a war crime. Similarly, when the policies
of the defense department relating to
how LAWSs are to be embedded into the armed forces are silent on how LAWS will be designed, there
is no causal link between the promulgated policy and the LAWS's architecture. From this perspective, senior officials
bear no accountability for war crimes committed by LAWSs, because these individuals are not involved in
developing and manufacturing the weapon systems.
The existing legal frameworks, such as the doctrine of command responsibility,121 render it difficult to impute
accountability to officials, such as a senior defense official, for a war crime that a LAWS carries out. Since states
designed the doctrine of command responsibility with natural persons and military organizations in mind, this
doctrine is difficult to apply to the relationship between a human superior and a LAWS.122 Article 28 of the
Rome Statute of 1998 imposes criminal
responsibility on civilian and military superiors who have "effective command
and control" or "effective
authority and control" over subordinates who commit an international crime.1 2' The
International Criminal Court interpreted the test of "effective control" in a manner identical to the customary international law definition. 2 4
Under c ustomary i nternational l aw, such superiors should have had the "mate- rial ability " to prevent and punish
the commission of these offenses."' For instance, the superior should have had the power to "initiate meas- ures leading to
proceedings against the alleged perpetrators."126 Moreover, the superior should have failed to take " necessary and reasonable" measures within
his or her power to prevent the subordinate from committing an international crime
or to punish the subordi- nate. 2 7 When states formulated the doctrine of command responsibil- ity, they assumed that individuals are in a
hierarchical relationship to each other. 28 This can be gleaned from the nature of the indicia states
require to establish effective
control. Indicators that a superior possesses "effective control" over a subordinate include the fact that a
supe- rior has the ability to issue binding orders 29 and that the subordinate has an
expectation that he or she has to obey such
orders.'
Hin-Yan Liu posits that the doctrine of
command responsibility applies to a relationship between two human beings and therefore
does not regulate an interface between a LAWS and an individual.131 This assertion is well-founded, because how
LAWSs function does not fit with how subordinates interact with superiors. In principle, a LAWS could be
programmed to check that the order lies within a set of orders that it can implement. However, a LAWS does not make decisions in
the sense in which human beings think of decision-making.132 Since a LAWS lacks moral agency,133 a LAWS cannot reflect on
whether it is under an obligation to obey an order. Neither does the threat of punishment play a role in its decision-making process. A LAWS
is guided by its software when it performs an action.134 The operator activates processes in the software by
inputting an instruction into a LAWS.135 There is a temporal and geographical gap between a LAWS's acts and the creation of the
architecture guiding how a LAWS performs.136
The operator
lacks a material ability to prevent a LAWS from operating in an unreliable manner and
triggering a war crime due to not being involved in its design.137 Furthermore, it is impossible to employ the doctrine of
command responsibility to
attribute a war crime to an individual in a corporation. An indicator of "effective control"
is that the alleged superior was, "by virtue of his or her position, senior in some sort of formal or informal hierarchy to the
perpetrator."138 Where the corporation is a separate organization from the defense department and where the defense staff does not
have the responsibility to oversee the day-to-day operations of contractors via a chain of command,139 defense officials lack "effective control"
over corporate employees.140 In such cases, the defense
officials or a similar body can neither issue binding orders
nor expect obedience from individuals who are not subordinate to them through a chain of command.141
The fact
that customary international law requires that the conduct associated with the war crime take place
during an armed conflict142 is not a setback for imputing accountability to individuals involved in developing and
regulating LAWSs. As the International Criminal Tribunal for the Former Yugoslavia held in the Prosecutor v. Kunarac case:
The armed
conflict need not have been causal to the commission of the crime, but the existence of an
armed conflict must, at a minimum, have
played a substantial part in the perpetrator's ability to commit
it, his decision to commit it, the manner in which it was committed or the purpose for which it was committed. Hence, if it can be
established, as in the present case, that the perpetrator acted in furtherance of or under the guise of the
armed conflict, it would be sufficient to conclude that his acts were closely related to the armed conflict.143
Because the legislature144 and the individual defense ministries typically promulgate regulations relating to the armed forces,145 and because
the armed forces employ LAWSs in an armed conflict, there is a nexus between the conduct of the officials and the performance of a LAWS on
the battlefield. In principle, it
should not matter that there is a time gap between the conduct of a government official
and the use of a LAWS during an armed conflict. This scenario is akin to a situation where an individual starts
planning to commit a war crime during peacetime but carries out the crime during an armed conflict .
On the development level, the fact
that many organizations are likely to produce individual components for a LAWS
poses a challenge for assigning accountability to a particular individual.146 Even when a single corporation designs
and manufactures
a LAWS, there will be numerous individuals collaborating on designing a LAWS and on
determining product specifications.147 For instance, a team of programmers could work together on proposing alternative blueprints for
the architecture of a LAWS. The board of
directors or senior managers could decide what product specifications to give
to programmers to develop. The
collaborative nature of the decision-making related to the architecture of a LAWS poses
difficulty for attributing a flawed design of a LAWS to a particular individual.
Furthermore, because the armed
forces deploy the LAWS at the time it brings about a war crime,148 it "may not
be possible to establish the relevant intent and knowledge of a particular perpetrator" who is not part of the armed
forces.149
Customary international law requires that the superior knew or "had reason to know" that the subordinate
was about to commit or had committed a war crime.150 The complexity of the software and hardware makes it
challenging to impute even negligence to a particular individual involved in designing a LAWS.151 One possible counterargument,
though, is that a programmer who knows he or she is unable to foresee the exact manner in which a LAWS will perform its mission because of
the nature of the artificial intelligence software is reckless when he or she certifies that the system is suitable for carrying out missions involving
autonomous application of lethal force.152 Since the mental element of the doctrine of command responsibility encompasses recklessness,153
on the application of this approach the programmer would fulfill the mental element requirement of the doctrine of command responsibility.
The same argument applies to corporate directors and defense officials.
Scholars propose various strategies for addressing the accountability gap. Alex Leveringhouse posits that an individual who takes excessive risks
associated with ceding control to a LAWS or who fails to take into account the risks associated with employing a LAWS should bear
responsibility." 4 Such responsibility stems from the fact that the indi- vidual could foresee that a LAWS could link two variables in an inappropriate manner and carry out an unlawful act."' On this basis, the operator would be accountable." 6 Leveringhouse's approach mirrors the U.K.
Joint Doctrine Note 2/11, which places responsibility on the last person who issues commands associated with employing a LAWS 17 for a
military activity.
Although Levering house's approach.s merits consideration, it gives insufficient weight to the fact that operators play no role in devising
regulatory frameworks regarding the operational restrictions placed on the employment of LAWSs and regarding the steps they should take in
order to mitigate the risks associated with employing LAWSs. As a result, Leveringhouse..9 unduly restricts the range of individuals who are held
accountable. The better approach is found in the U.K. Joint Doctrine 2/11,16° which places accountability on relevant national military or
civilian authorities who authorize the employment of LAWSs. However, the U.K Joint Doctrine 2/11 does not go far enough, because it does not
extend accountability to senior politicians.161 Heather Roff explains that policy elites and heads of state are the ones who truly make decisions
to employ LAWSs. 16 2 Because they possess full knowl- edge about such systems and decide the limitations for their use, they are morally and
legally responsible for the outcomes brought about by LAWSs. 63 Roff concludes that current legal norms make it impossible to hold political
elites responsible.16 4
Thilo Marauhn
maintains that Article 25(3)(c) of the Rome Statute of 1998 could be employed to attribute responsibility to
developers and manufacturers of LAWSs.165 This provision criminalizes aiding, abetting, and assisting the commission of an
international crime for "the purpose of facilitating the commission of ... a crime."166
However, Marauhn's proposal is
unworkable. First, it requires that the aider and abettor is an accessory to a crime another individual
perpetrated.167 Developers and manufacturers cannot aid a LAWS to perpetrate a war crime, because a
LAWS is
not a natural person. Further, developers and manufacturers are unlikely to fulfill the
actus reus requirement of carrying out acts "specifically directed to assist, encourage or lend moral support" to the
perpetration of a certain specific crime with such support having a "substantial effect upon the perpetration
of the crime."168 Corporations operate to earn a profit. Corporate directors and managers know that bodies, such as the
Department of Defense, will
not buy a weapon system where it is clear that the system is designed to bring
about war crimes.169 Given that states formulated the doctrine of command responsibility with human relationships and human
perpetrators in mind,170 the
better course of action is to formulate a legal framework to address the context of
LAWSs. This position is in line with the proposals of many scholars, such as Gwendelynn Bills, who argue that states should adopt a new
treaty to regulate LAWSs.171
III. ENVISAGING LAWSs As RELATIONAL ENTITIES
There are indications that states
may endow artificial intelligence with legal personhood. Saudi Arabia granted citizenship to the
artificial intelligence system Sophia in 2017.172 A number of experts recommended to the EU that it should consider recognizing autonomous
robots as having a legal status of "electronic persons."173 While it may be desirable to grant legal status to robotic systems, care
should
be taken not to conflate the nature of the legal status of human beings with that of artificial intelligence systems. A
group of computer scientists explains that "[r]obots are simply not people."174 It is undesirable to apply existing categories, such as moral
agency, to robotic systems, because they do not capture the nature of such systems.175 Rather, we
should develop a separate
category for understanding the nature of LAWSs. Luciano Floridi and J.W. Sanders propose that the current definition of moral agency
is anthropocentric and that a new definition of a moral agent should be developed to
include artificial intelligence systems.176
Traditionally, many legal philosophers associate moral agency with: (1) an ability to intend an action, (2) a capacity to
autonomously choose the intended action, and (3) a capability to perform an action.177 In order to be able to intend an
action and to autonomously elect an action, an individual needs to possess a capacity to reflect on what beliefs to hold.178
Bans are toothless---only regulation starting at the national level paves the way for
disarmament.
Lewis ’15 [John; February; Senior Counsel at Democracy Forward, JD, Yale Law School; Yale Law
Journal, “The Case for Regulating Fully Autonomous Weapons,” vol. 124 no. 4; *FAWs are Fully
Autonomous Weapons]
Regulating FAWs would also help to resolve issues of compliance and accountability . International law sets out fairly
broad standards: weapons must distinguish between civilians and combatants, they may not cause
disproportionate collateral damage , and so on. Yet in any given case, there is ambiguity about what the relevant standard
requires, and this ambiguity hinders effective compliance and accountability . For instance, a commander, in the heat of battle and
with incomplete information, may not know whether a particular use complies with abstract concepts such as distinction or proportionality.
Defining the bounds of permissible conduct more precisely via regulation can minimize these concerns.39
For this reason, various actors have recognized the need for guidance regarding FAWs. In 2009, the Department of Defense issued a directive
on autonomous weapons, thereby taking a strong first step toward regulation. That directive primarily addresses mechanisms for approving the
development of new weapons systems, though it does also consider both the levels of autonomy present in a given system and the purposes
for which systems may be used.40 The directive also generally dictates that commanders
using automated weapons should
apply “ appropriate care ” in compliance with international and domestic law .41 An ideal regulatory scheme would develop
beyond this Directive: it would be international in nature, would focus more heavily on use, and would provide greater specificity regarding
how and when particular systems may be used. A complete regulatory scheme would also tackle other thorny issues, including research,
testing, acquisition, development, and proliferation.42 In these early stages, the project
of regulation ought to begin with the
issue of permissible usage, given that it presents difficult—yet familiar —questions under international law .
B. States Are More Likely To Comply with Regulations
In the previous section, I suggested that not all FAWs present an unacceptable risk of civilian casualties, and, as such, that these weapons are
not wholly impermissible. Yet,
even if FAWs ought to be categorically rejected, it is not clear that a ban would
actually be
effective. Robotic weaponry in the form of unmanned drones has already begun to revolutionize the ways
in which nations fight wars. At least one military analyst has suggested that fully autonomous weapons will represent the
biggest advance in military technology since gunpowder .43 Other commentators have argued that it would be
unrealistic to expect major world powers to ban FAWs altogether, especially if some states refused to
sign on and
continued to develop them.44 FAWs may have significant military utility, and in this respect, they are unlike many other
Even if a ban were successful , moreover, nations might
interpret the terms of the ban narrowly to permit further development of FAWs46 or violate the
weapons that the international community has banned.45
prohibition in ways that escape detection.47 The better approach to ensure compliance overall would be
to establish minimum limitations on FAW technology and specific rules governing use .
Two cases, landmines and
cluster munitions, help to illustrate this point. The Ottawa Treaty formally banned
landmines in 1997. However, several states, including the United States, China, Russia, and India, declined to sign
the treaty, invoking military
necessity .48 Nations that have refused to sign the Ottawa Treaty have generally
complied with the more modest regulations of the Amended Protocol.49 In a similar pattern, several states, invoking
claims of military necessity, have declined to sign the Oslo Convention of 2008, which banned cluster weapons.50
However , these nations have signaled that they would be willing to negotiate a set of regulations under the Convention on
Certain Conventional Weapons.51 These cases
suggest that nations are unlikely to accept a full ban on weapons
that they frequently use. Among those states that are inclined to use FAWs, a more modest attempt to
regulate these weapons may
result in higher initial buy-in, as well as higher overall compliance with the principles of distinction and
proportionality .
In response to this claim, opponents of regulation make
a slippery-slope argument, stressing that once nations invest
in FAW technology, it will be difficult to encourage compliance with even modest regulations .52 Alternatively,
there is some evidence from the case of landmines that an absolute prohibition can establish a norm against a weapons system that buttresses
other, more modest regulatory schemes.53 This may be true, but
if FAWs turn out to revolutionize warfare, then states may
continue to develop them regardless. Furthermore, the causality may work the other way—“soft law” norms, like
nation-specific codes of conduct, can often ripen into “hard law” treaties.54 If a ban turns out to be necessary,
then it
may be easier to build on an existing set of regulations and norms rather than to create one from
scratch . For these reasons, it is important to consider the components of an effective regulatory scheme.
Our impacts are real, not constructed---predictions are based on rigorous expertise.
Ravenal ‘9 [Earl; 2009; Professor Emeritus of the Georgetown University School of Foreign Service,
widely recognized as an expert on defense strategy; Critical Review, “What's Empire Got to Do with It?
The Derivation of America's Foreign Policy,” vol. 21]
The underlying notion of “the security bureaucracies . . . looking for new enemies” is a threadbare
concept that has somehow taken hold across the political spectrum, from the radical left (viz. Michael Klare [1981], who refers to a “threat
bank”), to the liberal center (viz. Robert H. Johnson [1997], who dismisses most alleged “threats” as “improbable dangers”), to libertarians (viz.
Ted Galen Carpenter [1992], Vice President for Foreign and Defense Policy of the Cato Institute, who wrote a book entitled A Search for
Enemies). What
is missing from most analysts’ claims of “threat inflation,” however, is a convincing
theory of why, say, the American government significantly (not merely in excusable rhetoric) might magnify and
even invent threats (and, more seriously, act on such inflated threat estimates).
In a few places, Eland (2004, 185) suggests that such behavior might stem from military or national security bureaucrats’ attempts to enhance their personal status and organizational budgets,
or even from the influence and dominance of “the military-industrial complex”; viz.: “Maintaining the empire and retaliating for the blowback from that empire keeps what President
Eisenhower called the military-industrial complex fat and happy.” Or, in the same section:
In the nation’s capital, vested interests, such as the law enforcement bureaucracies . . . routinely take advantage of “crises” to satisfy parochial desires. Similarly, many
corporations use crises to get pet projects— a.k.a. pork—funded by the government. And national security crises, because of people’s fears, are especially ripe opportunities to
grab largesse. (Ibid., 182)
Thus, “bureaucratic-politics” theory, which once made several reputations (such as those of Richard Neustadt, Morton Halperin, and Graham Allison) in defense-intellectual circles, and
spawned an entire sub-industry within the field of international relations, 5 is put into the service of dismissing putative security threats as imaginary.
So, too, can a surprisingly cognate theory, “public choice,”6 which can be considered the right-wing analog of the “bureaucratic-politics” model, and is a preferred interpretation of
governmental decisionmaking among libertarian observers. As Eland (2004, 203) summarizes:
Public-choice theory argues [that] the government itself can develop separate interests from its citizens. The government reflects the interests of powerful pressure groups and
the interests of the bureaucracies and the bureaucrats in them. Although this problem occurs in both foreign and domestic policy, it may be more severe in foreign policy
because citizens pay less attention to policies that affect them less directly.
There is, in this statement of public-choice theory, a certain ambiguity, and a certain degree of contradiction: Bureaucrats are supposedly, at the same time, subservient to societal interest
groups and autonomous from society in general.
This journal has pioneered the argument that state autonomy is a likely consequence of the public’s ignorance of most areas of state activity (e.g., Somin 1998; DeCanio 2000a, 2000b, 2006,
2007; Ravenal 2000a). But state autonomy does not necessarily mean that bureaucrats substitute their own interests for those of what could be called the “national society” that they
ostensibly serve. I have argued (Ravenal 2000a) that, precisely because of the public-ignorance and elite-expertise factors, and especially because the opportunities—at least for bureaucrats (a
few notable post-government lobbyist cases notwithstanding)—for lucrative self-dealing are stringently fewer in the defense and diplomatic areas of government than they are in some of the
contract-dispensing and more under-the-radar-screen agencies of government, the “public-choice” imputation of self-dealing, rather than working toward the national interest (which,
however may not be synonymous with the interests, perceived or expressed, of citizens!) is less likely to hold. In short, state autonomy is likely to mean, in the derivation of foreign policy, that
“state elites” are using rational judgment, in insulation from self-promoting interest groups—about what strategies, forces, and weapons are required for national defense.
Ironically, “public choice”—not even a species of economics, but rather a kind of political interpretation—is not even about “public” choice, since, like the bureaucratic-politics model, it
repudiates the very notion that bureaucrats make truly “public” choices; rather, they are held, axiomatically, to exhibit “rent-seeking” behavior, wherein they abuse their public positions in
order to amass private gains, or at least to build personal empires within their ostensibly official niches. Such subrational models actually explain very little of what they purport to observe. Of
course, there is some truth in them, regarding the “behavior” of some people, at some times, in some circumstances, under some conditions of incentive and motivation. But the factors that
they posit operate mostly as constraints on the otherwise rational optimization of objectives that, if for no other reason than the playing out of official roles, transcends merely personal or
parochial imperatives.
My treatment of “role” differs from that of the bureaucratic-politics theorists, whose model of the derivation of foreign policy depends heavily, and acknowledgedly, on a narrow and specific
identification of the roleplaying of organizationally situated individuals in a partly conflictual "pulling and hauling" process that "results in" some policy outcome. Even here, bureaucratic-politics theorists Graham Allison and Philip Zelikow (1999, 311) allow that "some players are not able to articulate [sic] the governmental politics game because their conception of their job
does not legitimate such activity.” This is a crucial admission, and one that points— empirically—to the need for a broader and generic treatment of role.
Roles (all theorists state) give rise to “expectations” of performance. My
point is that virtually every governmental role,
and especially national-security roles, and particularly the roles of the uniformed military, embody expectations of
devotion to the “national interest”;
rationality in the derivation of policy at every functional level; and objectivity in the
treatment of parameters, especially external parameters such as “threats” and the power and capabilities of
other nations.
Sub-rational models (such as “public choice”) fail to take into account even a partial dedication to the “national” interest (or even the possibility
that the national interest may be honestly misconceived in more parochial terms). In contrast, an official’s role connects the individual to the
(state-level) process, and moderates the (perhaps otherwise) self-seeking impulses of the individual. Role-derived behavior tends to
be
formalized and codified; relatively transparent and at least peer-reviewed, so as to be consistent with
expectations; surviving the particular individual and transmitted to successors and ancillaries; measured against a standard and
thus corrigible; defined in terms of the performed function and therefore derived from the state function; and uncorrupt, because
personal cheating and even egregious aggrandizement are conspicuously discouraged .
My own direct observation suggests that defense decision-makers attempt to "frame" the structure of the problems that they try to solve on the basis of the most accurate intelligence. They make it their business to know where the threats come
from. Thus, threats
are not “socially constructed” (even though, of course, some values are).
A major reason for the rationality, and the objectivity, of the process is that much security planning is
done, not in vaguely undefined circumstances that offer scope for idiosyncratic, subjective behavior, but rather in structured and
reviewed organizational frameworks. Non-rationalities (which are bad for understanding and prediction) tend to get
filtered out. People are fired for presenting skewed analysis and for making bad predictions. This is
because something important is riding on the causal analysis and the contingent prediction.
For these reasons, “public choice” does not have the “feel” of reality to many critics who have participated in the structure of defense decisionmaking. In that structure, obvious, and even not-so-obvious, “rent-seeking”
would not only be shameful; it would present a
severe risk of career termination . And, as mentioned, the defense bureaucracy is hardly a productive place for truly talented
rent-seekers to operate, compared to opportunities for personal profit in the commercial world. A bureaucrat’s very self-placement in these
reaches of government testifies either to a sincere commitment to the national interest or to a lack of sufficient imagination to exploit
opportunities for personal profit.
Avoiding extinction is necessary and valuable.
Stevens ’18 [Tim; 2018; Senior Lecturer in Global Security at Kings College London; Millennium:
Journal of International Studies, “Exeunt Omnes? Survival, Pessimism and Time in the Work of John H.
Herz,” p. 283-302]
Herz explicitly combined, therefore, a political realism with an ethical idealism, resulting in what he termed a ‘survival ethic’.65 This was
applicable to all humankind and its propagation relied on the generation of what he termed ‘world-consciousness’.66 Herz’s implicit
recognition of an open yet linear temporality allowed him to imagine possible futures aligned with the
survival ethic, whilst at the same time imagining futures in which humans become extinct . His
pessimism about the latter did not preclude working towards the former.
As Herz recognized, it
was one thing to develop an ethics of survival but quite another to translate theory into
practice. What was required was a collective, transnational and inherently interdisciplinary effort to
address nuclear and environmental issues and to problematize notions of security, sustainability and survival
in the context of nuclear geopolitics and the technological transformation of society. Herz proposed various practical ways in
which young people in particular could become involved in this project. One idea floated in the 1980s, which would alarm many in today’s more
cosmopolitan and culturally-sensitive IR, was for a Peace Corps-style ‘peace and development service’, which would ‘crusade’ to provide
‘something beneficial for people living under unspeakably sordid conditions’ in the ‘Third World’.67 He expended most of his energy, however,
from the 1980s onwards, in thinking about and formulating ‘a new subdiscipline of the social sciences’, which he called ‘Survival Research’.
68 Informed by the survival ethic outlined above, and within the overarching framework of his realist liberal internationalism, Survival Research
emerged as Herz’s solution to the shortcomings of academic research, public education and policy development in the face of global
catastrophe.69 It was also Herz’s plea to scholars to venture beyond the ivory tower and become – excusing the gendered language of the time
– ‘homme engagé, if not homme révolté’.70 His proposals for Survival Research were far from systematic but they reiterated his life-long
concerns with nuclear and environmental issues, and with the necessity to act in the face of threats to human survival. The
principal
responsibilities of survival researchers were two-fold. One, to raise awareness of survival issues in the
minds of policy-makers and the public, and to demonstrate the link between political inaction now
and its effect on subsequent human survival . Two, to suggest and shape new attitudes more ‘appropriate to the solution of
new and unfamiliar survival problems’, rather than relying on ingrained modes of thought and practice.71 The primary initial purpose,
therefore, of Survival Research would be to identify scientific, sociocultural and political problems bearing on the possibilities of survival, and to
begin to develop ways of overcoming these. This was, admittedly, non-specific and somewhat vague, but the central thrust of his proposal was
clear: ‘In
our age of global survival concerns, it should be the primary responsibility of scholars to engage
in survival issues’.72 Herz considered IR an essential disciplinary contributor to this endeavour, one that should be promiscuous across
the social and natural sciences. It should not be afraid to think the worst, if the worst is at all possible, and to
establish the various requirements – social, economic, political – of ‘a livable world’.73 How this long-term project would
translate into global policy is not specified but, consistent with his previous work, Herz identified the need for shifts in attitudes to and
awareness of global problems and solutions. Only then would it be possible for ‘a turn round that demands leadership to persuade millions to
change lifestyles and make the sacrifices needed for survival’.
74 Productive pessimism and temporality
In 1976, shortly before he began compiling the ideas that would become Survival Research, Herz wrote:
For the first time, we are compelled to take the futuristic view if we want to make sure that there will be future generations at all.
Acceleration of developments in the decisive areas (demographic, ecological, strategic) has become so strong that even the egotism of après
nous le déluge might not work because the déluge may well overtake ourselves, the living.
Of significance here is not the appeal to futurism per se, although this is important, but the suggestion this is
‘the first time’ futurism is necessary to ensuring human survival . This is Herz the realist declaring a break with
conventional realism: Herz is not bound to a cyclical vision of political or historical time in which events and processes reoccur over and again.
His identification of nuclear weapons as an ‘absolute novum’ in international politics demonstrates this belief in the non-cyclical nature of
humankind’s unfolding temporality.76 As Sylvest observes of Herz’s attitude to the nuclear revolution, ‘the horizons of meaning it produced
installed a temporal break with the past, and simultaneously carried a promise for the future’.
This ‘promise for the future’ was not, however, a simple liberal view of a better future consonant with
human progress. His autobiography is clear that his experiences of Nazism and the Holocaust destroyed all
remnants of any original belief in ‘inevitable progress’.78 His frustration at scientism, technocratic deception, and the
brutal rationality of twentieth-century killing, all but demanded a rejection of the liberal dream and the inevitability of its consummation. If the
‘new age’ ushered in by nuclear weapons, he wrote, is characterized by anything, it is by its ‘indefiniteness of the age and the uncertainties of
the future’; it was impossible under these conditions to draw firm conclusions about the future course of international politics.79 Instead, he
recognised the contingency, precarity and fragility of international politics, and the ghastly tensions inherent to the structural core of
international politics, the security dilemma.
80 Herz was uneasy with both
cyclical and linear-progressive ways of perceiving historical time. The former ‘closed’
temporalities are endemic to versions of realist IR, the latter to post-Enlightenment narratives feeding liberal-utopian visions of international
relations and those of Marxism.81 In their own ways, each
marginalizes and diminishes the contingency of the social
world in and through time, and the agency of political actors in effecting change . Simultaneously, each shapes
the futures that may be imagined and brought into being. Herz recognised this danger. Whilst drawing attention to his own gloomy disposition,
he warns that without care and attention, ‘the assumption may determine the event’.82 As a pessimist, Herz was alert to the hazard of
succumbing to negativity, cynicism or resignation. E.H. Carr recognised this also, in the difference between the ‘deterministic pessimism’ of
‘pure’ realism and those realists ‘who have made their mark on history’; the latter may be pessimists but they still believe ‘human affairs can be
directed and modified by human action and human thought’.83 Herz would share this anti-deterministic perspective with Carr. Moreover, the
possibility of agency is a product of a temporality ‘neither temporally closed nor deterministic,
neither cyclical nor linear-progressive; it is rooted in contingency’.
Reject epistemic purity---pragmatic solutions, even when imperfect, are necessary for
activism and the oppressed.
Jarvis ’0 [Darryl; 2000; Former Senior Lecturer in International Relations at the University of
Sydney; International Relations and the Challenge of Postmodernism, University of South Carolina Press,
“Continental Drift,” p. 128-129]
More is the pity that such irrational and obviously abstruse debate should so occupy us at a time of great global turmoil. That it does and
continues to do so reflect our lack of judicious criteria for evaluating theory and, more importantly, the lack of attachment theorists have to the
real world. Certainly, it is right and proper that we ponder the depths of our theoretical imaginations, engage in
epistemological and
ontological debate , and analyze the sociology of our knowledge. But to support that this is the only task of
international theory, let alone the most important one, smacks of intellectual elitism and displays a certain
contempt for those who search for guidance in their daily struggle as actors in international politics. What does
Ashley’s project, his deconstructive efforts , or valiant fight against positivism say to the truly marginalized, oppressed ,
and destitute ?
How does it help solve the plight of the poor, the displaced refugees, the casualties of war , or the émigrés of
death squads ?
Does it in any way speak to those whose actions and thoughts comprise the policy and
practice of international relations?
On all these questions one must answer
no . This is not to say, of course, that all theory should be judged by its technical rationality and
problem-solving capacity as Ashley forcefully argues. But to support that problem-solving technical theory is not necessary—or in some way bad—is
a contemptuous position that abrogates any hope of solving
some of the
nightmarish
realities that millions confront daily . As Holsti argues, we need ask of these theorists and their theories the ultimate question,
“So what?” To what purpose do they deconstruct, problematize, destabilize, undermine, ridicule, and belittle modernist and rationalist
approaches? Does this get us any further, make the world any better, or enhance the human condition? In
what sense can this
“debate toward [a] bottomless pit of epistemology and metaphysics” be judged pertinent, relevant, helpful ,
or cogent to
anyone other than those foolish enough to be scholastically excited by abstract and recondite
debate.
Fiat is good:
1. CRITICAL THINKING---fiat positions debaters as effective social critics without
treating debate as anything more than a game.
McGee ’97 [Brian and David Romanelli; 1997; Assistant Professor in Communication Studies at Texas
Tech AND Director of Debate at Loyola University of Chicago; Contemporary Argumentation and Debate,
“Policy Debate as Fiction: In Defense of Utopian Fiat,” vol. 18 p. 23-35; *ableist language modifications
denoted by brackets; DML]
Snider argued several years ago that a suitable paradigm should address “something we can ACTUALLY DO” as
opposed to something we can MAKE BELIEVE ABOUT” (“Fantasy as Reality” 14). A utopian literature metaphor is
beneficial precisely because it is within the power of debaters to perform the desired action suggested by the metaphor, if not always to
demonstrate that the desired action is politically feasible.
Instead of debaters playing to an audience of those who make public policy, debaters should
understand themselves as budding social critics in search of an optimal practical and cultural politics.
While few of us will ever hold a formal policy-making position, nearly all of us grow up with the social
and political criticism of the newspaper editorial page, the high school civics class, and, at least in homes that do
not ban the juxtaposition of food and politics, the lively dinner table conversation. We complain about high income taxes, declining state
subsidies for public education, and crumbling interstate highways. We worry about the rising cost of health care and wonder if we will have
access to high-quality medical assistance when we need it. Finally, we bemoan the decline of moral consensus, rising rates of divorce, drug use
among high school students, and disturbing numbers of pregnant teen-agers. From childhood on, we are told that good citizenship demands
that we educate ourselves on political matters and vote to protect the polis; the success of democracy allegedly demands no less. For those
who accept this challenge instead of embracing the political alienation of Generation X and becoming devotees of Beavis and Butthead, social
criticism is what good citizens do.
Debate differs from other species of social criticism because debate is a game played by students who
want to win. However, conceiving of debate as a kind of social criticism has considerable merit. Social criticism is not restricted
to a technocratic elite or group of elected officials . Moreover, social criticism is not necessarily idle or wholly
deconstructive. Instead, such criticism necessarily is a prerequisite to any effort to create policy change, whether that
criticism is articulated by an elected official or by a mother of six whose primary workplace is the home. When
one challenges the status quo, one normally implies that a better alternative course of action exists. Given
that intercollegiate debate frequently involves exchanges over a proposition of policy by student advocates who are relatively unlikely ever to
debate before Congress, envisioning intercollegiate debate as a specialized extension of ordinary citizen inquiry and advocacy in the public
sphere seems attractive. Thinking of debate as a variety of social criticism gives debate an added dimension of public relevance.
One way to understand the distinction between debate as policy-making and debate as social criticism is to examine Roger W. Cobb and
Charles D. Elder’s agenda-building theory.5 Cobb and Elder are well known for their analytic split of the formal agenda for policy change, which
includes legislation or other action proposed by policy makers with formal power (e.g., government bureaucrats, U.S. Senators), from the public
agenda for policy change, which is composed of all those who work outside formal policy-making circles to exert influence on the formal
agenda. Social movements, lobbyists, political action committees, mass media outlets, and public opinion polls all constitute the public agenda,
which, in turn, has an effect on what issues come to the forefront on the formal agenda. From the agenda-building perspective, one cannot
understand the making of public policy in the United States without comprehending the confluence of the formal and public agenda.
In intercollegiate debate, the policy-making metaphor has given primacy to formal agenda functions at the expense of the public agenda.
Debaters are encouraged to bypass thinking about the public agenda in outlining policy alternatives; appeals for policy change frequently are
made by debaters under the strange pretense that they and/or their judges are members of the formal agenda elite. Even arguments about the
role of the public in framing public policy are typically issued by debaters as if those debaters were working within the confines of the formal
agenda for their own, instrumental advantage. (For example, one thinks of various social movement “backlash” disadvantage arguments, which
advocate a temporary policy paralysis in order to stir up public outrage and mobilize social movements whose leaders will demand the formal
adoption of a presumably superior policy alternative.) The policy-making metaphor concentrates on the formal agenda to the near exclusion of
the public agenda, as the focus of a Katsulas or a Dempsey on the “real-world” limitations for making policy indicates.
Debate as social criticism does not entail exclusion of formal agenda concerns from intercollegiate
debate. The specified agent of action in typical policy resolutions makes ignoring the formal agenda of
the United States government an impossibility . However, one need not be able to influence the formal agenda
directly in order to discuss what it is that the United States government should do. Undergraduate debaters
and their judges usually are far removed —both physically and functionally—from the arena of formal-agenda
deliberation . What the disputation of student debaters most closely resembles, to the extent that it resembles any real-world analog, is
public-agenda social criticism. What students are doing is something they really CAN do as students and ordinary citizens; they are working in
their own modest way to shape the public agenda.
While "social criticism" is the best explanation for what debaters do, this essay goes a step further. The mode of criticism in which debaters operate is the production of utopian literature. Strictly speaking, debaters engage in the creation of fictions and the comparison of fictions to one another. How else does one explain the affirmative advocacy of a plan, a
counterfactual world that, by definition, does not exist? Indeed, traditional inherency burdens demand
that such plans be utopian, in the sense that current attitudes or structures make the immediate
enactments of such plans unlikely in the “real world” of the formal agenda. Intercollegiate debate is utopian
because plan and/or counterplan enactment is improbable. While one can distinguish between
incremental and radical policy change proposals, the distinction makes no difference in the utopian
practice of intercollegiate debate .
More importantly,
intercollegiate debate is utopian in another sense. Policy change is considered because such change, it is
hoped, will facilitate the pursuit of the good life.
For decades, intercollegiate debaters have used fiat or the authority
of the word “should” to propose radical changes in the social order , in addition to advocacy of the incremental
policy changes typical of the U.S. formal agenda. This wide range of policy alternatives discussed in contemporary intercollegiate debate is the
sign of a healthy public sphere, where thorough consideration of all policy alternatives is a possibility. Utopian fiction, in which the good place
that is no place is envisioned, makes possible the instantiation of a rhetorical vision prerequisite to building that good place in our tiny corner of
the universe. Even Lewis Mumford, a critic of utopian thought, concedes that we “can never reach the points of the compass; and so no doubt
we shall never live in utopia; but without the magnetic needle we should not be able to travel intelligently at all” (Mumford 24-25).
An objection to this guiding metaphor is that it encourages debaters to do precisely that to which Snider would object, which is to “make
believe” that utopia is possible. This objection misunderstands the argument.
These students already are writers of utopian
fiction from the moment they construct their first plan or counterplan text . Debaters who advocate policy
change announce their commitment to changing the organization of society in pursuit of the good life, even though they have no formal power
to call this counterfactual world into being. Any proposed change, no matter how small, is a repudiation of policy paralysis and the maintenance
of the status quo. As already practiced, debate revolves around utopian proposals, at least in the sense that debaters and judges lack the formal
authority to enact their proposals. Even those negatives who defend the current social order frequently do so by pointing to the potential
dystopic consequences of accepting such proposals for change.
Understanding debate as utopian literature would not eliminate references to the vagaries of making public policy, including debates over the
advantageousness of plans and counterplans. As noted above,
talking about public policy is not making public policy ,
and a retreat from the policy-making metaphor would have relatively little effect on the contemporary practice of intercollegiate debate.6 For
example, while space constraints prevent a thorough discussion of this point, the utopian literature metaphor would not necessitate the
removal of all constraints on fiat, although some utopian proposals will tax the imagination where formal-agenda policy change is concerned.
The utopian literature metaphor does not ineluctably divorce debate from the problems and concerns of ordinary people and everyday life.
There will continue to be debates focused on incremental policy changes as steps along the path to utopia. What the utopian literature
metaphor does is to position debaters, coaches, and judges as the unapologetic social critics that they are and have always been, without the
confining influence of a guiding metaphor that limits their ability to search for the good life. Further, this metaphor does not encourage the
debaters to carry the utopian literature metaphor to extremes by imagining that they are sitting in a corner and penning the next great
American novel. The
metaphor is useful because it orients debaters to their role as social critics, without
the suggestion that debate is anything other than an educational game played by undergraduate
students.
2. ADVOCACY---predictable debates over fiated proposals develop detailed advocacy
and research skills.
Dybvig ‘2k [Kristin and Joel Iverson; 2000; Ph.D. in Communications from Arizona State University,
M.S. from Cornell University; Associate Professor of Communication at the University of Montana;
Debate Central, “Can Cutting Cards Carve into Our Personal Lives: An Analysis of Debate Research on
Personal Advocacy,” https://debate.uvm.edu/dybvigiverson1000.html]
Mitchell (1998) provides a thorough examination of the pedagogical implication for academic debate. Although Mitchell acknowledges
that
debate provides preparation for participation in democracy, limiting debate to a laboratory where students
practice their skill for future participation is criticized. Mitchell contends:
For students and teachers of argumentation, the heightened salience of this question should signal the danger that critical thinking and oral
advocacy skills alone may not be sufficient for citizens to assert their voices in public deliberation. (p. 45)
Mitchell contends that the laboratory style setting creates barriers to other spheres, creates a "sense of detachment" and causes debaters to
see research from the role of spectators. Mitchell further calls for " argumentative
agency [which] involves the capacity to
contextualize and employ the skills and strategies of argumentative discourse in fields of social action , especially wider
spheres of public deliberation" (p. 45). Although we agree with Mitchell that debate can be an even greater instrument of empowerment
for students, we are more interested in examining the impact of the intermediary step of research. In each of Mitchell's examples of debaters
finding creative avenues for agency, there had to be a motivation to act. It is our contention that the
research conducted for
competition is a major catalyst to propel their action, change their opinions, and to provide a greater depth
of understanding of the issues involved.
The level of research involved in debate creates an in-depth understanding of issues. The level of research conducted during a year of debate is
quite extensive. Goodman (1993) references a Chronicle of Higher Education article that estimated "the level and extent of research required of
the average college debater for each topic is equivalent to the amount of research required for a Master's Thesis" (cited in Mitchell, 1998, p.
55). With this extensive quantity of research, debaters attain a high level of investigation and (presumably) understanding of a topic. As a
result of this level of understanding, debaters become knowledgeable citizens who are further empowered to make informed opinions
and energized to take action. Research helps to educate students (and coaches) about the state of the world.
Without the guidance of a debate topic, how many students would do in-depth research on female
genital mutilation in Africa, or United Nations sanctions on Iraq? The competitive nature of
policy debate provides an impetus for students to research the topics that they are going to debate. This in turn fuels
students’ awareness of issues that go beyond their
front doors . Advocacy flows from this increased awareness. Reading books and
articles about the suffering of people thousands of miles away or right in our own communities drives people to become involved in the
community at large.
Research has also focused on how debate prepares us for life in the public sphere. Issues that we discuss in debate have found their way onto
the national policy stage, and training in intercollegiate debate makes
us good public advocates . The public sphere is the arena in
which we all must participate to be active citizens. Even after we leave debate, the skills that we have gained should help us to be better
advocates and citizens. Research has looked at how debate impacts education (Matlon and Keele 1984), legal training (Parkinson, Gisler and
Pelias 1983, Nobles 1985) and behavioral traits (McGlone 1974, Colbert 1994). These works illustrate the impact that public debate has on
students as they prepare to enter the public sphere.
The debaters who take active roles such as protesting sanctions were probably not actively engaged in the issue
until their research drew them into the topic. Furthermore, the process of intense research for debate may actually change the
positions debaters hold. Since debaters typically enter into a topic with only cursory (if any) knowledge of the issue, the research process
provides exposure to issues that were previously unknown. Exposure to the literature on a topic can create, reinforce or alter an individual's
opinions. Before learning of the School of the Americas, having an opinion of the place is impossible. After hearing about the systematic
training of torturers and oppressors in a debate round and reading the research, an opinion of the "school" was developed. In this
manner, exposure to debate research as the person finding the evidence, hearing it as the opponent in a debate round (or as judge) acts as an initial spark of awareness on an issue. This process of discovery seems to have a similar impact to watching an investigative news report.
Mitchell claimed that debate could be more than it was traditionally seen as, that it could be a catalyst to empower people to act in the social
arena. We surmise that there is a step in between the debate and the action. The intermediary step where people are inspired to agency is
based on the research that they do. If students are compelled to act, research is a main factor in compelling them to do so. Even if students are not compelled to take direct action, research still changes opinions and attitudes.
Research often compels students to take action in the social arena. Debate topics guide students in a direction that allows them to explore
what is going on in the world. Last year the college policy debate topic was,
Resolved: That the
United States Federal Government should adopt a policy of constructive engagement, including
the immediate removal of all or nearly all economic sanctions, with the government(s) of one or more of the following nation-states: Cuba,
Iran, Iraq, Syria, North Korea.
This topic spurred quite a bit of activism on the college debate circuit. Many students become actively involved in protesting for the
removal of sanctions from at least one of the topic countries. The college listserve was
used to rally people in support
of various movements to remove sanctions on both Iraq and Cuba. These messages were posted after the research on the topic began.
While this topic did not lend itself to activism beyond rallying the government, other topics have allowed students to take their beliefs outside
of the laboratory and into action.
In addition to creating awareness, the research process can also reinforce or alter opinions. By discovering new information in the research
process, people can question their current assumptions and perhaps formulate a more informed opinion. One example comes from
a summer debate class for children of Migrant workers in North Dakota (Iverson, 1999). The Junior High aged students chose to debate the
adoption of Spanish as an official language in the U.S. Many students expressed their concern that they could not argue effectively against the
proposed change because it was a "truism." They were wholly in favor of Spanish as an official language. After researching the topic throughout
their six week course, many realized much more was involved in adopting an official language and that they did not "speak 'pure' Spanish or
English, but speak a unique dialect and hybrid" (Iverson, p. 3). At the end of the class many students became opposed to adopting Spanish as an
official language, but found other ways Spanish should be integrated into American culture. Without research, these students would have
maintained their opinions and not enhanced their knowledge of the issue. The students who maintained support of Spanish as an official
language were better informed and thus also more capable of articulating support for their beliefs.
The examples of debate and research impacting the opinions and actions of debaters indicate the strong potential for a direct relationship
between debate research and personal advocacy. However, the debate community has not created a new sea of activists immersing this planet
in waves of protest and political action. The level of influence debate research has on people needs further exploration. Also, the process of
research needs to be more fully explored in order to understand if and why researching for the competitive activity of debate generates more
interest than research for other purposes such as classroom projects.
Since parliamentary debate does not involve research into a single topic, it can provide an important reference point for examining the impact
of research in other forms of debate. Based upon limited conversations with competitors and coaches as well as some direct coaching and
judging experience in parliamentary debate, parliamentary forms of debate has not seen an increase in activism on the part of debaters in the
United States. Although some coaches require research in order to find examples and to stay updated on current events, the basic principle of
this research is to have a commonsense level of understanding (Venette, 1998). As the NPDA website explains, "the reader is encouraged to be
well-read in current events, as well as history, philosophy, etc. Remember: the realm of knowledge is that of a 'well-read college student'"
(NPDA Homepage, http://www.bethel.edu/Majors/Communication/npda/faq2.html). The focus of research is breadth, not depth. In fact, in-depth research into one topic for parliamentary debate would seem to be counterproductive. Every round has a different resolution and for
APDA, at least, those resolutions are generally written so they are open to a wide array of case examples. So, developing too narrow of a focus
could be competitively fatal. However, research is apparently increasing for parliamentary teams as reports of "stock cases" used by teams for
numerous rounds have recently appeared. One coach did state that a perceived "stock case" by one team pushed his debaters to research the
topic of AIDS in Africa in order to be equally knowledgeable in that case. Interestingly, the coach also stated that some of their research in
preparation for parliamentary debate was affecting the opinions and attitudes of the debaters on the team.
Not all debate research appears to generate personal advocacy and challenge peoples' assumptions. Debaters must switch sides, so they
must inevitably debate against various cases. While this may seem to be inconsistent with advocacy,
supporting and
researching both sides of an argument actually created stronger advocates . Not only did debaters learn both
sides of an argument, so that they could defend their positions against attack , they also learned the nuances of each
position. Learning the intricate nature of various policy proposals helps debaters to strengthen their
own stance on issues.
3. PASSIVITY---limited debates over fiated proposals reduce apathy and fosters a
commitment to action.
Eijkman ’12 [Henrik; May 2012; PhD from University of Canberra, a visiting Fellow at the UNSW
Canberra Campus and visiting Professor of Academic Development at ADCET Engineering College, India;
“The role of simulations in the authentic learning for national security policy development,” Australian
National University National Security College Occasional Paper, no. 4]
Policy simulations enable the seeking of Consensus
Games are popular because historically people seek and enjoy the tension of competition, positive rivalry
and the procedural justice of impartiality in safe and regulated environments . As in games, simulations
temporarily remove the participants from their daily routines, political pressures, and the restrictions
of real-life protocols.
In consensus building, participants engage in extensive debate and need to act on a shared set of meanings and beliefs to guide the policy
process in the desired direction, yet
without sacrificing critique and creativity . During the joint experimental actions of
simulation, value debates become focused, sharpened, and placed into operational contexts that allow participants to negotiate value tradeoffs. Participants work holistically, from the perspective of the entire system, in order to reach a joint definition of the problem. Most
importantly, role-playing
takes the attention away from the individual (Geurts et al. 2007). To cite one case, Geurts
et al. (2007: 549) note that the ‘impersonal (in-role) presentation of some of the difficult messages was a
very important factor in the success of the game. When people play roles, they defend a perspective,
not their own position: what they say in the game, they say because their role forces them to do so’.
Consequently, policy simulations make it possible for participants to become (safely) caught up and to learn
powerful lessons from conflict-ridden simulations rather than from conflict-ridden real-life policy processes
(Geurts et al. 2007).
Policy simulations promote Commitment to action
When participants engage collaboratively in a well-designed policy simulation and work towards the
assessment of possible impacts of major decision alternatives, they tend to become involved,
reassured and committed. However, participating in a simulation about one’s own organisation or
professional arena can also be a disquieting experience. The process of objectification that takes place in a well-designed
and
well-run simulation helps to reinforce memory, stimulate doubt, raise issues, disagreements and
further discussions, and acts to control the delegation of judgement (those who are affected can check the logic of action). Good simulations engage participants in the exploration of possible futures and foster the power of 'exercises in explicitness' to question and prevent unrealistic over-commitment to one idea or course of action and critically explore situations and conditions where a chosen strategy deviates, fails, or backfires.
Policy
simulations are, of course, not free from the problem of participant passivity. However, a well-
planned process of participatory modelling, a strictly balanced distribution of tasks, and transparent
activity of all participants acts as a safeguard against abstention from involvement. Good simulations:
serve as vehicles to develop realistic, mature and well- grounded commitment. They help individual actors engaged in a strategy to understand
the problem, see the relevance of a new course of action, understand their roles in the master plan, and feel confident that their old or recently
acquired skills will help them to conquer the obstacles or seize the opportunities ahead (Geurts et al., 2007: 551).
2AC---Indiana---Octos
AI Weapons---2AC
If
a ban turns out to be necessary, then it may be easier to build on an existing set of regulations and norms
rather than to create one from scratch .
Counter-interp---'Artificial intelligence’ is a gradient that includes automated and
semi-autonomous algorithms.
Shrestha ’21 [Sahara; Fall 2021; J.D. Candidate at George Mason University School of Law; George
Mason Law Review, “Nature, Nurture, or Neither? Liability for Automated and Autonomous Artificial
Intelligence Torts Based on Human Design and Influences,” vol. 29]
Legal scholars have various ways of dealing with the question of who should be held liable when an intelligent-AI machine causes harm, and
such assessments do not always align with the current simple machine liability found in traditional tort law cases. While the legal discourse may
provide some guidance for courts in future intelligent-AI-harm cases, the general answer to the question of who should be held liable for such
harms is unclear. Part II will explain how a
succinct division of all intelligent-AI machines into automated- and
autonomous-AI cases can help clarify the legal discussion surrounding AI liability.
II. Categorizing Intelligent-AI Machines and Assessing Liability for AI Torts
By using the words “AI,” “autonomous machine,” and “automation” as interchangeable synonyms,123 legal scholars overcomplicate the
assessment of fault in intelligent-AI-machine-tort discussions. Some legal
scholars have attempted to categorize
intelligent-AI machines into further categories depending on autonomy. For example, one scholar
separates “decision-assistance” AI that makes recommendations to users from fully-autonomous AI
that makes decisions for itself .124 Another scholar categorizes AI as either "knowledge engineering" AI that follows pre-programmed tasks or "machine learning" AI that learns and processes data to make decisions.125 The National Highway and Traffic Safety
Administration ("NHTSA") uses five levels of classification for autonomy: fully manual cars; partially automated cars; automated cars; semi-autonomous cars; and fully-autonomous cars.126 This
Comment adopts the terminology used in the autonomous vehicle
context127 to separate intelligent-AI machines into two categories: automated-AI or autonomous-AI
machines.128 Using this distinction, courts can improve their assessment of liability and name the correct party responsible for the AI harm.
A. The Difference Between Automated AI and Autonomous AI and Why the Distinction Matters
Distinguishing automated-AI torts from autonomous-AI torts allows courts to separate the “easy” cases from the “difficult” ones. The discussion
surrounding who should be liable for an intelligent-AI machine’s tort is complex because AI machines are not one homogenous group.129 AI
machines range from semi-dependent to fully-autonomous machines with varying complexities. This
Comment attempts to
simplify the legal scholarship of AI by categorizing machines as (1) fully-dependent, simple machines,
(2) semi-autonomous, automated AI, or (3) fully-autonomous AI. Figure 2 summarizes the hierarchy of
autonomy in machines.
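As a reading aid only (not part of Shrestha's Comment), the three-tier hierarchy the Comment adopts could be sketched as a simple data structure; the enum names, example comments, and helper function below are all invented for illustration.

```python
# Illustrative sketch only (not from Shrestha): one way to encode the Comment's
# three-tier hierarchy of machine autonomy. All names and examples are invented.

from enum import Enum

class MachineAutonomy(Enum):
    SIMPLE_MACHINE = "fully-dependent, simple machine"   # assumed example: a hand drill
    AUTOMATED_AI = "semi-autonomous, automated AI"        # pre-programmed "if A, then B" rules
    AUTONOMOUS_AI = "fully-autonomous AI"                 # learns its own means of acting

def liability_is_straightforward(category: MachineAutonomy) -> bool:
    """Mirror the Comment's later claim: existing tort doctrine handles the first two tiers."""
    return category in (MachineAutonomy.SIMPLE_MACHINE, MachineAutonomy.AUTOMATED_AI)
```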
Automated AI differs from autonomous AI.
Automated AI follows pre-programmed, human
instructions.130 Automated AI requires manual configuration, so the system’s code is derived from
various “if A, then B” statements.131 For example, a user or programmer could set a smart thermostat with instructions like, “If it
is hotter than seventy-five degrees Fahrenheit, turn the cooling unit on." When the system detects the temperature has risen above seventy-five degrees Fahrenheit, the automated AI will thus start the home's cooling unit.132 Examples of other automated AI include the capability of
a car to parallel park on its own,133 current online fraud detection systems for financial transactions,134 and even Apple’s Siri.135
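To make the card's "if A, then B" description concrete, a minimal rule-based sketch (not drawn from Shrestha) might look like the following; the function name, the 75 degree threshold, and the sample readings are hypothetical, chosen only to mirror the thermostat example above.

```python
# Illustrative sketch only: a rule-based "automated AI" controller in the card's
# "if A, then B" sense. The threshold and all names are hypothetical.

COOLING_THRESHOLD_F = 75.0  # "If it is hotter than seventy-five degrees Fahrenheit..."

def thermostat_step(current_temp_f: float) -> str:
    """Return the action a pre-programmed thermostat would take.

    Every behavior is fixed in advance by the programmer; the system never
    learns or chooses its own means of performing the task.
    """
    if current_temp_f > COOLING_THRESHOLD_F:
        return "turn cooling unit on"
    return "leave cooling unit off"

if __name__ == "__main__":
    for reading in (68.0, 74.9, 76.3, 90.0):
        print(reading, "->", thermostat_step(reading))
```

Every behavior here is fixed in advance by a human, which is the feature the Comment relies on when it treats automated-AI harms as resolvable under the existing tort framework.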
Autonomous AI, on the other hand, can respond to real-world problems without any human
intervention or direction.136 Autonomous AI systems heavily rely on machine learning, a subset of AI
where the system learns and makes decisions inductively by referencing large amounts of data.137 In machine learning,
solutions to problems are not coded in advance. For example, Google created an advanced AI machine learning software that can play
computer games like Go,138 chess, and Shogi.139 The Google AI received no instructions from human experts and learned to play Go from
scratch by playing itself.140 The AI could play millions of games against itself and learn through trial and error within hours.141 After each
game, the program adjusted its parameters.142 Since the program was “unconstrained by conventional wisdom,” the program actually
developed its own intuitions and strategies that its human counterparts could not even fathom.143 Humans may have assigned the
autonomous-AI tasks, but the system chooses the means of performing the tasks.144 Those means—the choices and judgments the system
makes—cannot be predicted by users or manufacturers assigning tasks or programming the original system.145
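By contrast, the self-play, trial-and-error learning the card attributes to the Google system can be loosely illustrated with a toy analogue; the sketch below is tabular Q-learning on a simple one-pile Nim game, not the actual AlphaZero method, and every game rule, parameter, and name is an assumption made for illustration.

```python
import random
from collections import defaultdict

# Toy self-play learner (assumed example, not the Google system): single-pile Nim,
# players alternate removing 1-3 stones, and whoever takes the last stone wins.
# No strategy is pre-programmed; the action values are learned by self-play.
PILE, ACTIONS = 10, (1, 2, 3)
ALPHA, EPSILON, EPISODES = 0.5, 0.2, 20000

Q = defaultdict(float)  # Q[(stones_left, stones_taken)] = value for the player to move

def legal(stones):
    return [a for a in ACTIONS if a <= stones]

def choose(stones, explore=True):
    acts = legal(stones)
    if explore and random.random() < EPSILON:
        return random.choice(acts)
    return max(acts, key=lambda a: Q[(stones, a)])

for _ in range(EPISODES):
    stones = PILE
    while stones > 0:
        action = choose(stones)
        remaining = stones - action
        if remaining == 0:
            target = 1.0  # taking the last stone wins the game
        else:
            # The opponent moves next; their best achievable value is our loss.
            target = -max(Q[(remaining, a)] for a in legal(remaining))
        Q[(stones, action)] += ALPHA * (target - Q[(stones, action)])
        stones = remaining

# After training, the greedy policy leaves the opponent a multiple of four when it can.
for s in range(1, PILE + 1):
    print(f"with {s} stones left, take {choose(s, explore=False)}")
```

The winning strategy is never written into the code; it emerges from the program playing itself, which is the feature the card says makes an autonomous system's choices unpredictable to the users and manufacturers who assigned the task.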
Because intelligent-AI machines are not infallible,146 subsequent harms will occur eventually. An automated AI harm may look like an accident
by a driverless car that fails to see an obstruction because the sensor could not recognize the object.147 An autonomous AI harm may look like
discriminatory treatment based on a biased algorithm used to evaluate mortgage applications that considers “impermissible factors,” like
race.148
By distinguishing between automated and autonomous AI, courts can more accurately assess liability
for intelligent-AI harms and further the goals of tort law. Legal scholars have confined themselves to assessing AI
liability through the broad lens of the general intelligent-AI machine versus simple-machine
distinction. This leads to varied discussions about the liability for AI harms because some scholars are actually
referring to automated AI and others are actually referring to autonomous AI.149
Distinguishing between the two categories
organizes the legal discussion surrounding AI and illustrates that automated-AI cases are easier to resolve and autonomous-AI cases are harder to resolve. Automated-AI cases are "easy" because courts can solve them using the same torts liability regime used for
simple machines. Thus, in automated-AI tort cases, courts can assign fault to the manufacturer or user. Most tort cases in the present-day
concern automated AI because autonomous AI is not yet common in society. Thus, a majority of AI tort cases can be resolved under the existing
tort system. Autonomous-AI cases are “hard” because courts will have difficulty applying the simple-machine-tort liability to Autonomous-AI
harms. The difficulty stems from the fact that simple-machine-tort liability does not account for the autonomous AI’s independent decisions.
Anderson is about an international ban, not a US domestic prohibition on use, so there's no "definitional circumvention." Their author also thinks that LAWs are good, which is why the very next paragraph argues we shouldn't try to ban them, so you should be skeptical.
[Baylor’s evidence for reference]
Anderson et al., 14
[Kenneth, Prof. Law @ American University, Member of the Hoover Institution Task Force on National
Security & Law, Senior Fellow @ Brookings Institution; Daniel Reisner, Former head of International Law
Department @ Israel Defense Force, international law, defense, homeland security and aerospace
partner @ the law firm of Herzog, Fox & Neeman; and Matthew Waxman, Liviu Librescu Professor of
Law @ Columbia, Adjunct Senior Fellow @ Council on Foreign Relations, Member of the Hoover
Institution Task Force on National Security & Law: “Adapting the Law of Armed Conflict to Autonomous
Weapon Systems,” International Law Studies volume 90 (2014),
https://apps.dtic.mil/sti/pdfs/ADA613290.pdf]//AD
One way to ban—or to interpret existing law as banning—autonomous weapons is to define some maximum level
of autonomy for any weapon system and prohibit any machine system that exceeds it. A variant approach is to
define some minimum legal level of human control. Human Rights Watch, for example, has called for a preemptive “ban [on] fully autonomous
weapons,” which “should apply to robotic weapons that can make the choice to use lethal force without human input or supervision.” It also
proposes to ban their “development, production, and use,” as well as calling for “reviews” of “technologies and components that could lead to
fully autonomous weapons.”22 The International Committee for Robot Arms Control, an organization dedicated to reducing threats from
military robotics, calls for the “prohibition of the development, deployment and use of armed autonomous unmanned systems.”23 A British
nongovernmental organization dedicated to the regulation of certain weapons argues that lethal decision making should require “meaningful
human control.”24 This
idea of requiring a minimum level of “meaningful human control” emerged as a major
theme in discussions among States and advocacy groups at the 2014 UN CCW meeting.25 Instead of
encouraging a permanent ban on autonomous weapons, Christof Heyns, the UN Special Rapporteur on Extrajudicial,
Summary or Arbitrary Executions, has proposed a moratorium, calling for “all States to declare and implement
national moratoria on the testing, production, assembly, transfer, acquisition, deployment and use of [lethal autonomous robotics]
until such time as an internationally agreed upon framework . . . has been established.”26
Before addressing some of the enforceability problems and dangers of any effort to prohibit autonomous weapons, it is important to
understand the proposed formulas do not, as it may seem initially, contain a bright line that would be useful for promoting adherence. Lawyers
experienced in the law of armed conflict will quickly see that each of these seemingly clear-cut definitions leaves many open questions as to
what systems would be banned under any particular formulation. Even something as seemingly plain as “lethal decision making” by a machine
does not address, among other things, the lawfulness of targeting a tank, ship or aircraft which is ultimately the source of the threat, but inside
of which is a human combatant.
[Baylor’s card begins]
Beyond definitions, the technology and basic architecture of an autonomous system and a nearly
autonomous, highly automated system are basically the same—if you can build a system that is nearly
autonomous, for example with human override, then you can probably reprogram it to eliminate that
human role. Moreover, whether a highly automated system—say, one with a human supervisor who can override proposed
firing decisions—is in practice operating autonomously depends on how it is being manned, how operators
are trained and how effectively oversight is exercised. It also depends on operational context and
conditions, which may limit the degree to which the human role is in any way meaningful. For these and
other reasons, a
fully autonomous system and a merely highly-automated system will be virtually
indistinguishable to an observer without knowing a lot about how that system is used in particular
operational conditions. The
difference might not matter very much in practice, given the variable
performance of human operators. In any case, these systems will be easily transitioned from one to the
other. The blurriness of these lines means that it will be very difficult to draw and enforce
prohibitions on “fully” autonomous systems or mandates for minimum levels of human decision
making. Given the great practical difficulty of distinguishing between autonomous and highly automated systems, applying a legal ban on autonomous systems would be relatively easy to circumvent and very difficult to enforce.
[Baylor’s card ends]
At the same time, and as alluded to above, imposing
a general ban on autonomous systems could carry some highly
unfavorable consequences— and possibly dangers. These could include providing a clear advantage in autonomous
weapon technology to those States which generally would not join (or in reality comply with) such a ban. They could
also include losing out on the numerous potential advantages of autonomous systems of improving
decision making on the battlefield, including through avoiding emotion-based response; improving
system accuracy, thereby probably minimizing collateral injuries; and possibly limiting human loss of life
on both sides and among civilians.
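To visualize the "same architecture" point in the Baylor portion of this card, consider a purely hypothetical sketch in which human supervision is a single configuration flag; nothing below comes from Anderson et al., and every name, type, and value is invented.

```python
# Hypothetical sketch only: an abstract decision loop in which "human supervision"
# is one boolean setting. It has no operational content; it exists to illustrate the
# card's claim that supervised and fully autonomous modes share one architecture.

from dataclasses import dataclass

@dataclass
class Track:
    track_id: int
    classified_hostile: bool

def operator_approves(track: Track) -> bool:
    # Placeholder for a human-in-the-loop prompt; always defers in this toy example.
    print(f"Operator reviewing track {track.track_id}...")
    return False

def run_engagement_cycle(tracks, require_human_approval: bool) -> list[int]:
    """Return the IDs of tracks the system would engage.

    Flipping `require_human_approval` is the only difference between the
    "highly automated" and "fully autonomous" modes in this sketch.
    """
    engaged = []
    for track in tracks:
        if not track.classified_hostile:
            continue
        if require_human_approval and not operator_approves(track):
            continue  # human override blocks the engagement
        engaged.append(track.track_id)
    return engaged

if __name__ == "__main__":
    sample = [Track(1, True), Track(2, False), Track(3, True)]
    print("supervised:", run_engagement_cycle(sample, require_human_approval=True))
    print("autonomous:", run_engagement_cycle(sample, require_human_approval=False))
```

Because switching modes is a one-line configuration change rather than a different system, an outside observer inspecting the hardware or code could not readily tell which mode is in force, which is the enforceability problem the card identifies.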
AT: Sippel
AFF
Sippel, 20
[Felix, MA Thesis (Passed) of Global Politics and Societal Change @ Malmö University (Sweden):
“Antipreneurial behavior in conflict over norms: A case study on the resistance of nation-states against a
preventive ban on lethal autonomous weapons systems,” submitted May 25, 2020,
http://ls00012.mah.se/bitstream/handle/2043/32060/Felix%20Sippel_master%20thesis.pdf?sequence=
1&isAllowed=y]//AD
*Has an AT: Meaningful Human Control part, too – highlighted in blue
*GGE=Group of Governmental Experts
Subsequently, this analysis indicates that six of the eight nation-states that were originally assumed to be antipreneurs show resistance
patterns that can be traced back to the concept of antipreneurship: Australia, Israel, Russia, South Korea, UK and USA. On
the basis of the analytical framework outlined above and a close reading that moves between text and context, the analysis of the case was
able to derive a total of eight resistance
patterns which these six antipreneurs have in common. It is also interpreted that these eight
resistance patterns can be divided into groups. Two of them contribute in particular to block formal negotiations on a ban .
Three other resistance patterns help to deny a problem between LAWS and the principles of IHL. And the last three
patterns create a beneficial image about the emergence and development of LAWS. In addition, the eight
resistance patterns can each be divided into a number of sub-patterns that highlight certain aspects of behavior. This structure of the findings –
eight patterns that can be grouped as well as sub-divided – is shown in figure 4. These findings will be presented over the course of the next
three sub-chapters. In the presentation of the findings, several quotations from the analyzed documents are used and the findings are
discussed close to the material. Additionally, a selection of text examples for each resistance pattern is shown in the annex of this thesis, see
chapter 8, in order to exemplify which text passages are considered to represent these resistance patterns.2 After presenting the patterns
which are characteristic to the behavior of the six antipreneurs, the analysis will turn to China and France. These two nation-states were
assumed to be antipreneurs but their behavior deviates from the resistance patterns of the six antipreneurs. This is also an important finding,
as it distinguishes the resistance of the antipreneurs from ambivalent behavior that cannot be clearly linked neither to the behavior of the
entrepreneurs nor to that of the antipreneurs. Consequently, these findings of deviation are discussed in the fourth and final sub-chapter of the
analysis. 5.1 Blocking formal negotiations on a ban Consultations on LAWS under the auspices of the CCW started
informally in 2014 and were further formalized with the establishment of the open-ended GGE LAWS in 2017. At first sight,
entrepreneurs thus achieved the objective of rapidly placing LAWS on the institutional agenda (Rosert, 2019: 1109). At second glance, however,
this development can also be
interpreted as a great advantage for the activities of the antipreneurs because
the GGE LAWS is based on full consensus. A forum based on consensus is best suited to block decisions,
which is resulting in “effectively institutionalising the antipreneurs´ tactical advantages” (Adachi, 2018: 41).
These tactical advantages are particularly reflected in frustrating the progress of the sessions. This
interpretation can be further underlined with a reference to the similar case of cluster munitions. Also, in this case the antipreneurs reacted
very quickly to the campaigns by entrepreneurs. They took up the issue in the CCW and preemptively resisted before public support to ban the
use of cluster munitions became even stronger and in order to regain control over the debate (Adachi, 2018: 44). The following two resistance
patterns, which were discovered in the data, illustrate how this characteristic behavior of antipreneurs is also apparent in the case of LAWS.
5.1.1 Pattern A: Lack of definitions The analysis indicates that antipreneurs
resist a ban on LAWS by impeding the
creation of an internationally accepted, common definition. Admittedly, antipreneurs contribute to the discussion
about how LAWS are to be understood. Indeed, Russia, UK and the USA already provided their own definitions of LAWS.
However, the wording of these definitions differs considerably from one another and reflects the
problem of clarification outlined in chapter 2.1. The only point on which the antipreneurs largely agree is what should not be
considered as LAWS: including unmanned aerial vehicles and other automated weapons systems that already exist. Thereby, they usually
remind the parties “to remember that the CCW has been tasked with considering emerging technologies” (UK, 2016: 2). Accordingly, the
analysis shows that antipreneurs
contribute to the sessions, but they do not make any effort to harmonize
their definitions in order to enable a uniform understanding of autonomy. The lack of a common
understanding frustrates the efforts of the entrepreneurs, who constantly challenge the forum “to
agree on a carefully crafted and well balanced working definition of what LAWS actually are, so that a
red line can be drawn as to their acceptability on the basis of unequivocal compliance with IHL” (Brazil, 2018: 3).
Antipreneurs, on the other hand, argue that polishing a working definition would only cost time and resources that could be better used
elsewhere and therefore prefer the forum to discuss “a general understanding of the characteristics of LAWS” (USA, 2017a: 1). In this way, they
avoid an intensified debate on the legal definition of these weapons systems which could lay the basis for negotiations on regulatory
instruments. Furthermore, the analysis shows that antipreneurs
also hinder the creation of a common understanding
of what is characteristic of LAWS. For instance, the concept of meaningful human control came up in
the course of the meetings to draw a line on how much human control should be maintained over
increasingly autonomous systems. But here, too, the antipreneurs are stalling progress because
clarifying the relationship between autonomy and human control could open the door to a ban on those
weapons systems over which meaningful control can no longer be exercised. That is why antipreneurs are
rejecting the concept and contribute to the perplexity of the debate with their own terminologies such as
human judgement (Israel) or intelligent partnership (UK). Finally, it is interpreted that the reluctant behavior of
antipreneurs plays an important role in that the meetings still reflect a patchwork of national definitions and concepts. Therefore, there is still a
long way to go to achieve clarity on terminology, which would be a
necessary precondition for the negotiations on a binding instrument .
K-Psycho---2AC
Their model is conservatism in disguise---rejecting pragmatic policies due to ‘colonial
assumptions’ is straight out of the imperial playbook.
Vickers ’20 [Edward; 2020; Professor of Comparative Education at Kyushu University; Comparative
Education, “Critiquing coloniality, ‘epistemic violence’ and western hegemony in comparative education
– the dangers of ahistoricism and positionality,” vol. 56 no. 2]
The empirical and theoretical flaws of this approach are intertwined with the problematic language in which its arguments are typically
couched. Historical and anthropological scholarship on East Asia and other regions amply demonstrates that colonialist or neo-colonialist
attitudes and strategies of domination are not and have never been a Western monopoly. But
decolonial theory posits the more or less
uniform victimhood of non-Western ‘others,’ deriving claims for the moral superiority of
‘authentically’ indigenous perspectives. Debating the validity of such claims is complicated by an emphasis on ‘positionality.’
Readers are exhorted to
judge an argument less by standards of evidence or logic (often portrayed as camouflaging a
Western ‘will to power’) than on the basis of the writer’s self-identification or ‘ positioning .’ The
language of ‘ epistemic
violence ,’ ‘secure spaces,’ ‘epistemological diffidence,’ ‘border thinking’ and ‘location’ suggests an image of the critical scholar
as revolutionary guerrilla, valiantly sniping at Western hegemony from his or her [their]
marginal redoubt. In so far as this reflects a desire for a more just, tolerant and sustainable society – one that values diversity as a
resource for mutual learning – it is admirable.
However, if we seek to combat oppression, in the educational
sphere or beyond, it is incumbent on us to pick our enemies , and our language, carefully. Aiming a blunderbuss
at the supposedly illegitimate or self-serving ‘universalism’ of ‘modern West ern social science,’ while ignoring how calls
for indigenisation and ‘ authenticity ’ are used to legitimate highly oppressive regimes across Asia and elsewhere, is to risk
undermining those universal social and political values (freedom of expression, civil liberties, rule of law) upon which
critical scholars themselves rely.
An embrace of ‘opacity’ or ‘ epistemological diffidence ,’ advocated by several of the CER contributors, threatens to be similarly
self-defeating . While they share an admiration for the Argentine theorist of ‘decoloniality,’ Walter Mignolo, the work of his
brilliant compatriot, the writer, poet and essayist Jorge Luis Borges, is far worthier of attention. Borges’ famous fondness for ‘labyrinths’ and
the paradoxical was combined with a sharp eye for gratuitous obfuscation and circumlocution (see, for example, his story The Aleph, in Borges
1998, 274–286). Offering his own critique of the fashion for opaque jargon in mainstream social science, the émigré Polish sociologist Stanislav
Andreski wrote acerbically that ‘one of the pleasures obtainable through recourse to confusion and absurdity is to be able to feel, and publicly
to claim, that one knows when in reality one does not’ (1974, 95). ‘Opacity’ in imaginative literature may intrigue or entertain, but in
interpreting and explaining unfamiliar societies, cultures and education systems, comparativists especially ought to write in clear, accessible
language. And while all social scientists can understand the lure of the sweeping generalisation, we
should generalise with
extreme caution , especially when categorising large swathes of humanity.
Borges’ earliest collection of stories is entitled A Universal History of Iniquity. This appeared in 1935, when there were already rumblings in
both East and West of the conflict that would soon engulf Eurasia. Implied in his title was a truth painfully obvious to many contemporaries:
that iniquity is indeed universal. The conflicts of the mid-twentieth century starkly illuminated another truth: that iniquity in the modern world,
especially (though not only) that associated with totalitarian societies, often consists in essentialising and de-humanising ‘the other’. Hannah
Arendt – a thoroughly Eurocentric thinker, but one who addressed, in ‘totalitarianism,’ a theme with global ramifications – wrote of how,
through ‘the murder of the moral person in man,’ totalitarian systems transform their citizens into ‘living corpses’ capable of any outrage (2017,
591). But ironically, in the very act of attacking essentialism as applied to ‘non-Western’ cultures, the CER contributors
propagate an essentialise d view of ‘the West’ itself. Iniquity in the form of coloniality is in their account attributed solely to
Western modernity. This view is both inaccurate and dangerous.
The irony in this approach extends to the attribution of agency. Claims
to champion the dignity of subaltern , ‘ non-Western ’ actors are in fact undermined by assertions of their uniform victimhood. This
reproduces the very Eurocentrism that
‘decolonial’ scholars quite rightly seek to challenge. In fact, privilege and victimhood have many
dimensions, by no means all traceable to the ‘phenomenon of colossal vagueness ’ that is colonialism
(Osterhammel 2005, 4). One group or individual can plausibly be portrayed as victim, or perpetrator, or both, depending on context and
perspective. Were post-war German civilian refugees from Eastern Europe, or Japanese civilians fleeing Manchuria, victims or perpetrators? Or
today, is a privately-educated, English-speaking, upper-caste South Asian scholar more accurately to be seen as privileged or under-privileged,
in terms of access to power (‘epistemic’ or otherwise) within South Asia or the global academy? ‘Location’ or identity are not reducible to neat
labels or discrete categories. As the Anglo-Ghanaian philosopher Kwame Anthony Appiah emphasises, according dignity and agency involves
recognising that our identities are not just socially given, but also actively chosen. Culture is ‘a process you join, in living a life with others,’ and
‘the values that European humanists like to espouse belong as much to an African or an Asian who takes them up with enthusiasm as to a
European’ (2018, 211). The same applies with respect to value systems we have reason to regard as iniquitous, such as those associated with
colonialism or neoliberalism.
What, then, are we to make of the traction that totalising anti-Westernism appears to be gaining within the CIE field? On one level, this may tell
us more about the state of campus politics, and politics in general, across contemporary America and the broader ‘Anglosphere’, than about
the wider world. The worldview that the CER contributors espouse, even as they strain at the shackles of Western epistemology, is redolent of
America’s peculiarly racialised identity politics. And notwithstanding claims to marginal positionality, the increasingly widespread currency of
such arguments in North American and Anglophone CIE circles reflects their status as an emergent orthodoxy that in key respects mirrors the
very ethnocentrism it rejects.
Although the ideas in the CER special issue are presented as challenging both the scholarly mainstream and a wider neoliberal or neocolonial
establishment, the seriousness of this challenge is doubtful. Exhortations to embrace ‘opacity’ or to ‘ think
otherwise ’ in the name of ‘ contesting coloniality ’ imply no coherent programme, and suggest an
overwhelmingly negative agenda . Meanwhile, far from risking ostracism, the contributors can expect warm
endorsement of their views from regulars at the major international conferences. For many in the CIE community in North
America and beyond, sweeping critiques of Western ‘hegemony’, ‘coloniality’ and so forth hold a strong
appeal ; it is those seeking to question the balance or accuracy of such theorising who risk opprobrium. As Merquior wrote of Foucault,
Derrida and their postmodernist or ‘deconstructivist’ followers, their ‘skepsis’, ‘highly placed in the core institutions of the culture it so strives
to undermine,’ has come to constitute an ‘official marginality’ (1991, 160).
The potential – and actual – consequences of this are troubling. Takayama et al call for the WCCES in particular to embrace the agenda of
‘contesting coloniality,’ but one conclusion to be drawn from recent events is that this is already happening, with damaging consequences for
civility within the Comparative Education field, and for the wider credibility of its scholarly output.15 Reducing scholarship to the projection of
the scholar’s own positionality can only lead to fragmentation and irrelevance. To quote Merquior again (paraphrasing Hilary Putnam), ‘to
demote rationality, in a relativist way, to a mere concoction of a given historical culture is as reductionist as the logical positivist’s reduction of
reason to scientific calculus’ (160). What he calls the ‘Elixir of Pure Negation ’ (159) is an intoxicating brew, but it is
unlikely to inspire coherent or constructive contributions to addressing the pressing
problems of our age: climate change, poverty, inequality and the ethical crisis that underpins them all. Indeed, it
is very likely to do the opposite . The neoliberal cadres of the OECD or World Bank, along
with nationalist autocrats from Beijing to Budapest, will be more than happy for ‘critical scholars’ to
fulminate against a vaguely-defined ‘West’ while embracing ‘epistemological diffidence’ (Takayama, Sriprakash, and
Connell 2017, S18). As one critic of ‘postmodernism’ has put it, the promotion of ‘epistemological pluralism,’ combined with rejection of any
‘settled external viewpoint,’ means that, ‘so far as real-life ongoing politics is concerned,’ postmodernists, along with de-constructivists,
decolonialists and their ilk, tend to be ‘passively
conservative in effect ’ (Butler 2002, 61). If
‘decoloniality’ promotes a balkanisation of the Comparative Education field into identity-based cliques that prize ‘opacity,’ the risk is that in
practice this
will only serve to
buttress the status quo .
Fiat and scenario planning are good---key to challenging violent assumptions.
Esberg and Sagan ’12 [Jane and Scott; 2012; Special assistant to the Director at New York
University’s Center on International Cooperation; Professor of Political Science and Director of
Stanford's Center for International Security and Cooperation; The Nonproliferation Review, “Negotiating
Nonproliferation: Scholarship, Pedagogy, and Nuclear Weapons Policy,” p. 95-96]
These government or quasi-government think tank
simulations often provide very similar lessons for high-level players as
are learned by
students in educational simulations. Government participants learn about the importance of
understanding foreign perspectives , the need to practice internal coordination, and the necessity to compromise
and coordinate with other governments in negotiations and crises. During the Cold War, political scientist Robert Mandel noted how
crisis exercises and war games forced government officials to overcome ‘‘bureaucratic myopia,’’ moving beyond their normal
organizational roles and thinking more creatively about how others might react in a crisis or conflict. The
skills of imagination and
the subsequent ability to predict foreign interests and reactions remain critical for real-world foreign
policy makers . For example, simulations of the Iranian nuclear crisis*held in 2009 and 2010 at the Brookings Institution’s Saban Center
and at Harvard University’s Belfer Center, and involving former US senior officials and regional experts*highlighted the dangers of
misunderstanding foreign governments’ preferences and misinterpreting their subsequent behavior. In both simulations, the primary criticism
of the US negotiating team lay in a failure to
predict accurately how other states , both allies and adversaries, would
behave in response to US policy initiatives.
By university age, students often have a pre-defined view of international affairs , and the literature on
simulations in education has long emphasized how such exercises
force students to challenge their assumptions about
how other governments behave and how their own government works. Since simulations became more common as
a teaching tool in the late 1950s, educational literature has expounded on their benefits, from encouraging engagement by breaking from the
typical lecture format, to improving communication skills, to promoting teamwork. More broadly,
simulations can deepen
understanding by asking students to link fact and theory , providing a context for facts while bringing theory into the
realm of practice. These exercises are particularly valuable in teaching international affairs for many of the same reasons they are useful for
policy makers: they
force participants to ‘‘grapple with the issues arising from a world in flux.’’ Simulations have
been used successfully to teach students about such disparate topics as European politics, the Kashmir crisis, and US response to the mass
killings in Darfur. Role-playing exercises certainly encourage students to learn political and technical facts * but they
learn them in
a more active style . Rather than sitting in a classroom and merely receiving knowledge, students actively
research ‘‘their’’ government’s positions and actively argue, brief, and negotiate with others. Facts can change quickly; simulations
teach students how to contextualize and act on information.
LAWs are a critical enabler for American hegemony and supremacy over perceived
rivals.
Bächle ’22 [Thomas Christian; 2/2/2022; Head of the Digital Society Research Program at the
Alexander von Humboldt Institute for Internet and Society, Ph.D. in Media Studies from the University of
Bonn; Jascha Bareis; Researcher at the Institute for Technology Assessment and Systems Analysis at the
Karlsruher Institute for Technology, M.A. in Political Theory from Goethe University; "“Autonomous
weapons” as a geopolitical signifier in a national power play: analysing AI imaginaries in Chinese and US
military policies," European Journal of Futures Research, 10(20), DOI: 10.1186/s40309-022-00202-w]
Military doctrines, autonomous weapons and AI imaginaries
Foreign geopolitics is embedded in military doctrines, serving as a signalling landmark for military
forces, the reallocation of strategic resources and technological developments . The empirical material at hand
offers layers of analysis hinting at national SIs that put AWS in broader frameworks. These frameworks
inform the populace,
allies and adversaries about national aspirations, while presenting military self-assurance as a tool to
look into a nationally desired future (see “Approaching autonomous weapons embedded in sociotechnical imaginaries” section).
Here, AWS act as an empty and hence flexible signifier, a proxy for a society that exhibits different
national idealisations of social life, statehood and geopolitical orders.
Military doctrine: The United States of America
In January 2015, the Pentagon published
its Third Offset Strategy [US.PosP2]. Here, the current capabilities and operational
readiness of the US armed forces are evaluated in
order to defend the position of the USA as a hegemon in a
multipolar world order. The claimed military “technological overmatch” [ibid.], on which the USA’s clout and pioneering
role since the Second World War is based, is perceived as eroding. The Pentagon warns in a worrisome tone: “our
perceived inability to achieve a power projection over-match (...) clearly undermine [sic], we think, our ability
to deter potential adversaries. And we simply cannot allow that to happen” [ibid.].
The more recently published “ D epartment o f D efense Artificial Intelligence Strategy” [US.PosP5] specifies this
concern with AI as a reference point. Specific claims are already made in the subtitle of the paper: “Harnessing AI to Advance Our Security and
Prosperity”. AI
should act as “smart software” [US.PosP5, p 5] within autonomous physical systems and take over tasks that
normally require human intelligence. Especially, the US research policy targets spending on autonomy in weapon
systems. It is regarded as the most promising area for advancements in attack and defence
capabilities, enabling new trajectories in operational areas and tactical options . This is specified with current
advancements in ML: “ML is a rapidly growing field within AI that has massive potential to advance unmanned systems in a variety of areas,
including C2 [command and control], navigation, perception (sensor intelligence and sensor fusion), obstacle detection and avoidance, swarm
behavior and tactics, and human interaction”.
Given that such ML processes depend on large amounts of training data, the
DoD announced its Data Strategy [US.PosP11],
harnessed inside a claim of geopolitical superiority , stating “As DoD shifts to managing its data as a critical part of its overall
mission, it gains distinct, strategic advantages over competitors and adversaries alike” (p 8). In the same vein and under the perceived threat to
be outrivalled, “the DoD Digital Modernization Strategy” [US.PosP7] lets any potential adversaries know: “Innovation is a key element of future
readiness. It is essential to preserving and expanding the US military competitive advantage in the face of near-peer competition and
asymmetric threats” [US.PosP7, p 14]. Here, autonomous
systems act as a promise of salvation of technological
progress, which is supposed to secure the geopolitical needs of the USA.
Specified with LAWS, the US Congress made clear: “Contrary to a number of news reports, U.S. policy
does not prohibit the development or employment of LAWS. Although the USA does not currently
have LAWS in its inventory, some senior military and defense leaders have stated that the USA may be
compelled to develop LAWS in the future if potential US adversaries choose to do so” [US.PosP12, p 1].Footnote13
Remarkably, the USA republished the very same Congress Paper in November 2021, just by a minor but decisive
alteration. It changed “potential U.S. adversaries” into “U.S. competitors” [US.PosP14]. While it remains unmentioned (and presumably
deliberately so) who is meant by both “senior military and defence leaders” and so named “U.S. competitors”, this minor change hints at a
subtle but carefully orchestrated strategic tightening of rhetoric, sending
out the message that the US acknowledges a
worsening in the geopolitical situation with regard to the AWS development. In reaction, the USA continue to
weaken their own standards for operator control over AWS in the most recent 2022 Congress Paper (as of May 2022),
reframing human judgement: “Human judgement [sic!] over the use of force does not require manual human “control” of the weapon system,
as is often reported, but instead requires broader human involvement in decisions about how, when, where and why the weapon will be
employed” [US.PosP16]. Certainly, the rhetorical “broadening” of the US direction lowers the threshold to employ AWS in combat, evermore
distancing the operator from the machine.
This stands in stark contrast to the US position in earlier rounds of the CCW process; here, the
USA not only claims that
advancements in military AI are of geopolitical necessity but also portrays LAWS as being desirable
from a civilian standpoint, identifying humanitarian benefits: “The potential for these technologies to save lives in armed conflict
warrants close consideration” [US.CCW3, p 1]. The USA is listing prospective benefits in reducing civilian casualties such as help in increased
commanders’ awareness of civilians and civilian objects, striking military objectives more accurately and with less risk of collateral damage, or
providing greater standoff distance from enemy formations [US.CCW3]. Bluntly,
the USA tries to portray LAWS as being not
only in accordance but being beneficial to I nternational H umanitarian L aw and its principles of proportionality, distinction or
indiscriminate effect (see also “The United States of America” section). While
such assertions are highly debatable and
have been rejected by many [1, 5, 7, 8], they do shed a very positive light on military technological progress,
equating it with humanitarian progress.
In a congress paper on AWS, published in December 2021, these humanitarian benefits are once more mentioned but only very briefly, while a
sharpening of the rhetoric is clearly noticeable. The paper also summarises the CCW positions of Russia and China, implicitly clarifying who is
meant by “U.S. competitors” (see above). China, even though only indirectly, is accused by invoking that “some analysts have argued that China
is maintaining “strategic ambiguity” about its position on LAWS” [US.PosP15, p 2]. This is the first time the USA overtly expresses in a position
paper that it understands the AWS negotiations as a political power play, instead of serving the aim of finding an unanimously agreed upon
regulatory agreement.
In sum, the USA claims a prerogative as the dominant and legitimate geopolitical player in a
multipolar world order, who is under external threat. The ability to defend military supremacy against
lurking rivals is portrayed as being in a dependent relationship with the level of technological
development of the armed forces, specified with LAWS. The USA claim to hegemonial leadership may
only be secured through maintaining technological superiority.
2AC---Link Turn
Governance is inevitable but critical---accountability vacuums are filled by the
military, guaranteeing extinction.
Gracia ’19 [Eugenio; September; Senior Adviser on peace and security at the Office of the President of
the United Nations General Assembly; SSRN Papers; “The militarization of artificial intelligence: a wake-up call for the Global South,” https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3452323]
The long road to AI governance in peace and security
Even if we concede that the militarization of AI is here to stay, it is also true, but less obvious, that AI
governance in general cannot
be rejected altogether: technical standards, performance metrics, norms, policies, institutions, and other
governance tools will probably be adopted sooner or later.22 One should expect more calls for domestic
legislation on
civilian and commercial applications in many countries , in view of the all-encompassing legal, ethical, and
societal implications of these technologies. In international affairs, voluntary, soft law, or binding regulations can vary from confidence-building
measures, gentlemen’s agreements, and codes of conduct, including no first-use policies, to multilateral political commitments, regimes,
normative mechanisms, and formal international treaties.
Why is AI governance needed? To many, a do-nothing policy is hardly an option. In a normative vacuum, the practice of states
may push
for a tacit acceptance of what is considered ‘ appropriate ’ from an exclusively military point of view,
regardless of considerations based upon law and ethics .23 Take, for that matter, a scenario in which nothing
is done and AI gets weaponized anyway. Perceived hostility increases distrust among great powers and even
more investments are channeled to defense budgets. Blaming each other will not help , since unfettered and
armed AI is available in whatever form and shape to friend and foe alike. The logic of
confrontation turns into a self-fulfilling
prophecy . If left unchecked, the holy grail of this arms race may end up becoming a relentless quest for artificial general intelligence (AGI),
a challenging prospect for the future of
humanity , which is already raising fears in some quarters of an existential
risk looming large.24
Deregulated LAWs outweigh every link---they justify blowback violence on domestic
populations and enshrine dystopian colonial control---the only solution is a legal
refusal of LAWs.
Kendall ’19 [Sara; 2019; PhD, professor of Law at Kent Law School, directs the Centre for Critical
International Law; Kent Academic Repository, “Law’s Ends: On Algorithmic Warfare and Humanitarian
Violence,” ISBN 978-1-78661-365-3]
The challenge of thinking through LAWS is the challenge of speculative reasoning more broadly, but as a field that responds to the new by way
of analogy, law would approach LAWS by considering relations of likeness in bringing them under its jurisdiction.39 The development of
LAWS is meant to increase targeting precision and to mitigate the risk to a state’s own population, including its
military personnel, which makes it analogous in certain respects to the use of armed drones. Recent scholarship notes how “[u]nmanned or
human-replacing weapons systems first took the form of armed drones and other remote-controlled devices,” enabling human absence from
the battlefield.40 As with armed drones, however, the development of
AI -based weapons systems would deepen the
asymmetry of modern war fare, as some states and their attendant populations are able to mitigate risk more
readily than others through further technological development. Within states, it may be that the risk burden is shifted from
the military to civilians , as Grégoire Chamayou points out in relation to armed drones: “The paradox is that
hyperprotection of military personnel tends to compromise the traditional social division of danger in which
soldiers are at risk and civilians are protected. By maximizing the protection of military lives and making the
inviolability of its ‘safe zone’ the mark of its power, a state that uses drones tends to
divert reprisals toward its
own
population .”41
At stake in practice is not only whether LAWS can be subsumed under law , a philosophical matter entailing what law
requires as a cognitive response, but also the extent
to which relevant law could be applicable and made to apply as a matter of
(geo)politics. Noll’s argument stands with regard to law and the inhuman, yet against the backdrop of this uneven history and corresponding
geographies of power, the human subject who incarnates the law appears as a privileged bearer of enforceable protections. If the law at stake
is the law of armed conflict, as much scholarly debate around LAWS presumes, then the most
important addressees of this law
are strong states and their military personnel.42 The resulting hierarchical framing would seem to place military
over civilians , as Chamayou notes; between civilians, the populations of sovereign states are prioritized over those
whose sovereignty is “ contingent ” or otherwise compromised .
It is inherent to this body of law that it inscribes these distinctions, as the law governing armed conflict notoriously enables a degree of violence
even as it attempts to constrain it. As with humanitarianism more broadly, where beneficiaries are classified and managed according to
particular governing logics,43 international humanitarian law categorizes its subjects in ways that produce attendant hierarchies of life. The
central principle of proportionality explicitly justifies the loss of civilian life as balanced against military necessity. This has led some
commentators to observe how the law governing armed conflict in fact produces an “economy of violence” in which (state) violence is
managed according to “an economy of calculations and justified as the least possible means.”44 The development of LAWS not only reflects an
effort to improve upon fallible human systems, as its proponents claim, but also to minimize risk to certain actors, particularly citizens of
powerful states or members of their militaries. As Sven Lindqvist darkly observes, “[t]he laws of war protect enemies of the same race, class,
and culture. The laws of war leave the foreign and the alien without protection.”45
While scholars of international humanitarian law might contest Lindqvist’s claim that discrimination is inscribed into the laws themselves, their
selective and discriminatory enforcement is widely noted. As with the “unable or unwilling” theory advanced by the US, among other highly
militarized states such as Canada, Australia, and Turkey, exceptions to the international legal framework have been asserted through the same
legal terminology.46 Within this logic, the map of the world appears divided between states that are able to exert control over their territories
and others that struggle, often for reasons tied to the residues of colonial governance structures and continuing economic exploitation. The
experiment of
LAWS will likely play out to the benefit of the former upon the territory of the latter, much as some populations
are made to suffer the collective punishment of armed drone activity in their territory.47
Preemptive Temporality
The technological
developments informing the emergence of new weapons systems for armed conflict are not only
employed to minimize risk to particular populations, as I described above. They also illustrate a particular relationship to time, one that
philosopher and communications theorist Brian Massumi characterizes as an “ operative logic ” or “tendency” of
preemption .48 Preemption emerges prominently in the US with the administration of George W. Bush and the so-called “ war
on terror ,” but Massumi contends that it is not restricted to this historical moment or location. As with Noll’s attention to algorithmic forms
and code as the background thinking that shapes the turn to LAWS, Massumi is attuned to preemption as a temporal feature of our
contemporary landscape. In non-military applications such as high frequency trading,
algorithms are employed to hasten
response time and “to get to the front of the electronic queue” in submitting, cancelling, and modifying purchasing orders.49 In military
settings they
also enable faster data analysis , but an analysis oriented toward threat assessment, which brings them into a
relationship with this preemptive tendency.
Characterized by a concern for threats and security, preemption
produces a surplus value of threat tied to an ominous
sense of
indeterminacy: “Being in the thick of war has been watered down and drawn out into an endless
waiting, both sides poised for action.”50 The experience of temporality is of increasingly condensed intervals, accompanied by a
will to preemptively modulate “action potential” and to draw out the risk-mitigating capacity of laying claim to smaller units of time. The
dream at stake is to “ own ” time in the sense of exerting increasing mastery over ever-smaller units of it.
Massumi writes that in “ network-centric ” contemporary warfare,
the “real time” of war is now the formative infra-instant of suspended perception. What are normally taken to be cognitive functions
must telescope into that non-conscious interval. What
would otherwise be cognition must zoom into the
“ blink ” between consciously registered
perceptions —and in the same moment zoom instantly out into a new form of
political awareness, a new collective consciousness.51
Such thinking
illustrates the presumptive need to augment human capacity on the battlefield, whether through
algorithmic enhancement of human cognition by machine intelligence or through neurotechnology’s combination of algorithms with
human biological/neural capacities. This raises the question of the role for human judgment in relation to the non-conscious interval, the
“blink” between the human capacity to perceive and act. If delegated to the machine, what arises is not comprehension and judgment but
rather what Arendt called “brain power,” as distinct from the workings of a mind or intellect. “Electronic brains share with all other machines
the capacity to do man’s work better and faster than man,” she noted, yet carrying out their assigned tasks does not constitute the exercise of
judgment.52 Writing over half a century ago, Arendt warned of the risk of losing sight of humanist considerations in the frenzied technological
drive to secure an Archimedean point beyond the human, yet the human seems inescapable, “less likely ever to meet anything but himself and
man-made things the more ardently he wishes to eliminate all anthropocentric considerations from his encounter with the non-human world
around him.”53 It would seem that what is distinct here, in Noll’s diagnosis of the thinking that undergirds the prospect of algorithmic warfare,
is the prospect of breaking free from the human through the singularity.
While I noted at the outset that LAWS at this stage are speculative and futural, incremental steps have been taken in their development. Both AI and neurotechnological dimensions are apparent in a recent program of the US
Defense Department, initially known as the Algorithmic Warfare Cross-Functional Team and informally as “Project Maven,” which was launched in April of 2017 with the objective of accelerating the department’s integration of big
data, AI, and machine learning to produce “actionable intelligence.” Maven is the inaugural project of this “algorithmic warfare” initiative in the US military.54 While this program is focused on intelligence rather than weapons
systems, characterized by a human-in-the-loop rather than a human-out-of-the-loop form of LAWS, the underlying algorithmic thinking is the same. The use of drones for combat also evolved out of intelligence gathering, and
critics of the integration of AI into military operations would have cause for concern about Project Maven paving the way—perhaps unintentionally—for future LAWS.
The Algorithmic Warfare Cross-Functional Team emerged in the Office of the Under Secretary of Defense for Intelligence, and was later brought under a new “Joint Artificial Intelligence Center” in the Defense Department.55 The
project forms part of the “third offset” or “3OS” strategy to protect US military advantage against rivals such as China and Russia, a strategy developed in 2014 to draw upon new technological capabilities in developing
“collaborative human-machine battle networks that synchronize simultaneous operations in space, air, sea, undersea, ground, and cyber domains.”56 What Massumi points out as a desire to maximize “action potential” in ever-smaller units of time is evident here: the concern with bringing operations into a simultaneous harmony among different parties to the assemblage helps the military to “own time” more forcefully, and with it, to gain advantage
over its military competitors.
The memorandum establishing Project Maven in 2017 emphasizes the need to “move much faster” in employing technological developments, with its aim “to turn the enormous volume of data available to DoD into actionable
intelligence and insights at speed.”57 Deputy Secretary of Defense Robert Work describes relevant activities as 90-day “sprints”: after the project team provides computer vision algorithms “for object detection, classification, and
alerts for [full-motion video processing, exploitation and dissemination],” he notes, “[f]urther sprints will incorporate more advanced computer vision technology.”58 Among other things, Project Maven trains AI to recognize
potential targets in drone footage by focusing on “computer vision,” or the aspect of machine learning that autonomously extracts objects of interest from moving or still imagery using neural methods that are inspired by biology.
Public statements of military personnel involved in the project distance it from autonomous weapons or autonomous surveillance systems, claiming instead that they are attempting to “free up time” so that humans can focus on
other tasks: “we don’t want them to have to stare and count anymore.”59
The
D epartment o f D efense tells the narrative of Project Maven’s emergence as a story of augmentation : of
supplementing the labor of an overwhelmed, temporally lagging workforce with specialized entities that will help
to
speed up data processing. Speaking in July of 2017, the chief of the Algorithmic Warfare Cross- Functional Team claimed that AI
would be used to “complement the human operator”;60 elsewhere machines are presented as “teammates” paired with humans to “capitalize
on the unique capabilities that each brings to bear.” These teammates would work “symbiotically” toward a shared end: namely, “to increase
the ability of weapons systems to detect objects.”61 Figure 5.1, an icon appearing in a presentation by a Project Maven participant,
oscillates between the benign and the absurd .62
<<<FIGURE AND DESCRIPTION OMITTED>>>
Intent aside, this depiction
of harmless machines employed “to help” appearing in a Defense Department presentation on
Project Maven raises the question
of who stands to benefit and who may suffer from this cybernetic
experiment. That it unfolds incrementally rather than through the direct development of LAWS—on the grounds of assisting overworked
employees and with the objective of creating greater precision, a humanitarian end in line with the laws of armed conflict—does not diminish
the pressing need to reflect upon the development of these practices through a machine-independent evaluation.
As of December 2017, Project Maven’s
machine augmentation of the slow human intelligence analyst was reportedly being used
to support intelligence operations in Africa and the Middle East .63 Such spaces of contemporary armed
conflict are laden with histories of colonial intervention and technological experimentation in warfare; here the
smiling robots appear far more sinister. Bringing location and temporality together, the project seeks to
process information more quickly than human consciousness in order to avoid delayed responses to changing circumstances on hostile and
high-risk territory abroad, where human
inhabitants appear as the source of risk to remote populations on whose
behalf the intelligence is being gathered. There is a lingering question of what constituency this project serves: in a statement
shortly after its founding, the chief of Project Maven stated that the team was exploring “[how] best to engage industry [to]
advantage the taxpayer and the warfighter, who wants the best algorithms that exist to augment and complement the
work he does.”64
Within this vision, the machine
augments the human and private enterprise figures as a resource for the military . In
2015 the Defense Department established a Defense Innovation Unit in Silicon Valley, California, “to
partner with private industry
to rapidly source private industry AI solutions to military problem sets.”65 The initiative draws private-sector
expertise into military development, as has long been the practice in the US, but with apparently greater urgency. Robert Work’s
memorandum establishing Project Maven makes no mention of private-sector assistance apart from an oblique reference to the need to “field
technology” for augmenting existing operations. Yet according to military academics, forming partnerships with private-sector actors is
regarded as “key to obtaining the technology required to implement the 3OS. Many of the advancements in AI and other emerging
technologies are a result of significant investment by private industry for commercial applications.”66 By March 2018, the skilled “partner”
referenced in various press releases was revealed to be Google.67
The disclosure prompted widespread protests among Google employees. Some employees resigned, and thousands of others signed a petition
demanding termination of the Project Maven contract.68 In response the corporation not only decided against renewing their contract; it also
disseminated “principles for AI” that state the company would not develop intelligence for weapons or surveillance. In contrast to the military’s
urgent desire to hasten its conquest of ever-smaller units of processing time to preempt threats, the resistance is located in a different form of
preemption: namely, preventing their complicity in producing an untenable future. The arc of this temporal horizon appears longer and more
generalized, extending beyond the specifics of comparative military advantage gained by “owning” more of the “blink” between perception and
response, and looking instead to the risks that algorithmic autonomy might bring.69 Extending Massumi’s argument illustrates how the
preemptive tendency produces the fear that leads to the prospect of developing LAWS to combat future threats. But another preemptive
response is possible: namely, an ethico-political preemption of the threat LAWS pose to the primacy of human judgment.
What this response reveals is both a kind of military vulnerability and the power of (human, political) judgment. The military-private
hybrid appears as a dystopian
assemblage of for-profit war fare technology development, but it also
seems to open
a space for contestation through the power of laboring humans. Here resistance is not read as
insubordination to be punished, as in the military, but rather as talent to be lost in a privileged sector of the economy. Other contractors have
and will engage with what Google abandoned, and the extent of the corporation’s withdrawal from military projects remains unclear.70 But the
petition’s language
of accountability beyond law—of morality and ethics, responsibility, and trust—sets terms for political
resistance . To the internal corporate slogan adopted by the petition signatories—“don’t be evil”—the military would respond that its
development of AI technologies is in fact the lesser evil.71 But as we know from critical accounts of international humanitarian law, the logic of
the lesser evil is embedded within this law, as it is within the principle of proportionality.72 In this sense, the military only builds upon a
structure already present within the law itself, with its attendant forms of humanitarian sacrifice.
When it comes to the question of whether to use international law to ban LAWS, the US adopts a delayed approach to legal temporality: it
wishes to proceed with “deliberation and patience,” and to highlight how it is important “not to make hasty judgments about the value or likely
effects of emerging or future technologies . . . our views of new technologies may change over time as we find new uses and ways to benefit
from advances in technology.”73 It is too soon to judge, and yet it is not soon enough to develop the technologies that may later become
unmoored from the power to judge and constrain them. Initiatives such as Project Maven are presented as working in the pursuit of
humanitarian ends, yet this is what Talal Asad might call a “humanitarianism that uses violence to subdue violence.”74 The law that we might
seek to subsume LAWS under is complicit as well.
The logic of preemption could be transformed into an ethical call, as a form of political resistance in the present. Legal
solutions in the
form of regulatory or
ban treaties may come too late to integrate well into the already unfolding narrative. Turning the
preemptive logic of the military strike on its head, this ethical preemption would seek to undo the hastening of present efforts to adapt
algorithmic thinking for military ends. The
political urgency is even more pressing as Project Maven continues to unfold, with
further contracts awarded to a start-up firm whose founder, a former virtual-reality headset developer, described the future battlefield as
populated by “superhero” soldiers who “have the power of perfect omniscience over their area of operations, where they know where every
enemy is, every friend is, every asset is.”75 As Noll notes, the
plurality of actors involved in this assemblage of military
production makes it challenging to parse responsibility —both in a dystopian future where automated weapons make
targeting decisions, but also in the present development of AI for military use. The relationships within the military-corporate assemblage will
continue to push toward the singularity in incremental steps, whether intentionally or not. The exercise of human
judgment through a
politics of
refusal may push back more forcefully than a law steeped in humanitarian violence.
Independently, unaccountable military AI turns the Global South into a new frontier
for colonial violence---legal checks are critical.
Gracia ’19 [Eugenio; September; Senior Adviser on peace and security at the Office of the President of
the United Nations General Assembly; SSRN Papers; “The militarization of artificial intelligence: a wake-up call for the Global South,” https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3452323]
Payne argued that changes in the psychological element underpinning deterrence are among the most striking features of the AI revolution in
strategic affairs: ‘Removing
emotion from nuclear strategy was not ultimately possible ; a rtificial i ntelligence makes it
possible, and therein lies its true radicalism and greatest risk ’.25 In other words, loosely unleashed, AI has the power
to raise uncertainty to the highest degrees, thickening Clausewitz’s fog of war rather than dissipating it. The situation
would become untenable if a non-biological AGI were ever deployed for military purposes, virtually unaffected by typically human
cognitive heuristics, perceptions, and biases.
Compounded with a free-for-all security environment, in
such a brave new world the Global South would be exposed to all
sorts of vulnerabilities , lagging behind (again) in economic, scientific, and technological development, as well as becoming an
open ground for data-predation and cyber-colonization , further exacerbating inequalities among
nations, disempowerment, and marginalization , as Pauwels suggested. Small, tech-taking developing countries may
well turn into data-reservoirs and testbeds for dual-use tech nologies, precisely because they lack technical
expertise, scale, and scientific knowledge to take effective countermeasures against tech-leading powers.26
Fortunately, all these troubling scenarios
are not forcibly inescapable and should ideally be minimized through
responsible governance strategies. What does it mean? A broad definition of AI policymaking strategy has been proposed as ‘a
research field that analyzes the policymaking process and draws implications for policy design, advocacy, organizational strategy, and AI
governance as a whole’.27 Specifically on
security issues, Maas singled out four distinct rationales for preventing ,
channeling, or containing the proliferation, production, development, or
deployment of military technologies: ethics, legality ,
stability, or
safety . From his analysis of lessons learned from arms control of nuclear weapons, he concluded inter alia that ‘far from being
inevitable, the proliferation of powerful technologies such as military
AI might be slowed or halted through the
institutionalization of norms ’.28
Norms and other approaches to mitigate risks are one of the possible response s to the negative side of AI
technology. A recent study identified several of these unsettling aspects of AI: increased risk of war or a first
strike ; disruption in deterrence and strategic parity; flawed data and computer vision; data manipulation; ineffective crisis management;
unexpected results; failure in human-machine coordination; backlash in public perception; inaccuracy in decision-making; and public sector-private sector tensions.29 The current deficit in explainability on how neural networks reach a given outcome is likewise raising uneasiness:
AI’s black box opacity could increase the sense of insecurity rather than provide strategic reassurance.
2AC---Impacts True
Objective truths are possible to discern despite drives and the unconscious—denying
that dooms the effectiveness of psychoanalysis
Mills, 17—Professor of Psychology & Psychoanalysis at the Adler Graduate Professional School in
Toronto (Jon, “Challenging Relational Psychoanalysis: A Critique of Postmodernism and Analyst Self-Disclosure,” Psychoanalytic Perspectives Volume 14, 2017 - Issue 3, 313-335, dml)
The implications of such
positions immediately annul metaphysical assertions to truth, objectivity, free
will, and agency , among other universals. For instance, if everything boils down to language and culture, then
by definition we cannot make legitimate assertions about truth claims or objective knowledge
because these claims are merely constructions based on our linguistic practices to begin with rather
than universals that exist independent of language and socialization. So, one cannot conclude that
truth or objectivity exist. These become mythologies, fictions, narratives, and illusions regardless of
whether we find social consensus . Therefore, natural science—such as the laws of physics, mathematics, and formal logic—are
merely social inventions based on semantic construction that by definition annul any claims to objective observations or mind independent
reality. In other words, metaphysics is dead and buried—nothing exists independent of language.2
What perhaps appears to be the
most widely shared claim in the relational tradition is the assault on the analyst’s
epistemological authority to objective knowledge . Stolorow (1998) told us that “objective reality is
unknowable by the psychoanalytic method, which investigates only subjective reality . … There are no
neutral or objective analysts, no immaculate perceptions,
no God’s-eye views of anything ” (p. 425). What exactly does this mean?
If my patient is suicidal and he communicates this to me, providing he is not malingering, lying, or manipulating me for some reason, does this
not constitute some form of objective judgment independent of his subjective verbalizations? Do
we not have some capacities to
form objective appraisals (here the term objective being used to denote making reasonably correct judgments
about objects or events outside of our unique subjective experience )? Was not Stolorow making an
absolute claim despite arguing against absolutism when he said that “reality is unknowable?” Why
not say that knowledge is proportional or incremental rather than totalistic, thus subject to
modification , alteration, and interpretation rather than categorically negate the category of an objective
epistemology? Are there no objective facts? Would anyone care to defy the laws of
gravity by attempting to fly off the roof of a building by flapping their
arms?
Because postmodern perspectives are firmly established in antithesis to the entire history of Greek and European ontology, perspectives widely
adopted by many contemporary analysts today, relational psychoanalysis has no tenable metaphysics, or in the words of Aner Govrin (2006), no
real “metatheory.” This begs the question of an intelligible discourse on method for the simple fact that postmodern sensibilities ultimately
collapse into relativism. Because there
are no independent standards, methods, or principles subject to
uniform procedures for evaluating conceptual schemas, postmodern perspectives naturally lead to
relativism . From the epistemic (perspectival) standpoint of a floridly psychotic schizophrenic, flying donkeys really do exist, but this does
not make it so. Relativism
is incoherent and is an internally inconsistent position at best. I once had a student
who was an ardent champion of relativism until I asked him to stand up and turn around. When he did I lifted his
wallet from his back pocket and said, “If everything is relative, then I think I am entitled to your wallet
because the university does not pay me enough.”
Needless to say, he wanted it back .
Relativism collapses into contradiction, inexactitude, nihilism, and ultimately absurdity because no
one person’s opinion is any more valid than another’s, especially including value judgments and ethical
behavior, despite qualifications that some opinions are superior to others. A further danger of embracing a “relativistic
science” is that psychoanalysis really has nothing to offer over other disciplines that may negate the
value of psychoanalysis to begin with (e.g., empirical academic psychology), let alone patients themselves whose
own opinions may or may not carry any more weight than the analysts with whom they seek out for expert
professional help. Imagine
saying to your patient, “I know nothing, now where’s my money?” When one
takes relativism to the extreme, constructivism becomes creationism, which is simply a grandiose
fantasy of omnipotence—“things are whatever you want them to be.”3
2AC---Wrong
Psychoanalysis isn’t universal and scaling it up to political conclusions is colonialist
Rogers, 17—Senior Lecturer in Criminology in the School of Political Sciences at the University of
Melbourne and Adjunct Professor at Griffith Law School, Queensland (Juliet, “Is Psychoanalysis
Universal? Politics, Desire, and Law in Colonial Contexts,” Political Psychology, Vol. 38, No. 4, 2017, dml)
The presumption of a universal form of desire is an important starting point for the analyst of any patient
who arrives on the couch in the psychoanalytic clinic. The psychoanalyst can only offer certain parameters, with all their limitations. The
patient, if the analyst allows for an interrogation of their own forms of resistance, however, can speak back to any frame of desire that the
analyst presumes or proposes. And the analyst—if Jacques Lacan’s thoughts on resistance are taken seriously (Lacan, 2007, p. 497; Rogers,
2016, pp. 183–187)—must listen, attend, learn, and adapt. But when
the desires of subjects are extended into the
political realm, when the wants and needs of every subject are presumed to articulate with a
psychoanalytic notion of universal desire, then something is lost . That something might be called the desire of the
other, or it might not be called desire at all.
The desire of the other is not easily seen in the wake of European Enlightenment that has engulfed the imagination of psychoanalytic and
political theorists and practitioners alike. It is not easily seen, and it is not easily conversed with when epistemological work presumes a
trajectory of desire and then applies it. In this application, there
is little space for a radically other performance of
politics as action or imagination to appear. The subject who is subsumed into this imagination—the subject
Gayatri Spivak (1996)1 describes as “the Other of Europe”—has little opportunity to do more than “utter” under the
weight of its imagined subjectivity. As Spivak (1999) says:
[I]n the constitution of that Other of Europe, great care was taken to obliterate the textual ingredients
with which such a subject could cathect, could occupy (invest?) its itinerary—not only by ideological and scientific
production, but also by the institution of the law. (p. 266)
Psychoanalysis is as guilty of exercising such a form of “ great care” as many of the occupations of French intellectuals
that Spivak has criticized for doing so. Psychoanalysis, with its attention to the many forms of the unconscious, can appear otherwise than
guilty of this. It can appear more open, generous, and curious about the many forms that desire can take. In its later forms of attention to a
politically constituted “symbolic order” under the guidance of Lacan (2007), it can also appear more attentive to the particularities of desires
informed by a politics of the time. I argue here, however, that attention is
already constituted by an imagination of a
subject who wants, who needs, who desires objects, things, rights, in a mode which cannot not start
from a point of origin, and a particular political form of origin which then precludes the recognition —
in both the clinic and in political analysis2—of other forms of desire, “with which such a subject could cathect, could occupy (invest?)
its itinerary.” When practices such as political psychoanalysis presume a particular form of desire, what is
at stake in this constitution of desire is the political subject or the Other of Europe who cannot
“speak,” in Spivak’s terms. What is lost might be called radical desire; it might be an itinerary which is
cathected or invested otherwise, and, as such, it might not be recognizable in psychoanalysis or in
contemporary political psychology at all.
The nonrecognition of the Other of Europe, in her many forms, is a consistent political problem —
documented often and insistently by critical race and postcolonial analysts such as Spivak, but also Sanjay Seth, Leila Gandhi, Chandra Talpade
Mohanty, Aileen Moreton-Robinson, Elizabeth Povinelli, Ashis Nandy, Christine Black, and Homi Bhabha. Such
nonrecognition, however,
when repostulated in political psychoanalysis has another effect. The trajectory of the symptoms of
political practice—including desires for law, justice, particular election outcomes, rights, socioeconomic configurations, or even for the
formation of political structures themselves (democracy being only one)— presume a form of desire that refers to, and
endures in, its constitution. As Spivak (1999) notes in her critique of power and desire as universal:
[S]o is “desire” misleading because of its paleonomic burden of an originary phenomenal passion—from philosophical intentionality on the one
hand to psychoanalytic definitive lack on the other. (p. 107)
The psychoanalytic definitive lack she speaks of refers to the Lacanian configuration of desire as always attempting to recover, to master, to
instantiate an identity that is supposedly interminably lost as soon as language acts upon the subject. This
lack is inaugurated
through the subject’s relation to what it cannot have, or, in Spivak’s terms, the “originary phenomenal passion” referring to
the oedipal scene, which is presumed to be the origin of desire for all. This configuration of desire renders
all subjects desiring of overcoming that lack. But it is a particular form of desire and a particular
quality of lack. The presumption of this quality—the presumption about what and how people desire—I argue here, must
be accountable to the politico-historical configurations which have produced it.
Politico-historical configurations, by definition, are not universal . That is, contra Zizek (2006), I argue that not all
the world is a symptom, but that any psychoanalysis of a political symptom, of a political subject, or of
the desires examined through psychoanalysis as they emerge in a political arena, assume a particular
formation of desire . And that such an analysis operates within the parameters and employs the understandings of the oedipal scene,
or, simply of a subjectivity split by language, including the language of law. As Lacan (2006) says “language begins along with law” (p. 225).
While this
split subjectivity may appear to be universal —and is convincingly employed as such by psychoanalytic and
political theorists, and often philosophers (Butler, 1997; Epstein, 2013; Zevnik, 2016; Zizek, 2006), this splitting
refers specifically to an
oedipal lineage, as a particular instantiation of Oedipal Law, and, as I argue, positive law as a liberal law concerned with
rights and with what one can or cannot have from the polis as much as what one can take from the father. Thus the
“originary phenomenal passion,” which a psychoanalysis of the political engages, always refers back as I will explain, to a (primal) father as a
sovereign in a wrangle with his sons, a scene which itself cannot not be understood without its resonances to the French Revolution.
Psycho-analysis is wrong---terrible methodology, every refutable claim has been
disproven, ineffective results
Robert Bud and Mario Bunge 10 {Robert Bud is principal curator of medicine at the Science Museum
in London. 9-29-2010. “Should psychoanalysis be in the Science Museum?”
https://www.newscientist.com/article/mg20827806-200-should-psychoanalysis-be-in-the-sciencemuseum/}//JM (link credit to EM)
WE SHOULD congratulate the Science Museum for setting up an exhibition on psychoanalysis. Exposure
to pseudoscience greatly helps understand genuine science, just as learning about tyranny helps in
understanding democracy. Over the past 30 years, psychoanalysis has quietly been displaced in
academia by scientific psychology. But it persists in popular culture as well as being a lucrative profession. It is the
psychology of those who have not bothered to learn psychology, and the psychotherapy of choice for those who
believe in the power of immaterial mind over body. Psychoanalysis is a bogus science because its practitioners do
not do scientific research . When the field turned 100, a group of psychoanalysts admitted this gap and
endeavoured to fill it. They claimed to have performed the first experiment showing that patients benefited from
their treatment. Regrettably,
they did not include a control group and did not entertain the possibility of
placebo effects . Hence, their claim remains untested (The International Journal of Psychoanalysis, vol 81, p 513). More
recently, a meta-analysis published in American Psychologist (vol 65, p 98) purported to support the claim that a form of psychoanalysis called
psychodynamic therapy is effective. However, once again, the original studies did not involve control groups. In
110 years,
psychoanalysts have not set up a single lab. They do not participate in scientific congresses, do not submit
their papers to scientific journals and are foreign to the scientific community - a marginality typical of pseudoscience. This does
not mean their hypotheses have never been put to the test. True, they are so vague that they are hard to test and
some of them are, by Freud's own admission, irrefutable. Still, most of the testable ones have been soundly
refuted. For example, most dreams have no sexual content. The Oedipus complex is a myth; boys do not hate
their fathers because they would like to have sex with their mothers. The list goes on. As for therapeutic efficacy, little is known because
psychoanalysts do not perform double-blind clinical trials or follow-up studies. Psychoanalysis is a pseudoscience. Its concepts are woolly and
untestable yet are regarded as unassailable axioms. As a result of such dogmatism, psychoanalysis
has remained basically
stagnant for more than a century, in contrast with scientific psychology, which is thriving.
2AC---AI Good
AI isn’t intrinsically exploitive and refusing it fails.
Huq, 22—Frank and Bernice J. Greenberg Professor of Law at the University of Chicago Law School
(Aziz, “Can We Democratize AI?,” https://www.dissentmagazine.org/online_articles/can-wedemocratize-ai, dml)
For readers unfamiliar with the critical literature on AI, Crawford’s book provides a powerful, elegantly written synopsis. Her criticisms bite
hard against the self-serving discourse of Silicon Valley. Yet I wonder whether her
unqualified insistence that AI serves “systems that further inequality and violence” obscures at the same time as it illuminates. If data-based
prediction, as I learned as a teenager, has been around a long time, how and when did it become such an
irremediable problem?
More than a decade ago, the historian David Edgerton’s The Shock of the Old repudiated the notion that the future would be dematerialized,
weightless, and electronic. Edgerton insisted on the endurance of old tools—diesel-powered ships carrying large metal containers, for
example—as central components of neoliberal economic growth. It is a mistake, he suggested, to view our present deployments of technology
as a function of innovation alone, free from the influence of inherited technological forms and social habits.
Crawford underscores pressing contemporary concerns about resource extraction, labor exploitation, and state violence. But has
AI made
these problems worse —or are current crises, as Edgerton’s analysis hints, just the enduring shock waves created
by old technologies and practices? It’s not at all clear . Crawford, for instance, justly criticizes the energy
consumption of new data centers, but she gives no accounting of the preceding history of data
harvesting and storage unconnected to AI. As a result, it is unclear whether novel forms of AI have
changed global rates of energy consumption and, if so, by what extent . Nor is it clear whether commercial AI
is more or less amenable to reform than its predecessors. One of the leading scholarly articles on AI’s carbon footprint
proposes a number of potential reforms, including the possibility of switching to already-available tools that are more energy efficient. And
recent empirical work , published after Atlas of AI, shows that the energy consumption of major network
providers such as Telefonica and Cogent decreased in absolute terms between 2016 and 2020, even as data demands
sharply rose .
Similarly, Crawford’s analysis of
the labor market for AI-related piecework does not engage with the question of
whether new technologies change workers’ reservation wage —the lowest pay rate at which a person will take a
particular kind of job—and hence their proclivity to take degrading and harmful work. It isn’t clear whether AI firms are
lowering what is already a very lean reservation wage or merely using labor that would otherwise be
exploited in a similar way . Crawford underscores the “exhausting” nature of AI-related labor—the fourteen-hour shifts that leave
workers “totally numb.” But long, boring
shifts have characterized capitalist production since the eighteenth
century ; we cannot know whether AI is making workers worse off simply by flagging the persistence of
these conditions.
In Crawford’s narrative, AI
is fated to recapitulate the worst excesses of capitalism while escaping even the
most strenuous efforts at democratic regulation. Her critique of Silicon Valley determinism ends up resembling its
photographic negative. Crawford here devotes few words to the most crucial question: can we democratize AI? Instead, she calls for a
“ renewed politics of refusal ” by “national and international movements that refuse technology-first approaches and focus on
addressing underlying inequities and injustices.”
It’s hard to disagree with the idea of fighting “underlying inequities and injustice.” But it’s not clear what her slogan means for
the future use of AI. Consider the use of AI in breast-cancer detection . AI diagnostic tools have been available at least
since 2006; the best presently achieve more than 75 percent accuracy. No difference has been observed in accuracy rates between races. In
contrast, the (non-AI) pulse oximeter that found sudden fame as a COVID-19 diagnostic tool does yield sharp racial disparities. AI diagnostics for
cancer certainly have costs, even assuming adequate sensitivity and specificity: privacy and trust may be lost, and physicians may de-skill.
Returning to non-AI tools, though, will not necessarily eliminate “ inequities and injustice .”
But do
these problems warrant a “ politics of refusal ”? It would, of course, be a problem if accurate diagnostic AI were
available only for wealthy (or white) patients; but is
it a safe assumption that every new technology will reproduce
extant distributions of wealth or social status? AI tools are often adopted under conditions that
reinforce old forms of exploitation and domination or generate new ones. This is true of many technologies , however,
from the cotton gin to the Haber-Bosch process. But
cheaper cancer detection is just one of many possible examples
of AI technologies that could expand the availability of services previously restricted to elites. An
understanding of new AI technologies’ social potential demands not just Edgerton’s skepticism about novelty,
that is, but
an openness to ambiguity and contradiction in how tools can or should be used—a politics of
progressive repossession , and not just refusal .
Crawford’s critiques
of the damaging uses and effects of AI raise important questions about technological
change under contemporary social conditions. Yet, by sidelining issues of historical continuity and the
potential beneficial uses of new tools, she leaves us with an incomplete picture of AI, and no clear path
forward . Atlas of AI begins a conversation then—but leaves plenty more to be said.
2AC---Alt Fails
1. It locks in imperialism AND systematically shuts down resistance.
Hughes ’20 [Joshua G.; 2020; PhD candidate, philosophy, University of Lancaster Law School; “Law,
life, death, responsibility, and control in an age of autonomous robotic warfare,”
https://eprints.lancs.ac.uk/id/eprint/146504/1/2020HughesPhD.pdf]
There are several moral and ethical issues which AWS raise in relation to their technological sophistication (or lack thereof),
rather than their distance
from human decision-makers.128 Leveringhaus suggests that the difference between targeting
by humans and machines is that the human has a choice not to attack, whereas a machine can only
follow its programming .129 Indeed, Purves et al. contend that as machines would just be applying human inputted instructions, AWS
could not apply their algorithms ‘for the right reasons.’130 Yet, Leveringhaus argues that the deployment of an AWS can be morally and
ethically permissible where the use of other remote-controlled or inhabited systems present a greater risk of undesirable consequences, or of
military defeat.131 Such situations are in-line with the missions that this thesis predicts AWS will be used for: those in communication-denied
environments, or in combat at super-human intensity. These are both situations where AWS would be advantageous and required to prevent
defeat on a tactical level, and therefore could be morally and ethically compliant.
Another common argument put forward for AWS being immoral is that these systems would lack the human qualities of mercy or compassion
needed to avoid morally and ethically problematic killings, and it is currently unforeseeable that humans will ever be able to programme these
traits.132 For example, if an AWS
could recognise the targetability of children acting for the enemy,133 or enemy
soldiers who are not in a position to defend themselves,134 the machine could not identify the moral
quandaries of such attacks and exercise restraint. It would simply attack . Conversely, Arkin notes that AWS would not have
their judgement clouded by anger, hate, or psychiatric issues, could act quicker than humans, and could take greater physical risks to eliminate
targets.135 Arkin also suggests that AWS could monitor the behaviour of human beings to ensure their ethical compliance.136
Some follow Arkin’s logic to suggest that an AWS would be incapable of carrying out an atrocity.137 However, the actions an AWS will take are
ultimately dependent upon its programming and instruction. As a feminist analysis of international crimes shows, atrocities in warfare, those of
sexual violence in particular, are often the result of a specific plan by high-ranking
authority figures .138 They could order AWS to
be (re-)programmed to commit atrocities just as easily as they could order humans to carry them out. Thus, the moral and
ethical permissibility of using an AWS is massively influenced by the underlying programming of the system, and the instructions they are given
on each mission.
In terms of strategic concerns, some have suggested that if an AWS were to act in an unpredictable way, whether through poor programming,
or malfunction, this could create significant risks of an AWS initiating unintended conflicts,139 or engaging in unlawful conduct.140 But, as
Meier points out, AWS that present a chance of unpredictable actions are unlikely to be developed or deployed.141 As we saw above, the
precise actions of an AWS might not be predictable, but what the system is capable of autonomously performing will be foreseeable in
accordance with the programming and instructions of the system.
Further, many have noted that AWS could provide
immense benefit to the deploying side, such that the
asymmetry between technologically advanced and less-advanced groups in warfare would widen .142 This
is a trend
that has been ongoing for centuries.143 Yet , the potentially enormous disparity between a belligerent
party using AWS and one without them asks whether the side using AWS is no longer engaged in
fighting, but is actually ‘manhunting’
the other.144 Added to the reduced physiological and psychological harms
mentioned earlier, several authors have recognised that this
would change the calculus for political decision-makers
who would have one less reason to avoid conflict.145 For technologically advanced forces, these aspects are
strategically beneficial
as they are more likely to be able to achieve their strategic and military aims . However, for their opponents,
resistance may be
near-impossible .146 This, therefore, could lead to worries of technologically enabled
global imperialism . However, precision-guided munitions,147 high-altitude bombing,148 and UAVs have also generated
similar concerns that did not lead to such consequences.149
The alt can’t change macropolitical structures—it matters if they don’t solve their
impacts.
Sivaraman, 20—Ministry of Finance, Government of India, and ED, IMF, India (Madras, “Review of
Psychoanalysis and the GlObal,” International Journal of Environmental Studies, 77:1, 176-181, dml)
The editor is a Professor of Environmental Studies but the material edited consists of articles by scholars who have analysed every subject in
the book according to the
theories of Jacques Lacan, a pre-eminent psychoanalyst whose controversial contributions appear to be
inexplicably influential . There are twelve essays on subjects varying from capitalism to culture and from empowering women to
architecture. All the authors have attempted to look at their themes from a
Lacanian standpoint. For a lay reader who succeeds in
reading all the contributions to this book, it would still be
obscure as to what purpose the Lacanian psychoanalytic
approach would serve particularly in policy formulation by governments or in corporate behaviour
toward individuals and society.
The book is not easy to read; the language of psychology and philosophy has been used to describe even common ideas. In his introduction Kapoor has stated ‘this book is about the hole at the heart of the glObal; it deploys psychoanalysis
to expose the unconscious desires, excesses and antagonisms that accompany the world of economic flows, cultural circulation, and sociopolitical change . . . the point here is to uncover what Jacques Lacan calls the
Real of the global – its rifts, gaps, exceptions and contradictions.’(p ix)
What is that hole in the global does not come out clearly, since Lacan’s idea of ‘the real’ is impossible to define. But one may infer that it is the reality behind the façade of everything that globalisation sought to achieve, the
eradication of poverty and the spread of prosperity through liberalism in both politics and economics. Globalisation only led to long term financial crisis, the spread of gross inequality in incomes and aggressive financial capitalism
degenerating into a new kind of imperialism. Although all this happened in small or big measure the fact remains that millions of poor also came out of abject poverty. No author has attempted a solution to fill that hole. The world
has seen imperialism, fascism and finally liberalism and now with the rise of Trumpism in the USA, Brexit, Mr. Xi becoming almost a life time president of China and the rise of religiously oriented government in India, no one knows
whether the world has bid goodbye to all that liberalism stood for. Is Lacan’s idea of the true nature of man – the real – now coming to the fore? If so, is Lacan right? This compilation has sought to give some answers analysing the
subconscious of the individuals who constitute the society.
A general reader has to plod through the articles to understand what exactly the authors want to convey. Their overall
position
appears to be that most of the ills we see in capitalism or its connected areas come from the underlying desire to
accumulate even without purpose as the urge is so strong in every individual in a Lacanian sense of the
Real which itself cannot be defined very clearly. What is meant, it seems, is that the real is a part of
nature from which human beings have severed themselves by language but to which they try to get back
again and again.
So unless a person is able to destroy this desire to continue the existing order of things that gives
enjoyment but which is leading humanity into a stage of inevitable environmental collapse, by focusing on
alternatives that are more human and beneficial to all, this desire will persist inexorably to a collapse.
That is the strength of capitalism. Despite the knowledge, that it creates extremes in income distribution, affects the environment resulting in
permanent damage to the earth and also the happiness of the future generations, it continues tenaciously.
Even while the current generation witnesses its ravages on society the
desires that lie in the subconscious of individuals
impel them to continue with the capitalist policies that are taking them to disaster . This is reflected in the financial
crisis, in the way environmental damage is faced, in architecture, in the treatment of women, and in the move toward urbanisation across the
globe.
None of the authors has attempted to clarify whether such an esoteric analysis, a paradigm change in
the way in which ordinary mortals look at these happenings in the world, has any relevance to policy
makers who can use them for changes in the way laws are framed, or institutions are governed, to
make alterations in the subconscious of the people to change their desires , so as to fall in conformity with the
overall welfare of mankind now and in the future. This does seem to be more than a small oversight.
Ordinarily understood, psychoanalysis is used to probe deep into the unconscious mind of individuals for
treatment of mental health disorders. When
we deal with capitalism, gender equality, corruption, architecture
of buildings, women’s empowerment and similar social problems that have inbuilt destructive
elements, can psychoanalysis provide solutions? In the experience of this reviewer, the main point is that policy makers
should understand the acquisitive urge. There
may be many existing systems of ethical constraints which can be
accepted to produce an improved outcome for the world. What is seen, at least as regards environment and climate
change, is a very slow, halting, apologetic set of legal steps to control pollution, fully realising that the apocalypse of climate collapse is
approaching at a faster pace. The book would have been more provocative if it had dealt with this struggle between governance and the people
from a Lacanian perspective.