1nc round 3

1. No Inherency – the Status Quo system solves best – DOD policy ensures ethical standards and
responsible use, but retains flexibility to address future contingencies
Allen, 2022 - Director, AI Governance Project, Strategic Technologies Program at CSIS
The DOD recently announced that it is planning to update DODD 3000.09 this year. Michael
Horowitz, director of the DOD’s
Emerging Capabilities Policy Office, praised DODD 3000., that the directive laid out a very responsible
approach to the incorporation of autonomy and weapons systems.” While not making any firm predictions, Horowitz
suggested that major revisions to DODD 3000.09 were unlikely. In general, this is good news. The DOD’s existing
policy recognizes that some categories of autonomous weapons, such as cyber weapons and missile defense systems,
are already in widespread and broadly accepted use by dozens of militaries worldwide. It also allows for the possibility
that future technological progress and changes in the global security landscape, such as Russia’s
potential deployment of artificial intelligence (AI)-enabled lethal autonomous weapons in Ukraine,
might make new types of autonomous weapons desirable. In such cases, the policy requires proposals for such weapons to
clear a high procedural and technical bar. In addition to demonstrating compliance with U.S. obligations
under domestic and international law, DOD system safety standards, and DOD AI-ethics principles,
proposed autonomous weapons systems must clear an additional senior review process
3. No inherency – NATO is solving now – they are setting ethical guidelines now.
Heikkilä, 2021 - Politico’s AI Correspondent in London
THE AI WARS: NATO
is working on an AI strategy it hopes to unveil before the summer as part of its bid to maintain an edge
over increasingly assertive rivals. The strategy will identify ways to operate AI systems responsibly, David
van Weel, NATO’s assistant secretary-general for emerging security challenges, told me. The strategy will
also set ethical guidelines to govern AI systems, for example by ensuring systems can be shut down by a
human at all times, and make them accountable by ensuring a human is responsible for the actions of AI
systems.
4. No inherency – The EU and the US established the Global Partnership for AI to increase
collaboration and create norms for use.
Lawrence and Cordey, 2020 – researchers for The Cyber Project at the Belfer Center for Science and
International Affairs
Another topic of de facto transatlantic collaboration and alignment is international principles for AI (i.e.,
norms for AI’s development, use, and governance). These principles—supported by the US, the EU, and
most European Member States—were developed by a group of international experts from member
countries of the Global Partnership for AI (GPAI). This initiative, which is grounded in the OECD AI principles, was co-founded in
June 2020 by the US and the EU.276 It aims to develop AI “grounded in human rights, inclusion, diversity…”
5. Russia and China take out solvency – they will not model plan.
Thornton, 2019 - Senior Lecturer in the Centre for Defence Education Research and Analysis, King’s College
It
cannot really be imagined that the likes of China and Russia, as they develop their AI systems, will feel
limited by ethical sentiment. Their view will be that they cannot afford to be. They both see themselves
as weaker militarily than the combined forces of NATO and its partner countries and, as such, have doctrinally
declared that they will be seeking out any asymmetric advantage they can. If these Western powers –
including the UK – want to self-restrict their use of LAWS, for instance, then this will be seen by Beijing and
Moscow as a weakness to be exploited in an asymmetric sense. There may then come a future scenario
where UK force elements, facing adversaries with different ethical standards and free to deploy their
‘killer robots’, would be unable to reciprocate with their own. They could be left exposed; fighting with
one arm behind their back.
6. No solvency – The US cannot lead AI collaboration – we are not the AI leader, we lack the AI
workforce, and funding is insufficient
Lawrence and Cordey, 2020 – researchers for The Cyber Project at the Belfer Center for Science and
International Affairs
Despite the momentum within the US federal government to prioritize AI and align efforts across the
interagency to maintain America’s AI leadership, there are three key challenges that imperil the ability of
the US to achieve its strategic goals. China’s AI-related private industry and private funding, combined
with government funding, a lack of regulation, and widespread economic espionage constitute threats
to America’s edge.152 The decentralized US approach and uncertainty across the US private sector on how
to balance sometimes competing economic and ethical considerations add to these challenges. The US government has
recognized that it needs to build up its domestic workforce of AI talent, as the demand exceeds the
supply. AI Funding: Although the Administration has pledged to increase (non-defense and defense) AI-related
spending, and absolute AI R&D budget numbers have increased, there are concerns that these numbers may not
accurately reflect development.