
Automation Backfile

Automation Good
Econ Growth
Growth
Growth is stagnating – automation gives a boost and leads to sustained growth while creating only minor tradeoffs with labor
James Manyika and Michael Spence 18 – Manyika is the San Francisco-based director of the McKinsey Global Institute (MGI),
the business and economics research arm of McKinsey & Company. Spence, a Nobel laureate in economics, is Professor of Economics at NYU’s
Stern School of Business. [“The False Choice Between Automation and Jobs”, Harvard Business Review, February 5th,
https://hbr.org/2018/02/the-false-choice-between-automation-and-jobs, AZ]
We live in a world where productivity, a
key pillar of long-term economic growth, has crumbled. In the United States,
Europe, and other advanced economies, productivity growth has slowed so drastically in the past decade that economists
debate whether we have entered a new era of stagnation — and this at a time when we need productivity growth more
than ever to sustain growth, as working populations in countries from Germany to Japan age and shrink. Now comes potential help, in the
form of advanced robotics, machine learning, and artificial intelligence, which can already outperform humans in a range of activities,
from lip-reading to analyzing X-rays. The performance benefits for companies are compelling and not just (or even mainly) in terms
of reducing labor costs: automation can also bring whole new business models, and improvements that go beyond human capabilities, such as
increasing throughput and quality and raising the speed of responses in a variety of industries. Automation will give the global economy that
much-needed productivity boost, even as it enables us to tackle societal “moonshots” such as curing disease or contributing
solutions to the climate change challenge. The catch is that adopting these technologies will disrupt the world of work. No less significant than the
jobs that will be displaced are the jobs that will change — and those that will be created. New research by the McKinsey Global Institute
suggests that roughly 15% of the global workforce could be displaced by 2030 in a midpoint scenario, but that the jobs likely
created will make up for those lost. There is an important proviso: that economies sustain high economic growth and dynamism, coupled with strong
trends that will drive demand for work. Even so, between 75 million and 375 million people globally may need to switch occupational categories by 2030, depending
on how quickly automation is adopted. It is no small challenge. The jobs gained will require higher educational attainment and more advanced levels of
communication and cognitive ability, as work requiring rote skills such as data processing or collection is increasingly taken over by machines. People
will
be augmented by increasingly capable machines acting as digital working partners and assistants, further requiring ongoing skills
development and evolution. In advanced economies, which the research shows will be the most affected, downward pressure on middle-wage jobs will likely grow,
exacerbating the already vexed issue of job and income polarization, although in emerging economies the balance between jobs lost and jobs gained looks to be
more favorable in the short-to-medium run, and the net effect is likely to be an acceleration of growth in the middle class. Societies everywhere will have
important choices to make in response to these challenges. Some
may be tempted to try to halt or slow the adoption of
automation. Even if this were possible — and it may be as futile as King Canute’s attempts to turn the incoming tide — it would mean foregoing
the beneficial productivity effects the technology would bring. Other options are also less than desirable. Going back to the low-GDP
growth, low-job growth path we were on in the immediate aftermath of the global financial crisis will mean stagnation — and continued rising discontent about
incomes that don’t advance and income inequalities that continue to grow. And rapid automation that brings only efficiency-driven productivity growth rather than
value-added expansion, and hence fails to create jobs, could stir social unease. Our view is that we
should embrace automation technologies
for the productivity benefits they will bring, even as we deal proactively with the workforce transitions that will accompany adoption. The
tradeoff between productivity and employment is actually less than it might seem at first sight, since the GDP
bounce that productivity brings will raise consumption and hence labor demand, as it has always done in the past. This
effect will be stronger and faster if the gains in value added turn into income in the hands of those who are likely to spend it. Broadly distributing
income gains will then translate productivity growth into GDP growth.
Economic slowdown causes great power conflict – slow growth undermines
interdependence and makes reforms impossible
Daniel Drezner 16, Professor of International Politics, Tufts; Nonresident Senior Fellow, Brookings, May 2016, “Five Known Unknowns
about the Next Generation Global Political Economy”, Project on International Order and Strategy at Brookings,
http://www.anamnesis.info/sites/default/files/D_Drezner_2016.pdf
A slow-growth economic trajectory also creates policy problems that increase the likelihood of even
slower growth. Higher growth is a political palliative that makes structural reforms easier. For example,
Germany prides itself on the “Hartz reforms” to its labor markets last decade, and has advocated similar policies for the rest of the Eurozone
since the start of the 2008 financial crisis. But the Hartz reforms were accomplished during a global economic upswing, boosting German
exports and cushioning the short-term cost of the reforms themselves. In
a low-growth world, other economies will be
understandably reluctant to engage in such reforms. It is possible that concerns about a radical growth slowdown are
exaggerated. In 1987, Robert Solow famously said, “You can see the computer age everywhere but in the productivity statistics.”85 A decade
later, the late 1990s productivity surge was in full bloom. Economists are furiously debating whether the visible innovations in the information
sector are leading to productivity advances that are simply going undetected in the current productivity statistics.86 Google’s chief economist
Hal Varian, echoing Solow from a generation ago, asserts that “there is a lack of appreciation for what’s happening in Silicon Valley, because we
don’t have a good way to measure it.”87 It is also possible that current innovations will only lead to gains in labor productivity a decade from
now. The OECD argues that the productivity problem resides in firms far from the leading edge failing to adopt new technologies and
systems.88 There are plenty of sectors, such as health or education, in which technological innovations can yield significant productivity gains. It
would be foolhardy to predict the end of radical innovations. But
the possibility of a technological slowdown is a significant
“known unknown.” And if such a slowdown occurs, it would have catastrophic effects on the public
finances of the OECD economies. Most of the developed world will have to support disproportionately large numbers of pensioners
by 2036; slower-growing economies will worsen the debt-to-GDP ratios of most of these economies,
causing further macroeconomic stresses—and, potentially, political unrest from increasingly stringent
budget constraints.89 2. Are there hard constraints on the ability of the developing world to converge to developed-country living
standards? One of the common predictions made for the next generation economy is that China will displace the United States as the world’s
biggest economy. This is a synecdoche of the deeper forecast that per capita incomes in developing countries will slowly converge towards the
living standards of the advanced industrialized democracies. The OECD’s Looking to 2060 report is based on “a tendency of GDP per capita to
converge across countries” even if that convergence is slow-moving. The EIU’s long-term macroeconomic forecast predicts that China’s per
capita income will approximate Japan’s by 2050.90 The Carnegie Endowment’s World Order in 2050 report presumes that total factor
productivity gains in the developing world will be significantly higher than in countries on the technological frontier. Looking at the previous
twenty years of economic growth, Kemal Dervis posited that by 2030, “The rather stark division of the world into ‘advanced’ and ‘poor’
economies that began with the industrial revolution will end, ceding to a much more differentiated and multipolar world economy.”91
Intuitively, this seems rational. The theory is that developing countries have lower incomes primarily because they are capital-deficient and
because their economies operate further away from technological frontier. The gains from physical and human capital investment in the
developing world should be greater than in the developed world. From Alexander Gerschenkron forward, development economists have
presumed that there are some growth advantages to “economic backwardness.”92 This intuitive logic, however, is somewhat contradicted by
the “middle income trap.” Barry Eichengreen, Donghyun Park, and Kwanho Shin have argued in a series of papers that as an economy’s GDP per
capita hits close to $10,000, and then again at $16,000, growth slowdowns commence.93 This makes it very difficult for these economies to
converge towards the per capita income levels of the advanced industrialized states. History bears this out. There is a powerful correlation
between a country’s GDP per capita in 1960 and that country’s per capita income in 2008. In fact, more countries that were middle income in
1960 had become relatively poorer than had joined the ranks of the rich economies. To be sure, there have been success stories, such as South
Korea, Singapore, and Israel. But other success stories, such as Greece, look increasingly fragile. Lant Pritchett and Lawrence Summers conclude
that “past performance is no guarantee of future performance. Regression to the mean is the single most robust and empirically relevant fact
about cross-national growth rates.”94 Post-2008 growth performance of the established and emerging markets matches this assessment. While
most of the developing world experienced rapid growth in the previous decade, the BRICS have run into roadblocks. Since the collapse of
Lehman Brothers, these economies are looking less likely to converge with the developed world. During the Great Recession, the non-Chinese
BRICS—India, Russia, Brazil, and South Africa—have not seen their relative share of the global economy increase at all.95 China’s growth has
also slowed down dramatically over the past few years. Recent and massive outflows of capital suggest that the Chinese economy is headed
for a significant market correction. The collapse of commodity prices removed another source of economic growth in the developing world. By
2015, the gap between developing country growth and developed country growth had narrowed to its lowest level in the 21st century.96 What
explains the middle income trap? Eichengreen, Park and Shin suggest that “slowdowns coincide with the point in the growth process where it is
no longer possible to boost productivity by shifting additional workers from agriculture to industry and where the gains from importing foreign
technology diminish.”97 But that is insufficient to explain why the slowdowns in growth have been so dramatic and widespread. There are
multiple candidate explanations. One argument, consistent with Paul Krugman’s deconstruction of the previous East Asia “miracle,”98 is that
much of this growth was based on unsustainable levels of ill-conceived capital investment. Economies that allocate large shares of GDP to
investment can generate high growth rates, particularly in capital-deficient countries. The sustainability of those growth rates depends on
whether the investments are productive or unproductive. For example, high levels of Soviet economic growth in the 1950s and 1960s masked
the degree to which this capital was misallocated. As Krugman noted, a lesser though similar phenomenon took place in the Asian tigers in the
1990s. It is plausible that China has been experiencing the same illusory growth-from-bad-investment problem. Reports of overinvestment in
infrastructure and “ghost cities” are rampant; according to two Chinese government researchers, the country wasted an estimated $6.8 trillion
in “ineffective investment” between 2009 and 2013 alone.99 A political explanation would be rooted in the fact that many emerging markets
lack the political and institutional capabilities to sustain continued growth. Daron Acemoğlu and James Robinson argue that modern economies
are based on either “extractive institutions” or “inclusive institutions.”100 Governments based on extractive institutions can generate higher
rates of growth than governments without any effective structures. It is not surprising, for example, that post-Maoist Chinese economic growth
has far outstripped Maoist-era rates of growth. Inclusive institutions are open to a wider array of citizens, and therefore more democratic.
Acemoğlu and Robinson argue that economies based on inclusive institutions will outperform those based on extractive institutions. Inclusive
institutions are less likely to be prone to corruption, more able to credibly commit to the rule of law, and more likely to invest in the necessary
public goods for broad-based economic growth. Similarly, Pritchett and Summers conclude that institutional quality has a powerful and long-lasting effect on economic growth—and that “salient characteristics of China—high levels of state control and corruption along with high
measures of authoritarian rule—make a discontinuous decline in growth even more likely than general experience would suggest.”101 A more
forward-looking explanation is that the changing nature of manufacturing has badly disrupted the 20th century pathway for economic
development. For decades, the principal blueprint for developing economies to become developed was to specialize in industrial sectors where
low-cost labor offered a comparative advantage. The resulting growth from export promotion would then spill over into upstream and
downstream sectors, creating new job-creating sectors. Globalization, however, has already generated tremendous productivity gains in
manufacturing—to the point where industrial sectors do not create the same amount of employment opportunities that they used to.102 Like
agriculture in the developed world, manufacturing has become so productive that it does not need that many workers. As a result, many
developing economies suffer from what Dani Rodrik labels “premature deindustrialization.” If Rodrik is correct, then going forward,
manufacturing will fail to jump-start developing economies into higher growth trajectories—and the political effects that have traditionally
come with industrialization will also be stunted.103 Both the middle-income trap and the regression to the mean observation are empirical
observations about the past. There is no guarantee that these empirical regularities will hold for the future. Indeed, China’s astonishing
growth rate over the past 30 years is a direct contradiction of the regression to the mean phenomenon. It is possible that over time the
convergence hypothesis swamps the myriad explanations listed above for continued divergence. But in sketching out the next generation global
economy, the implications of whether regression to the mean will dominate the convergence hypothesis are massive. Looking at China and
India alone, the gap in projections between a continuation of past growth trends and regression to the mean is equivalent to $42 trillion—more
than half of global economic output in 2015.104 This gap is significant enough to matter not just to China and India, but to the world economy.
As with the developed world, a
growth slowdown in the developing world can have a feedback effect that makes
more growth-friendly reforms more difficult to accomplish. As Chinese economic growth has slowed,
Chinese leader Xi Jinping’s economic reform plans have stalled out in favor of more political repression.
This follows the recent playbook of Russian President Vladimir Putin, who has added diversionary war as another tactic for distracting from negative economic growth. Short-term steps towards political repression will make politically risky steps towards economic reform that much less palatable in the future. Instead, the advanced developing
economies seem set to double down on strategies that yield less economic growth over time. 3. Will
geopolitical rivalries or technological innovation alter the patterns of economic interdependence? Multiple scholars have observed a secular
decline in interstate violence in recent decades.105 The Kantian triad of more democracies, stronger multilateral institutions, and greater levels
of cross-border trade is well known. In recent years, international relations theorists have stressed that commercial interdependence is a bigger
driver of this phenomenon than previously thought.106 The liberal logic is straightforward. The benefits of cross-border exchange and
economic interdependence act as a powerful brake on the utility of violence in international politics. The global supply chain and “just in time”
delivery systems have further imbricated national economies into the international system. This creates incentives for governments to preserve
an open economy even during times of crisis. The more that a country’s economy was enmeshed in the global supply chain, for example, the
less likely it was to raise tariffs after the 2008 financial crisis.107 Similarly, global financiers are strongly interested in minimizing political risk;
historically, the financial sector has staunchly opposed initiating the use of force in world politics.108 Even militarily powerful actors must be
wary of alienating global capital. Globalization therefore creates powerful pressures on governments not to close off their economies through
protectionism or military aggression. Interdependence can also tamp down conflicts that would otherwise be likely to break out during a great
power transition. Of the 15 times a rising power has emerged to challenge a ruling power between 1500 and 2000, war broke out 11 times.109
Despite these odds, China’s recent rise to great power status has elevated tensions without leading to anything approaching war. It could be
argued that the Sino-American economic relationship is so deep that it has tamped down the great power conflict that would otherwise have
been in full bloom over the past two decades. Instead, both China and the United States have taken pains to talk about the need for a new kind
of great power relationship. Interdependence can help to reduce the likelihood of an extreme event—such as a great power war—from taking
place. Will this be true for the next generation economy as well? The two other legs of the Kantian triad—democratization and
multilateralism—are facing their own problems in the wake of the 2008 financial crisis.110 Economic openness survived the negative shock of
the 2008 financial crisis, which suggests that the logic of commercial liberalism will continue to hold with equal force going forward. But some
international relations scholars doubt the power of globalization’s pacifying effects, arguing that
interdependence is not a powerful constraint.111 Other analysts go further, arguing that globalization exacerbates
financial volatility—which in turn can lead to political instability and violence.112 A different counterargument is
that the continued growth of interdependence will stall out. Since 2008, for example, the growth in global
trade flows has been muted, and global capital flows are still considerably smaller than they were in the
pre-crisis era. In trade, this reflects a pre-crisis trend. Between 1950 and 2000, trade grew, on average, more than twice as fast as global
economic output. In the 2000s, however, trade only grew about 30 percent more than output.113 In 2012 and 2013, trade grew less than
economic output. The McKinsey Global Institute estimates that global flows as a percentage of output have fallen from 53 percent in 2007 to 39
percent in 2014.114 While the stock
of interdependence remains high, the flow has slowed to a trickle. The
Financial Times has suggested that the
global economy has hit “peak trade.”115 If economic growth continues to outstrip trade,
then the level of interdependence will slowly decline, thereby weakening the liberal constraint on great
power conflicts. And there are several reasons to posit why interdependence might stall out. One possibility is due
to innovations reducing the need for traded goods. For example, in the last decade, higher energy prices in the United States triggered
investments into conservation, alternative forms of energy, and unconventional sources of hydrocarbons. All of these steps reduced the U.S.
demand for imported energy. A future in which compact fusion engines are developed would further reduce the need for imported energy
even more.116 A more radical possibility is the development of technologies that reduce the need for physical trade across borders. Digital
manufacturing will cause the relocation of production facilities closer to end-user markets, shortening the global supply chain.117 An even
more radical discontinuity would come from the wholesale diffusion of 3-D printing. The ability of a single printer to produce multiple
component parts of a larger manufactured good eliminates the need for a global supply chain. As Richard Baldwin notes, “Supply chain
unbundling is driven by a fundamental trade-off between the gains from specialization and the costs of dispersal. This would be seriously
undermined by radical advances in the direction of mass customization and 3D printing by sophisticated machines…To put it sharply,
transmission of data would substitute for transportation of goods.”118 As 3-D printing technology improves, the need for large economies to
import anything other than raw materials concomitantly declines.119 Geopolitical
ambitions could reduce economic
interdependence even further.120 Russia and China have territorial and quasi-territorial ambitions
beyond their recognized borders, and the United States has attempted to counter what it sees as
revisionist behavior by both countries. In a low-growth world, it is possible that leaders of either country
would choose to prioritize their nationalist ambitions over economic growth. More generally, it could be
that the expectation of future gains from interdependence—rather than existing levels of
interdependence—constrains great power bellicosity.121 If great powers expect that the future benefits
of international trade and investment will wane, then commercial constraints on revisionist behavior
will lessen. All else equal, this increases the likelihood of great power conflict going forward.
Automation boosts growth---increases productivity and opens up new areas of
employment and innovation.
Irving Wladawsky-Berger 17 – retired from IBM in 2007 after 37 years with the company. As Chairman Emeritus, IBM Academy of
Technology, he continues to participate in a number of IBM’s technical strategy and innovation initiatives. He is also Visiting Professor of
Engineering Systems at MIT. [“AI, Automation and the U.S. Economy”, January 16th, Medium, https://medium.com/mit-initiative-on-the-digitaleconomy/ai-automation-and-the-u-s-economy-357057e1a502, AZ]
Technology and Productivity Growth As has been the case with past technology innovations, in the long run AI
should make the
economy more efficient and lead to productivity growth and higher standards of living. Such an AI productivity
boost is particularly important, given that productivity has significantly slowed down over the past decade in the U.S. and
other advanced economies. How can AI boost productivity? I particularly like Kevin Kelly’s comparison to the advent of electricity a century ago
in a 2014 article in Wired: AI will likely evolve as a kind of “cheap, reliable, industrial-grade digital smartness running behind everything,
and almost invisible except when it blinks off… Everything that we formerly electrified we will now cognitize.” Like any other tool, this
“utilitarian AI will also augment us individually as people (deepening our memory, speeding our recognition) and collectively as a species. There
is almost nothing we can think of that cannot be made new, different, or interesting by infusing it with some extra IQ. In fact, the business plans
of the next 10,000 startups are easy to forecast: Take X and add AI.” Uneven Impact It’s very
hard to predict which jobs will be
most affected by AI-driven automation. Like IT, AI is a collection of technologies with varying impact on different tasks. Recent
research suggests that the effects of AI in the short- and medium-term will be similar to those of IT — lower skilled workers face the biggest
threats from AI-based automation, while higher-skilled workers stand to benefit most from the new kinds of jobs that AI might create. New AI-based jobs fall into four main categories: Engagement. Tasks that
cannot be substituted by automation are generally
complemented by it, often leading to higher demand for workers. “Many industry professionals refer to a large swath of
AI technologies as Augmented Intelligence, stressing the technology’s role as assisting and expanding the productivity of
individuals rather than replacing human work.” Development. We can expect a great need for highly-skilled software developers and engineers
to put AI to practical use across multiple industries. Given the central role of data in AI applications, there will be increased demand for data
scientists, and other data-oriented jobs. Supervision. There will also be a growing number of jobs to monitor, license, maintain and repair AI
systems and applications. “The capacity for AI-enabled machines to learn is one of the most exciting aspects of the technology, but it may also
require supervision to ensure that AI does not diverge from originally intended uses.” Response to Paradigm Shifts. AI innovations will likely
require major changes in the surrounding environment. Self-driving vehicles, for example, will lead to new careers in transportation engineering
and urban planning. Similarly, the increased use of robotics will be accompanied by major changes in manufacturing systems. Technology is Not
Destiny Policy plays a large role in shaping the direction and effects of technological change. “Given appropriate attention and the right policy
and institutional responses, advanced
automation can be compatible with productivity, high levels of
employment, and more broadly shared prosperity.” The report advocates three broad strategies for addressing the impacts of AI-driven automation across the U.S. economy: Invest In and Develop AI for its Many Benefits. Advances in AI promise to make
important contributions to productivity growth as well as helping the US stay on the cutting edge of
innovation. “AI technology itself has opened up new markets and new opportunities for progress in critical areas such
as health, education, energy, economic inclusion, social welfare, transportation, and the environment.
Substantial innovation in AI, robotics, and related technology areas has taken place over the last decade, but the United States will need a
much faster pace of innovation in these areas to significantly advance productivity growth going forward.”
GDP
Automation boosts global GDP by over 1 trillion dollars
Will Martin 17 – Reporter for Business Insider. [“Automation could add more than $1.1 trillion to the
global economy in the next 10 years”, November 10th, http://www.businessinsider.com/automationone-trillion-dollars-global-economy-jpmam-report-2017-11, AZ]
LONDON — Technological advances such as automation could increase global GDP by more than $1.1 trillion over the
next 10-15 years, according to a new report from analysts at JPMorgan Asset Management seen exclusively by Business Insider. The asset
management arm of banking giant JPMorgan believes that technological advances across all areas of society could
lead to big
productivity gains, which in turn will likely boost economic growth. "Technology will affect economic growth rates and
capital market returns in ways that are difficult to foresee," the report, authored by a team of strategists headed by John Bilton, JPMAM's Head
of Global Multi-Asset Strategy, argues. "Workforce
automation and AI have the potential to deliver significant overall
productivity gains, and some nations facing growth challenges from aging populations could see an additional boost to trend growth
rates." "In the past, technological innovation transformed society and increased labor productivity in three key ways," the report states. Those
were: "Replacing
existing workers with machines, and thus producing at least the same output with fewer
workers (e.g., refrigeration vs. the ice man);" "Complementing existing workers' jobs, boosting output per worker by
automating some of their tasks (e.g., power tools);" "Creating entirely new, higher productivity industries (e.g.,
computer software engineering), offsetting the displacement of workers by machines, or replacing altogether industries that have been made
obsolete." The extra
growth delivered, the report said, would likely be in the order of around a 1%-1.5% boost to
global GDP. The most recent estimates suggest that this is around $75.6 trillion, meaning that any boost would be worth in excess of $1
trillion. That's roughly equivalent to the GDP of Mexico or Australia in 2016. Here's JPMorgan Asset Management's chart: ***Chart
Omitted*** Workforce automation has been a much discussed topic in recent months and years, with many believing that human workers
in certain simpler professions will soon be replaced by robots or automated processes.
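As a quick back-of-the-envelope check on the figures in the card above (a minimal sketch using only the numbers quoted in the card: a roughly $75.6 trillion global economy and a projected 1%-1.5% boost; the script and its variable names are illustrative, not from the JPMAM report):

```python
# Rough arithmetic check of the JPMorgan Asset Management projection quoted above.
# Assumes the card's figures: global GDP of roughly $75.6 trillion and a
# projected boost to global GDP of 1% to 1.5% from automation and related tech.

GLOBAL_GDP_TRILLIONS = 75.6
LOW_BOOST, HIGH_BOOST = 0.01, 0.015  # 1% and 1.5%

low = GLOBAL_GDP_TRILLIONS * LOW_BOOST
high = GLOBAL_GDP_TRILLIONS * HIGH_BOOST

print(f"Implied gain: ${low:.2f} trillion to ${high:.2f} trillion")
# Implied gain: $0.76 trillion to $1.13 trillion
```

Only the upper end of that range clears the $1.1 trillion headline figure, so the "in excess of $1 trillion" framing rests on the 1.5% end of the estimate.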
Income
Leads to higher real wages and increasing demand
Sarah Kessler 17 – Deputy editor of Quartz At Work, a new edition of Quartz covering management, office culture, productivity,
workplace inclusion, career development. [“The optimist’s guide to the robot apocalypse”, Quartz, March 9th, https://qz.com/904285/theoptimists-guide-to-the-robot-apocalypse/, AZ]
According to the optimist’s viewpoint, a factory
that saves money on labor through automation will either: Lower
prices, which makes its products more appealing and creates an increased demand that may lead to the need
for more workers. Generate more profit or pay higher wages. That may lead to increased investment or increased
consumption, which can also lead to more production, and thus, more employment. Amazon offers a more modern
example of this phenomenon. The company has over the last three years increased the number of robots working in its
warehouses from 1,400 to 45,000. Over the same period, the rate at which it hires workers hasn’t changed. The
optimist’s take on this trend is that robots help Amazon keep prices low, which means people buy more stuff, which means the company needs
more people to man its warehouses even though it needs fewer human hours of labor per package. Bruce Welty, the founder of a fulfillment
company that ships more than $1 billion of ecommerce orders each year and another company called Locus Robotics that sells warehouse
robots, says he thinks the threat to jobs from the latter is overblown—especially as the rise of ecommerce creates more demand for
warehouse workers. His fulfillment company has 200 job openings at its warehouse. A handful of modern studies have noted that there’s often
a positive relationship between new technology and increasing employment—in manufacturing firms, across all sectors, and specifically in firms
that adopted computers. How automation impacts wages is a separate question. Warehouse jobs, for instance, have a reputation as grueling
and low-paying. Will automation make them better or worse? In the case of the loom workers, wages
went up when parts of their
jobs became automated. According to Bessen, by the end of the 19th century, weavers at the famous Lowell factory earned more than
twice what they earned per hour in 1830. That’s because a labor market had built up around the new skill (working the machines) and
employers competed for skilled labor. That, of course, is not the only option, but it is an outcome embraced by the optimist crowd. Similarly
positive results of automation:
If companies can make more money with the same number of workers, they can
theoretically pay those workers better. If the price of goods drops, those workers can buy more without a
raise.
Jobs
Automation generates more jobs than it destroys
Robert E. Litan 18 – Non-Resident Senior Fellow at the Brookings Institution, where he has previously been a Senior Fellow on staff, and
Vice President and Director of Economic Studies. Practicing attorney, as a partner with the law firm of Korein Tillery. He has served as Vice
President for Research and Policy at the Kauffman Foundation and also the Director of Research at Bloomberg Government. B.S. in Economics
at the Wharton School of Finance at the University of Pennsylvania, J.D. at Yale Law School, Ph.D. at Yale University. [“Meeting the automation
challenge to the middle class and the American project”, Brookings, June 21st, https://www.brookings.edu/research/meeting-the-automationchallenge-to-the-middle-class-and-the-american-project/, AZ]
Like a chess player who doesn’t look past his next move, those who worry that automation will cost jobs
tend to focus only on the initial replacement of certain jobs by robots or software in specific firms and industries. They
do not count new jobs that will be created in the process, in several ways: Firms engaged in producing,
marketing and implementing productivity-enhancing automation will continue to need more software
programmers, engineers, psychologists, linguists and others needed for these functions. While firms use automation to
replace certain kinds of workers, automation requires other complementary skills, or individuals trained in data
visualization, in reasoning and collaborative skills to think “Big,” and in how to use automation to design and deliver new
products and services. As automation drives down the costs of all kinds of goods and services, people buy more of
them, generating more jobs in the process for firms adopting automation. Perhaps most important, the cost savings from
automation do not disappear into thin air, but rather get spent on other goods and especially services – health
care, education, leisure, travel and entertainment – that will need more people to make and deliver them.
These four effects have worked in combination in the past to ensure that other life-changing technologies (think electricity
and computers) for over two centuries in the U.S. and increasingly around the world have continued, in the absence of cyclical downturns, to
generate enough jobs to replace those no longer needed. There is no reason for believing that continued
automation in the future – which will fundamentally change our economy and society in a combination of predictable and
unexpected ways – will be any different. Indeed, as I write these words, in June 2018, while many continue to worry that
automation will destroy jobs, the U.S. economy continues to generate more of them. The unemployment
rate is down to near-historic lows while more people who had quit looking for work during and after the Great Recession have rejoined the
labor force (although because each of these measures is imperfect, there is probably some slack still left in the economy). Nonetheless, the
constant of change presents huge challenges now and in the future: to generate a healthy
supply of “good” jobs and careers paying
at least “middle class” wages and to smooth the transition of people whose jobs are eliminated by automation to take
advantage of these other opportunities. A recent study by economists at the Organization for Economic Cooperation and Development (OECD)
warns that future automation could displace an average of 14 percent of jobs in OECD countries (though this estimate varies a lot across
countries). The study, like others, also projects automation’s impact to be much greater on jobs requiring low skill than those with higher skills,
thus aggravating income inequality, without countervailing public policies – which is the real threat and challenge that automation poses, not
the mass unemployment feared by technology pessimists.
More jobs---best and most recent data
Noah Smith 18 – Noah Smith is a Bloomberg Opinion columnist. He was an assistant professor of finance at Stony Brook University. ["As
Long as There Are Humans, There Will Be Jobs", 3-23-2018, Bloomberg, https://www.bloomberg.com/view/articles/2018-03-23/robots-won-ttake-all-jobs-because-humans-demand-new-things, AZ]
So far, the best known study seems to be a 2017 paper by Daron Acemoglu and Pascual Restrepo of MIT. Acemoglu
and Restrepo
find that between 1990 and 2007, places with more robots lost more jobs and saw lower wages. Much of the press
has picked up on this study, and taken it as evidence that automation really is bad for workers. But there are
big caveats to Acemoglu and Restrepo’s paper. The kind of robots they look at constitute only a small
fraction of the automation technologies being deployed across the world today. Economists from the Economic Policy
Institute looked closely at Acemoglu and Restrepo’s results and found that investment in computers is associated with job
gains rather than losses. An accurate picture requires a very general definition of automation. Economists Katja Mann and Lukas
Puttmann of the University of Bonn have a new paper in which they observe the march of automation-related technology by looking
at patent records. The authors use text algorithms to classify patents into automation patents and others, using their broad definition of “a
device that carries out a process independently.” This includes things like automated taco machines and hair dye applicators, but not hand-held
scanners. It’s not clear that this is the best way of defining automation -- after all, using a hand-held scanner could involve only a little more
human input than pressing the button to start an automated taco machine. But since there’s no unique and satisfying definition of automation,
Mann and Puttmann’s method is probably as good as most. They find, unsurprisingly, that the share
of patents related to
automation has climbed steeply -- from 25 percent in 1976 to 67 percent in 2014. The authors report that this increase
in automation technology has not led to the loss of jobs overall -- in fact, probably the opposite. By linking
patents with industries and industries with locations, they purport to measure the statistical effect of automation patents on
local employment. They find that over a five-year period, automation patents routinely led to an increase in total employment as a
percent of population. Assuming Mann and Puttmann have defined automation right, correctly linked it to specific locations, and chosen the
right time period over which to study the impact, this means that automation
is creating jobs. That could be because humans
continue to find new tasks to perform in order to complement new machines, or it could be because automation leads
to a boom that increases local labor demand. Either way, this research represents an important counterpoint to Acemoglu and Restrepo’s
paper. There is one caveat, though -- Mann and Puttmann find that automation is associated with job loss in the manufacturing industry. Even
as productivity in manufacturing has risen, demand for manufactured goods has not kept pace -- hence, workers in that industry have been
replaced rather than complemented. A paper by economist James Bessen argues that this is a universal pattern. When
an industry is
young, automation doesn’t displace workers, because people keep buying more and more of that industry’s products. But
when people eventually have enough of something -- couches, televisions, etc. -- automation can no longer increase an industry’s aggregate
size, and starts displacing workers instead. This model implies that as long as we keep inventing new products and services, automation isn’t
going to make humanity obsolete. Only if the human race runs out of new desires will the robots take our jobs. So far, that shows no signs of
happening. The prospect of automation threatening human usefulness remains firmly in the realm of science fiction.
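To make the Mann and Puttmann approach described above more concrete, here is a minimal keyword-based sketch of how patent text can be sorted into "automation" and "other" categories. The keyword list, helper function, and sample titles are hypothetical illustrations only; they are not the authors' actual classifier, which applies broader text algorithms to full patent records.

```python
# Toy illustration of text-based patent classification, loosely in the spirit of
# the Mann and Puttmann study discussed above. The keywords and samples below
# are assumptions for illustration, not the authors' actual method or data.

AUTOMATION_KEYWORDS = {
    "automatic", "automated", "autonomous",
    "without human intervention", "carries out a process independently",
}

def is_automation_patent(text: str) -> bool:
    """Return True if the patent text mentions any automation-related keyword."""
    lowered = text.lower()
    return any(keyword in lowered for keyword in AUTOMATION_KEYWORDS)

# Hypothetical titles echoing the card's examples (taco machines vs. hand-held scanners).
samples = [
    "Automated apparatus for assembling and dispensing tacos",
    "Hand-held optical scanner with trigger-operated read switch",
]

for title in samples:
    label = "automation" if is_automation_patent(title) else "other"
    print(f"{label}: {title}")
```

A real implementation would need to handle stemming and context (the card notes that using a hand-held scanner may involve little more human input than starting a taco machine), which is why the card treats any such definition as imperfect.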
Environment
Climate Change
Automation slows down climate change---climate models, climate-friendly tech.
Bernard Marr 18 – Internationally best-selling business author, keynote speaker and strategic advisor to companies and governments.
[“The Amazing Ways We Can Use AI To Tackle Climate Change”, February 21st, https://www.forbes.com/sites/bernardmarr/2018/02/21/theamazing-ways-we-can-use-ai-to-tackle-climate-change/2/#1396dde92756, AZ]
While there are still some on the earth who claim climate change is a farce, the majority of us believe we need to throw everything possible
into slowing down or solving the problem. Artificial intelligence (AI) and machine learning are two tools in our climate-change-
halting toolbox. The more we utilize AI and machine learning technology to help us understand our current reality, predict future
weather events and create new products and services to minimize our human impact and improve our chances of saving lives, creating a healthier world and making businesses more efficient, the better chance we have
to stall or even reverse the climate change trajectory we’re on. Here are just a few of the ways AI and machine learning
are helping us tackle climate change. Climate Study: A Big-Data Problem Machines can analyze the flood of data that is generated every day
from sensors, gauges and monitors to spot patterns quickly and automatically. By looking at data
about the changing conditions of the world’s land surfaces that is gathered by NASA and aggregated at Landsat, it provides a very accurate picture of how the world is changing. The more accurate we’re able to be at the current status of our climate, the better our climate models will be. This information can be used to identify our biggest vulnerabilities and risk zones. This
knowledge from climate scientists can be shared with decision-makers so they know how to respond to the impact of climate change—severe
weather such as hurricanes, rising sea levels and higher temperatures. Developing Better Solutions Artificial intelligence and
deep
learning can help climate researchers and innovators test out their theories and solutions about how to reduce air
pollution and other climate-friendly innovations. One example of this is the Green Horizon Project from IBM that analyzes
environmental data and predicts pollution as well as tests “what-if” scenarios that involve pollution-reducing tactics. By using the
information provided by machine learning algorithms, Google was able to cut the amount of energy it used
at its data centers by 15%. Similar insights can help other companies reduce their carbon footprint. Green Initiatives
While businesses and manufacturing might contribute significantly to greenhouse gas levels, it’s still imperative that each citizen commits to
reducing their impact as well. The easier we make green initiatives for each person, the higher
the adoption rate and the more
progress we make to save the environment. Artificial intelligence and machine learning innovations can help create products
and services that make it easier to take care of our planet. There are several consumer-facing AI devices such as smart thermostats (which
could save up to 15% on cooling annually for each household) and irrigation systems (which could save up to 8,800 gallons of water per home
per year) that help conserve resources. Everyone doing their part over time will add up. Better Weather Event Predictions The damage to
human lives and property can be reduced if there are earlier warning signs of a catastrophic weather event. There has been significant progress
in using machine-learning algorithms that were trained on data from other extreme weather events to identify tropical cyclones and
atmospheric rivers. The earlier warning that governments and citizens can get about severe weather, the better they are able to respond and
protect themselves. Machines
are also being deployed to assess the strengths of models that are used to
investigate climate change by reviewing the dozens of them that are in use and extracting intelligence from them. They also help
predict how long a storm will last and its severity. Since machines can’t tell you “how” they arrived at their predictions or decisions, most climate
professionals don’t feel comfortable relying on only what the machines suggest will happen, but use machine insight along with their own
professional analysis to complement one another. Climate
change is a gargantuan problem and its complexity is
exacerbated by the many people and players involved, from divergent worldwide government entities to profit-driven corporations and individuals who aren’t always open to change. Therefore, the faster and smarter we can become through the use of AI and machine learning, the higher our probability of success in at least slowing down the damage
caused by climate change.
Climate change is real and causes extinction
Zach Ruiter 17, environmental reporter for Now Toronto and Torontoist, citing 15,364 scientists from 184 countries in ‘World Scientists’
Warning to Humanity: A Second Notice’, 11-22-17, “Are we headed for near-term human extinction?” https://nowtoronto.com/news/are-weheaded-for-near-term-human-extinction/
A “warning to humanity” raising the spectre “of potentially catastrophic climate change... from burning
fossil fuels, deforestation and agricultural production – particularly from farming ruminants for meat consumption,” was
published in the journal BioScience last week. More than 15,000 scientists from 184 countries endorsed the caution, which
comes on the 25th anniversary of a letter released by the Union of Concerned Scientists in 1992, advising that “a great change in
our stewardship of the earth and the life on it is required, if vast human misery is to be avoided.” A quarter century on,
what gets lost in the dichotomy between climate change believers and deniers is that inaction and avoidance in our daily lives are forms of
denial, too. And what most
of us are collectively denying is the mounting evidence that points to a worst-case scenario unfolding of near-term human extinction. Exponential climate change In 2015, 195 countries signed the
Paris Climate Agreement to limit the rise in global temperature to below 2 degrees Celsius to avoid dangerous climate change. But
none of the major industrialized countries that signed the agreement are currently on track to meet the nonbinding targets. The Trump administration has indicated the United States will withdraw from the agreement entirely. In July, a study in
the peer-reviewed journal, Proceedings Of The National Academy Of Sciences Of The United States Of America, claimed “biological
annihilation via the ongoing sixth mass extinction” is underway. And that “all signs point to ever more
powerful assaults on biodiversity in the next two decades, painting a dismal picture of the future of life,
including human life,” the study states. According to scientists, the majority of previous mass extinctions in the
geologic record were characterized by abrupt warming of 6 to 7 degrees Celsius. As recently as
2009, British government scientists warned of a possible catastrophic 4 degrees Celsius global temperature
increase by 2060. As Howard Lee wrote in the Guardian in August, “Geologically fast build-up of greenhouse gas linked
to warming, rising sea-levels, widespread oxygen-starved ocean dead zones and ocean acidification are
fairly consistent across the mass extinction events, and those same symptoms are happening today as a
result of human-driven climate change.” Runaway climate change is non-linear. Shifts can be
exponential, abrupt and massive due to climate change “feedbacks,” which can amplify and diminish the
effects of climate change. Here are five you need to know about: 1. Climate lag Temperature increases lag by about a
decade, according to NASA’s Earth Observatory. “Just as a speeding car can take some time to stop after the driver hits the brakes, the
earth’s climate systems may take a while to reflect the change in its energy balance.” According to a NASA-led study released in July 2016,
“Almost
one-fifth of the global warming that has occurred in the past 150 years has been missed by historical
records due to quirks in how temperatures were recorded.” Adding the climate lag to the current level of
global temperature increase would take us past the 2-degree Paris Agreement climate target within a decade. 2. Ice-free Arctic Dr. Peter Wadhams of the Polar Ocean Physics Group at Cambridge University told The Independent more than a year ago that the
central part of the Arctic and the North Pole could be ice-free within one to two years. Not only will
melting Arctic sea ice raise global sea levels, it will also allow the earth to absorb more heat from the
sun because ice reflects the sun’s rays while blue open water absorbs it. One study in the Proceedings Of The National Academy
Of Sciences Of The United States Of America estimates the extra heat absorbed by the dark waters of the Arctic in
summer would add the equivalent of another 25 per cent to global greenhouse gas emissions. 3. The 50
gigaton methane “burp” Dr. Natalia Shakhova of the University of Alaska Fairbanks’ International Arctic Research Center has
warned that a 50-gigaton burp, or “pulse,” of methane from thawing Arctic permafrost beneath the East Siberian
Arctic Shelf is “highly possible at any time.” Methane is a greenhouse gas much more potent than carbon
dioxide. A 50 gigaton burp would be the equivalent of roughly two-thirds of the total carbon dioxide
released since the beginning of the industrial era. 4. Accelerated ocean acidification The world’s oceans are carbon sinks that
sequester a third of the carbon dioxide released into the atmosphere. The carbon dioxide emitted in addition to that which is
produced naturally has changed the chemistry of seawater. The carbon in the oceans converts into carbonic acid, which
lowers pH levels and makes the water acidic. As of 2010, the global population of phytoplankton, the
microscopic organisms that form the basis of the ocean’s food web, has fallen by about 40 per cent since 1950.
Phytoplankton also absorb carbon dioxide and produce half of the world’s oxygen output. The
accelerating loss of ocean biodiversity and continued overfishing may result in a collapse of all species of
wild seafood by 2048, according to a 2006 study published in the journal Science. 5. From global warming to global dimming The
Canadian government recently announced plans to phase out coal-fired electricity generation by 2030. But at the same time as warming the
planet, pollution
from coal power plants, airplanes and other sources of industrial soot, aerosols and
sulfates is artificially cooling the planet by filling the atmosphere with reflective particles, a process known as global
dimming. Airplanes, for example, release condensation trails (or contrails) that form cloud cover that reflects the sun. The effects of
global dimming are best evidenced by a 2 degree Celsius temperature increase in North America after all
commercial flights were grounded for three days following the attacks of 9/11. The take-away Out of control
climate change means feedback mechanisms may accelerate beyond any capacity of human control.
The occurrences discussed in this article are five of some 60 known weather-related phenomena,
which can lead to what climate scientist James Hansen has termed the “Venus Syndrome,” where oceans would boil
and the surface temperature of earth could reach 462 degrees Celsius. Along the way humans could expect
to die in resource wars, starvation due to food systems collapse or lethal heat exposure. Given all that
remains unknown and what is at stake with climate change, is it irresponsible to rule out the possibility of
human extinction in the coming decades or sooner?
Artificial Intelligence solves environmental crises
Celine Herweijer 18 – PwC Partner in Advisory business, and WEF Young Global Leader. ASA Fellow and Doctor of Philosophy (Ph.D.)
focused in Earth Systems Modelling and Policy from Columbia University in the City of New York. [“8 ways AI can help save the planet”, January
24th, World Economic Forum, https://www.weforum.org/agenda/2018/01/8-ways-ai-can-help-save-the-planet/, AZ]
1. Autonomous and connected electric vehicles AI-guided autonomous vehicles (AVs) will
enable a transition to mobility on-
demand over the coming years and decades. Substantial greenhouse gas reductions for urban transport can be unlocked through
route and traffic optimisation, eco-driving algorithms, programmed “platooning” of cars to traffic, and autonomous ride-sharing services.
Electric AV fleets will be critical to deliver real gains. 2. Distributed energy grids AI
can enhance the predictability of demand and
supply for renewables across a distributed grid, improve energy storage, efficiency and load management, assist in the
integration and reliability of renewables and enable dynamic pricing and trading, creating market incentives. 3. Smart
agriculture and food systems AI-augmented agriculture involves automated data collection, decision-making and
corrective actions via robotics to allow early detection of crop diseases and issues, to provide timed nutrition to livestock, and
generally to optimise agricultural inputs and returns based on supply and demand. This promises to increase the resource efficiency of the
agriculture industry, lowering the use
of water, fertilisers and pesticides which cause damage to important ecosystems, and
increase resilience to climate extremes. 4. Next generation weather and climate prediction A new field of “Climate Informatics” is
blossoming that uses AI to fundamentally transform weather forecasting and improve our understanding of the
effects of climate change. This field traditionally requires high-performance, energy-intensive computing, but deep-learning networks can allow computers to run much faster and incorporate more complexity of the ‘real-world’ system into the calculations. In just over a decade, computational power and advances in AI will enable home computers to
have as much power as today’s supercomputers, lowering the cost of research, boosting scientific productivity and accelerating discoveries. AI
techniques may also help correct biases in models, extract the most relevant data to avoid data degradation, predict extreme events and be
used for impacts modelling. 5. Smart
disaster response AI can analyse simulations and real-time data (including social
media data) of weather events and disasters in a region to seek out vulnerabilities and enhance disaster preparation, provide early
warning, and prioritise response through coordination of emergency information capabilities. Deep reinforcement learning may one day be
integrated into disaster simulations to determine optimal response strategies, similar to the way AI is currently being used to identify the best
move in games like AlphaGo. 6. AI-designed intelligent, connected and livable cities AI could be used to simulate and automate the generation
of zoning laws, building ordinances and floodplains, combined with augmented and virtual reality (AR and VR). Real-time city-wide data on
energy, water consumption and availability, traffic flows, people flows, and weather could create an “urban dashboard” to optimise urban
sustainability. 7. A transparent digital Earth A real-time, open API, AI-infused, digital geospatial dashboard for the planet would
enable the monitoring, modelling and management of environmental systems at a scale and speed never before possible – from tackling illegal deforestation, water extraction, fishing and poaching, to air pollution, natural disaster response and smart agriculture. 8. Reinforcement learning for Earth sciences breakthroughs This nascent AI technique – which
requires no input data, substantially less computing power, and in which the evolutionary-like AI learns from itself – could soon evolve to
enable its application to real-world problems in the natural sciences. Collaboration with Earth scientists to identify the systems – from climate
science, materials science, biology, and other areas – which can be codified to apply reinforcement learning for scientific progress and discovery
is vital. For example, DeepMind co-founder, Demis Hassabis, has suggested that in materials science, a descendant of AlphaGo Zero could be
used to search for a room temperature superconductor – a hypothetical substance that allows for incredibly efficient energy systems. To
conclude, we live in exciting times. It is now possible to tackle
some of the world’s biggest problems with emerging
technologies such as AI. It’s time to put AI to work for the planet.
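The reinforcement-learning item above (point 8) turns on one property: the system improves from its own trial-and-error experience rather than from a labelled dataset. The short Python sketch below is only meant to make that loop concrete; the toy environment, reward values and hyperparameters are invented for illustration and come from neither the report nor DeepMind's work.

import random

# Minimal tabular Q-learning sketch: the agent learns purely from its own
# trial-and-error experience (no labelled input data). Everything here is a
# toy assumption: states 0..5 on a line, reward 1 only at the goal state 5.
N_STATES = 6
ACTIONS = [-1, +1]                       # move left or move right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1    # learning rate, discount, exploration

# Q-table: estimated long-run reward of taking each action in each state
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Toy environment dynamics: bounded line, reward only at the goal."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

for episode in range(500):
    state, done = 0, False
    while not done:
        if random.random() < EPSILON:    # explore occasionally
            action = random.choice(ACTIONS)
        else:                            # otherwise act greedily, breaking ties randomly
            action = max(ACTIONS, key=lambda a: (Q[(state, a)], random.random()))
        nxt, reward, done = step(state, action)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        # Q-learning update rule
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# After training, the greedy policy should point toward the goal from every state.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})

Real Earth-science applications would replace the toy environment with a simulator of the physical system and the lookup table with a neural network, but the learn-from-your-own-actions loop is the same.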
Transitioning to greener solutions requires automation---Paris alone can’t solve
Renee Cho 18 – is a staff blogger for the Earth Institute. Renee was Communications Coordinator for Riverkeeper, the Hudson River
environmental organization. She received the Executive Education Certificate in Conservation and Sustainability from the Earth Institute Center
for Environmental Sustainability. [“Artificial Intelligence—A Game Changer for Climate Change and the Environment”, State of the Planet, Earth
Institute at Columbia University, June 5th, http://blogs.ei.columbia.edu/2018/06/05/artificial-intelligence-climate-environment/, AZ]
As the planet continues to warm, climate
change impacts are worsening. In 2016, there were 772 weather and disaster events,
triple the number that occurred in 1980. Twenty percent of species currently face extinction, and that number could rise to 50
percent by 2100. And even if all countries keep their Paris climate pledges, by 2100, it’s likely that average global
temperatures will be 3˚C higher than in pre-industrial times. But we have a new tool to help us better manage the
impacts of climate change and protect the planet: artificial intelligence (AI). AI refers to computer systems that “can sense
their environment, think, learn, and act in response to what they sense and their programmed objectives,” according to a World Economic
Forum report, Harnessing Artificial Intelligence for the Earth. In India, AI
has helped farmers get 30 percent higher
groundnut yields per hectare by providing information on preparing the land, applying fertilizer and choosing sowing dates. In Norway, AI
helped create a flexible and autonomous electric grid, integrating more renewable energy. And AI has helped researchers
achieve 89 to 99 percent accuracy in identifying tropical cyclones, weather fronts and atmospheric rivers, the latter of which can
cause heavy precipitation and are often hard for humans to identify on their own. By improving weather forecasts, these types of programs can
help keep people safe. What are artificial intelligence, machine learning and deep learning? Artificial intelligence has been around since the late
1950s, but today, AI’s capacities are rapidly improving thanks to several factors: the vast amounts of data being collected by sensors (in
appliances, vehicles, clothing, etc.), satellites and the Internet; the development of more powerful and faster computers; the availability of
open source software and data; and the increase in abundant, cheap storage. AI can now quickly discern patterns that humans cannot, make
predictions more efficiently and recommend better policies. The holy grail of artificial intelligence research is artificial general intelligence,
when computers will be able to reason, abstract, understand and communicate like humans. But we are still far from that—it takes 83,000
processors 40 minutes to compute what one percent of the human brain can calculate in one second. What exists today is narrow AI, which is
task-oriented and capable of doing some things, sometimes better than humans can do, such as recognizing speech or images and forecasting
weather. Playing chess and classifying images, as in the tagging of people on Facebook, are examples of narrow AI. When Netflix and Amazon
recommend shows and products based on our purchasing history, they’re using machine learning. Machine learning, which developed out of
earlier AI, involves the use of algorithms (sets of rules to follow to solve a problem) that can learn from data. The more data the system
analyzes, the more accurate it becomes as the system develops its own rules and the software evolves to achieve its goal. Deep learning, a
subset of machine learning, involves neural networks made up of multiple layers of connections or neurons, much like the human brain. Each
layer has a separate task and, as information passes through, the neurons give it a weight based on its accuracy vis-à-vis the assigned task. The
final result is determined by the total of the weights. Deep learning enabled a computer system to figure out how to identify a cat—without any
human input about cat features— after “seeing” 10 million random images from YouTube. Because deep learning essentially takes place in a
“black box” through self-learning and evolving algorithms, however, scientists often don’t know how a system arrives at its results. Artificial
intelligence is a game changer Microsoft believes that artificial intelligence, often encompassing machine learning and deep learning, is a
“game changer” for climate change and environmental issues. The company’s AI for Earth program has committed $50
million over five years to create and test new applications for AI. Eventually it will help scale up and commercialize the most promising projects.
Columbia University’s Maria Uriarte, a professor of Ecology, Evolution and Environmental Biology, and Tian Zheng, a statistics professor at the
Data Science Institute, received a Microsoft grant to study the effects of Hurricane Maria on the El Yunque National Forest in Puerto Rico.
Uriarte and her colleagues want to know how tropical storms, which may worsen with climate change, affect the distribution of tree species in
Puerto Rico. Hurricane Maria’s winds damaged thousands of acres of rainforest; however, the only way to determine which tree species were
destroyed and which withstood the hurricane at such a large scale is through the use of images. In 2017, a NASA flyover of Puerto Rico yielded
very high-resolution photographs of the tree canopies. But how is it possible to tell one species from another by looking at a green mass from
above over such a large area? The human eye could theoretically do it, but it would take forever to process the thousands of images. The team
is using artificial intelligence to analyze the high-resolution photographs and match them with Uriarte’s data—she has mapped and identified
every single tree in given plots. Using the ground information from these specific plots, AI can figure out what the various species of trees look
like from above in the flyover images. “Then we can use that information to extrapolate to a larger area,” explained Uriarte. “We use the plot
data both to learn [i.e. to train the algorithm] and to validate [how well the algorithm is performing].” Understanding
how the
distribution and composition of forests change in response to hurricanes is important because when forests are
damaged, vegetation decomposes and emits more CO2 into the atmosphere. As trees grow back, since they are
smaller, they store less carbon. If climate change results in more extreme storms, some forests will not recover, less carbon will be stored, and
more carbon will remain in the atmosphere, exacerbating global warming. Uriarte says her work could not be done without artificial
intelligence. “AI is going to revolutionize this field,” she said. “It’s becoming more and more important for everything that we
do. It allows us to ask questions at a scale that we could not ask from below. There’s only so much that one can do [on the ground] … and then
there are areas that are simply not accessible. The flyovers and the AI tools are going to allow us to study hurricanes in a whole different way.
It’s super exciting.” Another project, named Protection Assistant for Wildlife Security (PAWS) from the University of Southern California, is using
machine learning to predict where poaching may occur in the future. Currently the algorithm analyzes past ranger patrols and poachers’
behavior from crime data; a Microsoft grant will help train it to incorporate real-time data to enable rangers to improve their patrols. In
Washington State, Long Live the Kings is trying to restore declining steelhead and salmon populations. With a grant from Microsoft, the
organization will improve an ecosystem model that gathers data about salmon and steelhead growth, tracks fish and marine mammal
movements, and monitors marine conditions. The model will help improve hatchery, harvest, and ecosystem management, and support habitat
protection and restoration efforts. How AI is used for energy AI
is increasingly used to manage the intermittency of
renewable energy so that more can be incorporated into the grid; it can handle power fluctuations and improve energy
storage as well. The Department of Energy’s SLAC National Accelerator Laboratory operated by Stanford University will use machine learning
and artificial intelligence to identify vulnerabilities in the grid, strengthen them in advance of failures, and restore power more quickly when
failures occur. The system will first study part of the grid in California, analyzing data from renewable power sources, battery storage, and
satellite imagery that can show where trees growing over power lines might cause problems in a storm. The goal is to develop a grid that can
automatically manage renewable energy without interruption and recover from system failures with little human involvement. Wind
companies are using AI to get each turbine’s propeller to produce more electricity per rotation by
incorporating real time weather and operational data. On large wind farms, the front row’s propellers create a wake that decreases the
efficiency of those behind them. AI will enable each individual propeller to determine the wind speed and direction coming from other
propellers, and adjust accordingly. Researchers at the Department of Energy and National Oceanic and Atmospheric Administration (NOAA) are
using AI to better understand atmospheric conditions in order to more accurately project the energy output of wind farms. Artificial
intelligence can enhance energy efficiency, too. Google used machine learning to help predict when its data centers’ energy was
most in demand. The system analyzed and predicted when users were most likely to watch data-sucking YouTube videos, for example, and
could then optimize the cooling needed. As a result, Google reduced its energy use by 40 percent. Making cities more livable and sustainable AI
can also improve energy efficiency on the city scale by incorporating data from smart meters and the Internet of Things (the
internet of computing devices that are embedded in everyday objects, enabling them to send and receive data) to forecast energy demand. In
addition, artificial intelligence systems can simulate potential zoning laws, building ordinances, and flood plains to help with urban planning and
disaster preparedness. One vision for a sustainable city is to create an “urban dashboard” consisting of real-time data on energy and water use
and availability, traffic and weather to make cities more energy efficient and livable. In China, IBM’s Green Horizon project is using an AI
system that can forecast air pollution, track pollution sources and produce potential strategies to deal
with it. It can determine if, for example, it would be more effective to restrict the number of drivers or close certain power plants in order to
reduce pollution in a particular area. Another IBM system in development could help cities plan for future heat waves. AI would simulate the
climate at the urban scale and explore different strategies to test how well they ease heat waves. For example, if a city wanted to plant new
trees, machine learning models could determine the best places to plant them to get optimal tree cover and reduce heat from pavement. Smart
agriculture Hotter temperatures will have significant impacts on agriculture as well. Data from sensors in the field that monitor crop moisture,
soil composition and temperature help AI improve production and know when crops need watering. Incorporating this information with that
from drones, which are also used to monitor conditions, can help increasingly automatic AI
systems know the best times to
plant, spray and harvest crops, and when to head off diseases and other problems. This will result in increased efficiency,
enhanced yields, and lower use of water, fertilizer and pesticides. Protecting the oceans The Ocean Data Alliance is working with machine
learning to provide data from satellites and ocean exploration so that decision-makers can monitor shipping,
ocean mining, fishing, coral bleaching or the outbreak of a marine disease. With almost real time data, decision-makers and authorities
will be able to respond to problems more quickly. Artificial intelligence can also help predict the spread of invasive species,
follow marine litter, monitor ocean currents, keep track of dead zones and measure pollution levels. The Nature Conservancy is partnering with
Microsoft on using AI to map ocean wealth. Evaluating the economic value of ocean ecosystem services—such as seafood harvesting, carbon
storage, tourism and more—will make better conservation and planning decisions possible. The data
will be used to build models
that consider food security, job creation and fishing yields to show the value of ecosystem services under differing conditions. This
can help decision-makers determine the most important areas for fish productivity and conservation efforts, as well as the tradeoffs of
potential decisions. The project already has maps and models for Micronesia, the Caribbean, Florida, and is expanding to Australia, Haiti, and
Jamaica. More sustainable transport on land As vehicles become able to communicate with each other and with the infrastructure, artificial
intelligence will help drivers avoid hazards and traffic jams. In Pittsburgh, an artificial intelligence system incorporating
sensors and
cameras that monitors traffic flow adjusts traffic lights when needed. The systems are functioning at 50 intersections with plans for
150 more, and have already reduced
travel time by 25 percent and idling by more than 40 percent. Less idling, of course,
means fewer greenhouse gas emissions. Eventually, autonomous AI-driven shared transportation systems may replace personal
vehicles. Better climate predictions As the climate changes, accurate projections are increasingly important. However,
climate models often produce very different predictions, largely because of how data is broken down into discrete parts,
how processes and systems are paired, and because of the large variety of spatial and temporal scales. The Intergovernmental Panel on Climate
Change (IPCC) reports are based on many climate models and show the range of predictions, which are then averaged out. Averaging them out,
however, means that each climate model is given equal weight. AI is helping to determine which models are more reliable by
giving added weight to those whose predictions eventually prove to be more accurate, and less weight to those performing poorly. This will
help improve the accuracy of climate change projections. AI and deep learning are also improving weather forecasting and the prediction of
extreme events. That’s because they can incorporate much more of the real-world complexity of the climate system, such as atmospheric and
ocean dynamics and ocean and atmospheric chemistry, into their calculations. This sharpens the precision of weather and climate modeling,
making simulations more useful for decision-makers. AI has many other uses AI can help to monitor ecosystems and wildlife and their
interactions. Its fast processing speeds can offer almost real-time satellite data to track illegal logging in forests. AI can monitor drinking water
quality, manage residential water use, detect underground leaks in drinking water supply systems, and predict when water plants need
maintenance. It can also simulate weather events and natural disasters to find vulnerabilities in disaster planning, determine which strategies
for disaster response are most effective, and provide real-time disaster response coordination.
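One mechanism in the card above is easy to make concrete: weighting climate models by how well their past predictions verified, rather than averaging them equally. The Python sketch below is a minimal illustration of that arithmetic; the observations, hindcasts and end-of-century projections are entirely hypothetical numbers, and published methods use more careful skill metrics and account for model interdependence.

import numpy as np

# Hypothetical past warming observations (deg C) and three models' hindcasts
observed = np.array([0.20, 0.30, 0.45, 0.50, 0.65])
hindcasts = np.array([
    [0.25, 0.35, 0.40, 0.55, 0.70],   # model A: decent
    [0.10, 0.15, 0.20, 0.25, 0.30],   # model B: systematically too cool
    [0.22, 0.31, 0.44, 0.52, 0.63],   # model C: very close
])
projections_2100 = np.array([3.2, 1.8, 3.0])   # each model's hypothetical projection

# Skill weight: inverse of mean-squared hindcast error, normalised to sum to 1
mse = ((hindcasts - observed) ** 2).mean(axis=1)
weights = (1.0 / mse) / (1.0 / mse).sum()

print(f"equal-weight projection: {projections_2100.mean():.2f} deg C")
print(f"skill-weight projection: {(weights * projections_2100).sum():.2f} deg C")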
BEES
Machine learning and AI key to solving bee crisis
Innovation Enterprise 18 – [“Solving The Bee Crisis With Machine Learning”, June 28th, https://channels.theinnovationenterprise.com/articles/solving-the-bee-crisis-with-machine-learning, AZ]
Without the natural pollination bees provide, global food supply would deplete so rapidly, the effects would be
disastrous. They’re an essential part of our ecosystem, a part of a delicate tapestry that works to naturally pollinate our crops.
According to the British Bee Keepers Association, 1 in 3 mouthfuls of food we eat depend on bees. So, it’s pretty important that we work to
save these amazing creatures. BBC Earth Unplugged tells us over 70%
of our crops are pollinated by bees, with honeybees being
the biggest contributors to this. Not to mention, they're responsible for $30 billion a year in crops for the economy.
Put simply, life without bees would not be sustainable. But, why are their populations declining so rapidly? And what is Big
Data and Artificial Intelligence doing to stop numbers from further decline? Bees are under threat from many things. One of the most common
is the Varroa Destructor - a parasite that feeds on bees and has the capability of destroying entire colonies. As is the nature of
most parasites, they reproduce quickly and, measuring a mere 1mm in length, they’re extremely difficult for beekeepers to detect. This is where
the Bee Scanning app comes in. The app uses computer vision to help beekeepers detect early signs of the dangerous Varroa pests in their
colonies. Using
machine learning and object recognition, the red mites stand out on the bees’ bodies and an
algorithm detects them in the images taken by the beekeepers. Whilst it would be ideal to see the decline come to a halt, RoboBee is
working to aid pollination by using mini-drones called RoboBees. Sticky horsehair underneath the drone collects pollen
particles as they fly and rub off onto the next flower. They are currently manually controlled, but the team has reported they’re
developing autonomous drones using AI. There have also been dramatic changes in natural landscapes, with the loss of habitat
being to blame. Global warming has also disrupted pollination, with hibernation times and flowers blooming not always matching up.
Monitoring bees’ movements in accordance with their environment, the Bee Smart device allows beekeepers to remotely monitor their hives.
The device uses sensors to track colony activity, temperature and humidity in the hive, and even mating patterns. The data collected is sent to
the beekeeper through the cloud, via Bee Smart who process and analyze it. The use
of big data by beekeepers will allow for a
more proactive approach to beekeeping, even remotely. Bee populations are also in jeopardy from harmful pesticides called
neonicotinoids, and fungicides. The use of these chemicals is having devastating effects on bees. With firm data backing the fatal effects of
certain pesticides, we can look to going some way to opening a dialogue with the farmers, firms, and producers of these chemicals to seek out
alternatives. Or, in other cases take action to ban these totally. The Global Initiative For Honeybee Health is working on Smart Sensors that are
fitted onto bees like little backpacks. These sensors monitor
and collect data on how the bees interact with their
environment. This information is then processed to see how disease, diet, weather, pesticides, and pollution are affecting colonies.
Antennas are installed on entry sections to beehives so that when bees come and go, the sensor backpacks send data back via radio. This gives
researchers a better insight into modeling bees’ movements and noting changes in behaviors, such as their ability to pollinate. As
the
world faces many challenges, start-ups like the ones above are investing in making the world a better place for everyone by
using big data, machine learning, and artificial intelligence. It’s through the innovation of technology that our
ecosystems and workings of the natural world can be conserved.
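The mite-detection mechanism described in the card (red Varroa mites visually stand out against bee bodies) can be roughed out with a colour threshold before any learned detector is involved. The sketch below is only that rough heuristic, not the BeeScanning app's actual model; the image file name and HSV thresholds are assumptions for illustration, and a production system would use a trained object-recognition network instead.

import cv2

# Hypothetical photo of a hive frame taken by a beekeeper
img = cv2.imread("hive_frame.jpg")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Reddish-brown hue band (illustrative values only)
mask = cv2.inRange(hsv, (0, 120, 60), (12, 255, 200))

# OpenCV 4.x returns (contours, hierarchy); each large blob is a candidate mite
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
candidates = [c for c in contours if cv2.contourArea(c) > 20]
print(f"{len(candidates)} candidate mite regions flagged for review")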
Bee collapse wrecks food supply, bio-D, and turns the economy
Amélie Heuër and Carolin Ehrensperger, 2015. Amélie Heuër worked for SEED from 2009 to 2016 as Head of Research. Before
joining SEED Amélie worked for several years on marine conservation and coastal resources management in the Philippines, where she
conducted and managed several socio-economic and livelihood research projects. While working for the NGO Coral Cay Conservation, she was
also in charge of community development and initiated a few livelihood development projects. Previously she worked in London as Deputy
Manager for the Author’s Licensing and Collecting Society to protect and promote the rights of authors. Amélie has a Masters Degree (MSc) in
Human and Development Geography, which she studied at the University of Amsterdam (UvA) and the University College of London (UCL).
Carolin Ehrensperger is a Research Analyst at SEED's hosting partner adelphi research. She has supported SEED since 2014 and is involved with
the coordination and implementation of the SEED Awards and the SEED Capacity Building. At adelphi research she works on international
projects in the area of sustainability entrepreneurship, corporate responsibility and inclusive business models. Before joining adelphi, Carolin
has gained experience in a variety of organisations, such as the International Institute for Sustainable Development (IISD), CUTS International
Geneva, the Munich Re Foundation, the Kiel Institute for the World Economy, and the German Federal Foreign Office, and specifically
concentrated on the areas of sustainable development and on the role of the private sector. Carolin studied economics at the Ludwig-Maximilians-Universität München, Germany, and the Universitetet i Oslo, Norway. She also holds a Masters in International Affairs focusing on
sustainable development from the Graduate Institute of International and Development Studies, Geneva, Switzerland. 4/10/15. “How the
business of bees contributes to sustainable development” https://www.seed.uno/blog/articles/1702-how-the-business-of-bees-contributes-tosustainable-development.html Accessed 7/6/18 //WR-NCP
Bees play a vital part in our natural ecosystems as they are responsible for the pollination of many
fruit, nuts, vegetables and other species. For one thing, 100 crop species provide 90% of food worldwide
and of those 71 species are pollinated by bees. Extremely high mortality rates of bees in Europe, America
and Asia however are putting the balance at risk. For instance, each decade between 1-10% of the world’s
biodiversity is lost, one of the factors being the decreasing bee population. (UNEP) Africa is the only continent where
the bee population remains stable and unaffected from emerging diseases. Yet, most African countries still import the majority of their honey
for their domestic market. So why is the supply so low in Africa? Lack of knowledge about sustainable beekeeping methods, low honey yields,
complicated market access for beekeepers and over-exaggerated export regulations hinder the honey bucket from overflowing. (FIBL) Creating sustainable
practices in Africa Currently most African honey comes from ‘honey hunting’ rather than beekeeping. Trees with bee nests are either cut down or fire
and smoke are used to get rid of the bees before the honey is harvested. Both methods destroy the entire colony, and smoking out bees can lead to wild fires. (FAO)
In addition, the honey is generally boiled for conservation; which results in the loss of its nutritious value. Nevertheless, harvesting of honey can be turned into a
sustainable business with fairly few resources: with the right knowledge, skills and tools beehives can generally be made from local resources; land ownership is not
essential, as the hives only take up little space; and bees do not need to be fed as they collect nectar and pollen from the surrounding areas. At the same time, there
is an increasing awareness that beekeeping should be centred around the needs of the bees, using indigenous bees and techniques appropriate for each location
and without the use of harmful pesticides in order to achieve truly sustainable practices. Training is key to success Unsurprisingly, smallholder farmers are generally
keen to take up beekeeping as it requires few resources and has the potential to provide a stable source of income. However, what is needed for success is
knowledge on the making of beehives, on locations to set them up, and on harvesting methods. Two 2014 SEED Winners, which are realising the potential of
beekeeping for sustainable development, have therefore made training a key component of their business models. Honey Products
Industries, a start-up founded in 2011 in Malawi, trains young people to own and operate business outlets located in specific geographical locations via a franchise
model. These outlet managers provide beekeeping equipment and training to local smallholder farmers. The raw honey is collected, tested for quality and
purchased by the outlets. The honey is then transported to the factory for processing, where it is labelled and finally distributed to community stores’ shelves. In the
remote community of Mutondu in northern Mozambique Pro-Sofala Verde enables families from this community to become beekeepers, with so far 31 community
members and their families trained. Local beekeepers act as ‘honey mentors’ and provide expert advice on bee maintenance and hygienic harvesting techniques.
Pro-Sofala Verde buys the high grade honey from the community at above market prices, which is then processed, packaged and marketed. Reaching Triple Bottom
Line (TBL) impacts Environmental impacts: In both cases, the SEED Winners are contributing to the conservation of biodiversity by preserving and even increasing
the bee population. The activities further contribute to natural resource conservation as trees are no longer felled or burned for honey hunting. Moreover,
communities are sensitised to the value of nature. Economic impacts: Beekeeping
also provides a sustainable source of income, in
areas where often few income opportunities exist. In the case of Honey Product Industries not only are income opportunities generated
for smallholder farmers, but also to young entrepreneurs, who are trained to run their outlets in remote locations.
Agriculture
Machine learning systems making farming efficient while preventing environmental
catastrophes
Evan Fraser and Sylvain Charlebois 16 – Fraser is Canada research chair and professor of geography. Charlebois is professor of
food distribution and policy. They work at the University of Guelph, Canada and are affiliated with the university’s Food Institute. [“Automated
farming: good news for food security, bad news for job security?”, February 18th, The Guardian, https://www.theguardian.com/sustainablebusiness/2016/feb/18/automated-farming-food-security-rural-jobs-unemployment-technology, AZ]
Around the world, but especially in the developing world, food
and farming systems continue to rely on 20th century
technology. But this is changing. The same information technologies that brought us the internet and transformations in medicine are
now revolutionising farming. It’s a new era for agriculture and it’s taking off in at least two distinct areas. On the farm, technology is
changing the way farmers manage farmland and farm animals – such as the use of satellite driven geo-positioning
systems and sensors that detect nutrients and water in soil. This technology is enabling tractors, harvesters and planters
to make decisions about what to plant, when to fertilise, and how much to irrigate. As this technology progresses, equipment will ultimately be
able to tailor decisions on a metre-by-metre basis. Robots already do much of the harvesting of lettuce and tomatoes in our greenhouses. And
it’s even becoming feasible to place fitness trackers on farm animals to monitor their health and welfare. The dairy industry has been at the
vanguard of this where robotic milking and computer controlled feeding equipment allow for the careful management of individual animals
within a herd. A similar
tech revolution is happening with the genetics of the plants we grow and the animals
we raise. Genomic tools are on the cusp of allowing scientists to rapidly and inexpensively evaluate the genetic code of individual plants and
animals. This makes it much easier to identify individual plants and animals that are particularly robust or productive. This knowledge, in
combination with traditional breeding, can accelerate how quickly we improve the genetic potential of our crops and livestock. Scientists at UK
research institute the John Innes Centre, for example, are attempting to create a strain of barley that would make its own ammonium fertiliser
from nitrogen in the soil, something which could save farmers the cost of artificial fertilisers. Taken together, both farm and genome-scale
technologies are boosting the efficiency of modern farming, which is increasingly important to feed a
growing population set to reach almost 10 billion by 2050. But this is just the beginning. Many experts are looking forward to a future
where the Internet of Things (where physical objects such as vehicles, buildings and devices are connected to collect and exchange data) is
applied to food and farming to create an Internet of Living Things.
In this future, advanced sensors embedded in fields,
waterways, irrigation systems and tractors will combine with machine-learning systems, genome-identifying
devices and data dashboards to give rise to a generation of smart farming technology that will have the capacity to sense and respond to its
environment in a way that
maximises production while minimising negative impact.
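The metre-by-metre decision-making described above ultimately reduces to turning gridded sensor readings into per-cell input prescriptions. The Python sketch below shows the shape of that calculation for irrigation; the soil-moisture grid, target level and conversion factor are all invented for illustration, and a fielded system would forecast moisture with a learned model rather than react to a single snapshot.

import numpy as np

# Hypothetical volumetric soil moisture (%) on a small per-metre grid
moisture = np.array([
    [22, 25, 31, 35, 18],
    [20, 24, 30, 33, 17],
    [28, 29, 34, 36, 21],
    [19, 23, 27, 32, 16],
])

TARGET = 30.0          # desired moisture level (%), assumed
MM_PER_PERCENT = 1.5   # assumed mm of water to raise moisture by one point

deficit = np.clip(TARGET - moisture, 0, None)   # irrigate only the dry cells
prescription_mm = deficit * MM_PER_PERCENT      # per-cell irrigation depth

print(prescription_mm)
print(f"average depth across the field: {prescription_mm.mean():.1f} mm")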
Food shortages cause WWIII
Carolyn Heneghan 15, Reporter, citing UN experts, Global Harvest Initiative Report, 1/22/2015, Where food crises and global conflict
could collide, http://www.fooddive.com/news/where-food-crises-and-global-conflict-could-collide/350837/
World War III is unimaginable for many, but some experts believe that not only is this degree of global
conflict imminent, but it may be instigated not by military tensions, oil and gas, or nuclear threats, but
instead by, of all things, food.¶ As it stands, countries across the globe are enduring food crises, and the U.N.’s Food &
Agriculture Organization (FAO) estimates that about 840 million people in the world are undernourished, including the one
in four children under the age of 5 who is stunted because of malnutrition.¶ Assistant director-general of U.N. FAO Asia-Pacific Hiroyuki Konuma told Reuters that social and political unrest, civil wars, and terrorism could all
be possible results of food crises, and “world security as a whole might be affected.” Such consequences
could happen unless the world increases its output of food production 60% by mid-century. This includes
maintaining a stable growth rate at about 1% to have even a theoretical opportunity to circumvent severe shortages. These needs are
due to the growing global population, which is expected to reach 9 billion by 2050 while demand for
food will rise rapidly.¶ Where the problems lie¶ Exacerbating this issue is the fact that the world is spending less on agricultural
research, to the dismay of scientists who believe global food production may not sustain the increased demand. According to American
Boondoggle, “The pace of investment growth has slowed from 3.63 percent per year (after inflation) during 1950–69, to 1.79 percent during
1970–89, to 0.94 percent during 1990–2009.” Decreased growth in agricultural research and development spending has slowed across the
world as a whole, but it is even slower in high-income countries.¶ Water scarcity is another problem, including in major food-producing nations
like China, as well as climate change. Extreme weather events
are having a severe effect on crops, which have
been devastated in countries like Australia, Canada, China, Russia, and the U.S., namely due to floods
and droughts. An Intergovernmental Panel on Climate Change report recently warned that climate change may result in “a 2% drop each decade
of this century,” according to RT.¶ Rising food costs also contribute to poor food security across the world as
prices remain high and volatile. Higher food costs inhibit lower socioeconomic people’s access to food, which contributes to the
FAO’s disturbing figure of global malnutrition. In addition to an inability for people to feed themselves, poverty can also reduce food
production, such as some African farmers being unable to afford irrigation and fertilizers to provide their regions with food.¶ Still
another
issue for decreased food production is the fact that many farmers are turning crops like soy, corn, and
sugar into sources for biofuel rather than edible consumption, which means these foods are taken away
from people to eat.¶ Could these shortages lead to a major global conflict?¶ Studies suggest that the food crisis could begin as
early as 2030, just a short 15 years from now, particularly in areas such as East Asia and Sub-Saharan
Africa. Both regions have significant problems with domestic food production.¶ Some experts believe that,
to secure enough food resources for their populations, countries may go to war over the increasingly
scarce food supply. This could be due in part to warring parties blocking aid and commercial food
deliveries to areas supporting their enemies, despite the fact that such a practice breaks international
humanitarian law.¶ Conflict also leads to lack of food supply for populations as people become displaced and forced from their homes,
jobs, and income and thus cannot buy food to feed themselves. Displaced farmers are also unable to produce their normal crops, contributing
still more to food shortages in certain countries.¶ Food
insecurity is a major threat to world peace and could
potentially incite violent conflict between countries across the world. Thus, the U.N. and other governmental bodies
are desperately trying to find ways to solve the problem before it becomes something they cannot control.
AI mitigates environmental issues
Sarath Muraleedharan 18 – Electronics and Communication Engineer graduate and IT professional from Kerala, India. He completed
his primary and high school education in Qatar and graduated from Amrita School of Engineering, Tamil Nadu (India). [“Role of Artificial
Intelligence in Environmental Sustainability”, March 6th, https://www.ecomena.org/artificial-intelligence-environmental-sustainability/, AZ]
In recent years, the environmental
issues have triggered debates, discussions, awareness programs and public outrage that have catapulted interest in new technologies, such as Artificial Intelligence. Artificial Intelligence finds application in a wide array of environmental sectors, including resource conservation, wildlife protection, energy management, clean energy, waste management, pollution control and agriculture. Artificial Intelligence (also known as AI) is considered to
be the biggest game-changer in the global economy. With its gradual increase in scope and application, it is estimated that by 2030, AI will
contribute up to $15.7 trillion to the global economy, which is more than the current output of China and India combined. The UN Artificial
Intelligence Summit held in Geneva (2017) identified that AI
has the potential to accelerate progress towards a dignified life,
in peace and prosperity, for all people, and has suggested refocusing the use of this technology, which is responsible for self-driving cars
and voice/face recognition smart phones, on sustainable development and assisting global efforts to eliminate poverty and hunger, and to
protect the environment and conserve natural resources. Multitude of AI Applications Many organizations like Microsoft, Google and Tesla,
whilst pushing the boundaries for human innovations, have made considerable efforts in developing ‘Earth Friendly’ AI systems. For instance,
Google’s very own DeepMind AI
has helped the organization to curb their data center energy usage by 40 percent making
them more energy efficient and reducing overall GHG emissions. As data centers alone consume 3 percent of global
energy each year, development of such AI’s not only improve the energy efficiency but also assist in providing energy
access to remote communities, setting up microgrids and integrating renewable energy resources. Installation of smart grids in cities
can utilize artificial intelligence techniques to regulate and control parts of neighborhood power grid to deliver exactly the amount of electricity
needed, or requested from its dependents, against the use of conventional power grids that can be wasteful due to unplanned power
distribution. With AI-driven autonomous vehicles waiting to break into the automobile market, techniques like route optimization, eco-driving
algorithms and ride-sharing services would help in streamlining the carbon footprint and reducing the overall number of vehicles on the road.
Viewed on a macro scale, the emergence of smart buildings and the smart cities in which they are built can leverage built-in sensors to use
energy efficiently, and buildings and roads will also be constructed out of materials that work more intelligently. Taking a nod from natural
patterns, material scientists and architects have developed innovative building materials from natural resources, such as bricks made of
bacteria, cement that captures carbon dioxide, and cooling systems that use wind and sun. Solar
power is increasingly present
within cities and outside to supply larger urban areas. These are the first early steps towards sustainable infrastructure, cutting
costs and helping to make us environmentally conscious. Controlling industrial emissions and waste management is another
challenge that can be dealt with by advanced learning machines and smart networks that could detect leaks, potential hazards and deviations
from industrial standards and governmental regulations. For example, IoT technology was incorporated into several industrial ventures, from
refrigerators and thermostats and even retail shops. As scientists still struggle to predict climate changes and other potential environmental
hurdles or bottlenecks due to lack of algorithms for converting the collected useful data into required solutions, Microsoft’s AI for Earth, a 50
million dollar initiative, was announced in 2017 with the sole purpose to find solutions to various challenges related to climatic changes,
agriculture, water and biodiversity. Other similar AI infused Earth applications are iNaturalist and eBirds that collect data from its vast circle of
experts on the species encountered, which would help to keep track of their population, favorable ecosystems and migration patterns. These
applications have also played a significant role in the better identification and protection of fresh water and
marine ecosystems. There are various institutions, NGOs and start-ups that work to deliver smart agricultural solutions by implementing
fuzzy neural networks. Besides the use of both artificial and bio-sensor driven algorithms to provide a complete monitoring of the soil and crop
yield, there are technologies that can be used to provide predictive analytic models to track and predict various factors and variables that could
affect future yields. Berlin-based agricultural tech startup PEAT has developed a deep learning application called Plantix that reportedly
identifies potential defects and nutrient deficiencies in soil. Analysis is conducted by software algorithms which correlate particular foliage
patterns with certain soil defects, plant pests and diseases. AWhere and FarmShots, both United States-based companies,
use
machine learning algorithms in connection with satellites to predict weather, analyze crop sustainability
and evaluate farms for the presence of diseases and pests. Adaptive irrigation systems in which the land is automatically
irrigated based on the data collected from the soil via sensors by an AI system are also gaining wide popularity among farmers for their
important role in water management. Developments in the Middle East As more countries drastically shift towards the use of AI and other
advanced technologies, this enormous wave has hit the Middle East region too. The United Arab Emirates, Saudi Arabia and Qatar have shown
a promising commitment towards the development and implementation of technologies like information technology and digital
transformation, to improve the efficiency and effectiveness of the healthcare sector and to provide citizens with knowledge and skills to meet
the future needs of the labor market. By 2030, the Middle East countries are expected to be one of the major players in this field as the
volatility of oil prices have forced the economy to look for new sources for revenue and growth. With numerous untapped markets and sectors,
the future investments in AI in the MENA region are estimated to contribute to around 15 per cent of their combined GDP. It can also be
expected that with this rapid growth, the Governments will also consider a much more aggressive approach
towards using these
technologies for putting together an effective model for environmental sustainability. With many countries in the
Middle East strongly committed to protect the aquatic diversity of its surrounding waters, an intelligent tracking system could help to
prevent overfishing and contamination, and implement much more effective aquaculture techniques, innovations in sea
farming and better utilization and protection of freshwater resources. Future Outlook Researchers and scientists must ensure
that the data provided through Artificial Intelligence systems are transparent, fair and trustworthy. With an increasing demand of automation
solutions and higher precision data-study for environment related problems and challenges, more multinational companies, educational
institutions and government sectors need to fund more R&D of such technologies and provide proper standardizations for producing and
applying them. In addition, there is a necessity to bring in more technologists and developers to this technology. Artificial intelligence is steadily
becoming a part in our daily lives, and its impact can be seen through the advancements made in the field of environmental sciences and
environmental management.
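The smart-grid claim above (delivering "exactly the amount of electricity needed" rather than provisioning for the worst case) can be illustrated with a toy comparison. The demand curve and 5 percent reserve margin below are invented, and a real system would forecast demand with a learned model instead of assuming it is known perfectly.

import numpy as np

hours = np.arange(24)
# Hypothetical neighbourhood demand profile in MW, with morning and evening peaks
demand = (5
          + 3.0 * np.exp(-((hours - 19) ** 2) / 8.0)
          + 1.5 * np.exp(-((hours - 8) ** 2) / 6.0))

flat_dispatch = np.full(24, demand.max())   # conventional: provision for the peak all day
smart_dispatch = demand * 1.05              # forecast-led: demand plus a 5% reserve

print(f"over-generation, flat provisioning : {(flat_dispatch - demand).sum():.1f} MWh")
print(f"over-generation, forecast-led      : {(smart_dispatch - demand).sum():.1f} MWh")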
Oil
Peak oil will occur unless we transition to automation
Jason Bordoff 18 – a former special assistant to President Obama, is a professor of professional practice in international and public
affairs and founding director of the Center on Global Energy Policy at Columbia University. [“How AI Will Increase the Supply of Oil and Gas—
and Reduce Costs”, The Wall Street Journal, May 3rd, https://blogs.wsj.com/experts/2018/05/03/how-ai-will-increase-the-supply-of-oil-and-gasand-reduce-costs/?guid=BL-258B-8256&mod=searchresults&page=1&pos=1&dsk=y, AZ]
Global oil demand may peak, as climate policies and technologies like electric vehicles advance, but it won’t be
because we run out of oil. So what’s next? The coming wave of disruptive innovation in the energy sector is likely to be driven
by digital tools bringing together artificial intelligence, machine learning, data analytics, supercomputing and automation. In the
public’s imagination, AI’s impact on the energy sector has perhaps been most widely celebrated with visions of us all moving about in a fleet of
self-driving cars. AI
is also rightly touted as accelerating the shift to clean energy, for example by boosting the output of
renewables and energy efficiency, or by better integrating distributed renewable energy sources into the grid.
Yet less noticed in the public discussion about AI and other digital tools is how they could also transform more traditional
energy sectors, such as oil and gas, upending our current understanding of how much oil and gas can be
produced and at what cost. Digital innovation is an equal opportunity disrupter. AI will improve oil and gas production
rates and lower costs. With advances in quantum computing, machine learning and AI, tools can now be used to
troubleshoot underperforming wells, enhance reservoir modeling, carry out preventive maintenance before problems arise,
optimize well design, drilling and completion, and even use machines to carry out tasks on unmanned, automated drilling platforms and well
pads. The shale patch is well-suited to the application of new technologies given its shorter investment cycles. Shale oil break-even prices have
come down from around $70 per barrel in 2013 to $50 today, and Goldman Sachs projects they could fall $10 further with the application of
both today’s leading-edge technologies and new digital tools like AI. Offshore oil
and gas production will also benefit from the
digital age as unmanned and remotely operated production platforms substantially reduce costs and
improve operational safety. The International Energy Agency estimates digital technologies could boost the volume of oil and gas that
can be produced by around 5% and reduce costs by 10% to 20%. Many in the oil-and gas-industry believe that the potential
impact of digitalization is substantially bigger. Morgan Stanley sees digital technologies delivering cost declines not
seen since the industry’s Golden Decade from 1987 to 1997.
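The preventive-maintenance and "troubleshoot underperforming wells" use cases above are usually framed as anomaly detection on well sensor streams. A minimal sketch of that framing, using scikit-learn's IsolationForest on synthetic readings, is below; the feature choice, the numbers and the 2 percent contamination setting are all assumptions, not anything reported by the operators cited.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical per-well features: [tubing pressure (psi), temperature (C), vibration (g)]
healthy = rng.normal(loc=[2000, 80, 0.2], scale=[100, 3, 0.05], size=(300, 3))
failing = rng.normal(loc=[1500, 95, 0.6], scale=[100, 3, 0.05], size=(5, 3))
X = np.vstack([healthy, failing])

# Fit an unsupervised detector and flag the wells that look least like the rest
model = IsolationForest(contamination=0.02, random_state=0).fit(X)
flags = model.predict(X)                      # -1 = anomalous, 1 = normal
print("wells flagged for inspection:", np.where(flags == -1)[0].tolist())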
Oil scarcity causes conflict
LEA WINTER 16 – Columbia University in the City of New York. [“Fueling Oil Scarcity: Produced Scarcity and the Sociopolitical Fate of
Renewable Energy”, January 1st, https://jia.sipa.columbia.edu/fueling-oil-scarcity-produced-scarcity-sociopolitical-fate-renewable-energy, AZ]
Political Effects of Scarcity Oil influences political relations and is often manipulated – via production of scarcity – as an instrument of
political power. Politics constitute the main determinant of the volume of oil produced and how it is allocated.29 Even the oil company Exxon
has asserted that "peak
oil" will actually result from sociopolitical relations rather than from geological limits, including "government
politics, lack of access to existing resources, [and] competition from alternative energy sources."30 Although
this analysis may represent a method employed by the gas company to manipulate perceptions and thus encourage consumers and investors to
support companies’ access to oil sources, it highlights the heavy influence of politics on the availability of oil. British Petroleum (BP) has
expressed its beliefs that the peak oil dialogue derives from social and political limitations. In 2007, the company’s global vice president for
exploration stated at a conference that peak
oil is a "metaphor for a deeper anxiety about energy security in the
western world, rooted in politics and concern about climate change," rather than based on geological limits.31 As expressed by these oil
companies, political powers can manipulate the social experiences of scarcity to create fear about the imminent social realties of imagined peak
oil, thus motivating legislation to further limit production. This view is implicit in the anti-peak maxim that the "Stone Age didn’t end because of
a lack of stone."32 The political power conferred by oil is uneven, reflecting the uneven distribution of oil resources.33 The resources are
literally "embedded" in the "territorial framework of states" and are often considered the property of the states in which they are found.34 Oil
power is manipulated not only by corporate powers and private greed, but it is also regulated through public patrimony, state institutions,
and the agency of state actors – often in secrecy. Oil wealth facilitates secrecy among political powers, leading authoritarian
governments to depend upon oil revenue to placate the public as a means of preventing democratizing
revolts. Oil states are 50 percent more likely to be autocratic and more than twice as likely to have civil wars as non-oil states.35 These
political and military effects are correlated to statistics that these states are more secretive, more financially volatile, and bar women from
economic and political opportunities. As oil companies came to be owned by states, the scale of production, control over the source of
production, checks on stability, and secrecy throughout the process became warped. Control over oil
resources by authoritarian
governments provides autocrats with a mechanism for silencing dissent. If a government is primarily financed by
taxes, it is inherently constrained by the wills of its citizens. When it is funded by oil, though, it possesses independent revenue and becomes
less susceptible to public pressure. The secrecy that cloaks oil revenue enables dictators to remain in power by concealing
evidence of their greed and incompetence, and to deliver more benefits to citizens than the amount they collect in taxes would otherwise allow
them. Whereas non-oil autocracies generally become democratic over time through popular dissent, oil-fueled
dictatorships can
persist, reinforced by secrecy.36 Their control over oil and management of scarcity leads to the perpetuation of social and
political inequality. The regimes thus persist as dictatorships, and violent civil unrest becomes rampant. Insurgents are often
reluctant to agree to lay down their arms due to distrust of their government based on experience with its secrecy and dishonesty surrounding
inequitable distribution of oil revenues.37 The appearance of scarcity
is key to harnessing the political power of oil. This
power capitalizes upon fears surrounding limitations on access to oil, igniting political tensions and "resource wars."38
Ordinary consumers have felt the effects of political control over oil, especially, for example, during the oil embargo of 1973. National security
became equated with "energy security," and more specifically, oil security. The "oil weapon" seemed powerful enough to overwhelm "centuries
of Euro-American global domination." Tensions over the oil squeeze partially motivated the U.S. invasion of Iraq in
2003. Americans began to protest this political and military move, supporting a new theme emerging in world oil politics: "No Blood for Oil." A
new type of imperialism arose based on conquest for oil and the pursuit of control over the flows of oil, where local
stability and lives would be sacrificed in order to secure control over oil. Oil has become both a cause for and a tool
of political action, motivating attempts to control access to it and promoting threats of economic and social strangulation
through produced scarcity.
Cybersecurity
Automation solves cyberthreats
Cisco 18 – [“Cisco 2018 Annual Cybersecurity Report”, Cisco, 2018, https://www.cisco.com/c/dam/m/digital/elqcmcglobal/witb/acr2018/acr2018final.pdf?dtid=odicdc000016&ccid=cc000160&oid=anrsc005679&ecid=8196&elqTrackId=686210143d34494fa
27ff73da9690a5b&elqaid=9452&elqat=2, AZ]
Applying machine learning to the threat spectrum To
overcome the lack of visibility that encryption creates, and reduce
adversaries’ time to operate, we see more enterprises exploring the use of machine learning and artificial
intelligence. These advanced capabilities can enhance network security defenses and, over time, “learn” how
to automatically detect unusual patterns in web traffic that might indicate malicious activity. Machine learning
is useful for automatically detecting “known-known” threats—the types of infections that have been seen before (see Figure 3). But
its real value, especially in monitoring encrypted web traffic, stems from its ability to detect “known-unknown” threats (previously unseen
variations of known threats, malware subfamilies, or related new threats) and “unknown-unknown” (net-new malware) threats. The
technology can learn to identify unusual patterns in large volumes of encrypted web traffic and automatically
alert security teams to the need for further investigation. That latter point is especially important, given that the lack of trained
personnel is an obstacle to enhancing security defenses in many organizations, as seen in findings from the Cisco 2018 Security Capabilities
Benchmark Study (see page 35). Automation
and intelligent tools like machine learning and artificial intelligence can
help defenders overcome skills and resource gaps, making them more effective at identifying and
responding to both known and emerging threats. ***TABLE OMITTED*** Cisco 2018 Security Capabilities Benchmark Study: Defenders
report greater reliance on automation and artificial intelligence Chief information security officers (CISOs) interviewed for the Cisco 2018
Security Capabilities Benchmark Study report that they are eager to add tools that use artificial intelligence and machine learning, and believe
their security infrastructure is growing in sophistication and intelligence. However, they
are also frustrated by the number of false
positives such systems generate, since false positives increase the security team’s workload. These concerns should ease
over time as machine learning and artificial intelligence technologies mature and learn what is “normal” activity in the
network environments they are monitoring. When asked which automated technologies their organizations rely on the most, 39 percent of
security professionals said they are completely reliant on automation, while 34 percent are completely reliant on machine learning; 32 percent
said they are completely reliant on artificial intelligence (Figure 4). Behavior analytics tools are also considered useful when locating malicious
actors in networks; 92 percent of security professionals said these tools work very to extremely well (Figure 5).
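Because encrypted payloads cannot be read, the approach the report describes models flow metadata (sizes, durations, packet counts) and flags flows that do not fit the learned picture of "normal". The sketch below illustrates that general technique with a one-class model on synthetic TLS flow features; it is not Cisco's implementation, and every number in it is invented.

import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)
# Hypothetical per-flow features: [bytes sent, duration (s), packet count]
normal_flows = np.column_stack([
    rng.normal(40_000, 8_000, 500),
    rng.normal(12, 3, 500),
    rng.normal(60, 10, 500),
])
beacon_flows = np.column_stack([   # tiny, short, few-packet flows, e.g. C2 beaconing
    rng.normal(900, 100, 5),
    rng.normal(0.4, 0.1, 5),
    rng.normal(4, 1, 5),
])

# Learn what "normal" looks like, then score a mix of normal and beacon-like flows
scaler = StandardScaler().fit(normal_flows)
detector = OneClassSVM(nu=0.01, gamma="scale").fit(scaler.transform(normal_flows))

test = np.vstack([normal_flows[:5], beacon_flows])
print(detector.predict(scaler.transform(test)))   # +1 = looks normal, -1 = flag for review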
Cyberattacks on infrastructure cause great power war
Robert Farley 16, Senior Lecturer at the Patterson School of Diplomacy and International Commerce at
the University of Kentucky, 12/17/16, “5 Places World War III Could Start in 2017,”
http://nationalinterest.org/blog/the-buzz/5-places-world-war-iii-could-start-2017-18760
The Trump administration enters office in an unsettled time. For a variety of reasons (some directly connected to Trump’s
rhetoric), the great powers face more uncertainty than at any time in recent memory. In the first few months of Trump’s presidency (indeed,
perhaps even before his presidency begins) the
United States will have to navigate several extremely dangerous
flashpoints that could ignite, then escalate, conflict between the US, Russia, and China. Korean Peninsula
Reportedly, President Obama suggested to President Trump that North Korea policy would represent the first big test of his administration.
North Korea continues to build more and more effective ballistic missiles, as well (most analysts suspect) to expand its nuclear arsenal. While
the economy and political system remain moribund, the state itself has shown no inclination to collapse. Moreover, South Korea has mired
itself in a serious political crisis of its own. Conflict could erupt in any of several ways: if the United States decides to curtail North Korea’s
ballistic missile programs with a preventative attack, if North Korea misreads US signals and decides to preempt, or if a governance collapse
leads to chaos. As was the case in 1950, war on the peninsula could easily draw in China, Russia, or Japan. Syria Recent Russia victories in Syria
appear to have paved the way for the Assad regime to shift the civil war to a new phase. The United States declined to intervene in defense of
Aleppo, instead concentrating its forces on Iraq and the fight against ISIS. The Obama administration will not contest Russia’s support of Assad,
and there is little to indicate that the Trump administration will seek confrontation. But while the most dangerous moments may have passed,
US and Russian forces continue to operate in close proximity of one another. The US airstrike near Deir al-Zour, which killed sixty-two Syrian
troops, derailed the prospect for US-Russian cooperation in Syria. A similar event, launched either by Russian or American forces, could produce
retaliatory pressures in either country. Moreover, the presence of spoilers (terrorist groups and militias on either side, as well as a variety of
interested states) serves to increase complexity, and the chances for a miscalculation or misunderstanding. “War” in Cyberspace The
United
States, Russia, and China are not at “war” in cyberspace, notwithstanding the success of Russian efforts to intervene in the
US Presidential election, or the ongoing Chinese efforts to steal intellectual property and technology from US companies. However, the
US security establishment may feel an increasing need to respond to what it views as Russian and Chinese
provocations, if only to deter other attacks against critical US cyber-assets. Specialists disagree over whether even a serious escalation
over current activity would constitute a cyber-“war.” And the agencies delegated with responsibility over offensive cyber-capabilities have
proven loath to use them; attacks on
critical vulnerabilities often only work once. Still, if China, Russia, or other
actors come to believe that they can attack the US without fear of response, they may end up pushing the US
government into costly responses that could create an unfortunate escalatory spiral.
Machine learning is essential to enhance cybersecurity defenses
Cisco 18 – [“Cisco 2018 Annual Cybersecurity Report Reveals Security Leaders Rely on and Invest in Automation, Machine Learning and
Artificial Intelligence to Defend Against Threats”, Cisco, February 21st, https://newsroom.cisco.com/press-releasecontent?type=webcontent&articleId=1911494, AZ]
SAN JOSE, Calif. – February 21, 2018 —Malware sophistication is increasing as adversaries
begin to weaponize cloud services
and evade detection through encryption, used as a tool to conceal command-and-control activity. To reduce adversaries' time to
operate, security professionals said they will increasingly leverage and spend more on tools that use AI and
machine learning, reported in the 11th Cisco® 2018 Annual Cybersecurity Report (ACR). While encryption is meant to enhance security,
the expanded volume of encrypted web traffic (50 percent as of October 2017) — both legitimate and malicious — has created more challenges
for defenders trying to identify and monitor potential threats. Cisco threat researchers observed more than a threefold increase in encrypted
network communication used by inspected malware samples over a 12-month period. Applying
machine learning can help
enhance network security defenses and, over time, "learn" how to automatically detect unusual patterns
in encrypted web traffic, cloud, and IoT environments. Some of the 3,600 security professionals interviewed for the Cisco 2018
Security Capabilities Benchmark Study report, stated they were reliant on and eager to add tools like machine learning and AI,
but were frustrated by the number of false positives such systems generate. While still in its infancy, machine learning and AI
technologies over time will mature and learn what is "normal" activity in the network environments they are monitoring. "Last
year's evolution of malware demonstrates that our adversaries continue to learn," said John N. Stewart, Senior Vice President and Chief
Security and Trust Officer, Cisco. "We have to raise the bar now – top down leadership, business led, technology investments, and practice
effective security – there is too much risk, and it is up to us to reduce it."
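To make the mechanism in the Cisco evidence concrete, the following is a minimal, illustrative sketch (not Cisco's implementation) of how a defender could "learn what is normal" from flow-level metadata, which remains visible even when payloads are encrypted, and then flag statistical outliers for analyst review. The feature names, values, and library choice (scikit-learn's IsolationForest) are assumptions for illustration only.

# Illustrative sketch: learn a baseline of "normal" traffic from flow metadata,
# then flag departures from that baseline as candidate anomalies.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical flow features: [bytes sent, bytes received, duration (s), packets per second]
normal_flows = rng.normal(loc=[5e4, 2e5, 30, 40], scale=[1e4, 5e4, 10, 8], size=(5000, 4))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_flows)  # "learn what is normal" from benign traffic only

new_flows = np.array([
    [5.2e4, 1.9e5, 28, 42],   # resembles the baseline
    [9.0e5, 1.0e3, 2, 900],   # beacon-like burst, likely flagged
])
labels = detector.predict(new_flows)  # +1 = consistent with baseline, -1 = anomaly
for flow, label in zip(new_flows, labels):
    print(flow, "anomalous" if label == -1 else "normal")

The false-positive frustration the CISOs describe shows up here as the contamination threshold: set it too loose and benign bursts get flagged, too tight and real command-and-control traffic slips through.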
Disease
Solves disease---predictions and helps stimulate pharma innovation
Daniel Faggella 18 – founder of Tech Emergence, has worked for TechCrunch, Boston Business Journal, VentureBeat, Xconomy, VICE
MotherBoard. [“7 Applications of Machine Learning in Pharma and Medicine”, June 1st, https://www.techemergence.com/machine-learning-inpharma-medicine/, AZ]
When it comes to effectiveness of machine learning, more data almost always yields better results—and the healthcare sector is sitting on a
data goldmine. McKinsey estimates that big data and machine
learning in pharma and medicine could generate a value of
up to $100B annually, based on better decision-making, optimized innovation, improved efficiency of research/clinical trials, and new tool
creation for physicians, consumers, insurers, and regulators. Where does all this data come from? If we could look at labeled data streams, we
might see research and development (R&D); physicians and clinics; patients; caregivers; etc. The array of (at present) disparate origins is part of
the issue in synchronizing this information and using it to improve healthcare infrastructure and treatments. Hence, the present-day core issue
at the intersection of machine learning and healthcare: finding ways to effectively collect and use lots of different types of data for better
analysis, prevention, and treatment of individuals. Burgeoning
applications of ML in pharma and medicine are
glimmers of a potential future in which synchronicity of data, analysis, and innovation are an everyday reality. We
provide a breakdown of several of these pioneering applications, and provide insight into areas for continued innovation. Applications of
Machine Learning in Pharma and Medicine 1 – Disease Identification/Diagnosis Disease
identification and diagnosis of ailments is at
the forefront of ML research in medicine. According to a 2015 report issued by Pharmaceutical Research and Manufacturers of
America, more than 800 medicines and vaccines to treat cancer were in trial. In an interview with Bloomberg Technology, Knight Institute
Researcher Jeff Tyner stated that while this is exciting, it also presents the challenge of finding ways to work with all the resulting data. “That is
where the idea of a biologist working with information scientists and computationalists is so important,” said Tyner. It’s no surprise that large
players were some of the first to jump on the bandwagon, particularly in high-need areas like cancer identification and treatment. In October
2016, IBM Watson Health announced IBM Watson Genomics, a partnership initiative with Quest Diagnostics, which aims to make strides in
precision medicine by integrating cognitive computing and genomic tumor sequencing. Boston-based biopharma company Berg is using AI
to
research and develop diagnostics and therapeutic treatments in multiple areas, including oncology. Current
research projects underway include dosage trials for intravenous tumor treatment and detection and management of prostate cancer. Other
major examples include Google’s DeepMind Health, which last year announced multiple UK-based partnerships, including with Moorfields Eye
Hospital in London, in which they’re developing technology to address macular degeneration in aging eyes. In the area of brain-based diseases
like depression, Oxford’s P1vital® Predicting Response to Depression Treatment (PReDicT) project is using predictive analytics to help diagnose
and provide treatment, with the overall goal of producing a commercially-available emotional test battery for use in clinical settings. 2 –
Personalized Treatment/Behavioral Modification Personalized medicine, or more effective treatment based on individual health data paired
with predictive analytics, is also a hot research area and closely related to better disease assessment. The domain is presently ruled by
supervised learning, which allows physicians to select from more limited sets of diagnoses, for example, or estimate patient risk based on
symptoms and genetic information. IBM Watson Oncology is a leading institution at the forefront of driving change in treatment decisions,
using patient medical information and history to optimize the selection of treatment options: Over the next decade, increased use of micro
biosensors and devices, as well as mobile apps with more sophisticated health-measurement and remote monitoring capabilities, will provide
another deluge of data that can be used to help facilitate R&D and treatment efficacy. This type of personalized
treatment has important implications for the individual in terms of health optimization, but also for reducing overall healthcare costs. If more
patients adhere to following prescribed medicine or treatment plans, for example, the decrease in health-care costs will trickle up and
(hopefully) back down. Behavioral modification is also an imperative cog in the prevention machine, a notion that Catalia Health’s Cory Kidd
talked about in a December interview with TechEmergence. And there are plenty of start-ups popping up in the cancer identification,
prevention, and treatment space (for example), with varying degrees of success. A select two from a round-up in Entrepreneur include: Somatix
– a data-analytics B2B2C software platform company whose ML-based app uses “recognition of hand-to-mouth gestures in order to help people
better understand their behavior and make life-affirming changes”, specifically in smoking cessation. SkinVision – the self-described “skin
cancer risk app” makes its claim as “the first and only CE certified online assessment.” Interestingly, we couldn’t find SkinVision in the app store.
The first app that came up under a “SkinVision” search was DermCheck, in which images are submitted to dermatologists (people, not
machines) by phone in exchange for a personalized treatment plan—perhaps a testament to some of the kinks in machine learning-based
accuracy at scale that still need to be ironed out. 3 – Drug Discovery/Manufacturing The use of machine learning in preliminary (early-stage) drug discovery has the potential for various uses, from initial screening of drug compounds to predicted success rate based on biological factors. This includes R&D discovery technologies like next-generation sequencing. Precision medicine, which involves identifying mechanisms for “multifactorial” diseases and in turn alternative paths for therapy, seems to be the frontier
in this space. Much of this research involves unsupervised learning, which is in large part still confined to identifying patterns in data without
predictions (the latter is still in the realm of supervised learning). Key players in this domain include the MIT Clinical Machine Learning Group,
whose precision medicine research is focused on the development of algorithms to better understand disease processes and design for
effective treatment of diseases like Type 2 diabetes. Microsoft’s Project Hanover is using ML technologies in multiple initiatives, including a
collaboration with the Knight Cancer Institute to develop AI technology for cancer precision treatment, with a current focus on developing an
approach to personalize drug combinations for Acute Myeloid Leukemia (AML). The UK’s Royal Society also notes that ML in bio-manufacturing
for pharmaceuticals is ripe for optimization. Data from experimentation or manufacturing processes have the potential to help pharmaceutical
manufacturers reduce the time needed to produce drugs, resulting in lowered costs and improved replication. 4 – Clinical Trial Research
Machine learning has several useful potential applications in helping shape and direct clinical trial research.
Applying advanced predictive analytics in identifying candidates for clinical trials could draw on a much wider range of data than at present,
including social media and doctor visits, for example, as well as genetic information when looking to target specific populations; this would
result in smaller, quicker, and less expensive trials overall. ML can also be used for remote monitoring and real-time data access for increased
safety; for example, monitoring biological and other signals for any sign of harm or death to participants. According to McKinsey, there are
many other ML applications for helping increase clinical trial efficiency, including finding best sample sizes for increased efficiency; addressing
and adapting to differences in sites for patient recruitment; and using electronic medical records to reduce data errors (duplicate entry, for
example). 5 – Radiology and Radiotherapy In an October 2016 interview with Stat News, Dr. Ziad Obermeyer, an assistant professor at Harvard
Medical School, stated: “In 20 years, radiologists won’t exist in anywhere near their current form. They might look more like cyborgs:
supervising algorithms reading thousands of studies per minute.” Until that day comes, Google’s DeepMind Health is working with University
College London Hospital (UCLH) to develop machine learning algorithms capable of detecting differences in healthy and cancerous tissues to
help improve radiation treatments. DeepMind and UCLH are working on applying ML to help speed up the segmentation process (ensuring that
no healthy structures are damaged) and increase accuracy in radiotherapy planning. 6 – Smart Electronic Health Records Document
classification (sorting patient queries via email, for example) using support vector machines, and optical character recognition (transforming
cursive or other sketched handwriting into digitized characters), are both essential ML-based technologies in helping advance the collection and
digitization of electronic health information. MATLAB’s ML handwriting recognition technologies and Google’s Cloud Vision API for optical
character recognition are just two examples of innovations in this area: The MIT Clinical Machine Learning Group is spearheading the
development of next-generation intelligent electronic health records, which will incorporate built-in ML/AI to help with things like diagnostics,
clinical decisions, and personalized treatment suggestions. MIT notes on its research site the “need for robust machine learning algorithms that
are safe, interpretable, can learn from little labeled training data, understand natural language, and generalize well across medical settings and
institutions.” 7 – Epidemic
Outbreak Prediction ML and AI technologies are also being applied to monitoring
and predicting epidemic outbreaks around the world, based on data collected from satellites, historical information on
the web, real-time social media updates, and other sources. Support vector machines and artificial neural networks have
been used, for example, to predict malaria outbreaks, taking into account data such as temperature, average monthly rainfall,
total number of positive cases, and other data points. Predicting outbreak severity is particularly pressing in third-world
countries, which often lack medical infrastructure, educational avenues, and access to treatments. ProMED-mail is an internet-based reporting
program for
monitoring emerging diseases and providing outbreak reports in real time.
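As an illustration of the kind of model the Faggella evidence describes for outbreak prediction, here is a minimal sketch of a support-vector classifier trained on temperature, rainfall, and prior case counts. The data is synthetic and the labeling rule is a placeholder assumption, so this is a toy example of the technique rather than any published malaria model.

# Illustrative sketch only: an SVM on the features the card mentions.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)
n = 600

temperature = rng.uniform(18, 35, n)     # mean monthly temperature (deg C)
rainfall = rng.uniform(0, 400, n)        # average monthly rainfall (mm)
prior_cases = rng.poisson(20, n)         # positive cases in the previous month

# Hypothetical labeling rule for the synthetic training set: warm, wet months
# with an existing case load are more likely to precede an outbreak.
risk = 0.04 * temperature + 0.01 * rainfall + 0.05 * prior_cases + rng.normal(0, 1, n)
outbreak = (risk > np.percentile(risk, 75)).astype(int)

X = np.column_stack([temperature, rainfall, prior_cases])
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
model.fit(X, outbreak)

# Predicted outbreak probability for a hot, wet month with rising case counts
print(model.predict_proba([[31.0, 320.0, 45]])[0, 1])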
Pharma innovation solves disease---extinction
Engelhardt 8 – PhD, MD, Professor of Philosophy @ Rice (Hugo, “Innovation and the Pharmaceutical Industry: Critical Reflections on
the Virtues of Profit,” EBrary)
Many are suspicious of, or indeed jealous of, the good fortune of others. Even when profit is gained in the market without fraud and with the
consent of all buying and selling goods and services, there is a sense on the part of some that something is wrong if considerable profit is
secured. There is even a sense that good fortune in the market, especially if it is very good fortune, is unfair. One might think of such
rhetorically disparaging terms as "wind-fall profits". There is also a suspicion of the pursuit of profit because it is often embraced not just
because of the material benefits it sought, but because of the hierarchical satisfaction of being more affluent than others. The pursuit of profit
in the pharmaceutical and medical-device industries is for many in particular morally dubious because it is acquired from those who have the
bad fortune to be diseased or disabled. Although the suspicion of profit is not well-founded, this suspicion is a major moral and public-policy
challenge. Profit
in the market for the pharmaceutical and medical-device industries is to be celebrated. This is the
case, in that if one is of the view (1) that the presence of additional resources for research and development spurs
innovation in the development of pharmaceuticals and medical devices (i.e., if one is of the view that the allure of profit
is one of the most effective ways not only to acquire resources but productively to direct human
energies in their use), (2) that given the limits of altruism and of the willingness of persons to be taxed, the possibility of profits is necessary
to secure such resources, (3) that the allure of profits also tends to enhance the creative use of available resources in the
pursuit of pharmaceutical and medical-device innovation, and (4) if one judges it to be the case that such innovation is both
necessary to maintain the human species in an ever-changing and always dangerous environment in
which new microbial and other threats may at any time emerge to threaten human well-being, if not
survival (i.e., that such innovation is necessary to prevent increases in morbidity and mortality risks), as well
as (5) in order generally to decrease morbidity and mortality risks in the future, it then follows (6) that one should be
concerned regarding any policies that decrease the amount of resources and energies available to
encourage such innovation. One should indeed be of the view that the possibilities for profit, all things being equal, should be highest
in the pharmaceutical and medical-device industries. Yet, there is a suspicion regarding the pursuit of profit in medicine and especially in the
pharmaceutical and medical-device industries.
Automation boosts pharma innovation
Trevor Marshall 18 – Director of enterprise system integration at Zenith Technologies and has worked for over 20 years in the
pharmaceutical and biopharmaceutical industry. Working as a Business Unit lead for Zenith Technologies he oversees the delivery of projects
globally for the company. He also leads the consulting business, advising clients on best practices for implementation of manufacturing systems
encompassing DCS, MES, and Historian systems. [“Automation in Pharmaceutical Manufacturing”, March 9th, Contractpharma,
https://www.contractpharma.com/issues/2018-03-01/view_features/automation-in-pharmaceutical-manufacturing/49797, AZ]
The need for process
optimization, regulatory compliance, and improvements in the supply chain are driving
investment in automation technologies across the pharmaceutical industry. Consequently, the systems used to automate
process steps during the manufacture of pharmaceuticals are continuously evolving with new instrumentation and control products coming to
market. This article will
consider trends that impact the future of automation and what will likely be the biggest
influences in transforming the pharmaceutical manufacturing environment. Multi-Product Manufacturing Facilities Gone are
the days when manufacturing facilities could rely on developing the same product year after year. More targeted therapies that need to be
manufactured in smaller volumes for smaller populations means the industry is seeing a transition away from “one-line-one-product” setups in
favor of multi-product manufacturing facilities. These sites must be designed to be more agile, with the capability to react to changing demands
quickly. The growing trend toward contract manufacturing is also driving the need for more flexible facilities to meet the needs of multiple
customers. Flexibility is key, and modern facilities need to be able to re-orientate their processes according to the requirements of individual
products. As a result, sites are now being designed in a way that ensures a high degree of segregation between process steps, provides cross
contamination control, and limits product exposure to the environment. Single-Use Technologies Aligned with today’s growing pipeline of high
potency and biological drugs, the adoption of single-use technologies, such as single-use bioreactors and other unit operations, is having a
significant impact on the way that automation is delivered. The integration of process control systems and manufacturing execution system
(MES) solutions with start-to-finish technologies and single-use manufacturing platforms is helping the industry to deploy biopharmaceutical
manufacturing with increased productivity and efficiency, and at a lower cost, which can significantly reduce the time-to-market for new
products. Both upstream and downstream manufacturing processes benefit from single-use systems. They reduce or eliminate the time
required to perform cleaning and steaming, and they allow manufacturers to switch quickly from one product to another, or from batch to
batch. Single-use components are also an enabling technology for smaller scale production of biopharmaceuticals, including antibodies,
proteins, vaccines, and cell therapies, which would otherwise be much more difficult to produce. In addition, as the world of gene therapy
continues to evolve, the industry can expect to see even greater reliance on single-use technologies. However, as a starting point, many
companies may in the first instance choose to pursue hybrid facilities with both stainless steel and single-use components. Continuous
Manufacturing Batch manufacturing processes have a pre-defined maximum asset utilization on the plant floor. Traditional
pharmaceutical companies have in the past been slow to investigate new manufacturing techniques, preferring a more risk-averse approach to modifying the validated batch manufacturing design. Cost pressures and the need to find ways to increase productivity
have led to the introduction of new continuous manufacturing techniques across a number of unit operations in the life science industry. Oral
solid dose tableting lines, continuous API production, and continuous chromatography in biological processes are but a few examples of where
continuous manufacturing provides greater productivity for companies. With this also comes new challenges from an automation perspective,
not only in the continuous manufacturing process but also in the batch record and genealogy requirements for the product. Industry 4.0
Industry 4.0 is becoming increasingly important to the continued success and competitiveness of
pharmaceutical manufacturers. It refers to new tools and processes that are enabling smart, decentralized production,
with intelligent factories, integrated IT systems, the Internet of Things (IoT), and flexible, highly integrated manufacturing
systems. For the life science manufacturing industry, it’s not about being new—it’s about using proven solutions and approaches to
decision making to improve quality and reliability and to reduce waste. Companies in the life science industry have been
collecting and using evidentiary data to improve their manufacturing processes for nearly 40 years and have some of the best quality systems in
the world. Industry 4.0 is simply the latest wave of technological advances that will drive the next phase of pharmaceutical manufacturing. It
will enable manufacturers to have full visibility of operations and allow them to be responsive to information, while bringing
connectivity of equipment, people, processes, services, and supply chains. Industry 4.0 will take automation to a
new level with individual management processes expected to become automated. For example, if a temperature gauge makes a higher than
expected reading, the machine will detect this and rectify the situation rather than requiring an operator to intervene and make an assessment
about the required course of action. In addition, future developments may mean that machine learning algorithms will be able to adjust
manufacturing lines and production scheduling quickly. New developments will also pave the way for predictive maintenance and the
opportunity to identify and correct issues before they happen. The food and drinks industry is leading the charge in implementing Industry 4.0,
with some companies in the sector beginning to use artificial intelligence to improve processes. Similarly, the automotive industry is also
making considerable progress in terms of smart devices, the IoT, and achieving connectivity between all systems within a manufacturing plant.
Due to regulatory constraints, the pharmaceutical industry has been slower to adopt this type of cutting-edge technology. While embracing the
potential for Industry 4.0 is going to be critical to future operational efficiency for all manufacturers, it may be a long time before the industry is
able to complete the digital transformation and have fully automated and connected facilities that can take advantage of all the age of digital
manufacturing has to offer. Leveraging Data & Analytics While the idealistic end game of fully connected, self-optimizing production processes
may be further down the road, the first steps to digital manufacturing are well under way. Automation and technology create
the
opportunity to leverage data and analytics to improve processes. Often referred to as enterprise Manufacturing
Intelligence (MI), access to more meaningful data means a better view of operations, allowing for better analytics and real-time
responsive decision-making to drive continuous improvement and operational excellence. With Industry 4.0 comes the
introduction of edge devices: computing that makes it easier to connect machines and the ability to create organization-wide data lakes. These
edge devices can also be used to run analytics in real time close to the equipment while big data is analyzed in the cloud. Big data also allows
for the creation of digital twins. A digital twin can be made up of end-to-end data in the manufacture of a product where a fleet’s data can be
used to find insights. Extension of the traditional “golden batch,” where data was very much process control-based, will be supplemented and
surrounded with environmental data, raw material data, training data, and any other digital data available that goes toward influencing the
golden batch. With this digital information available across multiple sites, batches and suppliers, sophisticated advanced analytics can provide a
digital twin that best represents the golden batch and alert controllers to any problems based on these specific data sets. Final Thoughts
Automation and other manufacturing systems, such as MES, have the potential to transform processes within pharmaceutical manufacturing
facilities, opening the door to fundamental performance improvements. For manufacturers that fail to leverage these technologies, the
introduction of new pharmaceutical products may take months or years rather than weeks, and they will likely find themselves falling behind
their competitors in the efficiency stakes. Companies that take the initiative early stand to gain the biggest competitive advantage, ensuring
they can operate with greater agility, cost-efficiency and compliance.
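The temperature-gauge example in the Marshall evidence, where a machine detects a higher-than-expected reading and rectifies it without an operator, can be sketched as a simple supervisory loop. The sensor and actuator functions below are hypothetical stand-ins rather than a real control-system or MES API, and the setpoint and tolerance values are assumed for illustration.

# Toy illustration of machine-level self-correction with an escalation path.
import random
import time

SETPOINT_C = 37.0     # assumed target process temperature
TOLERANCE_C = 1.5     # assumed acceptable deviation before intervention

def read_temperature():
    # Stand-in for a real gauge; occasionally drifts high.
    return SETPOINT_C + random.uniform(-1.0, 3.0)

def adjust_cooling(deviation):
    # Stand-in for an actuator command proportional to the deviation.
    print(f"adjusting cooling output by {deviation:.2f} units")

def run_supervisory_loop(cycles=10):
    for _ in range(cycles):
        reading = read_temperature()
        deviation = reading - SETPOINT_C
        if abs(deviation) > TOLERANCE_C:
            adjust_cooling(deviation)  # the machine rectifies the situation itself
            if abs(read_temperature() - SETPOINT_C) > 2 * TOLERANCE_C:
                print("correction failed; alerting operator")  # human fallback
        time.sleep(0.1)

if __name__ == "__main__":
    run_supervisory_loop()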
Automation Bad
US Populism
U.S. response to automation determines the global sustainability of populism –
American actions maintain or destroy the global liberal order
Francis Fukuyama 17, the Olivier Nomellini Senior Fellow at the Freeman Spogli Institute for International Studies (FSI), and the
Mosbacher Director of FSI's Center on Democracy, Development, and the Rule of Law, Stanford University, 12/4/17, “The Future of Populism at
Home and Abroad,” https://www.the-american-interest.com/2017/12/04/future-populism-home-abroad/
Populist
nationalist parties have appeared across the developed world, and threaten to undermine the liberal
international order. What is the likelihood that they will succeed? For better or worse, a lot depends on what
will happen in the United States. American power was critical in establishing both the economic and political pillars of
the liberal order, and if the United States retreats from that leadership role, the pendulum will swing quickly
in favor of the nationalists. So we need to understand how populism is likely to unfold in the world's
leading liberal democracy. The American Constitution’s system of checks and balances was designed to deal with the problem of
“Caesarism,” that is, a populist demagogue who would accumulate power and misuse it. It is for this reason that vetocracy exists, and so far
into the Trump Administration, it appears to be working. Trump’s
attacks on various independent institutions—the intelligence community, the mainstream media, the courts, and his own Republican party—have only had modest success. In particular, he has not been able to get a significant part of his legislative agenda, like Obamacare repeal or the border wall, passed. So at the moment he looks like a weak and ineffective president. However, things could change. The factor most in his favor is the economy: wages have been growing after stagnating for many years, and growth has reached 3 percent for two quarters now. It
may move even higher if the Republicans succeed in passing a stimulative tax cut as they seem poised to do. All of this is bad policy in the long
run: the United States is not overtaxed; the stimulus is coming at the exactly wrong point in the business cycle (after eight years of expansion);
it is likely to tremendously widen fiscal deficits; and it will lay the ground for an eventual painful crash. Nonetheless, these consequences are
not likely to play themselves out for several years, long enough to get the Republicans through the 2018 midterm elections and even the 2020
presidential contest. What matters to voters the most is the state of the economy, and that looks to be good despite the President’s undignified
tweeting. Foreign policy is another area where Trump’s critics could be surprised. It is entirely possible that he will take action on some of his
threats—indeed, it is hard to see how he can avoid action with regard to North Korea’s nuclear ballistic missile program. Any U.S. move would
be highly risky to its South Korean and Japanese allies, but it is also possible that the U.S. will call North Korea’s bluff and force a significant
climbdown. If this happens, Trump will have lanced a boil in a manner that has eluded the last three presidents. Finally, it is not possible to beat
something with nothing. The Democrats, under a constant barrage of outrageous behavior from the Administration, have been moving steadily
to the left. Opposition to Trump allows them to focus on the enemy and not to define long-term policies that will appeal to voters. As in Britain,
the party itself is increasingly dominated by activists who are to the left of the general voter base. Finally, the Democrats have lost so much
ground in statehouses and state legislatures that they do not have a strong cadre of appealing, experienced candidates available to replace the
Clinton generation. Since American elections are not won in the popular vote but in the Electoral College (as Bruce Cain has recently pointed
out in these pages), it does not matter how many outraged people vote in states like California, New York, or Illinois; unless the party can
attract centrist voters in midwestern industrial states it will not win the Presidency. All of this suggests that Trump could not just serve out the
remainder of his term, but be re-elected in 2020 and last until 2024. Were the Republicans to experience a setback in the midterm elections in
2018 and then lose the presidency in 2020, Trump might go down in history as a fluke and aberration, and the party could return to the
control of its elites. If this doesn’t happen, however, the country’s polarization will deepen even beyond the point it has reached at present.
More importantly, the institutional checks may well experience much more significant damage, since their independence is, after all, simply a
matter of politics in the end. Beyond this, there
is the structural factor of technological change. Job losses among
low skill workers are fundamentally not driven by trade or immigration, but by technology. While the country
can try to raise skill levels through better education, the U.S. has shown little ability or proclivity to do this. The Trump agenda is to seek to
employ 20th century workers in their old jobs with no recognition of how the technological environment has changed. But it is not as if the
Democrats or the progressive Left has much of an agenda in this regard either, beyond extending existing job training and social programs.
How the U.S. will cope with this is not clear. But then, technological change is the ultimate political challenge
that all advanced societies, and not just the democratic ones, will have to face. Outside the United States, the populist surge has
yet to play itself out. Eastern Europe never experienced the kind of cultural liberalization experienced by Germany and other
Western European countries after World War II, and is now eagerly embracing populist politicians. Hungary and
Poland have recently been joined by Serbia and the Czech Republic, which have elected leaders with many Trump-like
characteristics. Germany’s consensus politics, which made the country a rock of EU stability over the past decade, appears
to be fraying after its recent election, and the continuing threat in France should not be underestimated—Le Pen and the far-left
candidate Melenchon between them received half the French vote in the last election. Outside Europe, Brazil’s continuing
crisis of elite legitimacy has given a boost to Jair Bolsonaro, a former military officer who talks tough and promises to clean
up the country’s politics. All of this suggests that the world will be in for interesting times for some time to come.
Populist pressures are declining – automation is the one factor that locks populism in
Eduardo Porter 18, MS in quantum fields and fundamental forces from the Imperial College of Science and Technology in London,
Columnist for the New York Times, 1/30/2018, “Is the Populist Revolt Over? Not if Robots Have Their Way”, New York Times,
https://www.nytimes.com/2018/01/30/business/economy/populist-politics-globalization.html
As the world’s oligarchy gathered last week in Davos, Switzerland, to worry about the troubles of the middle class, the
real question on
every plutocrat’s mind was whether the populist upheaval that delivered the presidency to the
intemperate mogul might mercifully be over. If it was globalization — or, more precisely, the shock of imports from China —
that moved voters to put Mr. Trump in the White House, could politicians get back to supporting the market-oriented order once the China
shock played out? But for all the wishful elucidations, the cosmopolitan elite can’t rid themselves of a stubborn fear: The populist wave that
produced President Trump — not to mention Prime Minister Viktor Orban of Hungary, President Recep Tayyip Erdogan of Turkey and former
Prime Minister Silvio Berlusconi in Italy, as well as Britain’s exodus from the European Union and the rise of the National Front in France — may
be here to stay. China’s shock to American politics may be over. Its entry into the market economy at the turn of this century
cost millions of manufacturing jobs in the United States. Workers and communities were ravaged, and political positions were pushed to
ideological extremes. But few
manufacturing jobs are left to lose. And rising wages in China are
discouraging some companies from relocating production across the Pacific. What’s more, the spread of
automation across industries suggests that the era of furious outsourcing in search of cheap foreign labor may be ending. Immigration
pressures are likely to persist across the Atlantic, continuing to drive the populist revolt against the establishment elite in Europe. But in the
United States, the population of unauthorized immigrants is declining, disproving one of Mr. Trump’s core claims to power. Economists studying the changes in the nature of work that produced such an angry political
response suggest, however, that another
wave of disruption is about to wash across the world economy,
knocking out entire new classes of jobs: artificial intelligence. This could provide decades’ worth of fuel
to the revolt against the global elites and their notions of market democracy. As Frank Levy of the Massachusetts Institute of Technology noted
this month in an analysis on the potential impact of artificial intelligence on American politics, “Given globalization’s effect on the 2016
presidential election, it is worth noting that near-term A.I. and globalization replace many of the same jobs.” Consider the
occupation of truck drivers. Mr. Levy expects multiple demonstrations of fully autonomous trucks to take place within five years. If they work,
the technology will spread, starting in restricted areas on a limited number of dedicated highway lanes. By 2024, artificial intelligence might
eliminate 76,000 jobs driving heavy and tractor-trailer trucks, he says. Similarly, he expects artificial intelligence to wipe out 210,000 assembler
and fabricator jobs and 260,000 customer service representatives. “Let’s not worry about the future of work in the next 25 years,” he told me.
“There’s plenty to worry about in the next five or six years.” These may not be big numbers, but they
are hitting communities that
expressed their contempt for the status quo in 2016. White men and women without a four-year college degree accounted
for just under half of Mr. Trump’s voters — compared with fewer than a fifth of Hillary Clinton’s. Seventy percent of truck drivers, 63 percent of
assemblers and fabricators, and 56 percent of customer service representatives share these characteristics. To be sure, economic
dislocations don’t have to produce populist politics. Daron Acemoglu of M.I.T. notes that geography makes a difference: If
the dislocation from A.I. is concentrated in big cities, where workers have more options to find new jobs, the backlash will be more muted than
it was when trade took out the jobs of single-industry company towns. What’s more, Mr. Acemoglu added, the political system can respond in
different ways to workers’ pain: The Great Depression not only led to Nazi Germany, it also produced Sweden’s social democracy. It’s not
immediately obvious that artificial intelligence will produce the same kind of reaction that trade did. Sure, machines inspired the most
memorable worker rebellion of the industrial revolution — when the Luddites smashed the weaving machines that were taking over their jobs.
The word “sabotage” comes from the French workers who took to destroying gears. Unions are suspicious of technology. The United Farm
Workers loudly protested tomato-harvesting machines after they were introduced in California in the 1960s. In New York, the local of the
“sandhogs” who dig subway tunnels negotiated a deal where it gets $450,000 for each tunnel-digging machine used, to make up for job losses
caused by “technological advancement.” Yet though automation has displaced many more jobs than trade ever could, robots have never
inspired the fury that trade routinely does. “By all accounts, automation and new digital technologies played a quantitatively greater role in
deindustrialization and in spatial and income inequalities,” wrote Dani Rodrik of the Kennedy School of Government at Harvard University. “But
globalization became tainted with a stigma of unfairness that technology evaded.” It’s
easier to demonize people — especially
foreigners — than machines, the children of invention. What’s more, imports from countries with cheaper labor, weaker worker
protections and threadbare environmental standards will be seen as unfair. Thea Lee, a former deputy chief of staff of the A.F.L.-C.I.O. who now
heads the Economic Policy Institute, notes that workers’ anger is directed against “the particular set of rules about globalization that we chose,”
which spreads benefits among financiers and corporations while disregarding workers. This
time could be different, though.
“That sense of unfairness can be attached to technological changes, too,” Mr. Rodrik told me. “It’s not Bill Gates, who
came out of nowhere, but big corporations that are getting bigger and becoming monopolists.”
Populism causes extinction
Alex de Waal 16, Executive Director of the World Peace Foundation at the Fletcher School at Tufts University, 12/5/16, “Garrison
America and the Threat of Global War,” http://bostonreview.net/war-security-politics-global-justice/alex-de-waal-garrison-america-and-threatglobal-war
Polanyi recounts how economic and
financial crisis led to global calamity. Something similar could happen
today. In fact we are already in a steady unpicking of the liberal peace that glowed at the turn of the millennium. Since
approximately 2008, the historic decline in the number and lethality of wars appears to have been reversed. Today’s wars
are not like World War I, with formal declarations of war, clear war zones, rules of engagement, and definite endings. But they are wars
nonetheless. What
does a world in global, generalized war look like? We have an unwinnable “war on terror” that is
metastasizing with every escalation, and which has blurred the boundaries between war and everything else. We have deep states—built
on a new oligarchy of generals, spies, and private-sector suppliers—that are strangling liberalism. We have emboldened
middle powers (such as Saudi Arabia) and revanchist powers (such as Russia) rearming and taking unilateral
military action across borders (Ukraine and Syria). We have massive profiteering from conflicts by the arms industry, as well as through
the corruption and organized crime that follow in their wake (Afghanistan). We have impoverishment and starvation through economic
warfare, the worst case being Yemen. We have “peacekeeping” forces fighting wars (Somalia). We have regional
rivals threatening
one another, some with nuclear weapons (India and Pakistan) and others with possibilities of acquiring them
(Saudi Arabia and Iran). Above all, today’s generalized war is a conflict of destabilization, with big powers intervening in the
domestic politics of others, buying influence in their security establishments, bribing their way to big commercial contracts and thereby
corroding respect for government, and manipulating public opinion through the media. Washington, D.C., and Moscow each does this in its
own way. Put the pieces together and a
global political market of rival plutocracies comes into view. Add virulent
reactionary populism to the mix and it resembles a war on democracy. What more might we see? Economic liberalism is
a creed of optimism and abundance; reactionary protectionism feeds on pessimistic scarcity. If we see punitive trade wars and
national leaders taking preemptive action to secure strategic resources within the walls of their garrison states, then old-fashioned
territorial disputes along with accelerated state-commercial grabbing of land and minerals are in prospect. We could see
mobilization against immigrants and minorities as a way of enflaming and rewarding a constituency that can police borders, enforce the new
political rightness, and even become electoral vigilantes. Liberal
multilateralism is a system of seeking common wins
through peaceful negotiation; case-by-case power dealing is a zero-sum calculus. We may see regional arms races,
nuclear proliferation, and opportunistic power coalitions to exploit the weak. In such a global political marketplace, we would see
middle-ranking and junior states rewarded for the toughness of their bargaining, and foreign policy and security strategy delegated to the CEOs
of oil companies, defense contractors, bankers, and real estate magnates. The United Nations system appeals to leaders to live up to the
highest standards. The fact that they so often conceal their transgressions is the tribute that vice pays to virtue. A
cabal of plutocratic
populists would revel in the opposite: applauding one another’s readiness to tear up cosmopolitan liberalism
and pursue a latter-day mercantilist naked self-interest. Garrison America could opportunistically collude with similarly
constituted political-military business regimes in Russia, China, Turkey, and elsewhere for a new realpolitik global concert, redolent
of the early nineteenth-century era of the Congress of Vienna, bringing a façade of stability for as long as they collude—
and war when they fall out. And there is a danger that, in response to a terrorist outrage or an international political crisis, President
Trump will do something stupid, just as Europe’s leaders so unthinkingly strolled into World War I. The multilateral security system is in poor
health and may not be able to cope. Underpinning this is a simple truth: the
plutocratic populist order is a future that does
not work. If illustration were needed of the logic of hiding under the blanket rather than facing difficult realities, look no further than
Trump’s readiness to deny climate change. We have been here before, more or less, and from history we can gather important lessons about
what we must do now. The
importance of defending civility with democratic deliberation, respecting human rights and values, and
maintaining a commitment to public goods and the global commons—including the future of the
planet—remain evergreen. We need to find our way to a new 1945—and the global political settlement for a
tamed and humane capitalism—without
having to suffer the catastrophic traumas of trying everything else first.
Automation creates massive workforce disruption – that leads to populism
Darrell M. West 18 – Vice president and director of governance studies and director of the center for technology innovation at the
Brookings Institution. Editor in Chief of the Brookings technology policy blog, TechTank. [“Will robots and AI take your job? The economic and
political consequences of automation”, April 18th, https://www.brookings.edu/blog/techtank/2018/04/18/will-robots-and-ai-take-your-job-theeconomic-and-political-consequences-of-automation/, AZ]
Yet amid these possible benefits, there is widespread fear
that robots and AI will take jobs and throw millions of people into
poverty. A Pew Research Center study asked 1,896 experts about the impact of emerging technologies and
found “half of these experts (48 percent) envision a future in which robots and digital agents [will] have
displaced significant numbers of both blue- and white-collar workers—with many expressing concern that this will lead to vast
increases in income inequality, masses of people who are effectively unemployable, and breakdowns in the social
order.”[3] These fears have been echoed by detailed analyses showing anywhere from a 14 to 54 percent automation
impact on jobs. For example, a Bruegel analysis found that “54% of EU jobs [are] at risk of computerization.”[4] Using European data, they
argue that job losses are likely to be significant and people should prepare for large-scale disruption. Meanwhile, Oxford University researchers
Carl Frey and Michael Osborne claim that technology will transform many sectors of life. They studied 702 occupational groupings and found
that “47 percent of U.S. workers have a high probability of seeing their jobs automated over the next 20 years.”[5] A McKinsey Global Institute
analysis of 750 jobs concluded that “45% of paid activities could be automated using ‘currently demonstrated technologies’ and . . . 60% of
occupations could have 30% or more of their processes automated.”[6] A more recent McKinsey report, “Jobs Lost, Jobs Gained,” found that 30
percent of “work activities” could be automated by 2030 and up to 375 million workers worldwide could be affected by emerging
technologies.[7] Researchers at the Organization for Economic Cooperation and Development (OECD) focused on “tasks” as opposed to “jobs”
and found fewer job losses. Using task-related data from 32 OECD countries, they estimated that 14 percent of jobs are highly automatable and
another 32 percent have a significant risk of automation. Although their job loss estimates are below those of other experts, they concluded that “low
qualified workers are likely to bear the brunt of the adjustment costs as the automatibility of their jobs is higher compared
to highly qualified workers.”[8] While some dispute the dire predictions on grounds new positions will be created to
offset the job losses, the fact that all these major studies report significant workforce disruptions should be
taken seriously. If the employment impact falls at the 38 percent mean of these forecasts, Western
democracies likely could resort to authoritarianism as happened in some countries during the Great Depression of the
1930s in order to keep their restive populations in check. If that happened, wealthy elites would require armed
guards, security details, and gated communities to protect themselves, as is the case in poor countries today with high
income inequality. The United States would look like Syria or Iraq, with armed bands of young men with few
employment prospects other than war, violence, or theft. Yet even if the job ramifications lie more at the low
end of disruption, the political consequences still will be severe. Relatively small increases in unemployment or
underemployment have an outsized political impact. We saw that a decade ago when 10 percent unemployment during the Great
Recession spawned the Tea party and eventually helped to make Donald Trump president. With some workforce
disruption virtually guaranteed by trends already underway, it is safe to predict American politics will be chaotic and
turbulent during the coming decades. As innovation accelerates and public anxiety intensifies, right-wing and left-wing
populists will jockey for voter support. Government control could gyrate between very conservative and very liberal leaders as
each side blames a different set of scapegoats for economic outcomes voters don’t like. The calm and predictable politics of the post-World
War II era likely will become a distant memory as the American system moves toward Trumpism on
steroids.
Spreads the income gap and decreases wages
Yen Nee Lee and Nancy Hungerford 18 – Correspondents of CNBC. Cites Christopher Pissarides, a
professor from the London School of Economics, 2010 Nobel Laureate. [“Nobel prize winner:
Automation is holding down paychecks”, January 8th, https://www.cnbc.com/2018/01/08/salaries-jobsand-automation-christopher-pissarides-on-wages.html, AZ]
The subdued growth in wages amid an expanding economy and declining unemployment has puzzled
many, but one economics professor said he may have an explanation for that phenomenon. The answer
lies in automation, according to Christopher Pissarides from the London School of Economics. He
explained that technology has helped certain segments of the workforce do their jobs better and
subsequently increase their incomes. But technology hasn't made the same impact on workers at the
lower-end, whose salaries have not grown much. That widening income gap is partly why wage growth,
on a national level, has been subdued, he told CNBC on Tuesday. "You see successful entrepreneurs
becoming wealthy, for example, whereas at the lower end, computerization and robotics don't do
anything for workers at the lower end, like the janitors, the cleaners," Pissarides, a 2010 Nobel laureate
in economic sciences, said at the UBS Greater China Conference in Shanghai. "So, it's very difficult to see
wages rising by their own internal forces at the lower end," he added. Major central banks such as the
Federal Reserve and Bank of Japan are aiming for a 2 percent inflation target, which appeared elusive as
wage growth has not caught up with a stronger economy and an improving jobs market. Pissarides said
he has always thought central bankers should re-think their inflation goal and set targets that tailor to
the circumstances in their respective countries. There's little that central bankers can do, however, in
curbing wage inequality, he said.
Weapons
Commercial automation sets the pace for autonomous weapons development –
accelerating the rate of automation is uniquely likely to make AWs destabilizing
Jürgen Altmann 17, Researcher and Lecturer, Department of Physics at Technische Universität Dortmund, and Frank Sauer, 9/18/2017,
“Autonomous Weapon Systems and Strategic Stability”, Survival: Global Politics and Strategy, 59(5),
https://www.iiss.org/en/publications/survival/sections/2017-579b/survival%E2%80%94global-politics-and-strategy-october-november-20177ccd/59-5-10-altmann-and-sauer-0e2f
AWS are not yet operational, but decades of military research and development, as well as the growing technological overlap between
the rapidly expanding commercial use of artificial intelligence (AI) and robotics, and the accelerating
‘spin-in’ of these technologies into the military realm, make autonomy in weapon systems a possibility
for the very near future. Military programmes adapting key technologies and components for achieving
autonomy in weapon systems, as well as the development of prototypes and doctrine, are well under way in a number of states.¶ Accompanying this work is a rapidly expanding body of
literature on the various technical, legal and ethical implications of AWS. However, one particularly crucial aspect has – with exceptions confirming the rule4 – received comparably little systematic attention: the potential impact of
autonomous weapon systems on global peace and strategic stability.¶ By drawing on Cold War lessons and extrapolating insights from the current military use of remotely controlled unmanned systems, we argue that AWS are prone to proliferation and bound to foment an arms race resulting in increased crisis instability and escalation risks. We conclude that these strategic risks justify a critical stance towards AWS.¶ Defining the debate¶ It is worth noting that some weapon systems, so far used only for defensive purposes, have long been
able to identify, track and engage incoming targets on their own. These systems can already be set up so that humans are cut out of decision-making, a capability deemed necessary because there can be instances in which there is
not enough time for humans to react, as during attacks with missiles or mortar shells.¶ These defensive weapons are stationary or fixed on ships or trailers, and are designed to fire at inanimate targets. They repeatedly perform
pre-programmed actions within tightly set parameters and time frames in comparably structured and controlled environments. Consequently, they are commonly thought to be only the precursors to AWS, and might be described
as automatic, as distinct from the autonomous systems currently being developed. The latter will be able to operate without human control or supervision in dynamic, unstructured, open environments, attacking various sets of
targets, including inhabited vehicles, structures or even individuals. They will operate over an extended period of time after activation – and will potentially be able to learn and adapt their behaviour.¶ It can be difficult, however,
to differentiate between automatic and autonomous systems in practice, with many systems falling into a considerable grey area. Autonomous functionality in weapon systems develops over a continuum. Some advanced
‘automatic’ systems are already behaving in ways that might be considered autonomous – for instance, when automatically (autonomously?) targeting the source of incoming fire. Such systems also blur the line between
‘defensive’ and ‘offensive’. Nevertheless, juxtaposing automatic and autonomous systems is a helpful mental exercise to grasp what AWS are going to be like, and what benefits they, according to their proponents, will provide.¶
Such benefits include the possibility that new systems will combine superior performance with lower costs due to a reduced need for personnel. Moreover, AWS are said to render constant control and communication links
obsolete. Daisy-chained, line-of-sight connections can already allow for control and communication without necessarily revealing a system’s location. But dispensing with a communications link altogether could offer even stronger
insurance against communications disruption or hijacking. Much more importantly, being able to do without an up- and downlink removes the inevitable delay between the human operator’s command and the system’s response,
thus generating a clear tactical advantage over a remotely controlled, ‘slower’ adversarial system. Finally, some proponents hope that, since AWS experience neither fear nor stress, and do not overreact, they might render warfare
more humane and prevent some of the atrocities of war. Not only are machines devoid of negative human emotions, they also lack a self-preservation instinct, so they could well delay returning fire, it is argued. They are supposed
to allow not only for greater restraint but better discrimination between civilians and combatants, resulting in an application of force that accords with international humanitarian law.5¶ Critics counter that militarised AI systems
are – and for the foreseeable future will be – incapable of distinguishing between combatants and civilians, as well as being unable to assure a proportionate application of military force, which renders the battlefield use of AWS
illegal.6 Also, should an autonomous weapon system nevertheless be fielded and end up causing disproportionate loss of life among (or injury to) civilians, or damage to civilian objects, it is unclear who might be held legally
responsible, since machines can obviously not be court-martialled.7¶ Critics concerned with the ethical, rather than legal, implications of AWS argue that such systems are intrinsically amoral because delegating kill decisions to an
algorithm in a machine – which is not accountable for its actions in any meaningful ethical sense – infringes on fundamental human values including dignity and the right to life.8 Such humanitarian concerns are also reflected in
public opinion. Representative polling data suggests that a majority of US citizens oppose the use of AWS, with 40% ‘strongly opposing’ them.9 An online poll conducted by the Open Roboethics Initiative in 14 different languages
supports these findings at the global level.10¶ Finally, operational risks are cause for concern. For instance, the potential of AWS for high-tempo fratricide, way beyond the speed of human intervention, incentivises militaries to
avoid full autonomy in weapon systems, and instead to retain humans in the chain of decision-making as a fail-safe mechanism.11 We argue that concerns of this nature are relevant not just at the operational level, but point to the
potentially detrimental impact of AWS on overall strategic stability.¶ Two dimensions of instability¶ The goal of upholding stability to prevent a catastrophic nuclear war was a central feature of the Cold War. Destabilisation loomed with the arms build-up, in particular with
the development of ballistic missiles carrying multiple independently targetable re-entry vehicles (MIRVs), and of missile defence. The former dramatically increased fears of a first strike and thus the pressure to launch on warning,
that is, before the arrival of enemy warheads 10–30 minutes later. ‘Accidental nuclear war’ scares, fuelled by human and technical errors in early-warning systems, informed the decisions to limit anti-ballistic-missile systems and to
preferentially reduce MIRVed missiles and warhead counts.12 The goal of stability was also taken up in the realm of conventional military armaments, mainly in the Treaty on Conventional Armed Forces in Europe (CFE Treaty).13¶
The lessons of the Cold War are worth remembering. They suggest that instability has two dimensions. The first encompasses military instability with regard to the
proliferation of arms and the emergence of arms races. During the Cold War, the perceived risk of ‘horizontal proliferation’ – that is, the spread of nuclear weapons beyond the existing nuclear-weapons states – gave rise to the
Non-Proliferation Treaty and various export-control regimes. The risk of vertical proliferation – that is, an uncontrolled build-up of arms that drives up military expenditure and exacerbates the security dilemma, thus increasing the
likelihood of crises – was reflected in the various strategic arms-limitation and -reduction agreements between the US and the Soviet Union. As the US Office of Technology Assessment (OTA) put it in 1985,¶ Arms race stability
involves the effect of planned deployments on the scope and pace of the arms race … If a deployment on one side is likely to lead to a responding deployment on the other side which is in turn likely to induce a still higher level of
deployment on the first side, the first side’s deployment might be seen as ‘destabilizing’ the arms competition.14¶ Generally speaking, any quantitative or qualitative arms race between – in this example – two potential
adversaries involves an element of instability. But a race’s pace can vary widely. Destabilisation becomes a particular concern when qualitatively new technologies promising clear military advantages seem close at hand. When potential adversaries make special efforts to get ahead themselves, or at least to avoid falling behind, this can trigger a dynamic intensified by mutual observation of – as well as speculation in light of uncertainty about – the other side’s advances. If the situation is perceived as urgent, and precedents have been or are about to be set, there are compelling incentives for accelerating the development of technology and incorporating it into militaries, a process that is then more likely to outpace and render moot any attempt at agreement on mutual, preventive prohibitions.¶ The second dimension of
strategic instability is crisis instability and escalation, either across the threshold from peace to war, or, when war has already broken out, to a higher level of violence – in particular from conventional to nuclear weapons. With
respect to nuclear weapons, crisis stability during the Cold War was seen, according to the OTA, as the degree to which strategic force characteristics might, in a crisis situation, reduce incentives to initiate the use of nuclear
weapons … Weapon systems are considered destabilizing if in a crisis they would add significant incentives to initiate a nuclear attack, and particularly to attack quickly before there is much time to collect reliable information and
carefully weigh all available options and their consequences.15¶ In terms of conventional forces, the preamble of the CFE Treaty encompasses crisis stability in its commitment to ‘establishing a secure and stable balance of
conventional forces at lower levels … eliminating disparities detrimental to stability and security [and] eliminating … the capability for launching surprise attack and for initiating large-scale offensive action in Europe’.16¶ Both
dimensions are closely connected. New kinds of weapons, developed as an outcome of an arms race, can increase crisis instability, with MIRVed missiles being a prominent Cold War example. And (perceived) crisis instability can
create motives for diversifying weapon carriers and fuel the arms race in turn, as the development of nuclear submarines demonstrates.¶ Proliferation and arms-race instability¶ As early as 2007, the US Department of Defense
wrote in its Unmanned Systems Roadmap that for processor technology ‘the ultimate goal is to replace the operators with a mechanical facsimile [of] equal or superior thinking speed, memory capacity, and responses gained from
training and experience’. The document also stated that the ‘primary technical challenges for weapon release from unmanned systems include the ability to reliably target the right objective’.17 The goal of weapon autonomy
pervades all subsequent road maps.18 Autonomous weapon-system functions have since been tested on land, under water, on the sea and, most notably, in the air. In fact, current trends with respect to unmanned combat aerial
vehicles (UCAVs or ‘combat drones’) provide indicators for what to expect with regard to AWS. Unlike today’s high-profile UCAVs, such as the Reaper, which are propeller driven, slow, carry comparably small payloads and have few
to no capabilities for operating in contested airspace, future systems will be less dependent on human control, faster, stealthy and capable of delivering bigger payloads.¶ The X-47B, for instance, has demonstrated autonomous
take-off from and landing on an aircraft-carrier deck, as well as autonomous aerial refuelling. This technology demonstrator was developed by the US Navy’s Unmanned Carrier-Launched Airborne Surveillance and Strike
programme (UCLASS). Similarly, the British Taranis UCAV was described by the UK Ministry of Defence as ‘fully autonomous’ and able to ‘defend itself against manned and other unmanned enemy aircraft’ with ‘almost no need for
operator input’.19 However, the ministry also stated that ‘the operation of weapons systems will always be under human control’.20¶ While AWS test beds such as Taranis and the X-47B rely on familiar designs, in this case the
airframes of a fast, stealthy, next-generation drone with substantial payload capabilities, future systems will display an autonomous swarming capability, and thus AWS will also come in much smaller sizes. In October 2016, for
instance, the US Department of Defense demonstrated a swarm of 103 Perdix micro drones capable of ‘advanced swarm behaviors such as collective decision-making, adaptive formation flying, and self-healing’.21 In the future,
such micro drones are to be 3D printed in large batches and deployed from (manned) flying systems. This dispensing method has already been successfully tested at Mach 0.6 speed by two F/A-18 Super Hornets releasing a Perdix
drone swarm. The US Navy’s LOCUST programme is also seeking to develop swarming, disposable unmanned aerial vehicles (UAVs).22¶ The overall goal for this new ecosystem of flying assets is to replace not just the old
generation of drones but also manned aircraft, thus continuing the trend towards keeping human pilots out of harm’s way and providing superior unmanned air-to-ground and air-to-air capabilities across the board.23 In air-to-air
combat, the big, fast autonomous drones currently envisioned will be able to fly high-g manoeuvres no human pilot would be able to endure. More importantly, they would ensure much shorter reaction times. On-board sensors
combined with artificial ‘intelligence’ – either located onboard or distributed in the swarm and based on decision-making algorithms endowed with the authority to initiate an attack without awaiting human input – are to make
these weapons autonomous and hence provide a decisive edge over remotely controlled and human-piloted adversary systems alike.¶ While the development of AWS is currently most advanced in the air and under water – that is,
in less cluttered environments – the example of autonomous (swarms of) UCAVs demonstrates the generally valid proposition that for future unmanned systems, operational speed will reign supreme, regardless of the domain. In
that sense, technological developments in AI and robotics, as well as current expectations regarding future armed conflict (and the need for speed), jointly point towards AWS. In fact, US deputy secretary of defense Bob Work
stated in March 2016 that even the final delegation of lethal authority to autonomous systems will inexorably happen as a result of this race for speed.24 According to Work, the United States ‘will not delegate lethal authority for a
machine to make a decision … The only time we’ll delegate authority is in things that go faster than human reaction time, like cyber or electronic warfare.’ Yet, he conceded that such self-restraint may be unsustainable if an
authoritarian rival acts differently. ‘We might be going up against a competitor who is more willing to delegate authority to machines than we are and, as that competition unfolds, we’ll have to make decisions on how we can best
compete’, Work said. ‘It’s not something that we have fully figured out, but we spend a lot of time thinking about it.’25¶ Operational speed will reign supreme¶ To further deepen our understanding of AWS, it is useful to take a
step back and underline that they need not necessarily take the shape of a specific weapon system akin to, for instance, a drone or a missile. AWS also do not require a specific military-technology development path, the way
nuclear weapons do, for example. As AI, autonomous systems and robot technologies mature and begin to pervade the civilian sphere, militaries will increasingly be able to make use of them for their own purposes, as the
development of information and communication technology suggests. Naturally, any military adaptation of a dual-use technology will need to fulfil specific military requirements that do not exist in a civilian environment, or are
less relevant for mass markets. Nevertheless, AWS development will profit from the implementation or mirroring of a variety of civilian technologies (or derivatives thereof) and their adoption for military purposes, technologies
which are currently either already available or on the cusp of becoming ready for series production in the private sector. This trend is already observable in the case of armed drones. Light detection and ranging (LIDAR) systems are
another example. These are the optical sensors used by the automotive industry to give self-driving cars a 360-degree picture of their surroundings. LIDAR prices have recently dropped from five figures to a few hundred dollars.
The units have also become more rugged and much smaller.26 Given that these components, which are necessary for endowing mobile systems with autonomy, are now cheaply and readily available off the shelf, there is every
reason to expect the military to adapt, and, if required, adjust and refine, them for their own purposes.27¶ It is clear that the research and development for AWS-relevant technology is well under way and distributed across
countless university laboratories and, especially, commercial enterprises that are making use of economies of scale and the forces of the free market to spur competition, lower prices and shorten innovation cycles. This renders the
military research and development effort in the case of AWS different from those of past high-tech conventional weapon systems (the F-35 comes to mind), let alone nuclear weapons. So while the impact of AWS might be
revolutionary in terms of their implications for warfare, their development within the context of the military is best described as evolutionary: the military is merely continuing and, with outside help and technology lifted from the private sector, accelerating an already existing trend to replace labour with capital and automate dull, dirty and dangerous military tasks.28 For example, former secretary of defense Ashton Carter sought closer ties with Silicon Valley to hasten the incorporation of technological innovations into the US military after the US officially declared AI and robotics cornerstones of its new ‘third offset’ strategy to counter rising powers.29¶ Thus, AWS are easy to obtain compared with other paradigm-shifting weapons, such as nuclear weapons, which even now
require the Herculean effort of a state-run, focused politico-military effort to produce. AWS do not require ores, centrifuges, high-speed fuses or other comparably ‘exotic’ components to be assembled and tested in a clandestine
manner. Consequently, while nuclear technologies can be – and are – proliferation controlled, AWS are much harder to regulate. With comparatively fewer choke points that might be targeted by non-proliferation policies, AWS
are potentially available to a wide range of state and non-state actors, not just those nation-states that are willing and able to muster the considerable resources needed for the robotic equivalent of the Manhattan Project.30 This
carries significant implications for arms control.¶ There will of course be differences in quality. Sophisticated AWS will have to meet the same or similar military standards that current weapon systems, such as main battle tanks or
combat aircraft, do. Moreover, technologically leading nations such as the US and Israel are carrying out research to produce autonomous systems that comply with international humanitarian law. Less scrupulous actors, however,
will find AWS development much easier. Comparably crude AWS which do not live up to the standards of a professional military in terms of reliability, compliance with international humanitarian law or the ability to go head-to-head with systems of a near-peer competitor could, in fact, be put together with technology available today by second- or third-tier state actors, and perhaps even non-state actors. Converting a remotely controlled combat drone
to autonomously fire a weapon in response to a simple pattern-recognising algorithm is already doable. Even the technological edge displayed by sophisticated AWS is unlikely to be maintained over the longer term. While sensor
and weapon packages to a large degree determine the overall capabilities of a system, implementing autonomy ultimately comes down to software, which is effortlessly copied and uniquely vulnerable to being stolen via
computer-network operations. Thus, while the development of AWS clearly presents a challenge to less technologically advanced actors, obtaining AWS with some degree of military capability is a feasible goal for any country
already developing, for example, remotely controlled armed UAVs – the number of which rose from two to ten between 2001 and 2016.31 Admittedly, the US and Israel are still in the lead with regard to developing unmanned
systems and implementing autonomous-weapon functionality – China only recently test-fired a guided missile from a drone via satellite link for the first time.32 But considering that drone programmes can draw from the vibrant
global market for unmanned aerial vehicles of all shapes and sizes, the hurdles regarding AWS are much lower than those of other potentially game-changing weapons of the past.¶ Implementing autonomy comes down to
software¶ Proliferation of AWS could of course also occur via exports, including to the grey and black markets. In this way, autonomous systems could fall not only into the hands of technologically inferior state actors, but also
those of non-state actors, including extremist groups. Hamas, Hizbullah and the Islamic State have already deployed and used armed drones. As sensors and electronics are increasingly miniaturised, small and easily transportable
systems could be made autonomous with respect to navigation, target recognition, precision and unusual modes of attack.33 Terrorist groups could also gain access to comparably sophisticated systems that they could never
develop on their own. Again, autonomy in this context does not necessarily require military-grade precision – a quick and dirty approach would suffice for these actors. In fact, it stands to reason that terrorist groups would use
autonomous killing capabilities indiscriminately in addition to using them, if possible, in a precise fashion for targeted assassinations.¶ It is still unclear how the development of unmanned systems on the one hand and specific
countermeasures on the other will play out. Traditional aircraft-sized drones such as the X-47B or Taranis, to stick with these examples, are obviously susceptible to existing anti-aircraft systems. As for smaller-sized systems,
various tools, from microwaves to lasers to rifle-sized radio jammers for disrupting the control link, are currently being developed as countermeasures. Simpler, less exotic methods such as nets, fences or even trained hunting birds
might also prove effective for remotely controlled and autonomous systems alike. It is clear, however, that saturation attacks have been identified as a key future capability for defeating a wide range of existing and upcoming
defensive systems – both human-operated and automatic.34 The latter are a particular focus of research into swarming as a potential solution.35 And military systems operating at very high speeds and in great numbers or swarms
are bound to generate new instabilities, to which we will turn in our next section.¶ To first sum up our argument so far, there are obvious dual-use problems and an unusually high risk of proliferation when it comes to AWS. Should one of the technologically leading nation-states go forward with the deployment of AWS, it would be comparably easy – and thus very likely – that others would follow suit.36 In that sense, the development of AWS could well trigger a destabilising arms race.¶ Crisis instability and escalation¶ Increasing operational speeds mean that human involvement in AWS would be limited to, at best, general oversight and decision-making in instances where communication delays of up to a few seconds – and thinking and deliberation times of a few minutes – could be deemed acceptable, meaning they would not result in defeat or the loss of systems. Many situations would not allow for the luxury
of human pondering, however. In such cases, the actions and reactions of individual AWS, as well as AWS swarms, would have to be controlled autonomously by algorithms – in other words determined only by programming
software in advance and possibly through the adaptation and learning of the systems themselves. After all, as Paul Scharre put it, ‘winning in swarm combat may depend upon having the best algorithms to enable better
coordination and faster reaction times, rather than simply the best platforms’.37¶ One such swarm-combat situation could be a severe political crisis in which adversaries believe that war could break out. With swarms deployed in close proximity to each other, control software would have to react to signs of an attack within a split-second time frame – by evading or, possibly, counter-attacking in a use-them-or-lose-them situation. Even false indications of an attack – sun glint interpreted as a rocket flame, sudden and unexpected moves of the adversary, or a simple malfunction – could trigger escalation.
Autonomous weapons escalate – arms racing and accidents – they’re a threat to human survival
Mark Gubrud 16, adjunct professor in the Curriculum in Peace, War & Defense at the University of North Carolina, 6/1/2016, “Why Should We Ban Autonomous Weapons? To Survive”, IEEE Spectrum, https://spectrum.ieee.org/automaton/robotics/militaryrobots/why-should-we-ban-autonomous-weapons-to-survive
Killer robots pose a threat to all of us. In the movies, this threat is usually personified as an evil machine bent on destroying
humanity for reasons of its own. In reality, the threat comes from within us. It is the threat of war. In today’s drone warfare, people kill
other people from the safety of cubicles far away. Many do see something horrific in this. Even more are horrified by the idea of replacing the
operator with artificial intelligence, and dispatching autonomous weapons to hunt and kill without further human involvement. Proponents of
autonomous weapons say their use is inevitable and natural, a mere extension of human will and judgment through the agency of
machines. They question whether artificial intelligence will always be incapable of distinguishing civilians from combatants, or even of making reasonable tradeoffs
between military gains and risk or harm to civilians. After all, they argue, people are often cruel and stupid, and soldiers under extreme stress sometimes go berserk
and commit atrocities. What
if autonomous weapons, used judiciously, could actually save lives of soldiers and
civilians? I’ll agree that we can imagine circumstances in which using an intelligent autonomous weapon could cause less harm than a more destructive, dumb
weapon, if those were the only choices. But human-controlled robotic weapons could often be just as effective, or it might be possible to avoid violence altogether.
Autonomous weapons could malfunction, kill innocents, and nobody be held responsible. Which kind of situation
would occur most often, and whether autonomous weapons would be more or less deadly than their prohibition, assuming everything else would be the same, is
endlessly debatable. But everything else won’t be the same. Proponents
claim that machine intelligence and autonomous
weapons will revolutionize warfare, and that no nation can risk letting its enemies have a monopoly on
them. Even if this is exaggerated, it shows the potential for a strong stimulus to the global arms race. These
technologies are being pursued most vigorously by the nuclear-armed nations. In the United States, they are touted as the answer to rising challenges from China
and Russia, as well as from lesser powers armed with modern weaponry. The major powers are developing autonomous missiles and drones that will hunt ships,
subs, and tanks, and piecing together highly automated battle networks that will confront each other and have the capability of operating without human control.
Autonomous weapons are a salient point of departure in a technology-fueled arms race that puts
everyone in danger. That is why I believe we need to ban them as fast and as hard as we possibly can. A BRIGHT RED LINE It’s a view I’ve held for almost
three decades, and it wasn’t inspired by The Terminator, but by the 1988 incident in which a U.S. Navy air defense system mistakenly shot down an Iranian
airliner. Although human error appears to have played the deciding role in that incident, part of the problem was excessive reliance on complex automated systems
under time pressure and uncertain warnings of imminent danger—the classic paradigm for “accidental war.” At the time, as an intern at the Federation of American
Scientists in Washington, D.C., I was looking at nanotechnology and the rush of new capabilities that would come as we learn to build ever more complex systems
with ever smaller parts. We see that today in billion-transistor chips and the computers, robots, and machine learning systems they are making possible. I worried
about a runaway arms race. I was asked to come up with proposals for nanotechnology arms control. I decided it wasn’t about banning teeny-tiny Kalashnikovs, but
identifying the qualitatively distinct new things that emerging technologies would enable. One of my first ideas was a ban on autonomous kill decision by machines.
I knew that
most people would agree we should not have killer robots. This made lethal autonomy a bright red line at which it might be possible to erect a roadblock to the
arms race. I also knew that unless we resolved not to cross that line, we would soon enter an era in which, once the fighting had started, the complexity and speed
of automated combat, and the delegation
of lethal autonomy as a military necessity, would put the war machines
effectively beyond human control. But when I started to talk about banning killer robots, people would mostly stare. Military people angrily
denied that anyone would even consider letting machines decide when to fire guns and at what or at whom. For many years the U.S. military resisted autonomous
weapons, concerned about their legality, controllability and potential for friendly-fire accidents. Systems like the CAPTOR mine, designed to autonomously launch a
homing torpedo at a passing submarine, and the LOCAAS mini-cruise missile, designed to loiter above a battlefield and search for tanks or people to kill, were
canceled or phased out. As late as 2013, a poll conducted by Charli Carpenter, a political science professor at the University of Massachusetts Amherst, found
Americans against using autonomous weapons by 2-to-1, and tellingly, military personnel were among those most opposed to killer robots. Yet starting in 2001, the
use of armed drones by the United States began to make the question of future autonomous weapons more urgent. In a 2004 article, Juergen Altmann and I
declared that “Autonomous ‘killer robots’ should be prohibited” and added that “a human should be the decision maker when a target is to be attacked.” In 2009,
Altmann, a professor of physics at Technische Universität Dortmund, co-founded the International Committee for Robot Arms Control, and at its first conference a
year later, I suggested human control as a fundamental principle. The unacceptability of machine decision in the use of violent force could be asserted, I argued,
without need of scientific or legal justification. In 2012, Human Rights Watch began to organize the Campaign to Stop Killer Robots, a global coalition that now
includes more than 60 nongovernmental organizations. The issue rose to prominence with astonishing speed, and the United Nations Convention on Certain
Conventional Weapons (CCW) held its first “Meeting of Experts on Lethal Autonomous Weapon Systems” in May 2014, and another the following year. This past
April, the third such meeting concluded with a recommendation to form a “Group of Governmental Experts,” the next step in the process of negotiating…
something. Many statements at the CCW have endorsed human control as a guiding principle, and Altmann and I have suggested cryptographic proof of
accountable human control as a way to verify compliance with a ban on autonomous weapons. Yet the CCW has not set a definite goal for its deliberations. And in
the meantime, the killer robot arms race has taken off. FULL SPEED AHEAD In 2012, the Obama administration, via then-undersecretary of defense Ashton Carter,
directed the Pentagon to begin developing, acquiring, and using “autonomous and semi-autonomous weapon systems.” Directive 3000.09 has been widely
misperceived as a policy of caution; many accounts insist that it “requires a human in the loop.” But instead of human control, the policy sets “appropriate levels of
human judgment” as a guiding principle. It does not explain what that means, but senior officials are required to certify that autonomous weapon systems meet this
standard if they select and kill people without human intervention. The policy clearly does not forbid such systems. Rather, it permits the withdrawal of human
judgment, control, and responsibility from points of lethal decision. The policy has not stood in the way of programs such as the Long Range Anti-Ship Missile, slated
for deployment in 2018, which will hunt its targets over a wide expanse, relying on its own computers to discriminate enemy ships from civilian vessels. Weapons
like this are classified as merely “semi-autonomous” and get a green light without certification, even though they will be operating fully autonomously when they
decide which pixels and signals correspond to valid targets, and attack them with lethal force. Every technology needed to acquire, track, identify, and home in or
control firing on targets can be developed and used in “semi-autonomous weapon systems,” which can even be sent on hunt-and-kill missions as long as the quarry
has been “selected by a human operator.” (In case you’re wondering, “target selection” is defined as “The determination that an individual target or a specific group
of targets is to be engaged.”) It’s unclear that the policy stands in the way of anything. In reality, the directive signaled an upward inflection in the trend toward
killer robots. Throughout the military there is now open discussion about autonomy in future weapon systems; ambitious junior officers are tying their careers to it.
DARPA and the Navy are particularly active in efforts to develop autonomous systems, but the Air Force, Army, and Marines won’t be left out. Carter, now the
defense secretary, is heavily promoting AI and robotics programs, establishing an office in Silicon Valley and a board of advisors to be chaired by Eric Schmidt, the
executive chairman of Google’s parent company Alphabet. The message has been received globally as well. Russia in 2013 moved to create its own versions of
DARPA and of the U.S. Navy’s Laboratory for Autonomous Systems Research, and deputy prime minister Dmitry Rogozin called on Russian industry to create
weapons that “strike on their own,” pointing explicitly to American developments. China, too, has been developing its own drones and robotic weapons, mirroring
the United States (but with less noise than Russia). Britain, Israel, India, South Korea… in fact, every significant military power on Earth is looking in this direction.
Both Russia and China have engaged in aggressive actions, arms buildups, and belligerent rhetoric in recent years, and
it’s unclear whether they could be persuaded to support a ban on autonomous weapons. But we aren’t even
trying. Instead, the United States has been leading the robot arms race, both with weapons development and with a policy
that pretends to be cautious and responsible but actually clears the way for vigorous development and early use of autonomous weapons. Deputy defense
secretary Robert Work has championed the notion of a “Third Offset” in which the United States would leap to the next generation of military technologies ahead of
its “adversaries,” particularly Russia and China. To calm fears about robots taking over, he emphasizes “human-machine collaboration and combat teaming” and
says the military will use artificial intelligence and robotics to augment, not replace human warfighters. Yet he worries that adversaries may field fully autonomous
weapon systems, and says the U.S. may need to “delegate authority to machines” because “humans simply cannot operate at the same speed.” Work admits that
the United States has no monopoly on the basic enabler, information technology, which today is driven more by commercial markets than by military needs. Both
China and Russia have strong software and cyber hacking capabilities. Their latest advanced fighters, tanks, and missiles are said to rival ours in sophistication. Work
compares the present to the “inter-war period” and urges the U.S. to emulate Germany’s invention of blitzkrieg. Has he forgotten how that ended? [Image: Autonomous weapons include the DARPA Sea Hunter submarine. Photo: DARPA] DARPA and the U.S. Office of Naval Research recently unveiled the Sea Hunter, an unmanned vessel designed to track enemy submarines. The current prototype doesn't have weapons, but during a ceremony in April, deputy defense secretary Robert Work raised the possibility of arming the Sea Hunter in the future. A DISASTER WAITING TO HAPPEN Nobody wants war. Yet, fearing enemy aggression, we position
ourselves at the brink of it. Arms
races militarize societies, inflate threat perceptions, and yield a proliferation of
opportunities for accidents and mistakes. In numerous close calls during the Cold War, it came down to the judgment of one or a few people
not to take the next step in a potentially fatal chain of events. But machines simply execute their programs, as intended. They also behave in ways we did not intend
or expect. Our experience with the unpredictable failures
and unintended interactions of complex software systems, particularly competitive autonomous agents designed in secrecy by hostile teams, serves as a warning
that networks of autonomous weapons could accidentally ignite a war and, once it has started, rapidly escalate it out of control. To
set up such a
disaster waiting to happen would be foolish, but not unprecedented. It’s the type of risk we took during the Cold War, and it’s similar to
the military planning that drove the march to war in 1914. Arms races and confrontation push us to take this kind of risk. Paul Scharre, one of the architects of
Directive 3000.09, has suggested that the risk of autonomous systems acting on their own could be mitigated
by negotiating “rules of the road” and including humans in battle networks as “fail-safes.” But it’s asking
a lot of humans to remain calm when machines indicate an attack underway. By the time you sort out a
false alarm, autonomous weapons may actually have started fighting. If nations can’t agree to the simple idea of a verified
ban to avoid this danger, it seems less likely that they will be able to negotiate some complicated system of rules and safeguards.