Bonaventura Model United Nations
26th, 27th and 28th of September
2014
Research Report
Forum: Special committee
Issue: The question whether it is ethically and morally responsible to
manufacture robot workers
Student Officer: Thomas Fassotte
Position: Deputy Chair
Introduction
As early as ancient times, people have tried to build self-operating machines that could relieve the work pressure on people. In the late nineteenth century, torpedoes that could be operated from a distance started to hit the market. Yet the first electronic robot that operated autonomously and showed some form of complex behavior was built by William Grey Walter. Robotics has developed quickly, and today we have all kinds of robots, ranging from service and educational robots to manufacturing robots. The technological advancement that allows robots to take over certain human jobs is inevitably linked with the question of whether it is morally and ethically right that they take over the jobs of human beings. To deal with problems arising from the robotic takeover of human labor, delegates should consider whether it is possible to give robots jobs that require morals, and whether this technological development will cause unemployment problems in the (near) future.
Definition of key terms
Robot: A robot is a mechanical or virtual artificial agent, usually an electro-mechanical machine that
is guided by a computer program or electronic circuitry. In practical terms, "robot" usually refers to a
machine which can be electronically programmed to carry out a variety of physical tasks or actions.
Ethics: the study of what is morally right and wrong, or a set of beliefs about what is morally right
and wrong.
Morals: standards for good or bad character and behavior.
AI: an abbreviation for ‘artificial intelligence.’ Artificial intelligence is the study of how to produce machines that have some of the qualities of the human mind, such as the ability to understand language, recognize pictures, solve problems, learn, and have emotions.
Background information
‘The debate over what technology does to work, jobs, and wages is as old as the industrial era itself. In the second decade of the nineteenth century, a group of English textile workers called the introduction of spinning frames, power looms and other machines of the Industrial Revolution a threat to their jobs,’ Erik Brynjolfsson and Andrew McAfee state in the introduction to an essay in the magazine Foreign Affairs. This sums up a core problem that many people see when talking about the ethics of robotics. Is it legitimate that robots take over the work of people, who thus lose their jobs? But beyond that: can robots be trusted with important jobs, or jobs that ask for human empathy? Who will be held responsible if a robot breaks the law or makes mistakes?
About half of all the robots in the world are in Asia; Japan is the biggest stakeholder, with a total of 40% of all the robots in the world. 32% of the robots are in Europe, North America has 16%, and the remaining 2% is divided between Africa (1%) and Australasia (1%). The complexity of robots is increasing, and the number of robots in use is rising quickly.
Statistics do show that robots take over human jobs, and there have been studies trying to estimate how many jobs will be taken over, such as the estimate shown below.
Is it ethically right to cut human jobs and replace them with robots? The financial crisis of the twenty-first century has shown the devastating effect of unemployment on many people. Yet it is also worth noticing that the use of robots greatly increases labor productivity, and it is technological development and advancement that stimulate economic growth. Besides that, there are many critics who doubt the correlation between the employment of robots and extreme job loss. For example:
‘The substantial variation of the degree to which countries deploy robots should provide clues. If robots are a substitute for human workers, then one would expect the countries with much higher investment rates in automation technology to have experienced greater employment loss in their manufacturing sectors. … Yet the evidence suggests there is essentially no relationship between the change in manufacturing employment and robot use. Despite the installation of far more robots between 1993 and 2007, Germany lost just 19 percent of its manufacturing jobs between 1996 and 2012. … Another way to look at this is to ask: How many jobs would each economy have lost if the
decline in manufacturing employment was proportional to the increase in robots? By this metric the United States should have lost one-third more manufacturing jobs than it actually did and Germany should have lost 50 percent more, while the United Kingdom lost five times more than it should have. The lesson is that the net impacts of automation on employment in manufacturing are not simple, and at least during the time frame studied here they cannot be said to have caused job losses.’ (brookings.edu, April 2015)
Robots are present in many, if not all, labor branches. Especially branches that require humans to make decisions based on morals or empathy may be confronted with ethical and legislative problems in the (near) future. For example, in the military: ‘Weapons systems currently have human operators “in the loop”, but as they grow more sophisticated, it will be possible to shift to “on the loop” operation, with machines carrying out orders autonomously,’ The Economist writes (2012). Who is to blame if a robot has decided to bomb a certain area? Do you blame the manufacturer who had the job of programming morals into the robot? Which morals do you program into a robot? Do you blame the agency that uses the robots? These are important questions to think about, but they are hard to answer.
Asimov came up with the ‘Three Laws of Robotics,’ which offer a framework for how to legislate on robots. The laws read as follows:
1. A robot may not injure a human being or, through inaction, allow a human being to come to
harm.
2. A robot must obey the orders given it by human beings except where such orders would
conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the
First or Second Law.
In 2012 a project named ‘Robolaw’ was launched (see Major Parties Involved), and the organization has produced a document titled ‘Guidelines on Regulating Robotics.’ The document contains a set of recommendations intended to help legislators in the European Union manage the upcoming robot age. This body, consisting of many researchers, concluded that the ‘Three Laws of Robotics’ are very likely to fail and are not the best approach to this problem (for more information, see ‘Major Parties Involved’).
Major Parties Involved
There are no real major parties, seeing that all nations that use robots of any kind are faced with the problem of moral codes and ethical questions. Japan can be specifically mentioned, given the wide extent to which it uses robots.
Japan: Japan is a major party simply because it has by far the largest number of robots of any country, both relatively and nominally, accounting for about 40% of the robots in the world.
Robolaw: Robolaw is a project funded primarily by the European Union. Its goal was to investigate the ways in which technologies in the field of robotics have challenged the law, and then to give countries, or political bodies like the EU, guidelines on how to establish a solid framework
of ‘robolaw’ (within the EU, though it can be relevant outside the EU as well). For example, they warn against excessively restrictive legislation, because that might discourage innovation and economic growth. The document is largely a discussion of policies, and it has been influential in debates concerning robotics.
Timeline of key events
1942: Asimov’s short story Runaround states the Three Laws (see Background Information). They later became a staple of science fiction.
1949: the first electronic autonomous robot with complex behavior was created by William Grey Walter.
1954: the first digitally operated and programmable robot was invented by George Devol.
2010: a research workshop by the Engineering and Physical Sciences Research Council (EPSRC) and the Arts and Humanities Research Council (AHRC) of Great Britain produced a set of principles and rules on the use of robots and their intelligence.
2014: the Robolaw project was finished. It produced an influential document that provides guidelines for creating legal frameworks and possible policies for robots. This includes the argument that robots should always be treated as objects and not as subjects, thus legitimizing the fact that robots do not receive rights.
Previous Solutions
As you have read throughout this report, there have been many ideas about what to do with artificial intelligence. Some of the concerns (for example, about the danger of replacing humans with robots in very dangerous jobs or in jobs that require moral judgement) will remain relevant in the (near) future. There are countries, especially in Asia, that have adopted laws on the ethics of robots, but also laws which state to what extent the use of robots is allowed. Similarly, the US state of Nevada has passed a law that authorizes the use of driverless cars.
Also interesting are the many conferences that have produced principles for the use of robots (like Robolaw, mentioned in the Timeline of Key Events and the Background Information section). One other example: the Engineering and Physical Sciences Research Council (EPSRC) and the Arts and Humanities Research Council (AHRC) of Great Britain have drafted the following principles and rules:
1. ‘Robots should not be designed solely or primarily to kill or harm humans.
2. Humans, not robots, are responsible agents. Robots are tools designed to achieve human
goals.
3. Robots should be designed in ways that assure their safety and security.
4. Robots are artifacts; they should not be designed to exploit vulnerable users by evoking an
emotional response or dependency. It should always be possible to tell a robot from a
human.
5. It should always be possible to find out who is legally responsible for a robot.
The messages intended to be conveyed were:
1. We believe robots have the potential to provide immense positive impact to society. We
want to encourage responsible robot research.
2. Bad practice hurts us all.
3. Addressing obvious public concerns will help us all make progress.
4. It is important to demonstrate that we, as roboticists, are committed to the best possible
standards of practice.
5. To understand the context and consequences of our research, we should work with experts
from other disciplines, including: social sciences, law, philosophy and the arts.
6. We should consider the ethics of transparency: are there limits to what should be openly
available?
7. When we see erroneous accounts in the press, we commit to take the time to contact the
reporting journalists.’
Possible solutions
In my opinion, solutions can be sought in a nuanced but relevant view on robotics. By nuanced I mean that simply forbidding robots, or allowing anything in the area of robotics, does not seem to be an effective solution, because the former would restrict economic innovation and the latter would pay no attention to the dangers.
A solution should entail clear consideration of the principles outlined by experts and, of course, possible creativity supported by solid argumentation. Yet it should also take into account the clear difference between remotely controlled robots, robots with intelligence nowhere close to human intelligence, and hyper-intelligent AI instruments that can make decisions for themselves.
For example, the principle outlined by the EPSRC and AHRC that ‘robots should not be designed solely or primarily to kill or harm humans’ can provide a good basis for a clause that restricts the use of robots in very morally loaded aspects of the military. Yet robots can be helpful when they, for example, take away danger for humans on the battlefield (think of remotely controlled robots that defuse bombs).
In this manner, think of clauses that fit your country’s interests best, but that are still relevant for today, incorporate possibly interesting principles, and show clear distinctions between the kinds of robots that are meant.
Bibliography
https://en.wikipedia.org/wiki/Robot
http://www.wired.com/2014/07/moral-legal-hazards-robot-future/
http://www.links999.net/robotics/robots/robots_ethical.html
http://www.economist.com/node/21556234
https://nl.wikipedia.org/wiki/Robot
http://www.economist.com/blogs/babbage/2014/09/robot-jurisprudence
http://www.brookings.edu/blogs/the-avenue/posts/2015/04/29-robots-manufacturing-jobsandes-muro
(an interesting perspective: an essay on why there is no real correlation between net job loss and the use of robots in manufacturing)