
Understanding Moral Responsibility in
Systems with Autonomous Intelligent Robots
Gordana DODIG-CRNKOVIC1 and Daniel PERSSON2
School of Innovation, Design and Engineering,
Mälardalen University, Västerås, Sweden
Abstract. According to the widespread classical view in philosophy, free
will is essential for an agent to be assigned responsibility. Pragmatic
approaches (Dennett, Strawson) on the other hand focus on social,
organizational and role-assignment aspects of responsibility. We argue that
for all practical purposes, moral responsibility in autonomous intelligent
systems is best viewed as a regulatory mechanism whose aim is to assure the
expected overall behavior of a multi-agent system. An intelligent agent can
thus be ascribed “responsibility” in much the same way as it is ascribed
(artificial) “intelligence”. We expect a (morally) responsible artificial
intelligent agent to behave in a way that is traditionally thought to require
human (moral) responsibility. Implementing moral responsibility in an
intelligent agent is not a trivial task, but that is a separate question,
belonging to the field of artificial intelligence and the emerging discipline
of artificial morality. To be assigned individual responsibility, a
technological artifact must be seen as part of a socio-technological system
with distributed responsibilities. This pragmatic formulation is a step
towards the sharing of responsibility for tasks between humans and artificial
intelligent agents. We connect the philosophical approach to the problem of
(moral) responsibility with practical applications in multi-agent
technologies. Artificial morality is the
ground where computing and philosophy meet and inform each other.
Research in modeling, simulation, and experiments with artificial intelligent
agents will help us better understand the phenomenon of moral
responsibility in multi-agent intelligent systems, including those consisting
of humans and/or robots.
Recently, Roboethics and Machine Ethics, two new fields of applied ethics,
have been established, which have brought about many novel insights
regarding ethics and artificial intelligence. Adding ethical rules of behavior to
softbots, for example, seems both useful and practical. The ethics of an
artifact can be based on static or dynamic rules (norms) and value systems.
One of the important ethical questions for intelligent systems is whether
robots can (even in
principle) be attributed (moral) responsibility.
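
To make the distinction between static and dynamic rules concrete, the following minimal sketch (in Python; the EthicalSoftbot class and the example norms are our own illustrative assumptions, not an established design) shows a softbot whose actions are screened both by a fixed, design-time value system and by norms adopted at runtime:

from dataclasses import dataclass, field
from typing import Callable, Dict, List

Action = Dict[str, object]        # e.g. {"kind": "share_data", "consent": False}
Norm = Callable[[Action], bool]   # a norm permits (True) or forbids (False) an action

@dataclass
class EthicalSoftbot:
    static_norms: List[Norm]                                  # fixed at design time
    dynamic_norms: List[Norm] = field(default_factory=list)   # revisable at runtime

    def permitted(self, action: Action) -> bool:
        # an action must pass every norm, static and dynamic alike
        return all(norm(action) for norm in self.static_norms + self.dynamic_norms)

    def adopt_norm(self, norm: Norm) -> None:
        # dynamic rule change: the value system can evolve while deployed
        self.dynamic_norms.append(norm)

bot = EthicalSoftbot(static_norms=[lambda a: a.get("kind") != "harm_human"])
bot.adopt_norm(lambda a: not (a.get("kind") == "share_data" and not a.get("consent")))
print(bot.permitted({"kind": "share_data", "consent": False}))   # False

The design choice is deliberately simple: static norms encode the non-negotiable part of the value system, while dynamic norms let the surrounding socio-technological system revise the artifact's behavior after deployment.
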
A common argument against ascribing moral responsibility to artificial
intelligent systems is that they do not have free will and intentionality. The
problem with this argument is that it is nearly impossible to know what
such an intentional mental state would entail. In fact, even for humans,
intentionality is ascribed on the basis of observed behavior, since we have no
access to the inner workings of human minds; indeed, we have far less access
to them than to the inner workings of a computing system.
1 gordana.dodig-crnkovic@mdh.se
2 dpn04001@student.mdh.se
Arguments against ascribing moral responsibility to artificial intelligent
agents stem from a view of an artificial intelligent system as an essentially
isolated entity. We argue, however, that to address the question of moral
responsibility
we must see an agent as a part of a larger socio-technological system. From
such a perspective, ascribing responsibility to an intelligent agent has primarily
a regulatory role. By analogy with the definition of artificial intelligence
as the ability of an artifact to accomplish tasks that are traditionally
thought to require human intelligence, we can define artificial (moral)
responsibility as the
ability of an artifact to behave in a way that is traditionally thought to
require human moral responsibility.
Having accepted the above functionalist approach to moral responsibility, the
next step is to model our understanding of the complex system of
responsibility relationships in different human groups (project organizations,
institutions, and similar), and to simulate and implement the corresponding
control patterns in artificial multi-agent systems. Delegation of tasks is
followed by
distribution of responsibilities, so it is important to know how the balance of
responsibilities between different actors in the system is achieved.
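
One way to make such a balance inspectable is sketched below: a toy model under the assumption that a delegating actor retains a fraction of its responsibility rather than transferring it wholesale (the ResponsibilityLedger name and the 30% retained share are purely illustrative choices, not taken from the literature):

from collections import defaultdict

class ResponsibilityLedger:
    """Tracks, per task, how responsibility is shared among actors."""

    def __init__(self):
        self.shares = defaultdict(dict)   # task -> {actor: share of responsibility}

    def assign(self, task, actor, share=1.0):
        self.shares[task][actor] = self.shares[task].get(actor, 0.0) + share

    def delegate(self, task, src, dst, retained=0.3):
        # delegation hands over the task, but the delegator retains a fraction
        total = self.shares[task].pop(src, 0.0)
        self.assign(task, src, total * retained)
        self.assign(task, dst, total * (1 - retained))

    def balance(self, task):
        return dict(self.shares[task])

ledger = ResponsibilityLedger()
ledger.assign("deploy-robot", "designer")
ledger.delegate("deploy-robot", "designer", "operator")
ledger.delegate("deploy-robot", "operator", "robot")
print(ledger.balance("deploy-robot"))
# e.g. {'designer': 0.3, 'operator': 0.21, 'robot': 0.49} (up to float rounding)

On this toy model, responsibility is conserved under delegation: the shares of all actors involved in a task always sum to the originally assigned amount, which makes the balance between actors explicit and auditable.
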
Based on experience with safety-critical systems such as nuclear power,
aerospace, and transportation systems, one can say that the
socio-technological structure which supports their functioning entails control
mechanisms providing for their (re)production and maintenance, and their
proper deployment and use, including safety barriers preventing and mitigating
malfunction. The central and most important part is to assure the safe
functioning of the system under normal conditions, which is complemented by
preparedness for abnormal/accidental condition mitigation.
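
The division between normal operation and prepared mitigation can be illustrated by a minimal dispatch sketch (hypothetical names and thresholds; a pattern illustration, not an actual safety-critical design):

def safe_step(reading, nominal, mitigate, low=0.0, high=100.0):
    """Safety barrier: dispatch to normal control inside [low, high],
    otherwise to the prepared abnormal-condition handler."""
    if low <= reading <= high:
        return nominal(reading)    # normal operating conditions
    return mitigate(reading)       # abnormal/accidental condition mitigation

# Toy handlers for, e.g., a temperature-controlled process.
nominal = lambda t: f"regulate at {t:.1f}"
mitigate = lambda t: "shut down and alert operators"
print(safe_step(42.0, nominal, mitigate))    # regulate at 42.0
print(safe_step(180.0, nominal, mitigate))   # shut down and alert operators
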
The deployment of intelligent systems must consequently rely on several
responsibility control loops: awareness of and preparedness for handling risks
on the part of designers, producers, implementers, users, and maintenance
personnel, as well as the understanding of society at large, which will
provide feedback on the consequences of the use of robots back to designers
and producers. This complex system of shared responsibilities should secure
the safe functioning of the whole distributed responsibility system, including
autonomous (morally) responsible intelligent robots (softbots).
In a distributed open application, mutual communication of autonomous
intelligent agents can be studied as an emergent property of a complex
computational system. One promising approach to the modeling and simulation of
such communicating systems combines game theory and evolutionary computing.
Psycho-social control mechanisms of approval and disapproval typical of
human groups will be replaced by functional equivalents in artificial agents.
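
As an illustration of this idea, consider the following toy evolutionary simulation, in which a reputation score serves as the functional equivalent of approval and disapproval (the donation-game payoffs, the image-scoring reputation rule, and the imitation dynamics are all assumed for illustration, not taken from any particular study):

import random

N, ROUNDS, B, C = 50, 4000, 3.0, 1.0             # population size, rounds, benefit, cost
strategy = [random.random() for _ in range(N)]   # propensity to help approved partners
reputation = [0] * N                             # accumulated approval/disapproval
payoff = [0.0] * N

for _ in range(ROUNDS):
    donor, recipient = random.sample(range(N), 2)
    # the donor helps only partners in good standing (non-negative reputation)
    if reputation[recipient] >= 0 and random.random() < strategy[donor]:
        payoff[donor] -= C
        payoff[recipient] += B
        reputation[donor] += 1    # approval: helping is observed and rewarded
    else:
        reputation[donor] -= 1    # disapproval: refusal is observed and sanctioned
    # imitation dynamics: occasionally the poorer of the pair copies the richer one
    if random.random() < 0.1:
        lo, hi = (donor, recipient) if payoff[donor] < payoff[recipient] else (recipient, donor)
        strategy[lo] = strategy[hi]

print(f"mean propensity to help: {sum(strategy) / N:.2f}")

One would expect agents who help partners in good standing to accumulate both approval and imitators; whether and under what conditions this happens is exactly the kind of question such simulations can probe.
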
In sum: advanced autonomous intelligent artificial agents will, at a certain
stage of development, be capable of taking moral responsibility. In order to
learn how to implement this feature in intelligent robots (softbots), a good
starting point is to learn from humans in similar situations where the sharing
of moral responsibility takes place.