Think Tank
Humans v. Robots: where are the limits of what an autonomous
system should do?
On May 9th, 2012, UCL Centre for Ethics and Law hosted its fourth Think Tank, titled
‘Humans v. Robots: where are the limits of what an autonomous system should do?’. Mark
Serföző, Chief Counsel of Compliance & Regulation from BAE Systems, opened the event
with a presentation that introduced the key concepts related to autonomous systems and
showcased autonomous vehicles developed by BAE and Google. A second presentation was
given by Professor Stephen Hailes from the UCL Department of Computer Science, in which autonomous systems were discussed from a theoretical point of view, with particular attention to their potential unpredictability. The discussion session saw creators, legislators,
and users of autonomous systems engage in a lively debate, summarized under three key
words below.
Responsibility
As an autonomous system is capable of acting on its own, yet does not constitute an
independent legal or moral entity, who should be responsible when its actions lead to grave
consequences, such as when a malfunctioning self-driving car hits a pedestrian? We can try to
adapt familiar legal concepts of vicarious or product liability, but doing so often seems
inappropriate, as autonomous systems are neither products nor people. We are then forced to
ask not just who should take responsibility for autonomous systems, but who is actually responsible for what they do.
This is a difficult and complex question. Several parties may potentially be held responsible,
including the user who operated the system, the manufacturer who produced it, the engineers who designed it, and so on, because all of them played a part in this unfortunate incident: the user because he decided to switch the vehicle into autonomous mode; the manufacturer and its engineers because the car's behaviour in autonomous mode is controlled by programs of their creation.
It can be argued that the driver of the car should bear the ultimate responsibility because that
driver is most closely connected to the autonomous system’s failure. It was the driver’s
decision to hand over control to the autonomous system and, as such, he may be seen as a cause of the accident, the human closest in time to it. The user of an autonomous system that subsequently fails may often feel as though they have caused the accident, being as close to it as they are in both time and space. It would be unusual not to
feel responsible, even reproachable. However, situations in which we feel responsible are not
necessarily always the situations in which we are responsible. While the question of whether
someone is responsible is a legal one and has a bearing on the general public, the attempt to
dictate whether someone should feel responsible borders upon meddling with private affairs.
It may be straightforward to blame the driver if he or she was at fault for switching to
autonomous mode: if, for example, they ignored safety guidance or warnings. Equally, blaming
the manufacturer may be unproblematic if they were somehow at fault: if they provided a
defective product, for example. However, failures can occur without human error, and if the
user adheres to product manuals and safety guidelines, there seems to be legitimate reason to excuse his confidence in the robustness of the product, since such an expectation is generally reasonable. Autonomous systems present new and unfamiliar challenges, and we
must be careful not to expect users or producers of autonomous systems to have perfect
foresight or act perfectly.
Some autonomous systems may genuinely be said to possess a mind of their own. They may
have self-learning capacities and be able to develop independently of human involvement.
When such systems fail, they present a difficult problem of causation. It is hard to see how we
can be said to have caused the failures of such independent systems simply because we
brought them into existence, just as we often do not cause the failures of our children –
though we may nevertheless be responsible for such failures.
We might be tempted to avoid getting lost in the complexity of the problem and content ourselves with
blaming the system itself. However, to abandon the assignment of responsibility to humans in
such cases of misfortune would arguably undermine how we interact with one another as
humans.
Trust
Even when autonomous systems have a perfect safety record, we are still very often troubled
by relying on those systems: can we really trust them? What may come as a surprise to some
is that, consciously or not, we are already entrusting our lives to systems with varying degrees of autonomy. Aircraft, lifts, underground trains, and many other systems operate with the assistance of autonomous systems, or even rely on them to a great extent, and they have all become safer as a result. For instance, aviation accident rates have declined since the introduction of autopilot systems, and most accidents today are due to human error.
Apart from, or perhaps instead of, relying upon faith in the expertise of the actual
manufacturers and service providers, developing the regulatory framework that governs these
industries may help us become more confident in autonomous systems. With a culture of professionalism deeply ingrained in us, we take it for granted that people at every stage will abide by their codes of conduct and adhere to standards; regulation, by contrast, offers more concrete and widely applicable assurances. Admittedly, though, not everyone has faith in the power of regulation. It is almost always playing catch-up with reality, usually complicated, not always based on expert opinion, and never free of loopholes.
Whatever may be giving us confidence, we do seem to believe in the whole system every time
we board an airplane or a train. We are not deterred by the risk inherent in these activities,
probably because we are familiar with them. By the same logic, our phobia of novel autonomous systems could be due in large part to our lack of experience with them. Most of us do not fully understand the underlying workings of autonomous systems. Fully understanding them may sometimes be impossible, for example if they are self-learning or form part of a number of systems that can interact in unpredictable ways. This clashes with our psychological need to feel in control, and the result is mistrust.
Given such scepticism, it is perhaps not surprising that few people, if any, would be willing to put their children in the care of hypothetical robot educators in lieu of conventional human-mediated education, even if it meant a guaranteed improvement in marks. Curiously, a sizeable
number of people would not mind being taken care of by robots themselves, or having robots
undertake the duty of looking after the elderly who otherwise would not receive quality care
at all. This discrepancy is interesting, and one is rightly concerned about our readiness to entrust the elderly to robots' care.
It could be said that it is an unfair comparison. For children with parents, there is obviously a
better alternative than spending the whole day with robots, and it is also the parents’ duty to
ensure their children receive proper schooling; for the elderly without any family connections, having robot caretakers is perhaps better than nothing and would seem, at least superficially, an improvement in quality of life. On the other hand, though, this attitude of compromise where the elderly are concerned may simply reflect the degree to which ethics is being eroded in our society at the moment.
We feel uneasy about some autonomous systems because they seem worse than the human alternative. This may be resolved to an extent by considering more advanced systems, moving
from talking in terms of ‘robots’ to the sorts of rich and interactive virtual environments that
autonomous systems can create. Even the best systems, though – the perfect virtual educators
– may not be what we want, simply because they are not human. It is generally agreed that
autonomous systems will be most beneficial if they work with humans, while total
substitution for everything human would be wholly undesirable.
Change
Though we are still some distance away from humanoid robots, autonomous systems are
already pervasive in our lives, infiltrating areas far beyond transport. The niche where they really flourish is the digital realm. From trading algorithms to tweeting bots to search engines, they are everywhere in the Cloud, often obscured from our sight, yet exerting ever-increasing influence upon us.
What has attracted the attention of many is the way Google search operates. More
specifically, the search engine can anticipate what we want to see and rank search results
accordingly, so that the entries most to our liking are displayed at the top. Thus, when using Google to search for news, for example, we will be shown articles that tend to agree with our take on events, and over time we run the danger of being shielded from alternative views and incessantly reinforcing our own convictions. This so-called ‘Google bubble’ is worrying because it could misrepresent reality and deepen our pre-existing biases. But is this phenomenon merely an emergent effect of the self-learning algorithms behind Google's search engine?
It may reasonably be suspected that the search engine was deliberately designed this way to gain users' favour, thereby encouraging loyalty, which would translate into a large user base and eventually more profits from advertisements. Such a strategy may not be so dissimilar from the practice of newspapers. However, there are a few points to consider: firstly, Google dominates its market to a far greater extent than any newspaper; secondly, despite its editorial leanings, a newspaper will strive for a relatively broad representation to cater for a readership that inevitably contains a certain degree of heterogeneity, whereas Google search results are extraordinarily targeted and personalized; thirdly, many people may well be aware of the bias of their favourite newspaper, but when they use a search engine, they are not expecting a
selective filter at work. It is not only Google: Facebook’s algorithms influence which friends
we interact with; Twitter is ridden with bot-generated tweets; Wikipedia’s maintenance also
heavily features bots. The motive behind these systems is often claimed to be altruistic,
directed at providing the content we want to see, yet they remain troubling. When we consider
the possibility that malevolent administrators can also use these systems for direct
manipulation, we find even more cause for concern.
Given this reality, it is imperative to educate people about the nature of the ‘algo-world’. An
understanding of basic concepts such as ‘system’ and ‘algorithm’ would help people make
much more sense of this digital world we are living in, although how this could be achieved
remains an open question.
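To make the idea of a personalising algorithm more concrete, the following is a minimal sketch in Python. It is purely illustrative and does not represent Google's actual system; the topics, scores, and weighting are hypothetical. It shows how a simple ranker might boost results matching a user's past clicks, producing the kind of self-reinforcing feed described above.

```python
# Toy illustration of preference-based re-ranking (hypothetical; not Google's algorithm).
# Each result has a base relevance score; the ranker boosts results whose topics
# the user has clicked on before, so the feed gradually mirrors past behaviour.

from collections import Counter

def personalised_rank(results, click_history):
    """results: list of (title, topic, base_score); click_history: list of topics clicked."""
    clicks = Counter(click_history)
    total = sum(clicks.values()) or 1  # avoid division by zero for a new user

    def score(result):
        _, topic, base_score = result
        preference = clicks[topic] / total      # share of past clicks on this topic
        return base_score * (1.0 + preference)  # boost familiar topics

    return sorted(results, key=score, reverse=True)

if __name__ == "__main__":
    results = [
        ("Economy shrinks again", "politics-left", 0.70),
        ("Tax cuts hailed a success", "politics-right", 0.78),
        ("New species of beetle found", "science", 0.75),
    ]
    # A user who has mostly clicked one kind of story sees it pushed to the top,
    # even though it had the lowest base relevance score.
    history = ["politics-left", "politics-left", "science"]
    for title, topic, _ in personalised_rank(results, history):
        print(title)
```

Even a mechanism this crude exhibits the feedback loop at issue: the more a user clicks one kind of story, the more prominently that kind of story is shown to them.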
Of course, the prospect of autonomous systems is not all gloomy. Even technology that is a cause of concern can be used for good. For instance, an adaptive system similar to Google's search engine could be used to identify what one does not know, which would be of great use to students. As long as we keep a firm hold on the wheel, the development of autonomous systems will be a great force for advancing science and technology.
Suggested Further Reading:
The Royal Academy of Engineering (2009) Autonomous Systems: Social, Legal and Ethical
Issues. London.
Grodzinsky FS, Miller KW, Wolf MJ (2008) The ethics of designing artificial agents. Ethics
and Information Technology 10:115–121.
Anderson K, Waxman M (2012) Law and ethics for robot soldiers. Policy Review.