
Problem Solving Essay

Morality and Autonomous Vehicles
I want to focus on the topic of ethics and morality in smart cities. Without a doubt, smart cities of the future will face ethical issues, whether they concern information accessibility or moral dilemmas. More specifically, I want to address autonomous vehicles and the moral issues they face.
Autonomous technology is remarkable and enables new efficiencies and operations, but a substantial amount of work is still needed to make this technology safe and comfortable enough for everyday use. MIT recently created a project called the Moral Machine, where participants choose between various moral dilemmas such as the trolley problem discussed in class (Awad et al., 2018). While the report indicates what individuals tend to do, that alone is not enough to tell us what individuals should do, which leads to the main problem: we cannot simply instill our own moral theory into the machine.
We already know that autonomous vehicles need to align with our values and act as our proxies, that they should be able to perform better than humans, and that they need machine learning to get there, which is a key enabling feature of AI. A chart from Mandelaro (2018) compares three different forms of AI learning to play the game Go. The most interesting for our case is reinforcement learning, in which the AI is given a small amount of data and a sense of value, or reward; combined with training, this allows the algorithm to make decisions that maximize that reward. From the small dataset, the AI becomes very capable, eventually surpassing the other types of AI and beating the benchmark of human expert play, represented in the chart by a dashed line.
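To make the idea of reward maximization concrete, here is a minimal, purely illustrative sketch of tabular Q-learning on a toy driving task. The states, actions, and reward values are assumptions made up for the example, not anything used in a real vehicle or in the systems discussed above.

```python
# Minimal sketch of reinforcement learning (tabular Q-learning) on a toy
# one-dimensional "driving" task. The environment, actions, and rewards are
# illustrative assumptions only.
import random
from collections import defaultdict

ACTIONS = ["slow", "fast"]   # hypothetical driving actions
GOAL, HAZARD = 5, 3          # reaching position 5 is good, position 3 is risky

def step(state, action):
    """Advance the toy environment and return (next_state, reward, done)."""
    next_state = state + (2 if action == "fast" else 1)
    if next_state == HAZARD:
        return next_state, -10.0, True   # large penalty: the agent caused harm
    if next_state >= GOAL:
        return GOAL, 10.0, True          # reward: trip completed safely
    return next_state, -1.0, False       # small cost per move (time spent)

q = defaultdict(float)                    # Q-values start with no knowledge
alpha, gamma, epsilon = 0.5, 0.9, 0.1     # learning rate, discount, exploration

for episode in range(2000):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit what was learned, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state, reward, done = step(state, action)
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        # Q-learning update: nudge the value toward reward + discounted future value.
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# After training, the learned policy avoids the hazard on its way to the goal.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(GOAL)})
```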
Now, we cannot use the same algorithm for autonomous vehicles, because the rules of Go are defined quite precisely, while the rules for self-driving cars are not: they include moral rules, and we cannot be the ones to hard-code those into the algorithm, because our morals might be wrong, leading to catastrophic issues. There is also another issue: we do not know exactly what could happen next. Even though Go offers many different options, we can still enumerate them, which is simply not possible in the real world.
To solve the first issue of not having any defined "rules", something needs to be created that can be given to the algorithm innately. It needs some notion of primitive value. These values have to be fundamental and unwavering; they have to be the kind of moral truth that most if not all of society would agree with, for example, that all human life has some moral worth. From there, we can allow the machine itself to do some of the work instead of us providing the guidelines and risking getting them wrong. These values act as a base for the algorithm, which can then figure out for itself which decisions lead to good moves that align with our values and which lead to negative consequences. Regardless, we still need to solve the issue of setting the correct primitive values. To resolve this, a separate system might be required that has some knowledge of physical laws. While this is immensely sophisticated, it is not unthinkable: automotive manufacturers have vast amounts of data on car crashes and on what people do wrong, so we already have a general sense of which driving actions are correct and which are not. Even though we have discussed that data-driven approaches tend to exclude certain groups in cities, they are frankly the only way to get a general idea of driving conditions and to use that as a basis for the primitive values.
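As a rough illustration of what a fixed moral core plus a learned component might look like, the sketch below hard-codes a small set of primitive values inside the reward function and leaves everything else to a placeholder learned term. The outcome fields, weights, and names are assumptions for the example, not a real manufacturer's specification.

```python
# Sketch: primitive values as the fixed core of a reward function, with the
# rest learned from data. All fields and weights are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Outcome:
    humans_harmed: int          # predicted number of people injured
    traffic_rule_broken: bool   # whether the manoeuvre violates a known rule
    minutes_saved: float        # efficiency gain of the manoeuvre

# Primitive values: fundamental and unwavering, never re-learned by the agent.
HUMAN_LIFE_PENALTY = -1_000_000.0   # "all human life has some moral worth"
RULE_PENALTY = -50.0                # crash data suggests rule-breaking is usually wrong

def primitive_value(outcome: Outcome) -> float:
    """Fixed moral core that training is never allowed to override."""
    score = outcome.humans_harmed * HUMAN_LIFE_PENALTY
    if outcome.traffic_rule_broken:
        score += RULE_PENALTY
    return score

def learned_value(outcome: Outcome) -> float:
    """Placeholder for the data-driven part the algorithm figures out itself."""
    return outcome.minutes_saved   # in practice, a trained model would go here

def reward(outcome: Outcome) -> float:
    # The primitive core dominates; learned preferences only refine behaviour
    # among outcomes the core already considers acceptable.
    return primitive_value(outcome) + learned_value(outcome)

print(reward(Outcome(humans_harmed=0, traffic_rule_broken=False, minutes_saved=2.0)))
print(reward(Outcome(humans_harmed=1, traffic_rule_broken=False, minutes_saved=5.0)))
```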
Now, it may seem that providing a tiny bit of primitive moral truth and trusting that the machine gets it right is simply not enough, and of course it is not. What we could do instead is start with this idea and then, during the training of the machine's algorithm, create checkpoints along the way where scenarios are constructed to test what the machine will do. From there, we can judge whether or not the response is morally acceptable. If the decision is incorrect, we can provide that feedback to the machine, and instead of specifying the change ourselves by modifying the code, we can have the machine adjust its own decisions until we agree with what it outputs.
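The sketch below illustrates this checkpoint-and-feedback loop in miniature: training pauses, the current behaviour is tested on constructed moral scenarios, and human verdicts are folded back in as penalties rather than hand-edits to the decision logic. The scenario names, the toy threshold "policy", and the stand-in reviewer are assumptions made for illustration only.

```python
# Sketch of a checkpoint-and-feedback loop: test at checkpoints, record human
# disagreement as a penalty, and let the policy shift its own decisions.
import random

scenarios = ["swerve_vs_brake", "occupant_vs_pedestrian"]   # hypothetical test cases

# The "policy" here is just a per-scenario score: above zero it chooses the
# risky action, below zero the cautious one. A real system would be a learned model.
policy = {s: random.uniform(-1.0, 1.0) for s in scenarios}
penalties = {s: 0.0 for s in scenarios}

def decide(scenario):
    return "risky" if policy[scenario] + penalties[scenario] > 0 else "cautious"

def human_judgement(scenario, decision):
    # Stand-in for a reviewer: in this toy, the cautious choice is always the
    # morally acceptable one.
    return decision == "cautious"

for checkpoint in range(5):
    # ... a block of ordinary training would happen here ...
    for s in scenarios:
        decision = decide(s)
        if not human_judgement(s, decision):
            # Instead of hand-editing the policy, record a penalty; the machine
            # shifts its own decision until reviewers agree with the output.
            penalties[s] -= 0.5
    print(f"checkpoint {checkpoint}: " +
          ", ".join(f"{s}={decide(s)}" for s in scenarios))
```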
Self-driving cars are the future of mobility, especially mobility in smart cities. I still believe that smart cities will use advanced technologies and information to enhance citizen well-being, service levels, and economic development, and self-driving cars are a major component in achieving all of these aims.
References:
Awad, E., Dsouza, S., Kim, R., Schulz, J., Henrich, J., Shariff, A., Bonnefon, J.-F., & Rahwan, I. (2018, October 24). The Moral Machine experiment. Nature. Retrieved March 3, 2022, from https://www.nature.com/articles/s41586-018-0637-6
Mandelaro, J. (2018, May 17). The ethics of autonomous vehicles. University of Rochester NewsCenter. Retrieved March 3, 2022, from https://www.rochester.edu/newscenter/josh-pachter-18-and-the-ethics-of-autonomous-vehicles-320582/