Kai Clincy
Professor O’Neil
WRTR 1312
September 11, 2020
In his essay “‘Rise of the Machines’ Is Not a Likely Future,” Michael Littman discusses
the prospect of artificial intelligence one day dominating the human race. While much
commotion and fear surround this issue, Littman firmly argues against the possibility of
computers becoming more powerful than humanity. He addresses topics such as Moore’s
Law and Nick Bostrom’s book and elaborates on how they project a false sense of reality,
since predictions of exponential growth in artificial intelligence are often inaccurate. Referring
to Bostrom’s book, Superintelligence: Paths, Dangers, Strategies, Littman explains why Bostrom’s
predicted three-stage AI takeover is inaccurate and unrealistic. While humanity is currently
in the first stage, Littman gives reasons why the second stage violates what humans know
about artificial intelligence. To further support this claim, he turns to Moore’s Law and explains
why the predicted exponential growth of technology is unlikely, as such growth
would soon lead to computers as powerful as the entire human race. Furthermore, he reasons
about how much effort extreme technological advancement requires, as vast amounts of
resources and materials must be sacrificed for such progress to occur. Instead, Littman asserts
that there are other, more pressing safety precautions to consider as technology advances.
Concerns such as the economy crashing because of algorithmic traders, or fluctuations causing
sensitive power grids to overreact, should take priority when confronting the issue of AI
safety. Littman believes that steering technological progress by fear of its impact on the
human race is greatly detrimental; instead, he suggests that artificial intelligence can be used for
the greater good of societal advancement and well-being. So, while there is distress surrounding
the idea of artificial intelligence one day dominating the human race, Littman concludes that this
turn of events is highly unrealistic. However, artificial intelligence does pose a threat to humanity
because of its recent rapid advancements.
In the twenty-first century, artificial intelligence has made substantial advances,
and this progress could threaten the way human society currently lives. A group
of prominent thinkers called the Future of Life Institute intends to direct the
advancement of artificial intelligence down the right path, safely. When addressing the FLI, Littman
states, “the idea of dramatically changing the AI research agenda to focus on AI ‘safety’ is the
primary message of [the] group” (257). This quote conveys how important safety is to the FLI,
considering its members are willing to build their entire agenda around it. The group includes
highly credible figures, such as Elon Musk and the late Stephen Hawking, who inform the
world of the possible dangers that artificial intelligence presents to humanity. Littman does not
agree with the group’s idea of shifting the AI research agenda entirely toward safety. Instead,
Littman is quick to liken an AI uprising to the ones seen in Hollywood; but if highly
credible figures like Musk suggest that AI poses a substantial threat to humanity, their
input should be considered rather than mocked. As an extremely influential technology
leader, Musk has valid reasons to raise awareness of a possible computer uprising. AI is
now entering the stage of merging with humans, most notably and most dangerously with
the human brain. Musk is starting up a new corporation, called Neuralink, whose primary
focus is developing implantable brain-computer interfaces, and this invention poses many possible
threats to society. While it is intended only to benefit the human body, pursuing
that goal carries various risks, and Musk is aware of them. AI is reaching the point
where it can think and learn efficiently on its own, and once it is linked with the brain, the
implant, rather than the human mind, could end up doing all the thinking. This could lead to a
phenomenon in which humans become immensely dependent on AI’s power and capabilities. It is
therefore careless of Littman to dismiss the concern that AI could become a notable danger
to humans. Logically, it would be wise to heed the warnings of those who are
producing the AI themselves. Linking the human brain with artificial intelligence can ultimately
lead to human dependence on the strengths and capabilities of AI.