Conference Session: A12
Paper #6257
Disclaimer — This paper partially fulfills a writing requirement for first year (freshman) engineering students at the University of Pittsburgh Swanson School of Engineering. This paper is a student paper, not a professional paper. This paper is based on publicly available information and may not provide complete analyses of all relevant data. If this paper is used for any purpose other than these authors’ partial fulfillment of a writing requirement for first year (freshman) engineering students at the University of Pittsburgh Swanson School of Engineering, the user does so at his or her own risk.
Stefan Lambert, sdl38@pitt.edu, Mahboobin 4:00, Alex Duncan, ajd122@pitt.edu, Mena 6:00
Revised Proposal
This paper will outline the myriad benefits and risks associated with creating and developing artificial intelligence (AI) systems. Artificial intelligence is defined as intelligence exhibited by robots, machines, and software. It was born of the goal of making computers more human-like and flexible. Researchers asked, “What if we could develop a computer that makes its own reasonable decisions without being commanded to do so?” The result was AI, advanced software with the ability to make autonomous decisions based upon the logic of its own programming. Such technology has a place in every industry on the planet, improving efficiency, managing production standards, and even furthering research and science itself [1]. That is to say, at its highest level AI will be able not only to function autonomously, but to work and evolve of its own accord [2]. Creating a computer program that is not only extremely powerful but also autonomous is a goal that has never been achieved to the extent of its potential. At present, artificial intelligence uses tools such as logic, optimization, and statistics to deduce an output. However, the number of these functions that must be assigned in order to attain a fully autonomous program seems almost infinite [3]. The program must be able to handle the millions of decisions that any independent being makes on a daily basis. This dilemma is the primary reason that true artificial intelligence has been so difficult to achieve.
However, as technology improves every day and new scientists join the field, the time when true artificial intelligence exists approaches. With this powerful technology comes great promise both to improve the quality of everyday life and to push the bounds of science. As is always the case with revolutionary technology, though, inherent dangers abound. British physicist Dr. Stephen Hawking went so far as to say, in a December 2014 interview, that AI “could spell the end of the human race.” The majority of dangers, which will be expanded upon later, revolve around the idea of an AI that surpasses its human creators. This is the greatest threat posed by this revolution in technology, and as such AI has been compared to gunpowder and nuclear weapons in the magnitude of its ramifications for society [4]. Throughout this paper, we will therefore discuss the numerous benefits and dangers artificial intelligence presents to our planet. As with any powerful new technology, AI has the potential to usher in a new age of prosperity, or to further endanger our race as a whole.
[1] A. Pannu. (2015). “Artificial Intelligence and its Application in Different Areas.” International Journal of Engineering and Innovative Technology (scientific journal). http://www.ijeit.com/Vol%204/Issue%2010/IJEIT1412201504_15.pdf
[2] T. Walsh. (2015). “Artificial Intelligence should benefit society, not create threats.” PHYS.ORG (online article). http://phys.org/news/2015-01-artificial-intelligence-benefit-society-threats.html
[3] E. Yudkowsky. (2008). “Artificial Intelligence as a Positive and Negative Factor in Global Risk.” Machine Intelligence Research Institute (research paper). https://intelligence.org/files/AIPosNegFactor.pdf
[4] S. Russell. (28 May 2015). “Take a stand on AI weapons.” Nature (scientific journal). Vol. 521, pp. 415-416.
M. Castelli, L. Vanneschi, A. Popovic. (2015). “Predicting Burned Areas of Forest Fires: An Artificial Intelligence Approach.” Fire Ecology (research paper). https://www.researchgate.net/publication/274889641_Predicting_Burned_Areas_of_Forest_Fires_An_Artificial_Intelligence_Approach
This paper, authored by two professors and a software developer, details recent experiments applying artificial intelligence in a highly practical fashion. By analyzing the results of the research paper, readers can see that the AI was effective at predicting the path of forest fires, even with limited information to work from. These results show promise for the future of AI.
University of Pittsburgh, Swanson School of Engineering
2016/03/04
A. Ezrachi, M. Stucke. (2012). “Artificial Intelligence Collusion: When Computers Inhibit Competition.” The University of Oxford Centre for Competition Law and Policy (research paper). http://intranet.law.ox.ac.uk/ckfinder/userfiles/files/CCLP(L)40.pdf
This research paper, penned by the director of the Oxford University Centre for Competition Law and Policy and an associate professor at Oxford, outlines the possible legal issues facing artificial intelligence. The authors primarily express concern about AI-controlled markets, but also discuss at considerable length the implications and possibilities of a future with ethical, law-abiding AI.
L. Godo, H. Prade. (2012). “Weighted Logics for Artificial Intelligence.” 20th European Conference on Artificial Intelligence (research paper). http://www.iiia.csic.es/wl4ai/files/wl4ai-working-notesafterworkshop.pdf
These are the compiled working notes of two artificial intelligence researchers at the 20th European Conference on Artificial Intelligence. While the paper is complex and highly technical, it offers the reader an enticing look into the future of AI and similar systems. It is essential to the more technical aspects of this paper and helps clarify our understanding of more complex AI programming.
A. Pannu. (2015). “Artificial Intelligence and its Application in Different Areas.” International Journal of Engineering and Innovative Technology (scientific journal). http://www.ijeit.com/Vol%204/Issue%2010/IJEIT1412201504_15.pdf
This is an article published in the International Journal of Engineering and Innovative Technology by Avneet Pannu, an M.Tech student in India. The article outlines the advantages offered by artificial intelligence over natural human intelligence. It is very pertinent to our paper, as we seek to outline the possible benefits and gains in scientific knowledge made possible by AI’s use in our society.
G. Rifkin. (25 Jan 2016). “Marvin Minsky, Pioneer in Artificial Intelligence, Dies at 88.” The New York Times (online article). http://www.nytimes.com/2016/01/26/business/marvin-minsky-pioneer-in-artificial-intelligence-dies-at-88.html?_r=0
This article focuses on Marvin Minsky, a professor of computer science at the Massachusetts Institute of Technology. In the article, the author discusses Minsky’s contributions to science and technology, but more importantly the man’s thoughts and theories. Minsky theorized that artificial intelligence would work much like the human brain does, only many times faster and in greater depth.
S. Russell. (28 May 2015). “Take a stand on AI weapons.” Nature (scientific journal). Vol. 521, pp. 415-416.
This source is an article written in Nature by Stuart Russell, a professor of computer science at the University of California, Berkeley. The article outlines the dangerous moral and ethical consequences of Lethal Autonomous Weapon Systems, weapons controlled by AI. In doing so, Russell also points out just how wide the applications of AI reach, and just how dangerous unregulated AI could be.
M. Tegmark. (2015). “Benefits and Risks of Artificial Intelligence.” Future of Life Institute (online article). http://futureoflife.org/background/benefits-risks-of-artificial-intelligence/
Authored by Max Tegmark, President of the Future of Life Institute, this article outlines not only the benefits but also the significant risks Tegmark sees in artificial intelligence. He argues that AI is currently being used primarily for good, and that it is essential this trend continue. The article contains information valuable to the research topic from both perspectives on the increasing use of AI.
T. Walsh. (2015). “Artificial Intelligence should benefit society, not create threats.” PHYS.ORG (online article). http://phys.org/news/2015-01-artificial-intelligence-benefit-society-threats.html
This article was written by Toby Walsh, a professor at the University of New South Wales who holds a doctorate in artificial intelligence. Walsh seeks to allay fears surrounding AI, pointing out that people have a primal fear of creating something stronger than themselves, yet the actual threat is quite low. His analysis is highly useful in our efforts to illustrate the benefits of AI to society.
E. Yudkowsky. (2008). “Artificial Intelligence as a Positive and Negative Factor in Global Risk.” Machine Intelligence Research Institute (research paper). https://intelligence.org/files/AIPosNegFactor.pdf
Written by a research fellow at the Machine Intelligence Research Institute, this paper addresses both the positive and negative possibilities of artificial intelligence. Yudkowsky begins by noting how few people truly understand AI. This is dangerous, as weak understanding breeds misinformation and both unwarranted fear of and elation at the prospects this technology brings.
University of Pittsburgh. (2014). “Choosing a Topic.” University of Pittsburgh ULS (video). http://www.library.pitt.edu/other/files/il/fresheng/index.html