Week 17 Presentation Topic

Penny:
This article is a response to the piece questioning AI that Hawking published in the UK's The Independent, and it offers a different perspective on the issue of artificial intelligence:
"In an Apocalyptic Mood, Stephen Hawking Overestimates the Evolutionary Future of Smart Machines"
Stephen Hawking is a great physicist, but he's dead wrong in a co-authored article in The Independent that darkly warns against the temptation "to dismiss the notion of highly intelligent machines as mere science fiction." This, he says, "would be a mistake, and potentially our worst mistake in history."
The peril derives from the prospect that "machines with superhuman intelligence could repeatedly improve their design even further, triggering what Vernor Vinge called a 'singularity'": "One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all."
A learning model (here "model" means an AI algorithm) might gain performance quickly up to, say, 70 percent accuracy on a particular task, where accuracy is measured with an F-measure, the harmonic mean of precision and recall. But it will then slow, and inevitably
saturate. At that point, it's done. In industry -- say, at Google -- it then goes from the
training phase to "production," where it's used to generate results on new, previously
unseen data.
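To make the precision/recall terminology concrete, here is a minimal sketch (not from the article; the counts are invented) of the F-measure as the harmonic mean of precision and recall:

```python
# Minimal sketch: the F-measure (F1) as the harmonic mean of precision and recall.
# The counts below are invented purely for illustration.

def f_measure(true_positives: int, false_positives: int, false_negatives: int) -> float:
    """F1 score: harmonic mean of precision and recall."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return 2 * precision * recall / (precision + recall)

# Example: 70 true positives, 30 false positives, 30 false negatives
print(f_measure(70, 30, 30))  # 0.7 -- roughly the "70 percent" plateau described above
```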
For instance, if the model was trained for "learning" what music you like based on
your music listening habits, it would be released on a music recommendation site to
suggest new music samples for you. This is roughly how the music service Pandora
works. More simply, if it was trained on, say, two groups of email data -- one spam
and one good -- after saturation it would be released to label new, previously unseen
emails as "Spam" or "Good" (or rather "Yes" or "No," as the decision is binary). And
on and on. Since Big Data makes empirical or learning methods more effective,
learning methods have effectively dominated approaches to AI.
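As an illustration of the spam/good example, here is a minimal sketch of the train-then-deploy pattern (assuming scikit-learn is installed; the tiny training set is invented):

```python
# Toy spam/good classifier: train on labeled emails, then label unseen ones.
# Assumes scikit-learn; the training data is invented for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Training phase: two groups of email data, one spam and one good
train_texts = ["win money now", "cheap pills offer",
               "meeting at noon", "lunch tomorrow?"]
train_labels = ["Spam", "Spam", "Good", "Good"]

vectorizer = CountVectorizer()
model = MultinomialNB().fit(vectorizer.fit_transform(train_texts), train_labels)

# "Production": label new, previously unseen emails
new_emails = ["win a cheap offer now", "see you at lunch"]
print(model.predict(vectorizer.transform(new_emails)))  # e.g. ['Spam' 'Good']
```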
There is a confusion, then, at the heart of the vision that Stephen Hawking has somewhat oddly endorsed. Adding more data won't help these learning problems -- performance can even go down. This tells you something about the prospects for the continual "evolution" of smart machines.
Facebook, meanwhile, has hired NYU computer scientist Yann LeCun to head its new AI Lab. LeCun spearheads a machine learning approach known as "Deep Learning." ATMs already use LeCun's methods to automatically read checks.
Facebook hopes his approach will help the company automatically read images in
photos posted on the site, like pictures of married couples, or pets.
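For a sense of what code in this style looks like, here is a minimal, untrained LeNet-style convolutional network (a sketch assuming PyTorch; it only illustrates the kind of architecture used for reading digits, not LeCun's actual check-reading system or Facebook's image system):

```python
# Minimal sketch of a LeNet-style convolutional network for 28x28 digit images.
# Untrained and illustrative only; assumes PyTorch is installed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallConvNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 6, kernel_size=5, padding=2)  # 1x28x28 -> 6x28x28
        self.conv2 = nn.Conv2d(6, 16, kernel_size=5)            # 6x14x14 -> 16x10x10
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, num_classes)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)  # -> 6x14x14
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)  # -> 16x5x5
        x = x.flatten(1)
        x = F.relu(self.fc1(x))
        return self.fc2(x)                           # scores for digits 0-9

net = SmallConvNet()
fake_digit = torch.randn(1, 1, 28, 28)  # placeholder for a scanned digit image
print(net(fake_digit).shape)            # torch.Size([1, 10])
```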
Futurist and entrepreneur Ray Kurzweil, currently Director of Engineering at Google,
popularized the notion of a coming "singularity" discussed in the Independent article.
He made his fortune designing and patenting speech-to-text synthesizers, and helped
design Apple's voice recognition system, Siri. Indeed, the examples mentioned in the
article -- self-driving cars, Google Now, Siri -- were all made possible by the
application of fairly well-known learning algorithms (for example, Hidden Markov
Models for voice recognition) that have had new life breathed into them (so to speak)
by the availability of massive datasets, the terabytes of text and image data on the
Web.
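To show what one of those "fairly well-known learning algorithms" does at its core, here is a toy Viterbi decoder, the dynamic-programming step of Hidden Markov Model recognition (all states, observations, and probabilities are invented; real speech systems operate on acoustic features, not letters):

```python
# Toy Viterbi decoding for a Hidden Markov Model: find the most likely sequence
# of hidden states given a sequence of observations. All numbers are invented.

def viterbi(observations, states, start_p, trans_p, emit_p):
    """Return the most likely hidden-state path for the observation sequence."""
    best = {s: (start_p[s] * emit_p[s][observations[0]], [s]) for s in states}
    for obs in observations[1:]:
        best = {
            s: max(
                (best[prev][0] * trans_p[prev][s] * emit_p[s][obs],
                 best[prev][1] + [s])
                for prev in states)
            for s in states
        }
    return max(best.values())[1]

states = ("vowel", "consonant")
start_p = {"vowel": 0.4, "consonant": 0.6}
trans_p = {"vowel": {"vowel": 0.3, "consonant": 0.7},
           "consonant": {"vowel": 0.6, "consonant": 0.4}}
emit_p = {"vowel": {"a": 0.8, "t": 0.2},
          "consonant": {"a": 0.1, "t": 0.9}}
print(viterbi(["t", "a", "t"], states, start_p, trans_p, emit_p))
# ['consonant', 'vowel', 'consonant']
```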
An unfortunate product of hype about AI is the concession to what Kurzweil himself
has called "Narrow" AI: the abandonment of the original goals of Artificial
Intelligence to actually understand human thinking, focusing instead on big money
applications that tell us little about the mind. While Kurzweil views this as a stepping
stone toward the eventual Singularity, as all those "narrow" AI applications like
recognizing a wedding photo "scale up" to genuine, real-time
intelligence, a more direct and evidence-based conclusion is that AI is in a bubble,
and as the Big Data methods saturate, excitement will crash and result in a
hangover. That's exactly what happened in the 1960s with early efforts on natural
language understanding, and later in the 1980s with the failure of so-called expert
systems.
AI is a tough racket; the human mind isn't a computer program, it seems, though
there's never a dearth of people happy to promulgate this view.
http://www.evolutionnews.org/2014/05/in_an_apocalypt085311.html
姜岑:
Computer scientists predict that artificial intelligence will surpass the human brain by 2045
http://big5.ts.cn/special/xjkxxw/2011-02/22/content_5609400.htm
What is artificial intelligence?
http://scitechvista.most.gov.tw/zh-tw/Articles/C/0/9/10/1/1548.htm
Artificial intelligence (AI)
http://lingb28.myweb.hinet.net/b9091199/AI.htm
The principles and meaning of artificial intelligence
http://content.edu.tw/senior/computer/ks_ks/et/ai/chap1/
The spring of AI has arrived! Microsoft develops an invisible user interface
http://news.cnyes.com/Content/20140408/KIUSGD8J2CFWS.shtml
Definition - What does Artificial Intelligence (AI) mean?
Artificial intelligence (AI) is an area of computer science that emphasizes the creation of
intelligent machines that work and react like humans. Some of the activities computers
with artificial intelligence are designed for include:
Speech recognition
Learning
Planning
Problem solving
Artificial intelligence is a branch of computer science that aims to create intelligent
machines. It has become an essential part of the technology industry.
Research associated with artificial intelligence is highly technical and specialized. The core
problems of artificial intelligence include programming computers for certain traits such as:
Knowledge
Reasoning
Problem solving
Perception
Learning
Planning
Ability to manipulate and move objects
Applications of AI
Q. What are the applications of AI?
A. Here are some.
game playing
You can buy machines that can play master level chess for a
few hundred dollars. There is some AI in them, but they play
well against people mainly through brute force
computation--looking at hundreds of thousands of positions. To
beat a world champion by brute force and known reliable
heuristics requires being able to look at 200 million positions per
second.
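As an illustration of the brute-force idea described here, a minimal sketch of minimax search with alpha-beta pruning (the toy "game tree" is just nested lists of invented leaf scores; a chess program would generate board positions instead):

```python
# Minimal sketch of minimax search with alpha-beta pruning over a toy game tree.
# Leaves are heuristic evaluations; nested lists alternate max and min players.

def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    """Return the best score achievable from this position with optimal play."""
    if isinstance(node, (int, float)):        # leaf: heuristic evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:                 # prune: opponent will avoid this branch
                break
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

game_tree = [[3, 5], [6, [9, 1]], [2, 0]]     # invented leaf evaluations
print(alphabeta(game_tree, maximizing=True))  # 6
```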
speech recognition
In the 1990s, computer speech recognition reached a practical
level for limited purposes. Thus United Airlines has replaced its
keyboard tree for flight information by a system using speech
recognition of flight numbers and city names. It is quite
convenient. On the other hand, while it is possible to instruct
some computers using speech, most users have gone back to
the keyboard and the mouse as still more convenient.
understanding natural language
Just getting a sequence of words into a computer is not enough.
Parsing sentences is not enough either. The computer has to
be provided with an understanding of the domain the text is
about, and this is presently possible only for very limited
domains.
computer vision
The world is composed of three-dimensional objects, but the
inputs to the human eye and computers' TV cameras are two
dimensional. Some useful programs can work solely in two
dimensions, but full computer vision requires partial
three-dimensional information that is not just a set of
two-dimensional views. At present there are only limited ways of
representing three-dimensional information directly, and they
are not as good as what humans evidently use.
expert systems
A "knowledge engineer" interviews experts in a certain domain
and tries to embody their knowledge in a computer program for
carrying out some task. How well this works depends on
whether the intellectual mechanisms required for the task are
within the present state of AI. When this turned out not to be so,
there were many disappointing results. One of the first expert
systems was MYCIN in 1974, which diagnosed bacterial
infections of the blood and suggested treatments. It did better
than medical students or practicing doctors, provided its
limitations were observed. Namely, its ontology included
bacteria, symptoms, and treatments and did not include
patients, doctors, hospitals, death, recovery, and events
occurring in time. Its interactions depended on a single patient
being considered. Since the experts consulted by the
knowledge engineers knew about patients, doctors, death,
recovery, etc., it is clear that the knowledge engineers forced
what the experts told them into a predetermined framework. In
the present state of AI, this has to be true. The usefulness of
current expert systems depends on their users having common
sense.
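A toy sketch of the rule-based style of system being described (the rules and vocabulary are invented for illustration; a real system like MYCIN had hundreds of rules with certainty factors):

```python
# Toy rule-based "expert system": fire every rule whose conditions all hold.
# Rules and symptoms are invented purely for illustration.

RULES = [
    ({"fever", "stiff_neck"}, "suspect bacterial meningitis"),
    ({"fever", "cough"}, "suspect respiratory infection"),
    ({"rash"}, "suspect allergic reaction"),
]

def diagnose(symptoms: set) -> list:
    """Return the conclusion of every rule whose conditions are all present."""
    return [conclusion for conditions, conclusion in RULES
            if conditions <= symptoms]

print(diagnose({"fever", "cough", "fatigue"}))
# ['suspect respiratory infection']
```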
heuristic classification
One of the most feasible kinds of expert system given the
present knowledge of AI is to put some information in one of a
fixed set of categories using several sources of information. An
example is advising whether to accept a proposed credit card
purchase. Information is available about the owner of the credit
card, his record of payment and also about the item he is buying
and about the establishment from which he is buying it (e.g.,
about whether there have been previous credit card frauds at
this establishment).
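A minimal sketch of heuristic classification as described above, combining several sources of information into one of a fixed set of categories (the features, weights, and thresholds are invented purely for illustration):

```python
# Toy heuristic classifier for a proposed credit card purchase.
# Features, weights, and thresholds are invented for illustration only.

def classify_purchase(payment_history_score: float,  # 0.0 (poor) .. 1.0 (excellent)
                      amount_vs_typical: float,       # purchase amount / typical amount
                      merchant_fraud_reports: int) -> str:
    """Place the purchase into one of a fixed set of categories."""
    risk = 0.0
    risk += (1.0 - payment_history_score) * 2.0  # weak payment record raises risk
    risk += max(0.0, amount_vs_typical - 1.0)    # unusually large purchase
    risk += merchant_fraud_reports * 0.5         # fraud history at the establishment
    if risk < 1.0:
        return "accept"
    if risk < 2.0:
        return "review"
    return "decline"

print(classify_purchase(0.9, 1.2, 0))  # 'accept'
print(classify_purchase(0.4, 3.0, 2))  # 'decline'
```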
Future/Ethics of artificial intelligence
Roboethics
Robot rights
The threat to privacy
The threat to human dignity
Machine ethics
Unintended consequences
Many researchers have argued that, by way of an "intelligence
explosion" sometime in the next century, a self-improving AI could
become so vastly more powerful than humans that we would not be
able to stop it from achieving its goals.