Today’s Topics
• Exam (comprehensive, with focus on material since midterm),
Thurs 5:30-7:30pm, in this room; two pages of notes and a
simple calculator (log, e, * / + -) allowed
• The Turing Test
• Strong vs Weak AI Hypotheses
• Searle’s Chinese Room Story
• High-Level Recap of Topics since Midterm
• Final List of Topics Covered this Term (to various levels of depth, of course)
• Review of Fall 2014 Final (Recall: another review tomorrow of Spring 2013 final)
• Future of AI? [Not on Final]
- As a science/technology
- Its impact on society
12/15/15
CS 540 - Fall 2015 (Shavlik©), Lecture 31, Week 15
1
An Informal Survey
• First, be sure to do on-line course evals!
– More programming? More (but shorter) HWs?
• Would you prefer TWICE WEEKLY classes?
• How many use/do AI (ML? Other?) at work?
– Not in the sense of simply using Google, Siri, etc
• How many expect to? Within 2 yrs? 5?
• Does the ‘singularity’ seem NEARER or farther
now than it did on Day 1 of the class?
The Turing Test
• Says intelligence is judged
based on behavior
(rather than inspecting internal data structures)
• Focus on cognition, rather than perception,
so use a simple ‘ascii’ interface
• If a human judge interacting (via a teletype) with two
‘entities’ cannot accurately say which is the human
and which is the computer, then the computer is
intelligent (visualized on next slide)
The Turing Test
Not a serious concern for
nearly all AI researchers
Strong vs. Weak
AI Hypotheses
Weak AI Hypothesis
we can accurately simulate
animal/human intelligence in a computer
STRONG AI Hypothesis
we can create algorithms that are intelligent
(conscious?)
Searle’s Chinese Room
• What does it mean to ‘understand’?
• Assume non-Chinese speaker is in a room
with a bunch of IF-THEN rules written in
Chinese (see next slide)
– Questions come in written in Chinese
– Human inside room matches symbols,
adding intermediate deductions to some
‘scratch space’
– Some rules say (in English) in their THEN part,
‘send this Chinese symbol … out to the user’
Searle’s Chinese Room (1980)
If the person inside does a great job of answering questions,
can we say he or she understands?
Even if she or he is only blindly following rules?
(Of course the ‘person inside’ is acting like an AI program)
Some Debate
• Your thoughts/comments?
• Is the room + the human intelligent?
– After all, no one part of an airplane has the property
flies, but the whole thing does
– This is called ‘the systems reply’
(see http://plato.stanford.edu/entries/chinese-room/)
• The ‘robot reply’ says that the problem is that the
person doesn’t sense/interact with the real world
– ‘symbols’ would be grounded to actual physical things
and thereby become meaningful
Main Topics Covered
Since Midterm (incomplete sublists)
• But don’t forget that ML and search
played a major role in the second half of the class as well!
• Bayesian Networks and Bayes’ Rule
– Full joint, Naïve Bayes, odds ratios, statistical inference
• Artificial Neural Networks
– Perceptrons, gradient descent, HUs, linear separability, deep nets
• Support Vector Machines
– Large margins, penalties for outliers, kernels (for non-linearity)
• First-Order Predicate Calculus
– Representation of English sentences, logical deduction, prob logic
• Unsupervised ML, RL, ILP, COLT, AI & Philosophy
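As a quick refresher on the Bayes’ Rule entry above, a minimal sketch in Python; the disease/test framing and all the numbers are made up purely for illustration:

```python
# Bayes' rule: P(h | e) = P(e | h) * P(h) / P(e); numbers are made up.
p_h = 0.01              # prior P(disease)
p_e_given_h = 0.9       # P(test positive | disease)
p_e_given_not_h = 0.05  # P(test positive | no disease)

# Marginalize to get P(e), then apply Bayes' rule.
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
p_h_given_e = p_e_given_h * p_h / p_e  # ≈ 0.154
```

Note how the small prior keeps the posterior low even with a sensitive test — the ‘odds ratio’ view from lecture makes the same point.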
Detailed List of Course Topics:
Final Version
Learning from labeled data
    Experimental methodologies for choosing parameter
    settings and estimating future accuracy
    Decision trees and random forests
    Probabilistic models
    Nearest-neighbor methods
    Genetic algorithms
    Neural networks
    Support vector machines
    Reinforcement learning (reinforcements are ‘indirect’ labels)
    Inductive logic programming
    Computational learning theory
    Variations: incremental, active, and transfer learning
Reasoning probabilistically
    Probabilistic inference
    Bayes' rule, Bayesian networks, Naïve Bayes
Reasoning from concrete cases
    Case-based reasoning
    Nearest-neighbor algorithm
    Kernels
Reasoning logically
    First-order predicate calculus
    Representing domain knowledge using mathematical logic
    Logical inference
    Probabilistic logic
Learning from unlabeled data
    K-means
    Expectation maximization
    Auto-association neural networks
Searching for solutions
    Heuristically finding shortest paths
    Algorithms for playing games like chess
    Simulated annealing
    Genetic algorithms
Problem-solving methods based on the biophysical world
    Genetic algorithms
    Simulated annealing
    Neural networks
    Reinforcement learning
Philosophical aspects
    Turing test
    Searle's Chinese Room thought experiment
    The coming singularity
    Strong vs. weak AI
    Societal impact and future of AI
Suggestions
• Be sure to carefully review all the HW solutions, especially HWs 3, 4, & 5
• Imagine a HW 6 on MLNs and RL (and see worked examples in lec notes)
• My old cs540 exams are highly predictive of my future cs540 exams
• ILP: understand the search space when predicates have k arguments
• Some things to only know at the ‘2 pt’ level
– Calculus: have an intuitive sense of slope in non-linear curves
(only need to know well: algebra, exp & log’s, arithmetic, (weighted) sums and products: Σ and Π)
– Matrices and using linear programming to solve SVMs (do know dot product well)
– Active, transfer, and incremental learning
– ‘Generalizing across state’ in RL
– ‘Covering’ algorithms for learning a set of rules (covered in ILP lecture)
– Won’t be on final: using variable types to control search in ILP
– COLT: only need to understand the role of epsilon and delta (ε and δ)
– How to build your own walking-talking robot :-)
An “On Your Own” RL HW
(Solution)
Consider the deterministic reinforcement environment drawn below. Let γ = 0.5. Immediate
rewards are indicated inside nodes. Once the agent reaches the ‘end’ state the current
episode ends and the agent is magically transported to the ‘start’ state.

[Graph: nodes Start (r=0), A (r=2), B (r=5), C (r=3), End (r=5);
arcs Start→A, Start→B, A→B, A→C, B→C, B→End, C→End, each with initial Q = 4]

(a) A one-step, Q-table learner follows the path Start → B → C → End. On the graph below,
show the Q values that have changed, and show your work. Assume that for all legal
actions (ie, for all the arcs on the graph), the initial values in the Q table are 4, as shown
above (feel free to copy the above 4’s below, but somehow highlight the changed values).

Changed values:
Q(Start, B) = r(B) + γ · max Q(B, ·) = 5 + 0.5 × 4 = 7
Q(B, C) = r(C) + γ · max Q(C, ·) = 3 + 0.5 × 4 = 5
Q(C, End) = r(End) + γ · 0 = 5 (End is terminal)
An “On Your Own” RL HW
(Solution)
(b) Starting with the Q table you produced in Part (a), again follow the path
Start → B → C → End and show the Q values below that have changed from Part (a).
Show your work.

Q(Start, B) = 5 + 0.5 · max Q(B, ·) = 5 + 0.5 × 5 = 7.5
Q(B, C) = 3 + 0.5 · max Q(C, ·) = 3 + 0.5 × 5 = 5.5
Q(C, End) = 5 (unchanged)

(c) What would the final Q values be in the limit of trying all possible arcs ‘infinitely’ often?
Ie, what is the Bellman-optimal Q table? Explain your answer.

Q(C, End) = 5
Q(B, End) = 5
Q(B, C) = 3 + 0.5 × 5 = 5.5
Q(A, C) = 3 + 0.5 × 5 = 5.5
Q(A, B) = 5 + 0.5 × 5.5 = 7.75
Q(Start, B) = 5 + 0.5 × 5.5 = 7.75
Q(Start, A) = 2 + 0.5 × 7.75 = 5.875

(d) What is the optimal path between Start and End? Explain.
Start → B → C → End
The policy is: take the arc with the highest Q out of each node
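The one-step updates above can be sketched in Python. The arc list is the one reconstructed from the figure, and `best_next` and `follow` are helper names made up here, not from the HW:

```python
# One-step Q-learning on the deterministic graph from the HW, gamma = 0.5.
reward = {"Start": 0, "A": 2, "B": 5, "C": 3, "End": 5}
arcs = [("Start", "A"), ("Start", "B"), ("A", "B"), ("A", "C"),
        ("B", "C"), ("B", "End"), ("C", "End")]
gamma = 0.5
Q = {arc: 4.0 for arc in arcs}  # all legal actions start at Q = 4

def best_next(node):
    """Max Q over arcs leaving `node`; 0 at the terminal End state."""
    out = [Q[(s, d)] for (s, d) in arcs if s == node]
    return max(out) if out else 0.0

def follow(path):
    """Apply deterministic one-step Q updates along a path of nodes."""
    for s, d in zip(path, path[1:]):
        Q[(s, d)] = reward[d] + gamma * best_next(d)

follow(["Start", "B", "C", "End"])  # part (a): Q = 7, 5, 5 on the path
follow(["Start", "B", "C", "End"])  # part (b): Q = 7.5, 5.5, 5
```

Running `follow` repeatedly over all arcs converges to the Bellman-optimal table in part (c), since the environment is deterministic.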
Another Worked MLN Example
Given these three rules, what is the prob of P?

P → R    wgt = 2
R → Q    wgt = 3    [same as ¬R ∨ Q]
Q        wgt = 1    [shorthand for the rule: ‘true → Q’]

P  Q  R   Unnormalized Prob
F  F  F   exp(2 + 3 + 0)
F  F  T   exp(2 + 0 + 0)
F  T  F   exp(2 + 3 + 1)
F  T  T   exp(2 + 3 + 1)
T  F  F   exp(0 + 3 + 0)
T  F  T   exp(2 + 0 + 0)
T  T  F   exp(0 + 3 + 1)
T  T  T   exp(2 + 3 + 1)

To get Z, sum all the unnormalized probs
Then divide all the probs by Z to normalize
Finally sum the probs of those cells where P is true
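The procedure above (sum the weights of satisfied ground rules per world, exponentiate, normalize by Z, then marginalize over P) fits in a few lines of Python; `rule_score` is a name made up here:

```python
from itertools import product
from math import exp

def rule_score(P, Q, R):
    """Sum of weights of the ground rules satisfied in one world."""
    total = 0.0
    if (not P) or R:   # P -> R, wgt = 2
        total += 2
    if (not R) or Q:   # R -> Q, wgt = 3 (same as not-R or Q)
        total += 3
    if Q:              # Q, wgt = 1 ('true -> Q')
        total += 1
    return total

# Unnormalized prob for each of the 8 truth assignments (P, Q, R).
unnorm = {w: exp(rule_score(*w)) for w in product([False, True], repeat=3)}
Z = sum(unnorm.values())                                     # normalizer
prob_P = sum(v for (P, Q, R), v in unnorm.items() if P) / Z  # ≈ 0.335
```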
Break to Review Fall 2014 Final
Future of AI?
(Remember everyone’s bad at predicting the future!)
• Your comments/questions?
• ML, ML, and more ML?
• Ditto for Data? Scaling up even more?
Specialized h/w for AI/ML algorithms (GPUs++)?
• Personalized s/w, learns from our every action?
Our watches predict heart attacks N minutes in advance? Etc
• Will ‘knowledge’ (ever) play a bigger role?
Eg, can we train our s/w agents and robots
by talking to them, like humans teach humans? [Bonus lecture if time today]
• Robots becoming ubiquitous? (Eg, self-driving cars)
• More natural interaction with computers?
Language, gestures, sketches, images, brain waves?
Robots Teaching Robots
• How much time did we, as a group,
spend on me teaching you a fraction of what I (and the
authors of assigned readings) know about AI?
– 50 hrs of class time × 70 humans ≈ 150 days
– Plus, no doubt, 10x that time outside of class :-)
• How long will it take one robot that has learned
‘a lot’ to teach 70 robots? 7M robots? 7B?
– A few seconds?
– Or will robots+owners have ‘individual differences’
that preclude direct brain-to-brain copying?
– Remember: predictions (a) for nuclear power leading to
“electricity too cheap to meter” and (b) ‘the war to end all wars’
Societal Impact of AI?
• When will majority of highway miles be robot driven?
• When will most of the 'exciting new' S/W come from ML?
• When will half of current jobs be fully automated?
• For every job automated by AI/ML, how many new jobs
will be created? 0.8? 1.5?
• Will there be a minimal guaranteed income? Proposed in Finland
For ‘industrialized’ countries? All countries?
• Do we really all want to retire at 30? Humanities majors victorious?
Societal Impact of AI? (2)
(Ever watch “Planet of the Apes”?)
When will owning a car be a hobby?
When will communication between human speakers of any two natural
(and non-rare) languages be as easy as communication in the same one?
When will our 'digital doubles' and robots do all our
Travel planning? Entertainment planning? Financial decision making?
Medical decision making? Shopping? Cooking? Cleaning?
When will the average human life span grow faster than one year
per year? (Will AI drive med?) Robot care and engagement in nursing homes?
AI and war? AI and privacy? AI and income distribution?
Others? Comments or Questions?
What is the prob we will all look back at these questions in 25 years and see
them as naively optimistic? Seems likely (but other things will happen faster than we expect)
Final Comments
• Even if you don’t work in AI in the future,
hopefully the class helps you understand
the AI news, technological opportunities,
and social impacts
• If you do/will work in AI, seems to be an
exciting time! (Hope there’s no lurking
‘AI Winter 2’ due to over-hyped expectations)
• Good luck on the exam and keep in touch,
especially if working in AI!