Core

Answer 2 of the following 3 questions.
C1. A long-standing debate in Artificial Intelligence is the role of consciousness in intelligence.
This has resulted in (at least) two schools of thought, known as Strong AI and Weak AI. In Strong
AI, the computer becomes a conscious mind, not simply an intelligent, problem-solving device as
in Weak AI.
a) Some people argue that "strong" AI is a misnomer. Would a Strong AI machine indeed be
stronger or better at solving problems than a Weak AI machine? Explain why or why not.
b) If you were given a Weak AI machine as a black box, could you turn it into a Strong AI machine
by interfacing it to a "consciousness" module? If so, explain how this might work. If not, explain
why this is not possible. (Note: In answering this question, explain how consciousness works in a
way that justifies your answer to the question. Since this is an unsolved problem, we are not
looking for the "right" answer but for a credible computational theory based on your knowledge of
the Artificial Intelligence and/or Cognitive Science fields. You may postulate such a theory or use
one you've read about.)
C2. Russell & Norvig's popular AI textbook says that their goal is to "design agents that act
rationally". They then present three possible meanings of "rationality":
1) Perfect rationality: A perfectly rational agent acts at every instant in such a way as to maximize
its expected utility, given the information it has acquired from the environment.
2) Calculative rationality: A calculatively rational agent eventually returns what would have been
the rational choice at the beginning of its execution.
3) Bounded rationality: A bounded rational agent behaves as well as possible given its
computational resources.
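Definition 1 can be made concrete in a few lines. A minimal sketch of expected-utility maximization (the actions, probabilities, and utilities below are invented for illustration):

```python
def expected_utility(action, outcomes):
    """Expected utility of an action given a list of (probability, utility) outcomes."""
    return sum(p * u for p, u in outcomes)

def perfectly_rational_choice(action_models):
    """Pick the action maximizing expected utility (definition 1 above)."""
    return max(action_models, key=lambda a: expected_utility(a, action_models[a]))

# Hypothetical two-action decision: probabilities and utilities are made up.
models = {
    "safe":  [(1.0, 5.0)],               # certain payoff of 5
    "risky": [(0.5, 12.0), (0.5, 0.0)],  # coin flip: 12 or 0
}
print(perfectly_rational_choice(models))  # "risky" (expected utility 6.0 beats 5.0)
```

Calculative and bounded rationality differ only in when (and with how much computation) this maximization is allowed to finish.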
Part A: Which of these do you think should be the goal of Artificial Intelligence as a scientific field
of study? Justify your answer.
Part B: Which of these do you think should be the goal of Artificial Intelligence as an engineering
discipline? Justify your answer.
Part C: Which of these has been (explicitly or implicitly) the goal of your own research
thus far? Explain.
C3. There seem to be almost as many criticisms of AI research as there are AI researchers. This
one comes from Richard Sutton, a senior researcher in Machine Learning
(www.cs.ualberta.ca/~sutton/IncIdeas/WrongWithAI.html):
"I hold that AI has gone astray by neglecting its essential objective --- the turning over of
responsibility for the decision-making and organization of the AI system to the AI system itself. It
has become an accepted, indeed lauded, form of success in the field to exhibit a complex system
that works well primarily because of some insight the designers have had into solving a particular
problem. This is part of an anti-theoretic, or "engineering stance", that considers itself open to any
way of solving a problem. But whatever the merits of this approach as engineering, it is not really
addressing the objective of AI. For AI it is not enough merely to achieve a better system; it matters
how the system was made. The reason it matters can ultimately be considered a practical one,
one of scaling. An AI system too reliant on manual tuning, for example, will not be able to scale
past what can be held in the heads of a few programmers. This, it seems to me, is essentially the
situation we are in today in AI. Our AI systems are limited because we have failed to turn over
responsibility for them to them."
(a) Briefly describe an AI system that leaves the responsibility for the system to the human.
(b) Briefly describe an AI system that takes responsibility for itself.
(c) Is the AI system in (b) necessarily better than the one in (a)?
Can you think of any counter-arguments to the argument given above?
Machine Learning
Answer 2 of the following 5 questions.
L1. a) Using two clusters, draw a situation where both maximum likelihood (ML) and K-nearest
neighbor (K-NN) would give a different result than an SVM. Use only one graph.
b) Describe real-world problems where each technique (K-NN, ML, and SVM) would have an
advantage over the others.
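One way to see how the techniques in (a) can disagree is to code a toy version. In the sketch below (all data and the separator are invented), a single outlier pulls a 1-NN classifier to a different answer than a max-margin-style linear boundary would give:

```python
import math

def knn_predict(train, x, k=3):
    """Classify x by majority vote among the k nearest training points."""
    neighbors = sorted(train, key=lambda pt: math.dist(pt[0], x))[:k]
    votes = [label for _, label in neighbors]
    return max(set(votes), key=votes.count)

# Hypothetical 2-D data: two clusters, plus one class-1 outlier near class 0.
train = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0),
         ((5, 5), 1), ((5, 6), 1), ((6, 5), 1),
         ((1, 1), 1)]  # the outlier

# A max-margin-style separator (here, the invented line x + y = 6) ignores
# the outlier, while 1-NN is dominated by it near (1.5, 1.5).
query = (1.5, 1.5)
print(knn_predict(train, query, k=1))  # nearest point is the outlier -> 1
print(1 if sum(query) > 6 else 0)      # the linear separator says 0
```

A maximum-likelihood classifier fit with one Gaussian per cluster would likewise shrug off the single outlier, agreeing with the linear boundary here.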
L2. Hidden Markov models have been very successful in recognizing speech. Therefore, some
researchers tried to use the speech recognition HMMs they had learned to synthesize speech.
However, the resulting speech was not very intelligible.
a) Explain why these recognition HMMs failed to produce good speech.
b) Describe how you might modify this approach to get better results.
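As background for (a), synthesis from an HMM is just ancestral sampling: draw a state sequence from the transition model, then an emission per state. A minimal sketch (the two-state model and its coarse emission "bins" are invented, to mimic a recognition-oriented model that has discarded fine acoustic detail):

```python
import random

random.seed(0)

def sample_hmm(trans, emit, init, n):
    """Generate n observations from an HMM by sampling states, then emissions."""
    def draw(dist):
        r, acc = random.random(), 0.0
        for outcome, p in dist.items():
            acc += p
            if r < acc:
                return outcome
        return outcome  # guard against floating-point underflow

    state = draw(init)
    obs = []
    for _ in range(n):
        obs.append(draw(emit[state]))
        state = draw(trans[state])
    return obs

# Hypothetical two-state model with only two coarse observation symbols.
init = {"A": 1.0}
trans = {"A": {"A": 0.7, "B": 0.3}, "B": {"A": 0.3, "B": 0.7}}
emit = {"A": {"low": 0.9, "high": 0.1}, "B": {"low": 0.1, "high": 0.9}}
print(sample_hmm(trans, emit, init, 5))
```

Note how every sample is one of a handful of symbols: a model built to discriminate words need not retain enough detail to reconstruct a waveform.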
L3. A paper on your reading list explains loopy belief propagation as a variational approximation.
Briefly explain how that works, and how one could go about improving the variational
approximation. Explain why this view on Loopy BP is important.
L4. On generalization:
a. What is meant by 'generalization'?
b. How could we reasonably conclude that, for a given dataset, in practice, one classifier
generalizes better than another?
c. How is learning theory related to the concept of generalization?
d. Does the concept of generalization apply only to classification? What about regression?
e. What about density estimation?
f. What about clustering?
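Part (b) is usually answered by comparing held-out performance. A minimal sketch, with invented 1-D data and two toy classifiers (in practice one would also fit on the training split and repeat over folds):

```python
import random

random.seed(1)

def accuracy(classify, data):
    """Fraction of labeled points a classifier gets right; on held-out data,
    this estimates how well the classifier generalizes."""
    return sum(classify(x) == y for x, y in data) / len(data)

# Invented 1-D task: the true label is 1 exactly when x > 0.
data = [(x, int(x > 0)) for x in (random.uniform(-1, 1) for _ in range(200))]
train, test = data[:150], data[150:]  # fitting would use train; test stays unseen

def rule(x):        # a classifier that happens to match the true concept
    return int(x > 0)

def always_one(x):  # a classifier that ignores the input entirely
    return 1

print(accuracy(rule, test), accuracy(always_one, test))
```

The gap on the held-out split, not training accuracy, is the evidence that one classifier generalizes better than the other.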
L5. Compare and contrast principal components analysis and independent components analysis.
When would we expect similar results from both techniques? When would they be different?
Give examples of the behavior one would expect from each on at least two concrete problems.
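As a reminder of what PCA computes, here is a sketch of the first principal component of 2-D data via the closed-form 2x2 eigenvector (the data cloud is invented; ICA, by contrast, seeks statistically independent, non-Gaussian source directions and has no comparably tiny closed form):

```python
import math

def principal_direction(points):
    """First principal component of 2-D data, using the closed-form angle
    theta = 0.5 * atan2(2*cov, var_x - var_y) for a 2x2 covariance matrix."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    a = sum((x - mx) ** 2 for x, _ in points) / n      # var(x)
    c = sum((y - my) ** 2 for _, y in points) / n      # var(y)
    b = sum((x - mx) * (y - my) for x, y in points) / n  # cov(x, y)
    theta = 0.5 * math.atan2(2 * b, a - c)
    return (math.cos(theta), math.sin(theta))

# Invented correlated cloud stretched along y = x, with small y-noise.
pts = [(t, t + 0.1 * ((i % 3) - 1)) for i, t in enumerate(
    [-1.0, -0.5, 0.0, 0.5, 1.0, -0.8, 0.3, 0.9])]
dx, dy = principal_direction(pts)
print(round(dx, 2), round(dy, 2))  # roughly (0.71, 0.70): the y = x axis
```

On data like this (a single elongated Gaussian-like cloud), ICA has little to add; the techniques diverge when the data is a mixture of independent non-Gaussian sources.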
Planning
Please answer the following two questions:
P1. One of the primary reasons for the resurgence of AI interest in planning in the
mid-nineties was the advent of planning graphs as a knowledge representation.
Planning graphs led to planners such as GRAPHPLAN which, for some problems, were
orders of magnitude faster than earlier partial-order planners such as POP.
However, planning graphs can handle only propositional actions, e.g.,
Have(Cake), Eaten(Cake), which limits the class of problems for which they can be
used.
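For concreteness, the core of the representation is small. A sketch of expanding one level of a propositional planning graph, using the Have(Cake)/Eaten(Cake) propositions from the question (mutex relations, which real GRAPHPLAN tracks, are omitted for brevity, and the actions are invented):

```python
def expand_level(props, actions):
    """One planning-graph level: applicable actions, then their add-effects
    plus persistence (no-op) actions carrying every proposition forward."""
    applicable = [a for a in actions if a["pre"] <= props]
    next_props = set(props)  # persistence actions preserve existing propositions
    for a in applicable:
        next_props |= a["add"]
    return applicable, next_props

# Hypothetical propositional actions for the cake domain.
actions = [
    {"name": "Eat(Cake)",  "pre": {"Have(Cake)"}, "add": {"Eaten(Cake)"}},
    {"name": "Bake(Cake)", "pre": set(),          "add": {"Have(Cake)"}},
]
level0 = {"Have(Cake)"}
acts, level1 = expand_level(level0, actions)
print(sorted(level1))  # ['Eaten(Cake)', 'Have(Cake)']
```

Everything here hinges on propositions being atomic symbols; lifting to first-order actions with variables is exactly where the representation stops fitting.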
Planning graphs are a specific example of knowledge representations that work very
well for a small class of problems but do not generalize well for larger classes of
problems. Some AI researchers say that this is just fine: the goal, they say, is to build
knowledge representations that work well for different classes of problems. Other AI
researchers look for more general-purpose representations that work well for large
classes of problems. Which school do you think has it right, and why?
P2. What are the differences between planning and MDP policy iteration? In particular,
how does one formally define a classical planning problem?
How does one formally define an MDP? What are the advantages and
disadvantages of each approach?
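To fix terminology for P2, a minimal sketch of policy iteration on an invented two-state MDP (the states, rewards, and discount factor are all made up; a classical planning problem would instead specify an initial state, goal condition, and deterministic operators, with no rewards or discounting):

```python
def policy_iteration(states, actions, trans, reward, gamma=0.9, tol=1e-9):
    """Alternate policy evaluation and greedy improvement until stable."""
    policy = {s: actions[0] for s in states}
    while True:
        # Policy evaluation: iterate the Bellman expectation backup.
        V = {s: 0.0 for s in states}
        while True:
            delta = 0.0
            for s in states:
                v = reward[s] + gamma * sum(p * V[s2]
                                            for s2, p in trans[s][policy[s]])
                delta = max(delta, abs(v - V[s]))
                V[s] = v
            if delta < tol:
                break
        # Policy improvement: act greedily with respect to V.
        stable = True
        for s in states:
            best = max(actions,
                       key=lambda a: sum(p * V[s2] for s2, p in trans[s][a]))
            if best != policy[s]:
                policy[s], stable = best, False
        if stable:
            return policy, V

# Hypothetical 2-state MDP: "stay" keeps the current state, "move" switches.
states, actions = ["poor", "rich"], ["stay", "move"]
trans = {s: {"stay": [(s, 1.0)],
             "move": [("rich" if s == "poor" else "poor", 1.0)]}
         for s in states}
reward = {"poor": 0.0, "rich": 1.0}
policy, V = policy_iteration(states, actions, trans, reward)
print(policy)  # move out of "poor", stay in "rich"
```

Note what the MDP needs that the classical problem does not (transition probabilities, rewards, a discount), and what it gives up (an explicit goal test and a finite plan rather than a policy).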