
Artificial Intelligence Exercises

Chapter 1
1.1 Define in your own words: (a) intelligence, (b) artificial intelligence, (c) agent.
• Intelligence: Dictionary definitions of intelligence talk about "the capacity to acquire and apply knowledge" or "the faculty of thought and reason" or "the ability to comprehend and profit from experience." These are all reasonable answers, but if we want something quantifiable we would use something like "the ability to apply knowledge in order to perform better in an environment."
• Artificial intelligence: We define artificial intelligence as the study and construction of agent programs that perform well in a given environment, for a given agent architecture.
• Agent: We define an agent as an entity that takes action in response to percepts from an environment.
1.4 There are well-known classes of problems that are intractably difficult for computers, and other classes that are provably undecidable. Does this mean that AI is impossible?
No. It means that AI systems should avoid trying to solve intractable problems. Usually, this means they can only approximate optimal behavior. Notice that humans don't solve NP-complete problems either. Sometimes they are good at solving specific instances with a lot of structure, perhaps with the aid of background knowledge. AI systems should attempt to do the same.
1.11 "surely computer s cannot be intelligent-they can do only wh at their programmers tell them." Is
the latter statement true, and does it imply the former ?
This depends on your definition of "intelligent" and "tell." In one sense computers only do what their programmers command them to do, but in another sense what the programmers consciously tell the computer to do often has very little to do with what the computer actually does. Anyone who has written a program with an ornery bug knows this, as does anyone who has written a successful machine learning program. So in one sense Samuel "told" the computer "learn to play checkers better than I do, and then play that way," but in another sense he told the computer "follow this learning algorithm" and it learned to play. So we're left in the situation where you may or may not consider learning to play checkers to be a sign of intelligence (or you may think that learning to play in the right way requires intelligence, but not in this way), and you may think that the intelligence resides in the programmer or in the computer.
Chapter 2
2.1 Define in your own words the following terms: agent, agent function, agent program, rationality, reflex agent, model-based agent, goal-based agent, utility-based agent, learning agent.
The following are just some of the many possible definitions that can be written:
• Agent: an entity that perceives and acts; or, one that can be viewed as perceiving and acting. Essentially any object qualifies; the key point is the way the object implements an agent function. (Note: some authors restrict the term to programs that operate on behalf of a human, or to programs that can cause some or all of their code to run on other machines on a network, as in mobile agents.)
• Agent function: a function that specifies the agent's action in response to every possible percept sequence.
• Agent program: that program which, combined with a machine architecture, implements an agent function. In our simple designs, the program takes a new percept on each invocation and returns an action. There are a variety of basic agent-program designs, reflecting the kind of information made explicit and used in the decision process; the designs vary in efficiency, compactness, and flexibility, and the appropriate design depends on the nature of the environment.
• Rationality: a property of agents that choose actions that maximize their expected utility, given the percepts to date.
• Autonomy: a property of agents whose behavior is determined by their own experience rather than solely by their initial programming.
• Reflex agent: an agent whose action depends only on the current percept.
• Model-based agent: an agent whose action is derived directly from an internal model of the current world state that is updated over time.
• Goal-based agent: an agent that selects actions that it believes will achieve explicitly represented goals.
• Utility-based agent: an agent that selects actions that it believes will maximize the expected utility of the outcome state.
• Learning agent: an agent whose behavior improves over time based on its experience.
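To illustrate the agent-program and reflex-agent definitions above, here is a minimal Python sketch of a reflex agent program for the two-location vacuum world used in the later exercises; the percept format (location, status) is an assumption made for illustration, not the repository's interface.

```python
def reflex_vacuum_agent(percept):
    """Reflex agent program: called once per percept, returns an action.
    Assumes percepts of the form (location, status), e.g. ('A', 'Dirty')."""
    location, status = percept
    if status == 'Dirty':
        return 'Suck'
    return 'Right' if location == 'A' else 'Left'
```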
2.2 Both the performance measure and the utility function measure how well an agent is doing. Explain the difference between the two.
A performance measure is used by an outside observer to evaluate how successful an agent is. It is a function from histories to a real number. A utility function is used by an agent itself to evaluate how desirable states or histories are. In our framework, the utility function may not be the same as the performance measure; furthermore, an agent may have no explicit utility function at all, whereas there is always a performance measure.
2.5 For each of the following agents, develop a PEAS description of the task environment:
a. Robot soccer player;
b. Internet book-shopping agent;
c. Autonomous Mars rover;
d. Mathematician's theorem-proving assistant.
Some representative, but not exhaustive, answers are given in Figure S2.1.
Robot soccer player
  Performance measure: winning game, goals for/against
  Environment: field, ball, own team, other team, own body
  Actuators: devices (e.g., legs) for locomotion and kicking
  Sensors: camera, touch sensors, accelerometers, orientation sensors, wheel/joint encoders

Internet book-shopping agent
  Performance measure: obtain requested/interesting books, minimize expenditure
  Environment: Internet
  Actuators: follow link, enter/submit data in fields, display to user
  Sensors: web pages, user requests

Autonomous Mars rover
  Performance measure: terrain explored and reported, samples gathered and analyzed
  Environment: launch vehicle, lander, Mars
  Actuators: wheels/legs, sample collection device, analysis devices, radio transmitter
  Sensors: camera, touch sensors, accelerometers, orientation sensors, wheel/joint encoders, radio receiver

Mathematician's theorem-proving assistant

Figure S2.1: Agent types and their PEAS descriptions, for Ex. 2.5.
2.6 For each of the agent types listed in Exercise 2.5, characterize the environment according to the properties given in Section 2.3, and select a suitable agent design. The following exercises all concern the implementation of environments and agents for the vacuum-cleaner world.
Environment properties are given in Figure S2.2. Suitable agent types:
a. A model-based reflex agent would suffice for most aspects; for tactical play, a utility-based agent with lookahead would be useful.
b. A goal-based agent would be appropriate for specific book requests. For more open-ended tasks -- e.g., "Find me something interesting to read" -- tradeoffs are involved and the agent must compare utilities for various possible purchases.
c. A model-based reflex agent would suffice for low-level navigation and obstacle avoidance; for route planning, exploration planning, experimentation, etc., some combination of goal-based and utility-based agents would be needed.
d. For specific proof tasks, a goal-based agent is needed. For "exploratory" tasks -- e.g., "Prove some useful lemmata concerning operations on strings" -- a utility-based architecture might be needed.
Task Environment             Observable  Deterministic  Episodic    Static   Discrete    Agents
Robot soccer                  Partially   Stochastic     Sequential  Dynamic  Continuous  Multi
Internet book-shopping        Partially   Deterministic  Sequential  Static*  Discrete    Single
Autonomous Mars rover         Partially   Stochastic     Sequential  Dynamic  Continuous  Single
Mathematician's assistant     Fully       Deterministic  Sequential  Semi     Discrete    Multi

Figure S2.2: Environment properties for Ex. 2.6.
Chapter 3
3.1 Define in your own words the following terms: state, state space, search tree, search node, goal, action, successor function, and branching factor.
• state: A state is a situation that an agent can find itself in. We distinguish two types of states: world states (the actual concrete situations in the real world) and representational states (the abstract descriptions of the real world that are used by the agent in deliberating about what to do).
• state space: A state space is a graph whose nodes are the set of all states, and whose links are actions that transform one state into another.
• search tree: A search tree is a tree (a graph with no undirected loops) in which the root node is the start state and the set of children for each node consists of the states reachable by taking any action.
• search node: A search node is a node in the search tree.
• goal: A goal is a state that the agent is trying to reach.
• action: An action is something that the agent can choose to do.
• successor function: A successor function describes the agent's options: given a state, it returns a set of (action, state) pairs, where each state is the state reachable by taking the action.
• branching factor: The branching factor in a search tree is the number of actions available to the agent.
3.7 Give the initial state, goal test, successor function, and cost function for each of the following. Choose a formulation that is precise enough to be implemented.
a. You have to color a planar map using only four colors, in such a way that no two adjacent regions have the same color.
Initial state: No regions colored.
Goal test: All regions colored, and no two adjacent regions have the same color.
Successor function: Assign a color to a region.
Cost function: Number of assignments.
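A minimal Python sketch of this formulation; the `neighbors` adjacency mapping and the particular color names are assumptions made for illustration.

```python
COLORS = ['red', 'green', 'blue', 'yellow']

def successors(state, regions, neighbors):
    """state maps already-colored regions to colors; each action assigns a
    non-conflicting color to the next uncolored region (cost 1 per assignment)."""
    uncolored = [r for r in regions if r not in state]
    if uncolored:
        region = uncolored[0]
        for color in COLORS:
            if all(state.get(n) != color for n in neighbors[region]):
                yield ('assign ' + color + ' to ' + region, {**state, region: color})

def goal_test(state, regions, neighbors):
    return (len(state) == len(regions) and
            all(state[r] != state[n] for r in regions for n in neighbors[r]))
```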
b. A 3-foot-tall monkey is in a room where some bananas are suspended from the 8-foot ceiling. He would like to get the bananas. The room contains two stackable, movable, climbable 3-foot-high crates.
Initial state: As described in the text.
Goal test: Monkey has bananas.
Successor function: Hop on crate; hop off crate; push crate from one spot to another; walk from one spot to another; grab bananas (if standing on crate).
Cost function: Number of actions.
c. You have a program that outputs the message "illegal input record" when fed a certain file of input records. You know that processing of each record is independent of the other records. You want to discover what record is illegal.
Initial state: considering all input records.
Goal test: considering a single record, and it gives the "illegal input" message.
Successor function: run again on the first half of the records; run again on the second half of the records.
Cost function: Number of runs.
Note: This is a contingency problem; you need to see whether a run gives an error message or not to decide what to do next.
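A sketch of this halve-and-rerun strategy in Python; the `runs_with_error` callback standing in for the external program is hypothetical.

```python
def find_illegal_record(records, runs_with_error):
    """Narrow down the single illegal record; runs_with_error(batch) -> True if the
    program prints "illegal input record" on that batch. Cost = number of runs."""
    candidates, runs = list(records), 0
    while len(candidates) > 1:
        first_half = candidates[:len(candidates) // 2]
        runs += 1
        # Contingency: the next step depends on whether this run reports an error.
        candidates = first_half if runs_with_error(first_half) else candidates[len(candidates) // 2:]
    return candidates[0], runs
```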
d. You have three jugs, measuring 12 gallons, 8 gallons, and 3 gallons, and a water faucet. You can fill the jugs up or empty them out from one to another or onto the ground. You need to measure out exactly one gallon.
Initial state: jugs have values [0, 0, 0].
Goal test: some jug contains exactly one gallon.
Successor function: given values [x, y, z], generate [12, y, z], [x, 8, z], [x, y, 3] (by filling); [0, y, z], [x, 0, z], [x, y, 0] (by emptying); or, for any two jugs with current values x and y, pour y into x; this changes the jug with x to the minimum of x + y and the capacity of the jug, and decrements the jug with y by the amount gained by the first jug.
Cost function: Number of actions.
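The same successor function written out as a Python sketch, with the state taken to be a tuple of gallons in the 12-, 8-, and 3-gallon jugs.

```python
CAPACITIES = (12, 8, 3)

def successors(state):
    """Yield (action, new_state) pairs: fill a jug, empty it, or pour one into another."""
    for i, cap in enumerate(CAPACITIES):
        if state[i] < cap:                       # fill jug i from the faucet
            yield ('fill jug %d' % i, state[:i] + (cap,) + state[i + 1:])
        if state[i] > 0:                         # empty jug i onto the ground
            yield ('empty jug %d' % i, state[:i] + (0,) + state[i + 1:])
        for j in range(len(CAPACITIES)):         # pour jug i into jug j
            if i != j and state[i] > 0 and state[j] < CAPACITIES[j]:
                amount = min(state[i], CAPACITIES[j] - state[j])
                new = list(state)
                new[i] -= amount
                new[j] += amount
                yield ('pour jug %d into jug %d' % (i, j), tuple(new))

def goal_test(state):
    return 1 in state                            # some jug holds exactly one gallon
```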
3.8 Consider a state space where the start state is number 1 and the successor function for state n returns two states, numbers 2n and 2n+1.
a. Draw the portion of the state space for states 1 to 15.
See Figure S3.1.
Figure S3.1: The state space for the problem defined in Ex. 3.8.
b. Suppose the goal state is 11. List the order in which nodes will be visited for breadth-first search, depth-limited search with limit 3, and iterative deepening search.
Breadth-first: 1 2 3 4 5 6 7 8 9 10 11
Depth-limited: 1 2 4 8 9 5 10 11
Iterative deepening: 1; 1 2 3; 1 2 4 5 3 6 7; 1 2 4 8 9 5 10 11
c. Would bidirectional search be appropriate for this problem? If so, describe in detail how it would work.
Bidirectional search is very useful, because the only successor of n in the reverse direction is ⌊n/2⌋. This helps focus the search.
d. What is the branching factor in each direction of the bidirectional search?
2 in the forward direction; 1 in the reverse direction.
e. Does the answer to (c) suggest a reformulation of the problem that would allow you to solve the problem of getting from state 1 to a given goal state with almost no search?
Yes; start at the goal, and apply the single reverse successor action until you reach 1.
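A sketch of the reformulation in (e): working backward from the goal, the only reverse successor of n is n // 2, so the path can be written down with no search at all.

```python
def path_to(goal):
    """Return the path 1 -> ... -> goal by repeatedly halving the goal state."""
    path = [goal]
    while path[-1] > 1:
        path.append(path[-1] // 2)
    return list(reversed(path))

# path_to(11) == [1, 2, 5, 11]: each forward step goes from n to either 2n or 2n + 1.
```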
Chapter 4
4.2 The heuristic path algorithm is a best-first search in which the objective function is f(n) = (2 - w)g(n) + wh(n). For what values of w is this algorithm guaranteed to be optimal? (You may assume that h is admissible.) What kind of search does this perform when w = 0? When w = 1? When w = 2?
w = 0 gives f(n) = 2g(n). This behaves exactly like uniform-cost search; the factor of two makes no difference in the ordering of the nodes. w = 1 gives A* search. w = 2 gives f(n) = 2h(n), i.e., greedy best-first search. We also have f(n) = (2 - w)[g(n) + (w/(2 - w))h(n)], which behaves exactly like A* search with the heuristic (w/(2 - w))h(n). For w ≤ 1, this is always less than or equal to h(n) and hence admissible, provided h(n) is itself admissible.
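A quick numerical check of the rearrangement used above; the particular values of g, h, and w are arbitrary examples.

```python
def f(g, h, w):
    return (2 - w) * g + w * h                   # heuristic path objective

def f_factored(g, h, w):
    return (2 - w) * (g + (w / (2 - w)) * h)     # same quantity, valid for w < 2

# The two forms agree, so the algorithm orders nodes like A* with heuristic (w/(2-w))*h(n).
for w in (0.0, 0.5, 1.0, 1.5):
    assert abs(f(3.0, 4.0, w) - f_factored(3.0, 4.0, w)) < 1e-9
```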
4.11 Give the name of the algorithm that results from each of the following special cases:
a. Local beam search with k = 1.
Local beam search with k = 1 is hill-climbing search.
b. Local beam search with one initial state and no limit on the number of states retained.
Local beam search with k = ∞: strictly speaking, this doesn't make sense. (Exercise may be modified in future printings.) The idea is that if every successor is retained (because k is unbounded), then the search resembles breadth-first search in that it adds one complete layer of nodes before adding the next layer. Starting from one state, the algorithm would be essentially identical to breadth-first search except that each layer is generated all at once.
c. Simulated annealing with T = 0 at all times (and omitting the termination test).
Simulated annealing with T = 0 at all times: ignoring the fact that the termination step would be triggered immediately, the search would be identical to first-choice hill climbing because every downward successor would be rejected with probability 1. (Exercise may be modified in future printings.)
d. Genetic algorithm with population size N = 1.
Genetic algorithm with population size N = 1: if the population size is 1, then the two selected parents will be the same individual; crossover yields an exact copy of the individual; then there is a small chance of mutation. Thus, the algorithm executes a random walk in the space of individuals.
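A small sketch of local beam search, useful for seeing the special case in (a): with k = 1 only the single best successor is ever kept, which is exactly hill climbing. The toy objective and successor function are assumptions for illustration.

```python
def local_beam_search(k, start_states, successors, value, max_steps=1000):
    """Keep the k best states among the current beam and all of its successors."""
    beam = list(start_states)[:k]
    for _ in range(max_steps):
        pool = beam + [s for state in beam for s in successors(state)]
        pool.sort(key=value, reverse=True)
        if pool[:k] == beam:          # no successor improves the beam: local optimum
            break
        beam = pool[:k]
    return beam

# Toy example: maximize -(x - 7)^2 over the integers; with k = 1 this behaves like hill climbing.
best = local_beam_search(1, [0], lambda x: [x - 1, x + 1], lambda x: -(x - 7) ** 2)
```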
Chapter 5
5.1 Define in your own words the terms constraint satisfaction problem, constraint, backtracking search, arc consistency, backjumping, and min-conflicts.
• constraint satisfaction problem: A constraint satisfaction problem is a problem in which the goal is to choose a value for each of a set of variables, in such a way that the values all obey a set of constraints.
• constraint: A constraint is a restriction on the possible values of two or more variables. For example, a constraint might say that A = a is not allowed in conjunction with B = b.
• backtracking search: Backtracking search is a form of depth-first search in which there is a single representation of the state that gets updated for each successor, and then must be restored when a dead end is reached.
• arc consistency: A directed arc from variable A to variable B in a CSP is arc consistent if, for every value in the current domain of A, there is some consistent value of B.
• backjumping: Backjumping is a way of making backtracking search more efficient, by jumping back more than one level when a dead end is reached.
• min-conflicts: Min-conflicts is a heuristic for use with local search on CSP problems. The heuristic says that, when given a variable to modify, choose the value that conflicts with the fewest number of other variables.
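A minimal sketch of one min-conflicts step for local search on a CSP; the `num_conflicts` callback, which counts how many other variables a candidate value conflicts with, is an assumed helper rather than a library function.

```python
import random

def min_conflicts_step(assignment, var, domain, num_conflicts):
    """Reassign var to the value that conflicts with the fewest other variables,
    breaking ties at random (one step of the min-conflicts heuristic)."""
    counts = {value: num_conflicts(var, value, assignment) for value in domain}
    best = min(counts.values())
    assignment[var] = random.choice([v for v, c in counts.items() if c == best])
    return assignment
```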
5.2 How many solutions are there for the map-coloring problem in Figure 5.1?
There are 18 solutions for coloring Australia with three colors. Start with SA, which can have any of three colors. Then, moving clockwise, WA can have either of the other two colors, and everything else is strictly determined; that makes 6 possibilities for the mainland, which, times 3 for Tasmania, yields 18.
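This count can be checked by brute force over all 3^7 assignments (the adjacency list below is transcribed from the Australia map of Figure 5.1).

```python
from itertools import product

NEIGHBORS = {'SA': ['WA', 'NT', 'Q', 'NSW', 'V'], 'WA': ['SA', 'NT'],
             'NT': ['SA', 'WA', 'Q'], 'Q': ['SA', 'NT', 'NSW'],
             'NSW': ['SA', 'Q', 'V'], 'V': ['SA', 'NSW'], 'T': []}
REGIONS = list(NEIGHBORS)

def count_colorings(colors=('R', 'G', 'B')):
    """Count proper colorings of the Australia map; prints 18 for three colors."""
    total = 0
    for values in product(colors, repeat=len(REGIONS)):
        coloring = dict(zip(REGIONS, values))
        if all(coloring[r] != coloring[n] for r in REGIONS for n in NEIGHBORS[r]):
            total += 1
    return total

print(count_colorings())   # 18
```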
5.6 Solve the cryptarithmetic problem in Figure 5.2 by hand, using backtracking, forward checking, and the MRV and least-constraining-value heuristics.
a. Choose the X3 variable. Its domain is {0, 1}.
b. Choose the value 1 for X3. (We can't choose 0; it wouldn't survive forward checking, because it would force F to be 0, and the leading digit of the sum must be non-zero.)
c. Choose F, because it has only one remaining value.
d. Choose the value 1 for F.
e. Now X2 and X1 are tied for minimum remaining values at 2; let's choose X2.
f. Either value survives forward checking; let's choose 0 for X2.
g. Now X1 has the minimum remaining values.
h. Again, arbitrarily choose 0 for the value of X1.
i. The variable O must be an even number (because it is the sum of T + T) less than 5 (because O + O = R + 10 × 0). That makes it most constrained.
j. Arbitrarily choose 4 as the value of O.
k. R now has only 1 remaining value.
l. Choose the value 8 for R.
m. T now has only 1 remaining value.
n. Choose the value 7 for T.
o. U must be an even number less than 9; choose U.
p. The only value for U that survives forward checking is 6.
q. The only variable left is W.
r. The only value left for W is 3.
s. This is a solution.
This is a rather easy (under-constrained) puzzle, so it is not surprising that we arrive at a solution with no backtracking (given that we are allowed to use forward checking).
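The assignment found above can be checked directly: T = 7, W = 3, O = 4, F = 1, U = 6, R = 8, so TWO = 734 and FOUR = 1468.

```python
T, W, O, F, U, R = 7, 3, 4, 1, 6, 8
two = 100 * T + 10 * W + O
four = 1000 * F + 100 * O + 10 * U + R
assert two + two == four                      # 734 + 734 = 1468
assert len({T, W, O, F, U, R}) == 6           # all letters stand for distinct digits
```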
Chapter 6
6.1 This problem exercises the basic concepts of game playing, using tic-tac-toe (noughts and crosses) as an example. We define Xn as the number of rows, columns, or diagonals with exactly n X's and no O's. Similarly, On is the number of rows, columns, or diagonals with just n O's. The utility function assigns +1 to any position with X3 = 1 and -1 to any position with O3 = 1. All other terminal positions have utility 0. For nonterminal positions, we use a linear evaluation function defined as Eval(s) = 3X2(s) + X1(s) - (3O2(s) + O1(s)).
a. Approximately how many possible games of tic-tac-toe are there?
b. Show the whole game tree starting from an empty board down to depth 2 (i.e., one X and one O on the board), taking symmetry into account.
c. Mark on your tree the evaluations of all the positions at depth 2.
d. Using the minimax algorithm, mark on your tree the backed-up values for the positions at depths 1 and 0, and use those values to choose the best starting move.
e. Circle the nodes at depth 2 that would not be evaluated if alpha-beta pruning were applied, assuming the nodes are generated in the optimal order for alpha-beta pruning.
Figure S6.1 shows the game tree, with the evaluation function values below the terminal nodes and the backed-up values to the right of the non-terminal nodes. The values imply that the best starting move for X is to take the center. The terminal nodes with a bold outline are the ones that do not need to be evaluated, assuming the optimal ordering.
Figure S6.1: Part of the game tree for tic-tac-toe, for Exercise 6.1.
6.15 Describe how the minimax and alpha-beta algorithms change for two-player, non-zero-sum games in which each player has his or her own utility function. You may assume that each player knows the other's utility function. If there are no constraints on the two terminal utilities, is it possible for any node to be pruned by alpha-beta?
The minimax algorithm for non-zero-sum games works exactly as for multiplayer games, described on pp. 165-166; that is, the evaluation function is a vector of values, one for each player, and the backup step selects whichever vector has the highest value for the player whose turn it is to move. The example at the end of Section 6.2 (p. 167) shows that alpha-beta pruning is not possible in general non-zero-sum games, because an unexamined leaf node might be optimal for both players.
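A sketch of the vector-valued backup described above; the `game` interface (is_terminal, utility_vector, to_move, successors) is hypothetical and not a particular library's API.

```python
def backed_up_value(state, game):
    """Minimax for a two-player, non-zero-sum game: utilities are vectors with one
    entry per player, and the player to move picks the successor whose backed-up
    vector is best for that player. No alpha-beta pruning is possible in general."""
    if game.is_terminal(state):
        return game.utility_vector(state)      # e.g. (utility_for_player0, utility_for_player1)
    turn = game.to_move(state)                 # 0 or 1: index of the player to move
    return max((backed_up_value(s, game) for s in game.successors(state)),
               key=lambda v: v[turn])
```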
Chapter 7
7.3 Consider the problem of deciding whether a propositional logic sentence is true in a given model.
a. Write a recursive algorithm PL-TRUE?(s, m) that returns true if and only if the sentence s is true in the model m (where m assigns a truth value for every symbol in s). The algorithm should run in time linear in the size of the sentence. (Alternatively, use a version of this function from the online code repository.)
There is a pl_true in the Python code, and a version of ask in the Lisp code that serves the same purpose. (The Java code did not have this function as of May 2003, but it should be added soon.)
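For reference, here is a minimal recursive sketch in the spirit of PL-TRUE?; it is not the repository's pl_true, and the sentence representation (nested tuples such as ('or', 'P', ('not', 'P'))) is assumed for illustration.

```python
def pl_true(sentence, model):
    """Truth of a propositional sentence in a complete model, in time linear in the
    size of the sentence. A sentence is a symbol name or a tuple whose first element
    is one of 'not', 'and', 'or', 'implies', 'iff'."""
    if isinstance(sentence, str):
        return model[sentence]
    op, args = sentence[0], sentence[1:]
    values = [pl_true(arg, model) for arg in args]
    if op == 'not':
        return not values[0]
    if op == 'and':
        return all(values)
    if op == 'or':
        return any(values)
    if op == 'implies':
        return (not values[0]) or values[1]
    if op == 'iff':
        return values[0] == values[1]
    raise ValueError('unknown operator: %r' % (op,))

# pl_true(('or', 'P', ('not', 'P')), {'P': False}) == True
```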
b. Give three examples of sentences that can be determined to be true or false in a partial model that does not specify a truth value for some of the symbols.
The sentences True, P ∨ ¬P, and P ∧ ¬P can all be determined to be true or false in a partial model that does not specify the truth value for P.
c. Show that the truth value (if any) of a sentence in a partial model cannot be determined efficiently in general.
It is possible to create two sentences, each with k variables that are not instantiated in the partial model, such that one of them is true for all 2^k possible values of the variables, while the other sentence is false for one of the 2^k values. This shows that in general one must consider all 2^k possibilities. Enumerating them takes exponential time.
d. Modify your PL-TRUE? algorithm so that it can sometimes judge truth from partial models, while retaining its recursive structure and linear run time. Give three examples of sentences whose truth in a partial model is not detected by your algorithm.
The Python implementation of pl_true returns true if any disjunct of a disjunction is true, and false if any conjunct of a conjunction is false. It will do this even if other disjuncts/conjuncts contain uninstantiated variables. Thus, in the partial model where P is true, P ∨ Q returns true, and ¬P ∧ Q returns false. But the truth values of Q ∨ ¬Q, Q ∨ True, and Q ∧ ¬Q are not detected.
e. Investigate whether the modified algorithm makes TT-ENTAILS? more efficient.
Our version of tt_entails already uses this modified pl_true. It would be slower if it did not.
7.5 Consider a vocabulary with only four propositions, A, B, C, and D. How many models are there for the following sentences?
a. (A ∧ B) ∨ (B ∧ C)
b. A ∨ B
c. A ⇔ B ⇔ C
These can be computed by counting the rows in a truth table that come out true. Remember to count the propositions that are not mentioned; if a sentence mentions only A and B, then we multiply the number of models for {A, B} by 2^2 to account for C and D.
a. 6
b. 12
c. 4
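These counts can be reproduced by enumerating the 2^4 rows of the truth table; in (c) the chain is read as (A ⇔ B) and (B ⇔ C), which is the reading that gives the answer of 4.

```python
from itertools import product

def count_models(holds, symbols=('A', 'B', 'C', 'D')):
    """Count models by enumerating every assignment of the four proposition symbols."""
    return sum(bool(holds(dict(zip(symbols, row))))
               for row in product([True, False], repeat=len(symbols)))

print(count_models(lambda m: (m['A'] and m['B']) or (m['B'] and m['C'])))      # 6
print(count_models(lambda m: m['A'] or m['B']))                                # 12
print(count_models(lambda m: m['A'] == m['B'] and m['B'] == m['C']))           # 4
```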
Chapter 8
8.6 Represent the following sentences in first-order logic, using a consistent vocabulary (which you must define):
a. Some students took French in spring 2001.
b. Every student who takes French passes it.
c. Only one student took Greek in spring 2001.
d. The best score in Greek is always higher than the best score in French.
e. Every person who buys a policy is smart.
f. No person buys an expensive policy.
g. There is an agent who sells policies only to people who are not insured.
h. There is a barber who shaves all men in town who do not shave themselves.
i. A person born in the UK, each of whose parents is a UK citizen or a UK resident, is a UK citizen by birth.
j. A person born outside the UK, one of whose parents is a UK citizen by birth, is a UK citizen by descent.
k. Politicians can fool some of the people all of the time, and they can fool all of the people some of the time, but they can't fool all of the people all of the time.
In this exercise, it is best not to worry about details of tense and larger concerns with consistent ontologies and so on. The main point is to make sure students understand connectives and quantifiers and the use of predicates, functions, constants, and equality. Let the basic vocabulary be as follows:
Takes(x, c, s): student x takes course c in semester s;
Passes(x, c, s): student x passes course c in semester s;
Score(x, c, s): the score obtained by student x in course c in semester s;
x > y: x is greater than y;
F and G: specific French and Greek courses (one could also interpret these sentences as referring to any such course, in which case one could use a predicate Subject(c, f) meaning that the subject of course c is field f);
Buys(x, y, z): x buys y from z (using a binary predicate with unspecified seller is OK but less felicitous);
Sells(x, y, z): x sells y to z;
Shaves(x, y): person x shaves person y;
Born(x, c): person x is born in country c;
Parent(x, y): x is a parent of y;
Citizen(x, c, r): x is a citizen of country c for reason r;
Resident(x, c): x is a resident of country c;
Fools(x, y, t): person x fools person y at time t;
Student(x), Person(x), Man(x), Barber(x), Expensive(x), Agent(x), Insured(x), Smart(x), Politician(x): predicates satisfied by members of the corresponding categories.
a. Some students took French in spring 2001.
∃x Student(x) ∧ Takes(x, F, Spring2001).
b. Every student who takes French passes it.
∀x, s Student(x) ∧ Takes(x, F, s) ⇒ Passes(x, F, s).
c. Only one student took Greek in spring 2001.
∃x Student(x) ∧ Takes(x, G, Spring2001) ∧ ∀y (y ≠ x ⇒ ¬Takes(y, G, Spring2001)).
d. The best score in Greek is always higher than the best score in French.
∀s ∃x ∀y Score(x, G, s) > Score(y, F, s).
e. Every person who buys a policy is smart.
∀x Person(x) ∧ (∃y, z Policy(y) ∧ Buys(x, y, z)) ⇒ Smart(x).
f. No person buys an expensive policy.
∀x, y, z Person(x) ∧ Policy(y) ∧ Expensive(y) ⇒ ¬Buys(x, y, z).
g. There is an agent who sells policies only to people who are not insured.
∃x Agent(x) ∧ ∀y, z Policy(y) ∧ Sells(x, y, z) ⇒ (Person(z) ∧ ¬Insured(z)).
h. There is a barber who shaves all men in town who do not shave themselves.
∃x Barber(x) ∧ ∀y Man(y) ∧ ¬Shaves(y, y) ⇒ Shaves(x, y).
i. A person born in the UK, each of whose parents is a UK citizen or a UK resident, is a UK citizen by birth.
∀x Person(x) ∧ Born(x, UK) ∧ (∀y Parent(y, x) ⇒ ((∃r Citizen(y, UK, r)) ∨ Resident(y, UK))) ⇒ Citizen(x, UK, Birth).
j. A person born outside the UK, one of whose parents is a UK citizen by birth, is a UK citizen by descent.
∀x Person(x) ∧ ¬Born(x, UK) ∧ (∃y Parent(y, x) ∧ Citizen(y, UK, Birth)) ⇒ Citizen(x, UK, Descent).
k. Politicians can fool some of the people all of the time, and they can fool all of the people some of the time, but they can't fool all of the people all of the time.
∀x Politician(x) ⇒
  (∃y ∀t Person(y) ∧ Fools(x, y, t)) ∧
  (∃t ∀y Person(y) ⇒ Fools(x, y, t)) ∧
  ¬(∀t ∀y Person(y) ⇒ Fools(x, y, t)).
Chapter 9
9.4 For each pair of atomic sentences, give the most general unifier if it exists:
a. P(A, B, B), P(x, y, z).
b. Q(y, G(A, B)), Q(G(x, x), y).
c. Older(Father(y), y), Older(Father(x), John).
d. Knows(Father(y), y), Knows(x, x).
a. {x/A, y/B, z/B} (or some permutation of this).
b. No unifier (x cannot bind to both A and B).
c. {y/John, x/John}.
d. No unifier (because the occurs check prevents unification of y with Father(y)).
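A compact sketch of unification with the occurs check, using an assumed term representation (lowercase strings are variables, capitalized strings are constants, compounds are tuples like ('Father', 'y')); it reproduces the four answers above.

```python
def is_var(t):
    return isinstance(t, str) and t[:1].islower()       # x, y, z; constants like A, John are capitalized

def occurs(var, term, subst):
    if term == var:
        return True
    if is_var(term) and term in subst:
        return occurs(var, subst[term], subst)
    if isinstance(term, tuple):
        return any(occurs(var, arg, subst) for arg in term[1:])
    return False

def unify(x, y, subst=None):
    """Return a most general unifier (a dict) of two terms, or None if none exists."""
    if subst is None:
        subst = {}
    if x == y:
        return subst
    if is_var(x):
        return unify_var(x, y, subst)
    if is_var(y):
        return unify_var(y, x, subst)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y) and x[0] == y[0]:
        for a, b in zip(x[1:], y[1:]):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None

def unify_var(var, term, subst):
    if var in subst:
        return unify(subst[var], term, subst)
    if is_var(term) and term in subst:
        return unify(var, subst[term], subst)
    if occurs(var, term, subst):                         # the occurs check from part (d)
        return None
    return {**subst, var: term}

# (a) unify(('P', 'A', 'B', 'B'), ('P', 'x', 'y', 'z')) -> {'x': 'A', 'y': 'B', 'z': 'B'}
# (d) unify(('Knows', ('Father', 'y'), 'y'), ('Knows', 'x', 'x')) -> None (occurs check)
```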
9.9 Write down logical representations for the following sentences, suitable for use with Generalized Modus Ponens:
a. Horses, cows, and pigs are mammals.
b. An offspring of a horse is a horse.
c. Bluebeard is a horse.
d. Bluebeard is Charlie's parent.
e. Offspring and parent are inverse relations.
f. Every mammal has a parent.
a. Horse(x) ⇒ Mammal(x)
   Cow(x) ⇒ Mammal(x)
   Pig(x) ⇒ Mammal(x)
b. Offspring(x, y) ∧ Horse(y) ⇒ Horse(x)
c. Horse(Bluebeard)
d. Parent(Bluebeard, Charlie)
e. Offspring(x, y) ⇒ Parent(y, x)
   Parent(x, y) ⇒ Offspring(y, x)
   (Note we couldn't do Offspring(x, y) ⇔ Parent(y, x) because that is not in the form expected by Generalized Modus Ponens.)
f. Mammal(x) ⇒ Parent(G(x), x) (here G is a Skolem function).