What is AI?
-----------
AI is a branch of computer science concerned with creating machines capable of performing tasks that typically require human intelligence (to think, manipulate, adapt, perceive, respond).
AI has been defined in eight ways, which can be grouped along two dimensions (thought vs. behaviour, and human vs. rational), giving the four approaches below.
* THINK HUMANLY - follows the cognitive modelling approach.
- To say that a program thinks like a human, we must first learn how humans think. This can be done in 3 ways:
i. Introspection - catch your own thoughts as they go by.
ii. Psychological experiment - observe another person in action.
iii. Brain imaging - observe the brain in action.
- After understanding human thought, we incorporate it into a program. If the program's input/output behaviour matches human behaviour, that is evidence that some of the program's mechanisms could also be operating in humans.
* THINK RATIONALLY - follows the "laws of thought" approach.
- This approach is concerned with reaching the correct or optimal conclusion through logical reasoning, i.e., an idealized form of the thought process. It is achieved using LOGIC - patterns of argument that yield correct conclusions when supplied with correct premises (for example: "Socrates is a man; all men are mortal; therefore Socrates is mortal").
* ACT HUMANLY - follows the Turing Test approach.
- Involves creating AI systems that can simulate human behaviour. For this, the computer would need the following capabilities:
i. Natural Language Processing (NLP) - to communicate in a human language.
ii. Knowledge representation - to store what it knows and hears.
iii. Automated reasoning - to answer questions and draw new conclusions.
iv. Machine learning - to adapt to new circumstances and detect patterns.
v. Computer vision - to perceive the world.
vi. Robotics - to manipulate objects and move about.
* ACT RATIONALLY - follows the rational agent approach.
- Agent - something that acts. All programs do something, but agents are expected to do more: operate autonomously, perceive their environment, adapt to change, and pursue goals.
- Rational agent - one that acts so as to achieve the best outcome or, when there is uncertainty, the best expected outcome.
Foundations of AI
-----------------
AI draws upon various disciplines to build a comprehensive understanding and implementation of intelligent systems.
Key foundations include:
*Philosophy: Examines fundamental questions about existence, knowledge,
values, reason, mind, and language to understand the nature of intelligence.
*Mathematics: Provides the theoretical basis for algorithms, probability, statistics,
and optimization, crucial for designing intelligent systems.
*Economics: Explores decision-making and resource allocation, contributing to
the understanding of rational behavior in AI agents.
*Psychology: Studies human cognition and behavior, influencing the design of AI
systems to replicate aspects of human intelligence.
*Computer Engineering: Develops hardware and software to implement AI
algorithms and systems efficiently.
*Neuroscience: Investigates the brain's structure and function, inspiring models
for neural networks in AI.
*Control Theory: Provides frameworks for regulating and optimizing system
behavior, essential in AI applications like robotics.
*Linguistics: Contributes to natural language processing, enabling machines to
understand and generate human language.
State of the Art (Applications of AI)
-------------------------------------
This refers to present-day applications of AI; a few of them are listed below.
* Robotics:
Example: STANLEY, a driverless robotic car, won the 2005 DARPA Grand Challenge, showcasing AI's ability to navigate rough terrain.
Significance: Autonomous vehicles demonstrate the potential for AI to enhance transportation safety and efficiency, with applications in various industries.
* Speech Recognition:
Example: Automated speech recognition in the airline industry, where travelers
can book flights through a conversation with a computerized system.
Significance: Speech recognition systems streamline customer interactions,
improving user experience and reducing the need for human intervention in
routine tasks.
*Autonomous Planning and Scheduling:
Example: NASA's Remote Agent autonomously plans spacecraft operations,
monitoring and adapting to issues during execution.
Significance: AI-driven planning enhances efficiency in complex, remote
operations, minimizing human intervention and responding to unforeseen
challenges.
*Game Playing:
Example: IBM's DEEP BLUE defeating Garry Kasparov in chess, marking a
significant milestone in AI's ability to outperform human champions.
Significance: AI's success in strategic games demonstrates its capacity for complex
decision-making and problem-solving, with potential applications in various
strategic domains.
*Spam Fighting:
Example: Learning algorithms classifying over a billion messages as spam daily,
efficiently filtering out unwanted content.
Significance: AI in spam detection reduces the burden on users, enhancing email
communication by automatically identifying and isolating irrelevant or malicious
content.
*Logistics Planning:
Example: DART aiding U.S. forces in the Persian Gulf crisis, automating logistics
planning for a large number of vehicles, cargo, and personnel.
Significance: AI-driven logistics planning optimizes resource utilization,
particularly in high-stakes scenarios, demonstrating substantial returns on
investment for organizations.
*Robotics in Hazardous Environments:
Example: iRobot Corporation's deployment of PackBot in conflict zones for
handling hazardous materials, explosives, and sniper detection.
Significance: AI-driven robotic systems enhance safety by executing tasks in
dangerous environments, showcasing the potential for automation in high-risk
scenarios.
*Machine Translation:
Example: Automatic translation from Arabic to English using statistical models,
enabling cross-language communication.
Significance: AI-driven translation facilitates global communication, breaking
down language barriers and showcasing the power of machine learning in
language processing.
AGENTS AND ENVIRONMENT
----------------------
AGENTS
- An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.
(diagram in notes)
- A human agent has eyes, ears, and other organs for sensors, and hands, legs, vocal tract, and so on for actuators.
- A robotic agent has cameras and infrared range finders for sensors and various motors for actuators.
- A software agent receives file contents, network packets, and human input (keyboard/mouse/touchscreen/voice) as sensory inputs and acts on the environment by writing files, sending network packets, and displaying information or generating sounds.
PERCEPT
- the input that an intelligent agent perceives at any given instant.
PERCEPT SEQUENCE
- complete history of everything the agent has ever perceived.
AGENT FUNCTION
- a mathematical function that maps any given percept sequence to an action.
- In principle, an agent's behaviour can be tabulated by trying out every possible percept sequence and recording which action the agent takes in response. The table is, of course, an external characterization of the agent.
AGENT PROGRAM
- the agent function for an artificial agent is implemented by an agent program
- a software program designed to interact with its environment, perceive the
data it receives, and take actions based on that data to achieve specific goals.
Let's understand all the above with the example of a vacuum cleaner:
environment - the squares/tiles of the floor.
agent - the vacuum cleaner itself.
percept - the information it receives about the state of the floor (dirty/clean).
percept sequence - the series of information it has gathered about the state of the floor over time.
agent function - determines how the vacuum cleaner should move (left/right) and clean based on the percept sequence.
agent program - the set of instructions and algorithms that guide its movements and cleaning actions based on the information it receives from its sensors.
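A minimal sketch in Python of how such a vacuum agent program might look (the function name vacuum_agent and the actions Suck/Left/Right are illustrative assumptions, not from the notes):

    # Illustrative sketch of a simple vacuum-cleaner agent program.
    # A percept is a (location, status) pair, e.g. ("A", "Dirty").

    def vacuum_agent(percept):
        """Map the current percept to an action via condition-action rules."""
        location, status = percept
        if status == "Dirty":
            return "Suck"      # clean the current square
        elif location == "A":
            return "Right"     # move to the other square
        else:
            return "Left"

    # Example run over a short percept sequence:
    for percept in [("A", "Dirty"), ("A", "Clean"), ("B", "Dirty")]:
        print(percept, "->", vacuum_agent(percept))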
GOOD BEHAVIOUR: THE CONCEPT OF RATIONALITY
------------------------------------------
RATIONAL AGENT
- one that does the right thing; conceptually speaking, every entry in the table for the agent function is filled out correctly.
PERFORMANCE MEASURE
- a metric or criterion used to evaluate the effectiveness or success of an AI system
in solving a specific task or problem. It is a way to quantify how well the system is
performing and achieving its objectives.
-As a general rule, it is better to design performance measures according to what
one actually wants in the environment, rather than according to how one thinks
the agent should behave.
RATIONALITY
- concerned with expected performance (doing the best with what the agent knows), not perfection.
- The ability to make decisions based on logical reasoning and to optimize behaviour toward its goals, given its perception of the environment and the performance measure.
- What is rational at any given time depends on four things:
The performance measure that defines the criterion of success.
The agent's prior knowledge of the environment.
The actions that the agent can perform.
The agent's percept sequence to date.
OMNISCIENCE, LEARNING, AUTONOMY
OMNISCIENCE - the idea of having complete and unlimited knowledge or awareness.
- An omniscient (perfect) agent knows the actual outcome of its actions and can act accordingly; but perfection is impossible in reality.
- Rationality does not require omniscience; it only requires making the best of what has been perceived so far. Doing actions in order to modify future percepts, sometimes called information gathering, is an important part of rationality. A second example of information gathering is exploration.
LEARNING - the ability of an intelligent agent to improve its performance over time by adapting to its environment or learning from experience (the percept sequence).
AUTONOMY (vs. prior knowledge) - the degree of independence and self-governance exhibited by an intelligent agent. An autonomous agent is capable of making decisions and taking actions without direct human intervention, relying on its own percepts rather than only on the prior knowledge built in by its designer.
Let's understand all the above with the example of a vacuum cleaner:
environment - square/tile.
rational agent - the vacuum cleaner itself.
performance measure - the cleanliness of the floor it achieves over time.
rationality - the vacuum cleaner consistently cleaning the floor effectively based
on the information it receives.
omniscience - No agent, including a vacuum cleaner, is omniscient, as they can
only perceive a limited portion of their environment. The vacuum cleaner has
knowledge limited to what its sensors can detect, and it cannot have complete
awareness of the entire environment.
learning - involve adjusting its cleaning strategy based on past experiences, such
as finding more efficient paths or optimizing cleaning patterns.
autonomy - navigate and clean without constant human intervention, relying on
its programming and sensors to make decisions.
NATURE OF ENVIRONMENTS
----------------------
To design an AI system, the setting in which the agent is to be rational must be specified: the performance measure, the environment, and the agent's actuators and sensors. We group all these under the heading of the task environment.
For the acronymically minded, we call this the PEAS (Performance, Environment, Actuators, Sensors) description. In designing an agent, the first step must always be to specify the task environment as fully as possible.
TASK ENVIRONMENT
- refers to the external system or surroundings in which an intelligent agent
operates and interacts. The task environment defines the conditions and
challenges that the agent faces while performing its designated tasks.
Let's specify the task environment for a vacuum cleaner:
Agent: vacuum cleaner
Performance measure: cleanliness, efficiency, battery life, security.
Environment: room, table, wood floor, carpet, various obstacles.
Actuators: wheels, brushes, vacuum extractor.
Sensors: camera, dirt detection sensor, cliff sensor, bump sensor, infrared wall sensor, etc.
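As a small illustration (not part of the notes), the same PEAS description can be written down as structured data; the variable and field names below are assumptions:

    # Illustrative PEAS description of the vacuum-cleaner task environment.
    vacuum_peas = {
        "performance": ["cleanliness", "efficiency", "battery life", "security"],
        "environment": ["room", "table", "wood floor", "carpet", "obstacles"],
        "actuators":   ["wheels", "brushes", "vacuum extractor"],
        "sensors":     ["camera", "dirt sensor", "cliff sensor",
                        "bump sensor", "infrared wall sensor"],
    }

    # Print the description part by part.
    for part, items in vacuum_peas.items():
        print(part + ":", ", ".join(items))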
PROPERTIES OF TASK ENVIRONMENT
1.Fully Observable vs. Partially Observable:
Fully Observable: The environment is fully observable if the agent's sensors
provide complete information about the state at each point in time.
Partially Observable: The environment is partially observable if the sensors are
noisy, inaccurate, or if some aspects of the state are missing from the sensor data.
2.Single-Agent vs. Multiagent:
Single-Agent: In a single-agent environment, there is only one entity making
decisions, e.g., solving a crossword puzzle.
Multiagent: In a multiagent environment, multiple entities make decisions, and
their behavior affects each other, e.g., playing chess against an opponent.
3.Deterministic vs. Nondeterministic:
Deterministic: The environment is deterministic if the next state is entirely
determined by the current state and the agent's actions.
Nondeterministic: The environment is nondeterministic if outcomes are not
entirely determined, introducing uncertainty.
4.Episodic vs. Sequential:
Episodic: In an episodic environment, the agent's experience is divided into
atomic episodes, and actions in one episode do not influence others.
Sequential: In a sequential environment, current decisions can affect future
decisions, introducing a dependency on past actions.
5.Static vs. Dynamic:
Static: The environment is static if it does not change while the agent is
deliberating.
Dynamic: The environment is dynamic if it can change while the agent is making
decisions.
6.Discrete vs. Continuous:
Discrete: The environment is discrete if it has a finite number of distinct states,
percepts, and actions.
Continuous: The environment is continuous if states, time, percepts, and actions
involve a range of continuous values.
7.Known vs. Unknown:
Known: The environment is known if the agent has complete knowledge of the
outcomes and probabilities of actions.
Unknown: The environment is unknown if the agent needs to learn about it to make good decisions.
All these properties are explained below with reference to a vacuum cleaner.
1.Fully Observable vs. Partially Observable:
Fully Observable: The environment is fully observable if the vacuum cleaner's
sensors provide complete and accurate information about the entire floor at all
times. For example, if the vacuum cleaner has a camera that captures the entire
floor, it operates in a fully observable environment.
Partially Observable: If the vacuum cleaner relies on sensors that only detect dirt
in the local area and cannot see the entire floor, the environment becomes
partially observable.
2.Single-Agent vs. Multiagent:
Single-Agent: In a single-agent scenario, the vacuum cleaner operates alone and
makes decisions independently. For instance, if the vacuum cleaner is the only
device cleaning the room, it functions in a single-agent environment.
Multiagent: If there are multiple vacuum cleaners cleaning the same space, and
their actions may affect each other, it becomes a multiagent environment.
3.Deterministic vs. Nondeterministic:
Deterministic: A vacuum cleaner operates in a deterministic environment if,
based on its current state and action (cleaning a specific area), the state of the
floor is entirely predictable. In a deterministic scenario, the vacuum cleaner
knows the outcome of its actions.
Nondeterministic: If there are uncertain elements, such as unpredictable changes
in the floor state or the vacuum's performance, the environment becomes
nondeterministic.
4.Episodic vs. Sequential:
Episodic: If the vacuum cleaner's cleaning process is divided into separate,
independent episodes where each cleaning action is based solely on the current
state of a localized area (no consideration of past actions or future
consequences), it operates in an episodic environment.
Sequential: In a sequential environment, the vacuum cleaner's current cleaning
action might depend on past actions, and its decisions could influence the overall
cleanliness of the entire floor over time.
5.Static vs. Dynamic:
Static: The environment is static if the layout of the room and the characteristics
of the floor remain constant while the vacuum cleaner is cleaning.
Dynamic: If, for instance, people are moving around, objects are being
rearranged, or new dirt is being introduced while the vacuum cleaner is in
operation, the environment becomes dynamic.
6.Discrete vs. Continuous:
Discrete: The environment is discrete if the vacuum cleaner operates in a setting
with distinct, countable states. For example, the vacuum cleaner moves from one
discrete location to another.
Continuous: If the vacuum cleaner can move smoothly and continuously across
the floor, adjusting its speed and direction in a continuous manner, the
environment is continuous.
7.Known vs. Unknown:
Known: The environment is known if the vacuum cleaner has complete
information about the layout of the room, the types of surfaces it will encounter,
and the performance characteristics of its sensors and actuators.
Unknown: If the vacuum cleaner is introduced to a new environment with
unfamiliar obstacles, different floor types, or unknown sensor characteristics, it
operates in an unknown environment and may need to learn about it.
STRUCTURE OF AGENT
------------------
The job of AI is to design an agent program that implements the agent function: the mapping from percepts to actions. We assume this program will run on some sort of computing device with physical sensors and actuators; we call this the architecture:
agent = architecture + program.
Architecture
Obviously, the program we choose has to be one that is appropriate for the
architecture. If the program is going to recommend actions like Walk, the
architecture had better have legs. The architecture might be just an ordinary PC,
or it might be a robotic car with several onboard computers, cameras, and other
sensors. In general, the architecture makes the percepts from the sensors
available to the program, runs the program, and feeds the program's action choices to the actuators as they are generated.
Agent program
The agent programs that we design all have the same skeleton: they take the
current percept as input from the sensors and return an action to the actuators.
Note that the agent program takes only the current percept as input, whereas the agent function depends on the entire percept history. The agent program has no choice but to take just the current percept as input because nothing more is available from the environment; if the agent's actions need to depend on the entire percept sequence, the agent will have to remember the percepts in its own memory.
Function skeleton:
function SKELETON-AGENT(percept) returns action
    static: memory, the agent's memory of the world
    memory <- UPDATE-MEMORY(memory, percept)    *fold the current percept into memory*
    action <- CHOOSE-BEST-ACTION(memory)        *choose the best action given the updated memory*
    memory <- UPDATE-MEMORY(memory, action)     *remember the chosen action for the next cycle*
    return action                               *the action changes the environment*
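A minimal runnable sketch of this skeleton in Python (the names update_memory, choose_best_action, and the percepts used are illustrative placeholders, not from the notes):

    # Illustrative Python version of the agent-program skeleton.
    # `memory` holds the agent's internal record of the world.

    def update_memory(memory, item):
        """Append the latest percept or action to the agent's memory."""
        return memory + [item]

    def choose_best_action(memory):
        """Placeholder decision rule: act on the most recent percept."""
        last_percept = memory[-1]
        return "Suck" if last_percept == "Dirty" else "Move"

    def skeleton_agent(percept, memory):
        memory = update_memory(memory, percept)   # fold percept into memory
        action = choose_best_action(memory)       # decide using remembered state
        memory = update_memory(memory, action)    # remember the chosen action
        return action, memory

    memory = []
    for percept in ["Dirty", "Clean", "Dirty"]:
        action, memory = skeleton_agent(percept, memory)
        print(percept, "->", action)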
Example: the task environment of an automated taxi driver
percept -the information gathered from the surrounding environment and the
current state of the taxi. This could include data from sensors, cameras, GPS, and
other devices that provide real-time information about the road, traffic, weather,
and the interior of the vehicle.
action - involve steering, acceleration, braking, and other necessary actions to
navigate the vehicle safely to the destination. May also include actions such as
route planning and traffic prediction, to optimize the driver's actions.
goal - to efficiently and safely transport passengers from their starting point to their destination, while ensuring passenger comfort and minimizing journey time.
environment - includes the road network, traffic conditions, weather, and the
interior of the taxi.
TYPES OF AGENTS
---------------
Each type has a diagram; please refer to the notes. A short code sketch contrasting the first two types follows after this list.
1. Simple Reflex Agent: these are the simplest agents; they take decisions on the basis of the current percept and ignore the rest of the percept history. These agents only succeed in a fully observable environment.
They do not consider any part of the percept history during their decision and action process. They work on condition-action rules, which map the current state to an action. Example: a vacuum-cleaner agent that cleans only if there is dirt in the current square.
2.Model Based Reflex Agent: this agent can work in a partially observable
environment, and track the situation.
A model-based agent has two important factors:
Model: It is knowledge about "how things happen in the world," so it is called a
Model-based agent.
Internal State: It is a representation of the current state based on percept history.
These agents have the model, "which is knowledge of the world" and based on
the model they perform actions.
3.Goal Based Agent: These agents may have to consider a long sequence of
possible actions before deciding whether the goal is achieved or not. Such
considerations of different scenario are called searching and planning, which
makes an agent proactive.
4.Utility Based Agent: these agents act based not only on goals but also on the best way to achieve them.
This agent is useful when there are multiple possible alternatives, and an agent
has to choose in order to perform the best action.
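As referenced above, here is a small illustrative sketch (not from the notes) contrasting a simple reflex agent with a model-based reflex agent in the vacuum world; the class, method, and action names are assumptions:

    # Illustrative contrast between a simple reflex agent and a
    # model-based reflex agent for the vacuum world.

    def simple_reflex_agent(percept):
        """Decides from the current percept only (condition-action rules)."""
        location, status = percept
        if status == "Dirty":
            return "Suck"
        return "Right" if location == "A" else "Left"

    class ModelBasedReflexAgent:
        """Keeps an internal state: a model of which squares are clean."""
        def __init__(self):
            self.world = {}                      # internal state built from percept history

        def act(self, percept):
            location, status = percept
            self.world[location] = status        # update the model with the new percept
            if status == "Dirty":
                self.world[location] = "Clean"   # model predicts the effect of Suck
                return "Suck"
            if len(self.world) >= 2 and all(s == "Clean" for s in self.world.values()):
                return "NoOp"                    # model says everything seen so far is clean
            return "Right" if location == "A" else "Left"

    agent = ModelBasedReflexAgent()
    for percept in [("A", "Dirty"), ("B", "Dirty"), ("A", "Clean")]:
        print(percept, "->", agent.act(percept))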
KNOWLEDGE REPRESENTATION ISSUES
-------------------------------
TYPES OF KNOWLEDGE:
* declarative knowledge
* procedural knowledge
* meta knowledge
* heuristic knowledge
* structural knowledge
Knowledge representation is the process of representing facts about the real world in a way that a system can comprehend and use. NLP is required for interaction between computer and human using natural language.
- Two different kinds of entities are used:
i. Facts - truths in some relevant world; the things we want to represent.
ii. Representations of facts - the things we will actually be able to manipulate.
Representation can be done at two levels:
i. Knowledge level - at which facts are described (agent behaviour and goals).
ii. Symbol level - at which objects from the knowledge level are defined in terms of symbols that programs can manipulate.
(diagram in notes)
Example: "Spot is a dog" ...
Representation of facts: (diagram in notes)
A good system for the representation of knowledge in a particular domain should possess the following properties:
1. Representational Adequacy: the KR system should have the ability to represent all kinds of required knowledge.
2. Inferential Adequacy: the KR system should have the ability to manipulate the representational structures to produce new knowledge corresponding to existing structures.
3. Inferential Efficiency: the ability to direct the inferential mechanism in the most productive directions by storing appropriate guides.
4. Acquisitional Efficiency: the ability to acquire new knowledge easily, using automatic methods where possible.
ISSUES IN KNOWLEDGE REPRESENTATION
----------------------------------
The issues that arise while using knowledge representation techniques are many. Some of these are explained below.
1.Important Attributes:
There are two attributes, "instance" and "isa", that are of general significance. These attributes are important because they support property inheritance.
2.Relationship among attributes:
The attributes we use to describe objects are themselves entities that we
represent. The relationship between the attributes of an object, independent of
specific knowledge they encode, may hold properties like:
Inverse This is about consistency checking when a value is added to one attribute. The entities are related to each other in many different ways.
Existence in an isa hierarchy This is about generalization-specification, like,
classes of objects and specialized subsets of those classes, there are attributes
and specialization of attributes. For example, the attribute height is a
specialization of general attribute physical-size which is, in turn, a specialization of
physical-attribute. These generalization-specialization relationships are important
for attributes because they support inheritance.
Technique for reasoning about values This is about reasoning with values of attributes not given explicitly. Several kinds of information are used in reasoning; for example, height must be in a unit of length, and the age of a person cannot be greater than the age of the person's parents. The values are often specified when a knowledge base is created.
Single valued attributes This is about a specific attribute that is guaranteed to take a unique value. For example, a baseball player can at any time have only a single height and be a member of only one team. KR systems take different approaches to providing support for single valued attributes.
3.Choosing Granularity:
Regardless of the KR formalism, it is necessary to decide at what level of detail the world should be represented: with a small number of high-level facts or a large number of low-level primitives.
High-level facts may not be adequate for inference, while low-level primitives may require a lot of storage.
Example of Granularity:
Suppose we are interested in following facts:
" John spotted Sue."
This could be represented as
==> Spotted (agent(John),object (Sue))
Such a representation would make it easy to answer questions such as:
Who spotted Sue?
Suppose we want to know:
Did John see Sue?
Given only one fact, we cannot discover that answer.
We can add other facts, such as
Spotted(x, y) -> saw(x, y)
We can now infer the answer to the question.
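A tiny illustrative sketch (not part of the notes) of how adding the rule Spotted(x, y) -> saw(x, y) lets the new question be answered; encoding the facts as tuples is an assumption made here:

    # Illustrative: a stored fact plus one rule lets us answer "Did John see Sue?"
    facts = {("spotted", "John", "Sue")}

    # Rule: spotted(x, y) -> saw(x, y)
    def forward_chain(facts):
        derived = set(facts)
        for (pred, x, y) in facts:
            if pred == "spotted":
                derived.add(("saw", x, y))
        return derived

    facts = forward_chain(facts)
    print(("saw", "John", "Sue") in facts)   # True: the answer can now be inferred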
4.Set of objects:
There are certain properties of objects that are true as member of a set but not as
individual;
Example: Consider the assertion made in the sentences:
"There are more sheep than people in Australia", and
"English speakers can be found all over the world."
To describe these facts, the only way is to attach assertion to the sets
representing people, sheep, and English.
The reason to represent sets of objects is: if a property is true for all or most elements of a set, then it is more efficient to associate it once with the set rather than to associate it explicitly with every element of the set.
5.Finding Right structure:
This is about access to right structure for describing a particular situation. This
requires, selecting an initial structure and then revising the choice. While doing
so, it is necessary to solve following problems:
How to perform an initial selection of the most appropriate structure.
How to fill in appropriate details from the current situations.
How to find a better structure if the one chosen initially turns out not to be
appropriate.
What to do if none of the available structures is appropriate.
When to create and remember a new structure.
Representing Knowledge Using Rules
* Procedural Versus Declarative Knowledge
* Logic Programming
*Procedural Versus Declarative Knowledge
* A declarative representation is one in which knowledge is specified, but the use to which that knowledge is to be put is not given.
* A procedural representation is one in which the control information that is necessary to use the knowledge is considered to be embedded in the knowledge itself.
*Logic Programming
∀x: pet(x) ∧ small(x) -> apartmentpet(x)
∀x: cat(x) ∨ dog(x) -> pet(x)
∀x: poodle(x) -> dog(x) ∧ small(x)
poodle(fluffy)
*The same knowledge as a Prolog program:
apartmentpet(X) :- pet(X) , small(X) .
pet(X) :- cat(X).
pet(X) :- dog(X).
dog(X) :- poodle(X).
small(X) :- poodle(X).
poodle(fluffy).
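As an illustration (not shown in the notes), loading this Prolog program and posing the query apartmentpet(fluffy) would succeed: poodle(fluffy) yields dog(fluffy) and small(fluffy), dog(fluffy) yields pet(fluffy), and pet(fluffy) together with small(fluffy) satisfies the apartmentpet rule.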
Production System
-----------------
*Rules
An unordered set of user-defined "if-then" rules of the form:
if P1, P2....Pm are facts then A1,A2....An are actions.
where facts determine the conditions when this rule is applicable. Each Action
adds or deletes a fact from the Working Memory.
*Working Memory
A set of "facts" consisting of positive literals defining what's known to be true
about the world
*Inference Engine
Procedure for inferring changes (additions and deletions) to Working Memory.
do:
– Construct Conflict Set
The Conflict Set is the set of all possible (rule, list-of-facts) pairs such that rule is one of the rules and list-of-facts is a subset of facts in WM that unify with the antecedent part (i.e., left-hand side) of the given rule.
– Apply Conflict Resolution Strategy
Select one pair from the Conflict Set.
– Act Phase
Execute the actions associated with the consequent part of the selected rule, after making the substitutions used during unification of the antecedent part with the list-of-facts.
*Conflict Resolution Strategy
The following are some of the commonly used conflict resolution strategies.
These are often combined as well to
define hybrid strategies.
==>Refraction - A rule can only be used once with the same set of facts in WM. Whenever WM is modified, all rules can again be used. This strategy prevents a single rule and list of facts from being used over and over again, resulting in "infinite firing" of the same thing.
==>Recency - Use rules that match the facts that were added most recently to
WM. Hence, each fact in WM has a time-stamp indicating when that fact was
added. Provides a kind of "focus of attention" strategy.
==>Specificity - Use the most specific rule, i.e., if one rule's LHS is a superset of the
facts in the LHS of a second rule, then use the first one because it is more specific.
In general, select that rule that has the largest
number of preconditions.
Example
· Let WM = {A, D}
· Let Rules =
1. if A then Add(B)
2. if A then Add(C), Delete(A)
3. if A ^ E then Add(D)
4. if D then Add(E)
5. if A ^ D then Add(F)
· Conflict Set = {(R1, (A)), (R2, (A)), (R4, (D)), (R5,(A,D))}
· Using the Specificity conflict resolution strategy, select (R5, (A,D)) because it matches two facts (A and D) from WM while the others match only one fact each. Hence WM is modified to WM = {A, D, F} by adding 'F'.
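A small runnable Python sketch (an illustration, not from the notes) of one match-resolve-act cycle on exactly this rule set and working memory; the tuple encoding of rules is an assumption:

    # Illustrative single cycle of a production system on the example above.
    wm = {"A", "D"}                                   # working memory

    # Each rule: (name, preconditions, facts to add, facts to delete)
    rules = [
        ("R1", {"A"},      {"B"}, set()),
        ("R2", {"A"},      {"C"}, {"A"}),
        ("R3", {"A", "E"}, {"D"}, set()),
        ("R4", {"D"},      {"E"}, set()),
        ("R5", {"A", "D"}, {"F"}, set()),
    ]

    # Match phase: build the conflict set of rules whose preconditions hold in WM.
    conflict_set = [r for r in rules if r[1] <= wm]
    print("Conflict set:", [r[0] for r in conflict_set])   # R1, R2, R4, R5

    # Conflict resolution (specificity): pick the rule with the most preconditions.
    name, pre, add, delete = max(conflict_set, key=lambda r: len(r[1]))

    # Act phase: apply the selected rule's actions to WM.
    wm = (wm - delete) | add
    print("Fired", name, "-> WM =", wm)                     # R5 -> {'A', 'D', 'F'}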
USING PREDICATE LOGIC
---------------------
1.REPRESENTING SIMPLE FACTS IN LOGIC
- There are various knowledge representation schemes; one of them is logical representation (propositional and predicate logic).
- Facts can be represented using propositional logic by converting them into well-formed formulas.
Example: It is raining. (RAINING)
It is sunny. (SUNNY)
If it is not raining, it is sunny:
¬RAINING --> SUNNY
The drawback here is that propositional logic does not capture the actual relationships between objects. To overcome this we need variables and quantification, which are provided by first-order predicate logic (or simply predicate logic).
PREDICATE LOGIC - extends propositional logic by introducing predicates, variables, and quantifiers.
Let's understand with the Marcus and Caesar example (statements 1-9 and their represented facts are in the notes); for instance, "Marcus was a man" can be written as man(Marcus).
2. REPRESENTATION OF INSTANCE AND ISA RELATIONSHIPS
- The predicates instance and isa explicitly capture the relationships they are used to express, namely class membership and class inclusion.
- These relationships need not be represented with instance and isa predicates; they can instead be handled by the deductive mechanism itself.
(See the Marcus and Caesar example, statements 1-9, in the notes.)
Resolution
- used to compare facts and deduce new facts from them, by matching or by substitution.
Conversion to clause form - the steps involved are:
1. Eliminate →, using the fact that a → b is equivalent to ¬a V b.
2. Reduce the scope of each ¬ to a single term, using the fact that ¬(¬p) = p and de Morgan's laws [which say that ¬(a ∧ b) = ¬a V ¬b and ¬(a V b) = ¬a ∧ ¬b].
3. Standardize variables so that each quantifier binds a unique variable.
4. Move all quantifiers to the left of the formula without changing their relative order.
5. Eliminate existential quantifiers (Skolemization). For example, in the formula ∀x: Ǝy: father-of(y, x), the value of y that satisfies father-of depends on the particular value of x. Thus we must generate Skolem functions with the same number of arguments as the number of universal quantifiers in whose scope the expression occurs.
6. Drop the prefix (the universal quantifiers).
7. Convert the matrix into a conjunction of disjuncts.
8. Create a separate clause corresponding to each conjunct.
9. Standardize the variables apart in the resulting clauses.
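A short worked illustration of these steps (this example is not from the notes):
Convert ∀x: [dog(x) → Ǝy: owner(y, x)] to clause form.
1. Eliminate →: ∀x: [¬dog(x) V Ǝy: owner(y, x)]
2. ¬ already applies to a single term, so nothing to do.
3. Variables are already standardized.
4. Move quantifiers to the left: ∀x: Ǝy: [¬dog(x) V owner(y, x)]
5. Eliminate Ǝ: y depends on x, so replace y with a Skolem function f(x): ∀x: [¬dog(x) V owner(f(x), x)]
6. Drop the prefix: ¬dog(x) V owner(f(x), x)
7-9. The matrix is already a single disjunction, so it yields one clause: ¬dog(x) V owner(f(x), x).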