22/04/15
Knowledge in Learning
Andreas Wichert, DEIC (course page: Fenix)

Machine Learning
• "Statistical Learning"
• Clustering, Self-Organizing Maps (SOM)
• Artificial Neural Networks, Kernel Machines
• Neural Networks (Biological Models)
• Bayesian Networks
• Inductive Learning (ID3)
• Knowledge Learning
• Analogical Learning
• Reinforcement Learning

Knowledge and Learning
• Current best hypothesis (Differences, Correct)
• Version space search
• Least commitment search
• EBL
• RBL
• ILP
• Analogical Reasoning, Case Based

An Example (Patrick Winston, 1975)
[Several figure-only slides illustrating Winston's 1975 concept-learning example.]

ID3 Decision Tree
[Figure-only slide.]

EBL and Reinforcement Learning
S. P. Vimal, CSIS Group, BITS-Pilani
http://discovery.bits-pilani.ac.in/discipline/csis/vimal/

Explanation-Based Learning (EBL)
• ID3 generalizes on the basis of training data
• It is a similarity-based algorithm
  • Generalization is a function of similarities across training examples
  • No assumptions are made about the semantics of the domain
• EBL uses domain (prior) knowledge to guide generalization

EBL
• Prior domain knowledge in learning
• Similarity-based algorithms rely on a large amount of training data …
• A set of training examples can support an unlimited number of generalizations

EBL
• EBL uses an explicitly represented domain theory to construct an explanation of a training example
  • It is a proof that the example logically follows from the theory
• Benefits:
  • Filters noise
  • Selects relevant aspects of experience
  • Organizes training data into a systematic and coherent structure

EBL starts with…
• A target concept: the concept for which a definition is sought
• A training example: an instance of the target
• A domain theory: a set of rules and facts used to explain how the training example is an instance of the goal concept
• Operational criteria: some means of describing the form that the definition may take

Learn when an object is a cup
• Target concept: premise(x) → cup(x)
• Domain theory:
  • liftable(x) ∧ holds_liquid(x) → cup(x)
  • part(z,w) ∧ concave(w) ∧ points_up(w) → holds_liquid(z)
  • light(y) ∧ part(y,handle) → liftable(y)
  • small(a) → light(a)
  • made_of(a,feathers) → light(a)

EBL
• Training example:
  • cup(obj1)
  • small(obj1)
  • part(obj1,handle)
  • owns(bob,obj1)
  • part(obj1,bottom)
  • part(obj1,bowl)
  • points_up(bowl)
  • concave(bowl)
  • color(obj1,red)

EBL
• Operational criterion: the target concept is to be defined in terms of observable, structural properties of objects

EBL
• Construct an explanation of why the example is an instance of the target concept

EBL
• Generalize the explanation to produce a concept definition that may be used to recognize other cups
• Substitute variables for those constants in the proof tree that depend solely on the particular training instance

EBL
• Based on the generalized tree, EBL constructs a new rule whose conclusion is the root of the tree and whose premise is the conjunction of its leaves:
  small(X) ∧ part(X,handle) ∧ part(X,W) ∧ concave(W) ∧ points_up(W) → cup(X)
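To make the generalization step concrete, here is a minimal sketch of the cup example, written in Python (the language, the tuple encoding of atoms and all function names are illustrative assumptions; the lecture shows no code). It backward-chains through the domain theory to explain cup(obj1), collects the operational leaves of the proof, and then substitutes variables for the constants that stem only from the training instance.

```python
# Minimal EBL sketch for the cup example (illustrative, not the lecture's code).
# Atoms are tuples such as ("part", "obj1", "handle"); rule variables are the
# lower-case letters x, y, z, w, a, assumed distinct across rules in this toy theory.
VARS = {"x", "y", "z", "w", "a"}

RULES = [  # domain theory: (head, body)
    (("cup", "x"),          [("liftable", "x"), ("holds_liquid", "x")]),
    (("holds_liquid", "z"), [("part", "z", "w"), ("concave", "w"), ("points_up", "w")]),
    (("liftable", "y"),     [("light", "y"), ("part", "y", "handle")]),
    (("light", "a"),        [("small", "a")]),
    (("light", "a"),        [("made_of", "a", "feathers")]),
]
RULE_HEADS = {head[0] for head, _ in RULES}

FACTS = {  # the training example; cup(obj1) is listed on the slide but explained via the rules
    ("cup", "obj1"), ("small", "obj1"), ("part", "obj1", "handle"),
    ("owns", "bob", "obj1"), ("part", "obj1", "bottom"), ("part", "obj1", "bowl"),
    ("points_up", "bowl"), ("concave", "bowl"), ("color", "obj1", "red"),
}

def walk(t, s):                      # resolve a term through the substitution s
    while t in s:
        t = s[t]
    return t

def unify(a, b, s):
    """Unify two atoms under substitution s; return the extended s or None."""
    if a[0] != b[0] or len(a) != len(b):
        return None
    s = dict(s)
    for ta, tb in zip(a[1:], b[1:]):
        ta, tb = walk(ta, s), walk(tb, s)
        if ta == tb:
            continue
        if ta in VARS:
            s[ta] = tb
        elif tb in VARS:
            s[tb] = ta
        else:
            return None
    return s

def subst(atom, s):
    return (atom[0],) + tuple(walk(t, s) for t in atom[1:])

def prove(goal, s):
    """Yield (substitution, proof leaves) for every explanation of one atom."""
    if goal[0] in RULE_HEADS:                 # explain via the domain theory
        for head, body in RULES:
            s1 = unify(head, goal, s)
            if s1 is not None:
                yield from prove_all(body, s1)
    else:                                     # operational predicate: a leaf fact
        for fact in sorted(FACTS):
            s1 = unify(goal, fact, s)
            if s1 is not None:
                yield s1, [subst(goal, s1)]

def prove_all(goals, s):
    if not goals:
        yield s, []
        return
    for s1, left in prove(goals[0], s):
        for s2, right in prove_all(goals[1:], s1):
            yield s2, left + right

# 1. Explain the training example (first proof found).
_, leaves = next(prove_all([("cup", "obj1")], {}))

# 2. Generalize: constants that occur only in the training instance become
#    variables; constants from the theory itself (handle, feathers) are kept.
theory_consts = {t for head, body in RULES
                 for atom in [head] + body for t in atom[1:] if t not in VARS}
fresh, mapping = iter(["X", "W", "U", "V"]), {}
def generalize(t):
    if t in theory_consts:
        return t
    if t not in mapping:
        mapping[t] = next(fresh)
    return mapping[t]

premise = [(a[0],) + tuple(generalize(t) for t in a[1:]) for a in leaves]
conclusion = ("cup", generalize("obj1"))
print(" ∧ ".join(f"{p[0]}({','.join(p[1:])})" for p in premise),
      "→", f"{conclusion[0]}({conclusion[1]})")
# prints: small(X) ∧ part(X,handle) ∧ part(X,W) ∧ concave(W) ∧ points_up(W) → cup(X)
```

Note that handle stays a constant because it originates in the domain theory rather than in the training example; that is exactly the substitution criterion stated on the slide above.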
EBL: An example (contd.)
• An alternative way of constructing the generalized proof tree: the explanation structure (DeJong and Mooney, 1986)
• Maintains two parallel lists of substitutions
  • one to explain the training example
  • one to explain the generalized goal

EBL (two parallel lists)
[Figure-only slide showing the two parallel proof trees.]

EBL, in short…
• From the training example, the EBL algorithm computes a generalization of the example that is consistent with the goal concept and that meets the operational criteria (a description of the appropriate form of the final concept)

EBL, an issue…
• Requires the domain theory to be complete and consistent
• Complete (knowledge):
  • The agent knows all possibly relevant information about its domain
  • Closed-world assumption, under which any fact not known to the agent can be taken to be false
• Consistent (knowledge):
  • The environment can be assumed to change much more slowly (if at all) than the reasoning and learning mechanisms operate

EBL, some more of it…
• The inferred rules could have been inferred from the KB without using the training instance at all
• The function of a training instance is to focus the theorem prover on relevant aspects of the problem domain
  • Speedup learning / knowledge-based reformulation
  • Takes information implicit in a set of rules and makes it explicit

EBL, some more…
• EBL extracts general rules from single examples by explaining the examples and generalizing the explanation
• RBL uses prior knowledge to identify the relevant attributes
• KBIL finds inductive hypotheses that explain sets of observations with the help of background knowledge
• ILP techniques perform KBIL using knowledge expressed in first-order logic

Analogical Reasoning
• Assumption: if two situations are known to be similar in some respects, it is likely that they are similar in others
• Useful for applying existing knowledge to new situations
• Example: the teacher tells the student that
  • electricity is analogous to water
  • voltage corresponds to pressure
  • amperage corresponds to the amount of flow
  • resistance corresponds to the capacity of the pipe
  • …

Model for Analogical Reasoning
• Source of analogy: a theory that is relatively well understood
• Target of analogy: a theory that is not completely understood
  • Switch ⇔ Valve
  • Amperage ⇔ Quantity of flow
  • Voltage ⇔ Water pressure
  • ________ ⇔ Capacity of a pipe
• Analogy: a mapping between corresponding elements of the target and the source

Model for Analogical Reasoning
• The framework for a computational model of analogical reasoning consists of the following stages:
  • Retrieval: identify the corresponding features in source and target; index knowledge according to those features
  • Elaboration: identify additional features of the source
  • Mapping and Inference: map corresponding features of source and target
  • Learning: learn the target and store it for further learning

An Example
• Systematicity: the analogy should emphasize systematic, structural features over superficial analogies
  • "The atom is like the solar system" vs. "The sunflower is like the sun"
• Source domain predicates:
  • yellow(sun)
  • blue(earth)
  • hotter-than(sun,earth)
  • causes(more-massive(sun,earth), attract(sun,earth))
  • causes(attract(sun,earth), revolves-around(earth,sun))
• Target domain that the analogy is intended to explain:
  • more-massive(nucleus,electron)
  • revolves-around(electron,nucleus)

An Example
• Some constraints:
  • Drop properties from the source
  • Relations map unchanged from the source to the target; the arguments may differ
  • Focus on higher-order relations in the mapping

An Example
[Figure-only slide showing the source-to-target mapping.]

An Example
• Resultant mapping: sun → nucleus, earth → electron
• Extending the mapping leads to the inferences:
  • causes(more-massive(nucleus,electron), attract(nucleus,electron))
  • causes(attract(nucleus,electron), revolves-around(electron,nucleus))
• The structure mapping described here is not a complete theory of analogy
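The mapping and inference steps can be sketched in a few lines (again assumed Python; a toy illustration, not Gentner's structure-mapping engine or the lecture's implementation). Object mappings are scored by how much relational structure they preserve, attributes such as yellow(sun) are dropped, and the higher-order causes(…) relations are then carried over to the target as candidate inferences.

```python
# Toy structure-mapping sketch for the solar-system / atom analogy (illustrative).
from itertools import permutations

SOURCE = [                                   # source domain predicates
    ("yellow", "sun"), ("blue", "earth"),    # attributes: dropped by the constraint
    ("hotter-than", "sun", "earth"),
    ("more-massive", "sun", "earth"),
    ("attract", "sun", "earth"),
    ("revolves-around", "earth", "sun"),
    ("causes", ("more-massive", "sun", "earth"), ("attract", "sun", "earth")),
    ("causes", ("attract", "sun", "earth"), ("revolves-around", "earth", "sun")),
]
TARGET = [                                   # what is already known about the atom
    ("more-massive", "nucleus", "electron"),
    ("revolves-around", "electron", "nucleus"),
]
SRC_OBJS, TGT_OBJS = ["sun", "earth"], ["nucleus", "electron"]

def rename(expr, m):
    """Apply an object mapping to a (possibly nested) predicate expression."""
    if isinstance(expr, tuple):
        return tuple(rename(e, m) for e in expr)
    return m.get(expr, expr)

def relations(exprs):                        # keep relations, drop 1-place attributes
    return [e for e in exprs if len(e) > 2]

# Choose the object mapping that preserves the most relational structure.
best = max((dict(zip(SRC_OBJS, p)) for p in permutations(TGT_OBJS)),
           key=lambda m: sum(rename(r, m) in TARGET for r in relations(SOURCE)))
print("mapping:", best)                      # {'sun': 'nucleus', 'earth': 'electron'}

# Candidate inferences: map the higher-order (relation-over-relation) structure.
for r in relations(SOURCE):
    if any(isinstance(arg, tuple) for arg in r[1:]):
        mapped = rename(r, best)
        if mapped not in TARGET:
            print("inferred:", mapped)
```

Running this selects sun → nucleus and earth → electron and proposes exactly the two causes(…) inferences listed on the slide; superficial attributes such as yellow(sun) play no role, which is the systematicity point.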
Verbal Categories
• Objects can be described by a set of discrete features, such as red, round and sweet
• The similarity between them can be defined as a function of the features they have in common
• An object is judged to belong to a verbal category to the extent that its features are predicted by the verbal category
• The similarity of a category Ca and a feature set B is given by the following formula:

  Simc(Ca, B) = (|Ca ∩ B| - |Ca \ (Ca ∩ B)|) / |Ca| = (2 / |Ca|) · |Ca ∩ B| - 1  ∈ [-1, 1]

• Not symmetrical

• Psychological experiments suggest that unfamiliar categories are judged more similar to familiar ones than the other way around
  • For example, pomegranate is judged more similar to apple by most people than apple is to pomegranate
• The category bird is defined by the following features: flies, sings, lays eggs, nests in trees, eats insects
• The category bat is defined by the following features: flies, gives milk, eats insects
• The following features are present: flies and gives milk

  Simc(bird, present features) = (2/5) · 1 - 1 = -3/5
  Simc(bat, present features) = (2/3) · 2 - 1 = 1/3

• Over binary feature vectors: Simc(C, f) = (2/c) · Σ_j C_j f_j - 1, where c is the number of features of the category C

Visual categories
[Figure-only slide.]

EBL vs. Similarity: starts with…
• A target concept: the concept for which a definition is sought
• A training example: an instance of the target
• A domain theory (similarity, geometry?): a set of rules and facts used to explain how the training example is an instance of the goal concept
• Operational criteria (similarity, geometry?): some means of describing the form that the definition may take

Reinforcement Learning
• Humans learn from interacting with the environment
• We get feedback for our actions from the world (sooner or later)
• We can understand cause and effect and the consequences of our actions, and this even helps us to achieve complex goals
• For us, the world is the teacher, but her lessons are often subtle and sometimes hard won

Reinforcement Learning
• Use a computational model to transform world situations into actions in a manner that maximizes a reward measure
• The agent is not told directly what to do and which actions to take
• The agent discovers through exploration which actions offer more reward (trial and error)
• The agent's actions may affect not only immediate rewards but also subsequent actions and eventual rewards (search and delayed reinforcement)

[A sequence of figure slides shows a maze with the states Start, S2, S3, S4, S5, S7, S8 and Goal; arrows indicate the strength between two problem states. The captions trace one run:]
• Start maze …
• The first response leads to S2 … the next state is chosen by randomly sampling from the possible responses
• Suppose the randomly sampled response leads to S3 …
• At S3, choices lead to either S2, S4, or S7; S7 was picked (randomly)
• By chance, S3 was picked next …
• The next response is S4
• And S5 was chosen next (randomly)
• And the goal is reached …
• The goal is reached: strengthen the associative connection between the goal state and the last response
• Start the maze again …
• Let's suppose that after a couple of moves we end up at S5 again; S5 is likely to lead to the GOAL through the strengthened route
• In reinforcement learning, strength is also passed back to the last state: the next time S5 is reached, part of the associative strength is passed back to S4 …
• This paves the way for the next time through the maze
• The situation after lots of restarts …
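The strength-passing behaviour described in the maze walkthrough can be sketched as follows. Everything concrete here is an assumption made for illustration: the maze adjacency is reconstructed loosely from the walkthrough (the slides only show the diagram), and the update is a simple Q-learning-style rule over edge strengths rather than the lecture's exact mechanism.

```python
# Sketch of associative strengths on maze edges (illustrative assumptions only).
import random

MAZE = {                                   # assumed adjacency; only the state names
    "Start": ["S2"],                       # come from the slides
    "S2": ["S3", "S8"],
    "S3": ["S2", "S4", "S7"],
    "S4": ["S3", "S5"],
    "S5": ["S4", "Goal"],
    "S7": ["S3", "S8"],
    "S8": ["S2", "S7"],
}
W = {(s, t): 0.1 for s, nbrs in MAZE.items() for t in nbrs}   # edge strengths
ALPHA, GAMMA = 0.5, 0.9

def choose(state):
    """Sample the next state with probability proportional to edge strength."""
    nbrs = MAZE[state]
    return random.choices(nbrs, weights=[W[(state, t)] for t in nbrs])[0]

def run_episode():
    state = "Start"
    while state != "Goal":
        nxt = choose(state)
        reward = 1.0 if nxt == "Goal" else 0.0
        future = 0.0 if nxt == "Goal" else max(W[(nxt, t)] for t in MAZE[nxt])
        # pass part of the successor's strength (and any reward) back to this edge
        W[(state, nxt)] += ALPHA * (reward + GAMMA * future - W[(state, nxt)])
        state = nxt

for _ in range(500):                       # "the situation after lots of restarts"
    run_episode()
for edge, w in sorted(W.items(), key=lambda kv: -kv[1])[:5]:
    print(edge, round(w, 2))               # strongest connections lead toward Goal
```

After many restarts the edges pointing toward the goal carry the largest strengths, so the random choices increasingly favour the route to the goal, which is what the final maze slide illustrates.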
Reinforcement Learning
• No specific learning methods
  • Actions within, and responses from, the environment
  • Any learning method that addresses this interaction is reinforcement learning
• Is reinforcement learning the same as supervised learning? No
  • Absence of a designated teacher to give positive and negative examples

Next
• Reinforcement Learning
• Cognitive architecture and Learning
• State space
• Analogy
• 21/28 May, 4 June?