Combining agents into societies
Luís Moniz Pereira
Centro de Inteligência Artificial – CENTRIA, Universidade Nova de Lisboa
DEIS, Università di Bologna, 22 March 2004

Summary
- Goal and motivation
- Overview of MDLP (Multi-Dimensional LP)
- Combining inter- and intra-agent societal viewpoints
- An architecture for evolving multi-agent viewpoints
- A logical framework for modelling societies
- Future work and conclusion

Goal
Explore the applicability of MDLP to represent multiple agents' views of societal knowledge dynamics and evolution.
This representation is the core of the agent architecture and system MINERVA, designed with the aim of providing a common agent framework based on the strengths of Logic Programming.

Motivation - 1
- The notion of agency has claimed a major role in modern AI research.
- LP and non-monotonic reasoning are appropriate for rational agents:
  - utmost efficiency is not always crucial;
  - clear specification and correctness are crucial.
- LP provides a general, encompassing, rigorous declarative and procedural framework for rational functionalities.

Motivation - 2
Until recently, LP could be seen as good for representing static, non-contradictory knowledge. In the agency paradigm we need to consider:
- ways of integrating knowledge from different sources evolving in time;
- knowledge expressing state transitions;
- knowledge about environment and societal evolution, and each agent's own behavioural evolution.
LP declaratively describes states well, but LP must describe state transitions too.

MDLP overview
- DLP synopsis
- MDLP motivation
- MDLP semantics
- Multiple representational dimensions in a multi-agent system
- Representation prevalence
- Overview conclusions

Dynamic LP
- DLP was introduced to express LP's linear evolution in dynamic environments, via updates.
- DLP gives semantics to sequences of GLPs (generalized logic programs).
- Each program represents a distinct state of knowledge, where states may correspond to different time points, different hierarchical instances, different viewpoints, etc.
- Different states may have mutually contradictory or overlapping information, and DLP determines the semantics for each state of the sequence.

MDLP motivating example
- Parliament issues law L1 at time t1.
- A local authority issues law L2 at time t2 > t1.
- Parliamentary laws override local laws, but not vice-versa: L1 prevails over L2.
- More recent laws have precedence over older ones: L2 prevails over L1.
- How to combine these two dimensions of knowledge precedence?
- Answer: DLP with Multiple Dimensions (MDLP).

MDLP
- In MDLP, knowledge is given by a set of programs.
- Each program represents a different piece of updating knowledge assigned to a state.
- States are organized by a DAG (Directed Acyclic Graph) representing their precedence relation.
- MDLP determines the composite semantics at each state, according to the DAG paths.
- MDLP allows for combining knowledge updates that evolve along multiple dimensions.

Generalized Logic Programs
To represent negative information in LP updates, we need LPs allowing not in rule heads. Programs are sets of generalized LP rules:
  A ← B1, …, Bk, not C1, …, not Cm
  not A ← B1, …, Bk, not C1, …, not Cm
The semantics is a generalization of stable models (SMs).

MDLP - definition
Definition: A Multi-Dimensional Dynamic Logic Program P is a pair (PD, D) where:
- D = (V, E) is an acyclic digraph;
- PD = {Pv : v ∈ V} is a set of generalized logic programs indexed by the vertices of D.
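As a concrete illustration of the definitions so far, here is a minimal Python sketch (not from the slides; the atom free_parking and the vertex names are purely illustrative) of generalized rules with not in the head, and of an MDLP as programs indexed by the vertices of an acyclic digraph, using the parliament / local-authority example.

```python
from dataclasses import dataclass

# A literal: an atom, possibly default-negated ("not atom").
@dataclass(frozen=True)
class Lit:
    atom: str
    neg: bool = False

# A generalized rule: the head may itself be a default-negated atom.
@dataclass(frozen=True)
class Rule:
    head: Lit
    body: tuple = ()   # tuple of Lit

# Parliament's law L1 (issued at t1) and the local authority's law L2 (at t2 > t1);
# "free_parking" is an invented atom, just to have something concrete to disagree on.
P_L1 = frozenset({Rule(Lit("free_parking"))})
P_L2 = frozenset({Rule(Lit("free_parking", neg=True))})

# An MDLP is a pair (PD, D): programs indexed by the vertices of an acyclic digraph.
# Here both laws feed a query vertex "now"; how the two precedence dimensions
# (authority and time) should be wired into the DAG is exactly the question
# the following slides address.
D_vertices = {"L1", "L2", "now"}
D_edges    = {("L1", "now"), ("L2", "now")}
PD         = {"L1": P_L1, "L2": P_L2, "now": frozenset()}
```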
MDLP - semantics 1
Definition: Let P = (PD, D) be a MDLP. An interpretation Ms is a stable model of the multi-dimensional update at state s ∈ V iff
  Ms = least( [Ps − Reject(s, Ms)] ∪ Defaults(Ps, Ms) )
where Ps = ⋃ {Pi : i ⪯ s}, i.e. the union of the programs of all vertices i with a path to s (including s itself).
(Figure: a state s with predecessor vertices j1, j2, j3; Ps collects the programs of all such vertices.)

MDLP - semantics 2
  Ms = least( [Ps − Reject(s, Ms)] ∪ Defaults(Ps, Ms) )
where:
  Reject(s, Ms) = { r ∈ Pi | ∃ r' ∈ Pj, i ≺ j ⪯ s, head(r) = not head(r') and Ms ⊨ body(r') }
  Defaults(Ps, Ms) = { not A | ∄ r ∈ Ps : head(r) = A and Ms ⊨ body(r) }
(A rule of an earlier vertex i is rejected by a conflicting rule of a later vertex j on the way to s whose body holds; default negation is assumed for atoms with no applicable rule.)

MDLP for agents
- The flexibility, modularity, and compositionality of MDLP make it suitable for representing the evolution of several agents' combined knowledge.
- How to encode, in a DAG, the relationships among every agent's evolving knowledge along multiple dimensions?

Two basic dimensions of a multi-agent system
- Hierarchy of agents. (Figure: a DAG relating the agents by hierarchical position.)
- Temporal evolution of one agent. (Figure: the chain of one agent's successive knowledge states, from state 0 up to the current state.)
- How to combine these dimensions into one DAG?

Equal Role Representation
Assigns an equal role to the two dimensions. (Figure: the combined DAG under this representation.)
In legal reasoning:
- Lex Superior: rules issued by a higher authority override those of a lower one.
- Lex Posterior: more recent rules override older ones.
Drawback: it potentiates contradiction, since there are many pairs of unrelated programs.

Time Prevailing Representation
Assigns priority to the time dimension. (Figure: the combined DAG under this representation.)
- Useful in very dynamic situations where competence is distributed, i.e. agents normally provide rules about different literals.
- Drawback: it requires all agents to be fully trusted, since all newer rules override older ones irrespective of their mutual hierarchical position.

Hierarchy Prevailing Representation
Assigns priority to the hierarchy dimension. (Figure: the combined DAG under this representation.)
- Useful when some agents are untrustworthy.
- Drawback: one has to consider the whole history of all higher-ranked agents in order to accept/reject a rule from a lower-ranked agent. However, techniques are being developed to reduce the size of an MDLP (garbage collection).

Inter- and Intra-Agent Relationships
- The above representations refer to a community of agents.
- But they can be used as well for relating the several sub-agents of an agent. (Figure: a hierarchy DAG of one agent's sub-agents.)

Intra- and Inter-Agent Example
(Figure: a combined DAG using the hierarchy-prevailing representation for the inter-agent relationships and the time-prevailing representation for each agent's sub-agents.)
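Before turning to the MINERVA architecture, here is a minimal executable sketch of the stable-model condition from the MDLP semantics slides above (Reject, Defaults, least). The data structures are illustrative, not the MINERVA implementation: a rule is a pair (head, body), and a literal a pair (atom, neg).

```python
def below(s, edges):
    """All vertices v with a path v ⇝ s, including s itself (v ⪯ s)."""
    rev = {}
    for a, b in edges:
        rev.setdefault(b, set()).add(a)
    seen, stack = {s}, [s]
    while stack:
        for u in rev.get(stack.pop(), ()):
            if u not in seen:
                seen.add(u)
                stack.append(u)
    return seen

def is_stable_at(s, edges, programs, M):
    """Check whether M (a set of atoms) satisfies the stable-model condition
    of the multi-dimensional update at state s."""
    Ps = [(v, r) for v in below(s, edges) for r in programs.get(v, ())]
    atoms = {a for _, (h, body) in Ps for a, _ in (h,) + tuple(body)} | set(M)

    def sat(body):                                   # M |= body
        return all((a in M) != neg for a, neg in body)

    # Reject(s, M): r in P_i is rejected by a conflicting r' in P_j with i ≺ j ⪯ s
    rejected = {(i, r) for i, r in Ps for j, (h2, b2) in Ps
                if i != j and i in below(j, edges)
                and r[0][0] == h2[0] and r[0][1] != h2[1] and sat(b2)}
    kept = [r for v, r in Ps if (v, r) not in rejected]

    # Defaults(Ps, M): assume "not A" when no rule in Ps for A has a true body
    defaults = {(a, True) for a in atoms
                if not any(h == (a, False) and sat(b) for _, (h, b) in Ps)}

    # least([Ps - Reject] ∪ Defaults), treating "not A" as a fresh atom
    lm, changed = set(defaults), True
    while changed:
        changed = False
        for h, body in kept:
            if all(l in lm for l in body) and h not in lm:
                lm.add(h)
                changed = True

    # Stability: M, completed with "not A" for every A not in M, is that least model
    return lm == {(a, False) for a in M} | {(a, True) for a in atoms - M}
```

Running it on the two-dimensional parliament example requires first choosing the DAG, which is precisely the choice the representations above are about.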
Current work of this overview
A MINERVA agent:
- is based on a modular design;
- has a common internal KB (an MDLP), concurrently manipulated by its specialized sub-agents.
Every agent is composed of specialized sub-agents that execute special tasks, e.g.
- reactivity
- planning
- scheduling
- belief revision
- goal management
- learning
- preference evaluation
- strategy

MDLP overview conclusions
- We have explored MDLP to combine knowledge from several agents and multiple dimensions.
- Depending on the situation and the relationships among agents, we have envisaged several classes of DAGs for their encoding.
- Based on this work, and on a language (LUPS) for specifying updates by means of transitions, we have launched into the design of an agent architecture, MINERVA.

Evolving multi-agent viewpoints – one more overview
- Our agents
- Framework references
- Mutually updating agents
- MDLP synopsis
- Agent language: projects and updates
- Agent knowledge state and agent cycle
- Example
- An implemented example architecture
- Future work

Our agents
We propose an LP approach to agents that can:
- reason and react to other agents;
- update their own knowledge, reactions, and goals;
- interact by updating the theory of another agent;
- decide whether to accept an update depending on the requesting agent;
- capture the representation of social evolution.

Updating agents
An updating agent is a rational, reactive agent that can dynamically change its own knowledge and goals:
- makes observations;
- reciprocally updates other agents with goals and rules;
- thinks (rational);
- selects and executes an action (reactive).

Multi-Dimensional Logic Programming (synopsis)
- In MDLP, knowledge is given by a set of programs.
- Each program represents a different piece of updating knowledge assigned to a state.
- States are organized by a DAG (Directed Acyclic Graph) representing their precedence relation.
- MDLP determines the composite semantics at each state according to the DAG paths.
- MDLP allows for combining knowledge updates that evolve along multiple dimensions.

New contribution
1. To extend the framework of MDLP with integrity constraints and active rules.
2. To incorporate the framework of MDLP into a multi-agent architecture.
3. To make the DAG of each agent updatable.

DAG
A directed acyclic graph (DAG) is a pair D = (V, E) where V is a set of vertices and E is a set of directed edges.

Agent's language
Atomic formulae:
- objective atoms A
- default atoms not A
- projects i:C
- updates i÷C
Formulae:
- generalized rules: A ← L1 ∧ … ∧ Ln and not A ← L1 ∧ … ∧ Ln
- integrity constraints: false ← L1 ∧ … ∧ Ln ∧ Z1 ∧ … ∧ Zm
- active rules: L1 ∧ … ∧ Ln ⇒ Z
where each Li is an atom, an update or a negated update, and each Zj (and Z) is a project.

Projects and updates
- A project j:C denotes the intention of some agent i of proposing to update the theory of agent j with C, e.g. wilma:C.
- An update i÷C denotes an update, proposed by agent i, of the current theory of some agent j with C, e.g. fred÷C.

Agents' knowledge states
- Knowledge states represent dynamically evolving states of agents' knowledge. They undergo change due to updates.
- Given the current knowledge state Ps, its successor knowledge state Ps+1 is produced as a result of the occurrence of a set of parallel updates.
- Update actions do not modify the current or any of the previous knowledge states. They only affect the successor state: the precondition of the action is evaluated in the current state, and the postcondition updates the successor state.

Agent's language (forms of projects)
A project i:C can take one of the forms:
  i : (A ← L1 ∧ … ∧ Ln)
  i : (not A ← L1 ∧ … ∧ Ln)
  i : (false ← L1 ∧ … ∧ Ln ∧ Z1 ∧ … ∧ Zm)
  i : (L1 ∧ … ∧ Ln ⇒ Z)
  i : (?- L1 ∧ … ∧ Ln)
  i : edge(u,v)
  i : not edge(u,v)

Initial theory of an agent
A multi-dimensional abductive LP for an agent α is a tuple T = (D, PD, A, RD) where:
- D = (V, E) is a DAG such that the inspection point of α is a vertex of V;
- PD = {Pv | v ∈ V} is a set of generalized LPs;
- A is a set of atoms (abducibles);
- RD = {Rv | v ∈ V} is a set of sets of active rules.
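The following Python sketch shows one way the language and initial theory just defined could be represented as data; the class and field names are illustrative, not the MINERVA code.

```python
from dataclasses import dataclass
from typing import Dict, FrozenSet, Tuple

@dataclass(frozen=True)
class Update:            # i ÷ C : an update of the receiver's theory proposed by agent i
    source: str
    content: str

@dataclass(frozen=True)
class Project:           # j : C : the intention of proposing to update agent j with C
    target: str
    content: str

@dataclass(frozen=True)
class ActiveRule:        # L1 ∧ ... ∧ Ln ⇒ Z : when the body holds, launch project Z
    body: Tuple[str, ...]
    project: Project

@dataclass(frozen=True)
class InitialTheory:     # T = (D, PD, A, RD)
    vertices: FrozenSet[str]                          # V
    edges: FrozenSet[Tuple[str, str]]                 # E
    programs: Dict[str, Tuple[str, ...]]              # PD = {P_v | v in V}
    abducibles: FrozenSet[str]                        # A
    active_rules: Dict[str, Tuple[ActiveRule, ...]]   # RD = {R_v | v in V}

# The two examples from the slides: wilma:C is a project, fred÷C is an update.
p = Project(target="wilma", content="C")
u = Update(source="fred", content="C")
```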
The agent's cycle
- Every agent can be thought of as an abductive LP equipped with a set of inputs represented as updates.
- The abducibles are (names of) actions to be executed, as well as explanations of observations made.
- Updates can be used to solve the goals of the agent as well as to trigger new goals.

Happy story - example
(Figure: the DAG of Alfredo at state 0, with a vertex for each of judge, mother, father, alfredo and girlfriend, and with alfredo's inspection point as the top vertex.)
The goal of Alfredo is to be happy.
Alfredo's program and active rules (at state 0):
  hasGirlfriend
  not happy ⇒ father : (?- happy)
  not happy ⇒ mother : (?- happy)
  getMarried ∧ hasGirlfriend ⇒ girlfriend : propose
  moveOut ⇒ alfredo : rentApartment
  custody(judge,mother) ⇒ alfredo : edge(father,mother)
Abducibles: {moveOut, getMarried}

Agent theory
- The initial theory of an agent is a multi-dimensional abductive LP.
- Let an updating program be a finite set of updates, and S be a set of natural numbers. We call the elements s ∈ S states.
- An agent α at state s, written αs, is a pair (T, U):
  - T is the initial theory of α;
  - U = {U1, …, Us} is a sequence of updating programs.

Multi-agent system
- A multi-agent system M = {α1s, …, αns} at state s is a set of agents α1, …, αn at state s.
- M characterizes a fixed society of evolving agents.
- The declarative semantics of M characterizes the relationship among the agents in M, and how the system evolves.
- The declarative semantics is stable models based.

Happy story - 1st scenario
Suppose that at state 1 Alfredo receives from the mother:
  mother ÷ (happy ← moveOut)
  mother ÷ (false ← moveOut ∧ not getMarried)
  mother ÷ (false ← not happy)
and from the father:
  father ÷ (happy ← moveOut)
  father ÷ (not happy ← getMarried)
(Figure: the DAG at state 1, with the mother's rules at the mother vertex and the father's rules at the father vertex.)
In this scenario, Alfredo cannot achieve his goal without producing a contradiction. Not being able to make a decision, Alfredo is not reactive at all.

Happy story - 2nd scenario
Suppose that at state 1 Alfredo's parents decide to get divorced, and the judge gives custody to the mother:
  judge ÷ custody(judge,mother)
(Figure: the DAG at state 1, with custody(judge,mother) at the judge vertex; Alfredo's program and active rules are as at state 0.)
The active rule custody(judge,mother) ⇒ alfredo : edge(father,mother) fires, and the internal update produces a change in the DAG of Alfredo: an edge from father to mother is added (state 2).
Suppose that when asked by Alfredo, the parents reply in the same way as in the 1st scenario.
(Figure: the DAG at state 2, with the parents' rules as in the 1st scenario.)
Now the advice of the mother prevails over, and rejects, that of the father.
Thus, Alfredo gets married, rents an apartment, moves out and lives happily ever after.
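A rough Python sketch of the step that matters in the 2nd scenario: active rules whose (positive) bodies hold in the current state emit projects, and an internal project of the form edge(u,v) changes Alfredo's own DAG. The rule syntax is taken from the example, but the initial edge set of the DAG and the control loop are only guesses for illustration, and rules with default-negated bodies (not happy ⇒ ...) are left out of this sketch.

```python
def fire_active_rules(state_atoms, active_rules, dag_edges):
    """Fire every active rule whose body holds in the current state; apply
    internal edge(u, v) projects to the agent's own DAG, collect the rest."""
    emitted = []
    for body, (target, content) in active_rules:
        if all(a in state_atoms for a in body):              # body holds now
            if target == "alfredo" and content.startswith("edge("):
                u, v = content[len("edge("):-1].split(",")
                dag_edges.add((u.strip(), v.strip()))        # internal DAG update
            else:
                emitted.append((target, content))            # project to dispatch
    return emitted

# Active rules from the example (only those with positive bodies).
rules = [
    (("custody(judge,mother)",), ("alfredo", "edge(father,mother)")),
    (("getMarried", "hasGirlfriend"), ("girlfriend", "propose")),
    (("moveOut",), ("alfredo", "rentApartment")),
]

# A guessed initial DAG of Alfredo (the slide's picture is not fully recoverable).
edges = {("judge", "alfredo"), ("mother", "alfredo"),
         ("father", "alfredo"), ("girlfriend", "alfredo")}

# 2nd scenario, state 1: the judge's update makes custody(judge,mother) hold,
# so the internal project rewires the DAG exactly as described above.
print(fire_active_rules({"custody(judge,mother)", "hasGirlfriend"}, rules, edges))
print(("father", "mother") in edges)    # True: mother now prevails over father
```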
Syntactical transformation
The semantics of an agent α at state s, αs = (T, U), is established by a syntactical transformation that maps αs into an abductive LP ⟨P, A, R⟩:
1. αs is transformed into ⟨P', A, R⟩, where P' is a normal LP, and A and R are a set of abducibles and a set of active rules.
2. Default negation can then be removed from P' via the abdual transformation (Alferes et al., ICLP'99, TCLP'04), yielding a definite LP P.

Agent architecture
(Figure: the agent architecture. A CC component, written in Java, is connected through InterProlog (Declarativa) to two XSB Prolog components: a Rational component running P, which can abduce, and a Reactive component running P together with R, which cannot abduce.)
(Figure: data flow of the architecture. Updates arrive through an external interface and an update handler (UpdateH); the CC component dispatches them to the Rational (P) and Reactive (P + R) components; the projects they produce, internal and external, are routed back through the CC, and actions are issued through an action handler (ActionH).)

Future work of this overview
At the agent level:
- How to combine logical theories of agents expressed over graph structures.
- How to incorporate other rational abilities, e.g. learning.
At the multi-agent system level:
- Non-synchronous, dynamic multi-agent systems.
- How to formalize dynamic societies of agents.
- How to formalize the notion of organisational reflection.

A logical framework for modelling eMAS

Motivation
To provide control over the epistemic agents in a Multi-Agent System (eMAS), the need arises to:
- explicitly represent its organizational structure,
- and its agent interactions.
We introduce a logical framework F, suitable for representing organizational structures of eMAS:
- we provide its declarative and procedural semantics;
- F having a formal semantics permits us to prove properties of eMAS structures.

MDLPs revisited
- We generalize the definition of MDLP by assigning weights to the edges of a DAG.
- In case of conflicting knowledge coming into a vertex v from two vertices v1 and v2, the weights of the edges from v1 and v2 may resolve the conflict.
- If the weights are the same, both conclusions are false. (Or, two alternative conclusions can be made possible.)
(Figure: v1, with program {a}, linked to v by an edge of weight 0.2; v2, with program {not a}, linked to v by an edge of weight 0.1; at v, a holds.)

Weighted directed acyclic graphs
Def. (WDAG) A weighted directed acyclic graph is a tuple D = (V, E, w) where:
- V is a set of vertices,
- E is a set of edges,
- w : E → R+ maps edges into positive real numbers,
- no cycle can be formed with the edges of E.
We write v1 ⇝ v2 to indicate a path from v1 to v2.

WMDLPs
Def. (WMDLP – Weighted Multi-Dimensional Logic Program) A WMDLP is a pair (PD, D) where:
- D = (V, E, w) is a WDAG, and
- PD = {Pv : v ∈ V} is a set of generalized logic programs indexed by the vertices of D.

Path dominance
Def. (Dominant path) Let a1 ⇝ an be a path with vertices a1, a2, …, an. The path a1 ⇝ an is dominant if there is no other path b1, b2, …, bm such that:
- b1 = a1, bm = an, and
- there exist i, j such that ai = bj and w((ai-1, ai)) < w((bj-1, bj)).

Example: path dominance
(Figure: a WDAG with two paths from a1 to a4, one through a2 and a3 and one through a5.)
Let w((a5, a4)) < w((a3, a4)). Then a1, a2, a3, a4 is a dominant path.

Prevalence
Def. (Prevalence wrt. a vertex an) Let a1 ⇝ an be a dominant path with vertices a1, a2, …, an. Then:
- every vertex ai prevails a1 wrt. an (1 < i ≤ n);
- if there exists a path b1 ⇝ ai with vertices b1, …, bm, ai and w((ai-1, ai)) < w((bm, ai)), then every vertex bj prevails a1 wrt. an.
(Figure: the two cases of the definition, with the path a1 ⇝ an and a side path b1 ⇝ ai entering ai with a heavier edge.)
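A small Python sketch of the dominant-path test just defined, run on the example above; all concrete weights except the stated inequality w((a5,a4)) < w((a3,a4)) are made up for illustration.

```python
def paths(u, v, edges):
    """All u ⇝ v paths in a WDAG given as a dict {(src, dst): weight}."""
    if u == v:
        yield (v,)
        return
    for (a, b), _ in edges.items():
        if a == u:
            for rest in paths(b, v, edges):
                yield (u,) + rest

def is_dominant(path, edges):
    """path = (a1, ..., an).  Dominant iff no other a1 ⇝ an path has, at some
    shared vertex, a heavier incoming edge than this path's incoming edge."""
    a1, an = path[0], path[-1]
    for other in paths(a1, an, edges):
        if other == path:
            continue
        for i in range(1, len(path)):
            for j in range(1, len(other)):
                if (path[i] == other[j]
                        and edges[(path[i-1], path[i])] < edges[(other[j-1], other[j])]):
                    return False
    return True

# Two paths from a1 to a4, one through a3 and one through a5,
# with w((a5, a4)) < w((a3, a4)) as stated on the slide.
E = {("a1", "a2"): 1.0, ("a2", "a3"): 1.0, ("a3", "a4"): 0.8,
     ("a1", "a5"): 1.0, ("a5", "a4"): 0.2}
print(is_dominant(("a1", "a2", "a3", "a4"), E))   # True
print(is_dominant(("a1", "a5", "a4"), E))         # False
```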
Example: formalizing agents
Epistemic agents can be formalized via WMDLPs. Formalize three agents A, B, and C, where:
- B and C are secretaries of A;
- B and C believe it is not their duty to answer phone calls;
- A believes it is the duty of a secretary to answer phone calls.

A: DA = ({v1}, {}, wA); Pv1 = {answerPhone ← secretary ∧ phoneRing}
B: DB = ({v3, v4}, {(v4, v3)}, wB); wB((v4, v3)) = 0.6; Pv3 = {}; Pv4 = {phoneRing, secretary, not answerPhone}
C: DC = ({v5, v6}, {(v6, v5)}, wC); wC((v6, v5)) = 0.6; Pv5 = {}; Pv6 = Pv4

Logical framework F
Def. (Logical framework F) A logical framework F is a tuple (A, L, wL) where:
- A = {Π1, …, Πn} is a set of WMDLPs,
- L is a set of links among the Πi, and
- wL : L → R+.

Semantics of F
The declarative semantics of F is stable model based.
Idea: the knowledge of a vertex v1 overrides the knowledge of a vertex v2 wrt. a vertex s iff v1 prevails v2 wrt. s.
Example: let Pv1 = {answerPhone} and Pv2 = {not answerPhone}, with both v1 and v2 linked to s. If v1 prevails v2 wrt. s, then Ms = {answerPhone}.

Proof theory
- The operational semantics for WMDLPs is based on a syntactic transformation of the pair (P, s).
- The transformation extends the syntactic transformation for MDLPs, and is based on the (strong) prevalence relation wrt. s.
- Given a WMDLP P and a state s, the transformation produces a generalized logic program.
- Correctness of the transformation: the stable models of the transformed program coincide with the stable models of P at state s.

Modelling eMAS
- Multi-agent systems can be understood as computational societies whose members co-exist in a shared environment.
- A number of organizational structures have been proposed: coalitions, groups, institutions, agent societies, etc.
- In our approach, agents and organizational structures are formalized via WMDLPs, and glued together via F.

Modelling eMAS: groups
- A group is a system of agents constrained in their mutual interactions.
- A group can be formalized in F in a flexible way:
  - the agents' behaviour can be restricted to different degrees;
  - formalizing norms and regulations may enhance the trustfulness of the group.

Example: formalizing groups
Secretaries example: formalize a group G of the agents A, B, and C, where:
- B must operate (strictly) in accordance with A, while
- C has a certain degree of freedom.

G: DG = ({v2}, {}, wG); Pv2 = {}
F = (A, L, wL):
  A = {ΠA, ΠB, ΠC, ΠG}
  L = {(v1, v2), (v2, v3), (v2, v5)}
  wL((v1, v2)) = wL((v2, v5)) = 0.5 and wL((v2, v3)) = 0.7

Example: semantics
(Figure: the joined WDAG, with the link (v1, v2) of weight 0.5 from A to G, the link (v2, v3) of weight 0.7 from G to B, the link (v2, v5) of weight 0.5 from G to C, and B's and C's internal edges (v4, v3) and (v6, v5) of weight 0.6.)
Model of agent B: Mv3 = {phoneRing, secretary, answerPhone}.
Model of agent C: Mv5 = {phoneRing, secretary, not answerPhone}.
Intuitively: since wL((v2, v3)) = 0.7 > 0.6 = wB((v4, v3)), A's rule prevails at v3 and B answers the phone; since wL((v2, v5)) = 0.5 < 0.6 = wC((v6, v5)), C's own knowledge prevails at v5.
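To see the numbers at work, here is a deliberately simplified Python sketch of the secretaries example: it compares only the weights of the edges by which the conflicting conclusions enter the inspection vertex, which happens to suffice here; the full semantics uses the prevalence relation defined earlier. The helper name and the assumption of a unique connecting path are mine.

```python
edges = {                       # joined WDAG of A, B, C and the group G
    ("v1", "v2"): 0.5,          # link from A to the group G
    ("v2", "v3"): 0.7,          # link from G to B (B bound strictly to the group)
    ("v2", "v5"): 0.5,          # link from G to C (C has some freedom)
    ("v4", "v3"): 0.6,          # B's own knowledge
    ("v6", "v5"): 0.6,          # C's own knowledge
}

def entry_weight(src, dst, edges):
    """Weight of the last edge of some src ⇝ dst path (assumed unique here)."""
    for (a, b), w in edges.items():
        if b == dst and (a == src or entry_weight(src, a, edges) is not None):
            return w
    return None

# Conflict about answerPhone at B's vertex v3:
#   A's rule (at v1) arrives through (v2, v3) with weight 0.7,
#   B's own "not answerPhone" (at v4) through (v4, v3) with weight 0.6.
print(entry_weight("v1", "v3", edges), ">", entry_weight("v4", "v3", edges))
# -> 0.7 > 0.6 : A prevails, so B answers the phone (M_v3 contains answerPhone)

# Conflict about answerPhone at C's vertex v5:
print(entry_weight("v1", "v5", edges), "<", entry_weight("v6", "v5", edges))
# -> 0.5 < 0.6 : C's own knowledge prevails (M_v5 contains not answerPhone)
```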
Adding roles to agents
A role is a set of obligations and rights that governs the behaviour of an agent occupying a particular position in the society.
The adoption of roles as tools for description and modelling in multi-agent systems has several benefits:
- Formal roles allow generic models of agents with unknown internal states to derive information with which to predict agent behaviour.
- The use of roles promotes flexibility, since different modes of interaction become possible among agents.
- Roles can adapt and evolve within the course of interactions to reflect the learning process of the agents. This allows for dynamic systems where the modes of interaction change.

When an agent plays a role, the overall behaviour of the agent obeys the personality of the agent as well as its role. We call an agent playing a role an actor:
  actor := ⟨role, agent⟩
  actor := ⟨role, actor⟩
The notion of actor is important to define situations where an agent plays some role by virtue of playing another role.

Actors can be expressed in our framework in a modular, flexible way.
(Figure: an actor vertex i, linked to its role by an edge of weight w1 and to its agent by an edge of weight w2.)
By assigning different weights w1 and w2, different types of behaviour can be modelled:
- w1 > w2: the actor will obey the norms of its role;
- w2 > w1: the personality of the actor will prevail over its role;
- w1 = w2: the actor will operate in accordance with both its personality and its role.

An actor can fulfil several roles depending on the context.
(Figures: two agents playing the same role; one agent playing two distinct roles; an actor; an actor playing two roles; a hierarchy of actors.)

Engineering social agent societies
- Roles are associated with a default context that defines the different social relationships and specifies the behaviour of the roles amongst each other.
- Agents may interact in several different contexts. Therefore, there is a need to consider different abstraction levels of contexts.
- More specific contexts can overturn the orderings between the roles of more general contexts, and establish a social relation among them.

Def. (Context) Let Ag be a set of agents and R a set of roles. A context is a pair (Ac, T) where Ac is a set of actors defined over Ag and R, and T is a theory defining the normative relations of the context.
(Figure: a context grouping actors i1, i2, i3 under a theory T.)

Def. (Social agent society) An agent society is modelled as a tuple (Ag, R, C) where Ag is a set of epistemic agents, R is a set of roles, and C is a set of contexts over Ag and R.
Modelling agent societies by means of the notion of contexts is general and flexible: several organizational structures can be expressed in terms of contexts.

Agents may form subgroups inside a greater society of agents. These subgroups usually inherit the constraints of the greater society, override some of them, and add their own constraints.

Agent societies based on confidence factors
A society whose agents have the ability to associate a confidence factor:
- with the information incoming from other agents,
- with the information outgoing to other agents, and
- with their own information.
Confidence factors can be used to indicate:
- the level of trust/confidence of an agent towards another agent,
- the relevance of the information of a source agent,
- the confidence that an agent has about its own information,
- the strength with which an agent supports its information towards another agent.
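Looking back at the w1/w2 weighting of actors a few slides above, the following toy Python sketch illustrates the three regimes; the function and its arguments are illustrative, not part of the framework.

```python
def actor_conclusion(atom, role_says, agent_says, w1, w2):
    """role_says / agent_says: True (atom), False (not atom) or None (silent).
    On a conflict, the heavier edge wins; with equal weights neither conclusion
    prevails (the slides mention both being false, or two alternative models)."""
    if role_says is None or agent_says is None or role_says == agent_says:
        return role_says if agent_says is None else agent_says
    if w1 > w2:
        return role_says          # the actor obeys the norms of its role
    if w2 > w1:
        return agent_says         # the agent's personality prevails over the role
    return None                   # w1 = w2

# A secretary role says answerPhone, a reluctant agent says not answerPhone:
print(actor_conclusion("answerPhone", True, False, w1=0.8, w2=0.4))  # True
print(actor_conclusion("answerPhone", True, False, w1=0.4, w2=0.8))  # False
print(actor_conclusion("answerPhone", True, False, w1=0.5, w2=0.5))  # None
```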
The structure of agent societies based on confidence factors is expressed by CDAGs.
Def. (CDAG) A directed acyclic graph with confidence factors is a tuple (V, E, ws, wi, wt, w) where:
- V is a set of vertices,
- E is a set of edges containing the edge (v, v) for every vertex v ∈ V,
- ws : V → R: self-confidence,
- wi : E → R: confidence given to the outgoing edge,
- wt : E → R: confidence given to the incoming edge,
- w : E → R: final weight of the edge.

CDAGs can be formalized via WDAGs.
(Figure: a CDAG over the vertices a, b and c, with its self-, outgoing and incoming confidences, translated into a WDAG over the vertices a, a+, b, b+, c, c+. For the example, suppose that for any edge w(e) = (wi(e) + wt(e)) / 2.)

Future work
Other notions of prevalence can be accommodated in our framework:
- A voting system, based on the incoming edges of a certain node: rules can be rejected because they are outweighed or outvoted, by opting for the best positive or negative average.
The logical framework can be represented within the theory of the agent members of the society:
- Doing so will empower the agents with the ability to reason about, and to modify, the structure of their own graph together with the general group structure comprising the other agents.

Conclusion
We have introduced a novel, powerful and flexible logical framework to model structures of epistemic agents:
- the declarative semantics is stable models based;
- the procedural semantics relies on a sequence of syntactical transformations into normal programs.

The End!

MORE ...
- J. A. Leite, J. J. Alferes, L. M. Pereira. Combining Societal Agents' Knowledge. In L. M. Pereira, P. Quaresma (eds.), Procs. of the APPIA-GULP-PRODE'01 Joint Conf. on Declarative Programming (AGP'01), pp. 313-327, Évora, Portugal, September 2001.
- P. Dell'Acqua, J. A. Leite, L. M. Pereira. Evolving Multi-Agent Viewpoints - an Architecture. In P. Brazdil, A. Jorge (eds.), Progress in Artificial Intelligence, 10th Portuguese Int. Conf. on Artificial Intelligence (EPIA'01), pp. 169-182, Springer, LNAI 2258, Porto, Portugal, December 2001.
- P. Dell'Acqua, L. M. Pereira. A Logical Framework for Modelling eMAS. In V. Dahl, P. Wadler (eds.), Procs. Practical Aspects of Declarative Languages (PADL'03), pp. 241-255, Springer, LNCS 2562, New Orleans, Louisiana, USA, 2003.
- P. Dell'Acqua, M. Engberg, L. M. Pereira. An Architecture for a Rational Reactive Agent. In F. Moura-Pires, S. Abreu (eds.), Progress in Artificial Intelligence, Procs. 11th Portuguese Int. Conf. on Artificial Intelligence (EPIA'03), pp. 379-393, Springer, LNAI, Beja, Portugal, December 2003.

Links
Def. (Link) Given two WDAGs D1 and D2, a link is an edge between a vertex of D1 and a vertex of D2.

Joining WDAGs
Def. (WDAG joining) Given n WDAGs Di = (Vi, Ei, wi), a set L of links, and a function wL : L → R+, the joining of ({D1, …, Dn}, L, wL) is the WDAG D = (V, E, w) obtained by the union of all the vertices and edges, with
  w(e) = wi(e) if e ∈ Ei, and w(e) = wL(e) if e ∈ L.

Joined WMDLP
Def. (Joined WMDLP) Let F = (A, L, wL) be a logical framework, with A = {Π1, …, Πn} and every Πi = (PDi, Di). The joined WMDLP induced by F is the WMDLP Π = (PD, D) where:
- D is the joining of ({D1, …, Dn}, L, wL), and
- PD = ⋃i PDi.

Stable models of a WMDLP
Def. Let Π = (PD, D) be a WMDLP, where D = (V, E, w) and PD = {Pv : v ∈ V}, and let s ∈ V. An interpretation M is a stable model of Π at s iff
  M = least( X ∪ Default(X, M) )
where:
  Q = ⋃ {Pv : v ⪯ s}
  Reject(s, M) = { r ∈ Pv2 : ∃ r' ∈ Pv1, head(r) = not head(r'), M ⊨ body(r'), and v1 prevails v2 wrt. s }
  X = Q − Reject(s, M)
  Default(X, M) = { not A : ∄ (A ← Body) ∈ X with M ⊨ Body }
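A compact Python sketch of this last definition, taking the prevalence relation at s as given (a set of pairs (v1, v2) meaning "v1 prevails v2 wrt. s"); rules and literals are encoded as in the earlier MDLP sketch, and the data structures are again only illustrative. As there, M completed with "not A" for every A not in M is compared against the least model.

```python
def wmdlp_stable_at(below_s, programs, prevails, M):
    """below_s: vertices v with v ⪯ s; prevails: pairs (v1, v2), "v1 prevails
    v2 wrt. s".  M is a set of atoms; a rule is (head, body), a literal (atom, neg)."""
    Q = [(v, r) for v in below_s for r in programs.get(v, ())]
    atoms = {a for _, (h, body) in Q for a, _ in (h,) + tuple(body)} | set(M)

    def sat(body):
        return all((a in M) != neg for a, neg in body)

    # Reject(s, M): r at v2 is rejected by a conflicting r' at a prevailing v1
    rejected = {(v2, r) for v2, r in Q for v1, (h1, b1) in Q
                if (v1, v2) in prevails
                and r[0][0] == h1[0] and r[0][1] != h1[1] and sat(b1)}
    X = [r for v, r in Q if (v, r) not in rejected]

    # Default(X, M): assume "not A" when X has no rule for A with a true body
    defaults = {(a, True) for a in atoms
                if not any(h == (a, False) and sat(b) for h, b in X)}

    lm, changed = set(defaults), True                 # least(X ∪ Default(X, M))
    while changed:
        changed = False
        for h, body in X:
            if all(l in lm for l in body) and h not in lm:
                lm.add(h)
                changed = True
    return lm == {(a, False) for a in M} | {(a, True) for a in atoms - M}

# Secretaries example at B's inspection vertex v3 (joined WDAG of A, G and B):
progs = {
    "v1": [(("answerPhone", False), (("secretary", False), ("phoneRing", False)))],
    "v4": [(("phoneRing", False), ()), (("secretary", False), ()),
           (("answerPhone", True), ())],
    "v2": [], "v3": [],
}
# With the weights of the example, v1 (via the group) prevails B's own v4 wrt v3;
# only the pair relevant to the conflict is listed.
print(wmdlp_stable_at({"v1", "v2", "v3", "v4"}, progs,
                      prevails={("v1", "v4")},
                      M={"phoneRing", "secretary", "answerPhone"}))   # True
```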