Auctions
Auction Protocols
• English auctions
• First-price sealed-bid auctions
• Second-price sealed-bid auctions (Vickrey auctions) — see the sketch below
• Dutch auctions
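To make the difference between the two sealed-bid variants concrete, here is a minimal sketch (bidder names and values are made up): in a first-price auction the winner pays its own bid, while in a second-price (Vickrey) auction it pays the highest losing bid, which is what makes truthful bidding a dominant strategy.

def sealed_bid(bids: dict, second_price: bool):
    # Rank bidders by bid, highest first; assumes at least two bidders.
    ranked = sorted(bids, key=bids.get, reverse=True)
    winner = ranked[0]
    price = bids[ranked[1]] if second_price else bids[winner]
    return winner, price

bids = {"alice": 10.0, "bob": 8.0, "carol": 6.0}
print(sealed_bid(bids, second_price=False))  # ('alice', 10.0)  first-price
print(sealed_bid(bids, second_price=True))   # ('alice', 8.0)   Vickrey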
The Contract Net
R. G. Smith and R. Davis
DPS System Characteristics and Consequences
• Communication is slower than computation
— loose coupling
— efficient protocol
— modular problems
— problems with large grain size
More DPS System Characteristics and Consequences
• Any unique node is a potential bottleneck
— distribute data
— distribute control
— organized behavior is hard to guarantee (since no one node has a complete picture)
The Contract Net
• An approach to distributed problem solving, focusing on task distribution
• Task distribution viewed as a kind of contract negotiation
• “Protocol” specifies the content of communication, not just its form
• Two-way transfer of information is a natural extension of transfer-of-control mechanisms
Four Phases to Solution, as Seen in Contract Net
1. Problem decomposition
2. Sub-problem distribution
3. Sub-problem solution
4. Answer synthesis
The contract net protocol deals with phase 2.
Contract Net
• The collection of nodes is the “contract net”
• Each node on the network can, at different times or for different tasks, be a manager or a contractor
• When a node gets a composite task (or for any reason can’t solve its present task), it breaks it into subtasks (if possible) and announces them (acting as a manager), receives bids from potential contractors, then awards the job (example domain: network resource management, printers, …)
The Protocol in Operation (figure sequence)
• Node issues a task announcement: the manager broadcasts a Task Announcement
• Idle node listens to task announcements arriving from several managers (as a potential contractor)
• Node submits a bid: the potential contractor sends a Bid to the manager
• Manager listens to bids from several potential contractors
• Manager makes an award to the selected contractor
• Contract established between manager and contractor
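To make the announce–bid–award cycle above concrete, here is a minimal sketch of one round. All class, field, and method names are assumptions for illustration, not the Smith and Davis specification; the domain-specific evaluation procedures are left as stubs.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class TaskAnnouncement:
    contract_id: str
    eligibility: Callable[["Node"], bool]  # eligibility specification
    task_abstraction: dict                 # enough detail to evaluate the task
    bid_spec: str                          # what a bid reports, e.g. "completion-time"

@dataclass
class Bid:
    contract_id: str
    bidder: "Node"
    value: float                           # e.g. promised completion time

class Node:
    def __init__(self, name: str, capabilities: set):
        self.name = name
        self.capabilities = capabilities

    # Acting as a potential contractor: deliberation, not just selection.
    def evaluate_announcement(self, ann: TaskAnnouncement) -> Optional[Bid]:
        if not ann.eligibility(self):
            return None                    # perhaps no tasks are suitable at present
        est = self.estimate_completion_time(ann.task_abstraction)
        return Bid(ann.contract_id, self, est)

    def estimate_completion_time(self, task: dict) -> float:
        return 1.0                         # stub: domain-specific task evaluation

class Manager(Node):
    def run_round(self, ann: TaskAnnouncement, nodes: list) -> Optional[Node]:
        bids = [b for n in nodes if (b := n.evaluate_announcement(ann))]
        if not bids:
            return None                    # no suitable contractor this round
        winner = min(bids, key=lambda b: b.value)  # domain-specific bid evaluation
        return winner.bidder               # send Award; contract established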
Domain-Specific Evaluation
• The task announcement message prompts potential contractors to use domain-specific task evaluation procedures; there is deliberation going on, not just selection — perhaps no tasks are suitable at present
• The manager considers submitted bids using a domain-specific bid evaluation procedure
Types of Messages
• Task announcement
• Bid
• Award
• Interim report (on progress)
• Final report (including result description)
• Termination message (if manager wants to terminate contract)
Efficiency Modifications
• Focused addressing — when general broadcast isn’t required
• Directed contracts — when manager already knows which node is appropriate
• Request-response mechanism — for simple transfer of information without overhead of contracting
• Node-available message — reverses initiative of negotiation process
Message Format
Task Announcement
Slots:
— Eligibility specification
— Task abstraction
— Bid specification
— Expiration time
Task Announcement Example (common internode language)
To: *
From: 25
Type: Task Announcement
Contract: 43–6
Eligibility Specification: Must-Have FFTBOX
Task Abstraction:
    Task Type: Fourier Transform
    Number-Points: 1024
    Node Name: 25
    Position: LAT 64N LONG 10W
Bid Specification: Completion-Time
Expiration Time: 29 1645Z NOV 1980
The existence of a common internode language allows new nodes to be added to the system modularly, without the need for explicit linking to others in the network (e.g., as needed in standard procedure calling).
Applications of the Contract Net
• Sensing
• Task allocation (Malone)
• Delivery companies (Sandholm)
• Market-oriented programming (Wellman)
Bidding Mechanisms for Data Allocation
A user sends its query directly to the server where the needed document is stored.
Environment Description
(Figure: clients in area i and area j send queries for documents across some distance to server_i and server_j.)
Utility Function
• Each server is concerned only with whether a dataset is stored locally or remotely, and is indifferent among the different remote locations of the dataset.
The Trading Mechanism
• Bidding sessions are carried out during predefined time periods.
• In each bidding session, the location of the new datasets is determined and the location of each old dataset can be changed.
• Until a decision is reached, the new datasets are stored in a temporary buffer.
The Trading Mechanism - cont.
• Each dataset has an initial owner (called contractor(ds)), according to the static allocation:
— For an old dataset: the server which stores it.
— For a new dataset: the server with the nearest topics (defined according to the topics of the datasets stored by this server).
The Bidding Steps
• Each server broadcasts an announcement for each new dataset it owns, and also for some of its old local datasets.
• For each such announcement, each server sends the price it is willing to pay in order to store the dataset locally.
• The winner of each dataset is determined by its contractor, which broadcasts a message including: the winner, the price it has to pay, and the server which bid this price.
Cost of Reallocating Old Datasets
• move_cost(ds,bidder): the cost, for contractor(ds), of moving ds from its current location to bidder. (For new datasets, move_cost = 0.)
• obtain_cost(ds,bidder): the cost, for bidder, of moving ds from its current location to bidder.
Protocol Details
• winner(ds) denotes the winner of dataset ds:
winner(ds) = argmax over bidder ∈ SERVERS of { price_suggested(bidder,ds) − move_cost(ds,bidder) }   if move(ds) = true
winner(ds) = none   otherwise
Protocol Details - cont.
• price(ds) denotes the price paid by the winner for dataset ds:
price(ds) = second_max over bidder ∈ SERVERS of { price_suggested(bidder,ds) − move_cost(ds,bidder) } + move_cost(ds,winner).
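A minimal sketch (data structures and sample numbers are made up) of this winner/price rule; note that it is a Vickrey-style second-price rule over net bids, consistent with the lemma below that each server bids its utility minus its obtain cost:

def run_bidding(ds, servers, price_suggested, move_cost):
    # Net bid of each server: price_suggested(bidder, ds) - move_cost(ds, bidder).
    nets = sorted(((price_suggested[(b, ds)] - move_cost[(ds, b)], b) for b in servers),
                  reverse=True)
    (_, winner), (second_net, _) = nets[0], nets[1]   # assumes at least two servers
    price = second_net + move_cost[(ds, winner)]      # price(ds)
    return winner, price

servers = ["s1", "s2", "s3"]
price_suggested = {("s1", "d"): 9.0, ("s2", "d"): 7.0, ("s3", "d"): 4.0}
move_cost = {("d", "s1"): 1.0, ("d", "s2"): 0.5, ("d", "s3"): 2.0}
print(run_bidding("d", servers, price_suggested, move_cost))  # ('s1', 7.5)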
Bidding Strategies
• The contractor moves the dataset (move(ds) = true) iff
second_max over bidder ∈ SERVERS of { price_suggested(bidder,ds) − move_cost(ds,bidder) } ≥ U_contractor(ds)(ds, contractor(ds)).
Bidding Strategies - cont.
• Lemma: If the winner server bids its true value of storing the dataset locally, then it has a nonnegative utility from obtaining it.
• Lemma: Each server will bid its utility from obtaining the dataset:
price_suggested(bidder,ds) = U_bidder(ds,bidder) − obtain_cost(ds,bidder).
Bidding Strategies - cont.
• Theorem: If announcing and bidding are free, then the allocation reached by the bidding protocol leads to better or equal utility for each server than does the static policy.
The utility function is evaluated according to the expected profits of the server from the allocation.
Usage Estimation
• Each server knows only the usage of datasets stored locally.
• For new datasets and remote datasets, the server has no information about past usage.
• It estimates the future usage of new and remote datasets using the past usage of local datasets which contain similar topics.
Queries Structure
• We assume that a query sent to a server contains a list of required documents.
• This is the situation if the search mechanism that finds the required documents is installed locally by the client.
• In this situation, the server has to generalize from the queries about its local documents to the expected usage of other documents, in order to decide whether it needs them or not.
Usage Prediction
• We assume that a dataset contains several keywords (k1..kn).
• For each local dataset ds, and each server d, the server saves the past usage of ds by d in the last period.
• Then, it has to predict the future usage of ds by d. It assumes the same behavior as in the past.
Usage Prediction - cont.
• It is assumed that the users are interested in keywords, so the usage of a dataset is a function of the keywords it contains.
• The simplest model: a dataset’s usage is the sum of the usages of each of its keywords. However, the relationship between the keywords and the dataset may be different.
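A worked instance of the additive model (the per-keyword usage numbers are made up):

# Illustrative only: the simplest (additive) model, where a dataset's usage is
# the sum of the usages of the keywords it contains.
keyword_usage = {"k1": 5.0, "k2": 2.5, "k3": 0.5}

def additive_usage(dataset_keywords):
    return sum(keyword_usage[k] for k in dataset_keywords)

print(additive_usage({"k1", "k3"}))  # 5.5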
Usage Prediction - cont.
• The server has to learn about the usage of datasets not stored locally:
• We suggest that it build a neural network to learn the usage template of each area.
What is a Neural Network?
• A neural network is composed of a number of nodes, or units, connected by links.
• Each link has a numeric weight associated with it.
• The weights are modified so as to bring the network’s input/output behavior more into line with that of the environment providing the input.
Neural Network - Cont.
(Figure: a feedforward network with an input layer of input units, a hidden layer, and an output layer with an output unit.)
Structure of the Neural Network
• For each area d, we build a neural network.
• Each dataset stored by the server in area d is one example for the neural network of d.
• The inputs of the examples contain, for each possible keyword, whether or not it exists in this dataset.
Structure of the Neural Network - cont.
• The output unit of the neural network for area d is its past usage of this dataset.
• In order to find the expected usage of another dataset, ds2, by d, we provide the network with the keywords of ds2.
• The output of the network is the predicted usage of ds2 by area d.
Structure of the NN
(Figure: for a certain dataset, for each keyword k there is an input unit — 1 if the dataset contains k, 0 otherwise; a hidden layer; and an output unit giving the usage of the dataset by a certain area.)
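A minimal sketch of such a per-area network. The slides fix only the interface (one binary input unit per keyword, one hidden layer, one output unit for usage); the hidden size, learning rate, activation, and use of numpy below are assumptions.

import numpy as np

rng = np.random.default_rng(0)

class AreaUsageNet:
    def __init__(self, n_keywords, n_hidden=16, lr=0.01):
        self.W1 = rng.normal(0, 0.1, (n_keywords, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.1, n_hidden)
        self.b2 = 0.0
        self.lr = lr

    def predict(self, x):
        # x: 0/1 keyword-indicator vector for one dataset
        self.h = np.tanh(x @ self.W1 + self.b1)    # hidden layer
        return float(self.h @ self.W2 + self.b2)   # predicted usage

    def train_step(self, x, usage):
        # One squared-error gradient step; each local dataset is one example.
        err = self.predict(x) - usage
        dh = err * self.W2 * (1 - self.h ** 2)
        self.W2 -= self.lr * err * self.h
        self.b2 -= self.lr * err
        self.W1 -= self.lr * np.outer(x, dh)
        self.b1 -= self.lr * dh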
Experimental Evaluation - Measurements
• vcosts(alloc): the variable cost of an allocation, which consists of the transmission costs due to the flow of queries.
• vcost_ratio: the ratio of the variable costs when using the bidding mechanism to the variable costs of the static allocation.
Experimental Evaluation - Results
• Complete information concerning previous queries (still with uncertainty):
— The bidding mechanism reaches results near those of the optimal allocation (reached by a central decision maker).
— The bidding mechanism yields a lower standard deviation of the servers’ utilities than the optimal allocation.
• Incomplete information:
— The results of the bidding mechanism are better than those of the static allocation.
Influence of Parameters
Complete information, no movements of old datasets:
• As the standard deviation of the distances increases, vcost_ratio decreases.
Influence of Parameters - cont.
• When increasing the number of servers and the number of datasets, vcost_ratio is not influenced.
• query_price, answer_cost, storage_cost, dataset_size and retrieve_cost do not influence vcost_ratio.
• usage, std. usage, and distance do not influence vcost_ratio.
Influence of Learning on the System
• As epsilon decreases, vcost_ratio increases: the system behaves better.
Conclusion
• We have considered the data allocation problem in a distributed environment.
• We have presented the utility function of the servers, which expresses their preferences over the data allocation.
• We have proposed using a bidding protocol for solving the problem.
Conclusion - cont.
• We have considered complete as well as incomplete information.
• For the complete information case, we have proved that the results obtained by the bidding mechanism are better than those of the static allocation, and close to the optimal results.
Conclusion - cont.
• For the incomplete information environment, we have developed a neural-network based learning mechanism.
• For each area d, we build a neural network, trained by the server of d.
• Using this network, we estimate the expected usage of datasets not currently stored by d.
• We found, by simulation, that the results obtained are still significantly better than the static allocation.
Future Work
• Datasets can be stored in more than one server.
• Bounded rationality.
• Repeated game.
Reaching Agreements Through Argumentation
Collaborators: Katia Sycara, Madhura Nirkhe, Amir Evenchik, and Ariel Stolman
Introduction
• Argumentation: an iterative process emerging from exchanges among agents to persuade each other and bring about a change in intentions.
• A logical model of the mental states of the agents: beliefs, desires, intentions, goals.
• The logic is used to specify argument formulation and as a basis for an Automated Negotiation Agent.
Agents as Belief, Desire, Intention Systems
• Belief:
— information about the current world state
— subjective
• Desire:
— preferences over future world states
— can be inconsistent (in contrast to goals)
• Intentions:
— set of goals the agent is committed to achieve
— the agent’s “runtime stack”
• Formal models: mostly modal logics with possible-worlds semantics
Logic Background
• Modal logics; Kripke structures
• Syntactic approaches
• Bayesian networks
Modal Logics
• Language: there is a set of n agents
— K_i a: agent i knows a;  B_i a: agent i believes a
— P: a set of primitive propositions (P, Q)
— Example: K_1 a
• Semantics: a Kripke structure consists of:
— a set of possible worlds W
— a valuation π(w): P → {True, False}
— n binary accessibility relations on the worlds: ~1, ~2, …
Example of a Kripke Structure
• W = {w1, w2, w3}
(Figure: P and Q hold at w1, Q holds at w2, P holds at w3; arrows labeled 1 and 2 give the accessibility relations of agents 1 and 2.)
• M, w1 |= P & K_1 P
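The semantics can be made concrete in a few lines: K_i(phi) holds at w iff phi holds at every world that agent i considers possible from w. The valuation below matches the figure; the accessibility relations are assumptions for illustration.

# Minimal Kripke-structure model checker (relations here are illustrative).
worlds = {"w1": {"P", "Q"}, "w2": {"Q"}, "w3": {"P"}}      # valuation pi(w)
R = {1: {("w1", "w1"), ("w2", "w2"), ("w3", "w3")},        # agent 1's relation
     2: {("w1", "w2"), ("w2", "w1"), ("w3", "w3")}}        # agent 2's relation

def holds(w, phi):
    kind = phi[0]
    if kind == "atom":   # ("atom", "P")
        return phi[1] in worlds[w]
    if kind == "and":    # ("and", phi1, phi2)
        return holds(w, phi[1]) and holds(w, phi[2])
    if kind == "K":      # ("K", i, phi1): phi1 must hold at all i-accessible worlds
        return all(holds(v, phi[2]) for (u, v) in R[phi[1]] if u == w)
    raise ValueError(kind)

# M, w1 |= P & K_1 P  (True under the assumed relations)
print(holds("w1", ("and", ("atom", "P"), ("K", 1, ("atom", "P")))))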
Axioms
• K_i a & K_i(a → b) → K_i b
• If ⊢ a then ⊢ K_i a
• K_i a → a
• K_i a → K_i K_i a
• ¬K_i a → K_i ¬K_i a
• ¬B_i false
• Each axiom can be associated with a condition on the binary relation.
Problems in Using Possible Worlds Semantics
• Logical omniscience: the agent believes all the logical consequences of its beliefs.
• The agent believes all tautologies.
• Philosophers: possible worlds do not exist.
Minimal Models: a Partial Solution
• The intension of a sentence: the set of possible worlds in which the sentence is satisfied.
• Note: if two sentences have the same intensions then they are semantically equivalent.
• A sentence is a belief at a given world if its intension is belief-accessible.
• According to this definition, the agent’s beliefs are not closed under inference; the agent may even believe in contradictions.
Minimal Model: Example
(Figure: four worlds with truth assignments PQ, P¬Q, P¬Q, and ¬P¬Q.)
Beliefs, Desires, Goals and Intentions
• We use time lines rather than possible worlds.
• An agent’s belief set includes beliefs concerning the world and beliefs concerning mental states of other agents.
• An agent may be mistaken in both kinds of beliefs, and beliefs may be inconsistent.
• The beliefs are used to generate arguments in the negotiations.
• Desires: may be inconsistent.
• Goals: a consistent subset of the set of desires.
• Intentions serve to contribute to one or more of the agent’s desires.
Intentions
• Two types: Intention-To and Intention-That.
• Intention-to: refers to actions that are within the direct control of the agent.
• Intention-that: refers to propositions that are not directly within the agent’s realm of control, so that it must rely on other agents to satisfy them; can be achieved through argumentation.
Argumentation Types
• A promise of a future reward.
• A threat.
• An appeal to a past promise.
• Appeal to precedents as “counter example.”
• Appeal to “prevailing practice.”
• Appeal to self-interests.
Example: 2 Robots
• Two mobile robots on Mars, each built to maximize its own utility.
• R1 requests R2 to dig for a mineral. R2 refuses. R1 responds with a threat: “If you do not dig for me, I will break your antenna.” R2 needs to evaluate this threat.
• Another possibility: R1 promises a reward: “If you dig for me today, I will help you move your equipment tomorrow.” R2 needs to evaluate the promise of a future reward.
Usage of the Logic
• Specification for agent design: the model constrains certain planning and negotiation processes; axioms for argumentation types.
• The logic is used by the agents themselves: ANA (Automated Negotiation Agent).
ANA
• Complies with the definition of an Agent Oriented Programming (AOP) system (Shoham):
— The agent is represented using notions of mental states;
— The agent’s actions depend on these mental states;
— The agent’s mental state may change over time;
— Mental state changes are driven by inference rules.
The Block World Environment
(Figure: a block-world grid with numbered columns and stacked blocks.)
Mental State Model
• Beliefs
b(agent1, world_state([blockE/5/1, blockD/4/1, blockC/3/1, blockB/2/1, blockA/1/1]), [0,2,t]).
• Desires
desire(desire1, 0, agent1, [blockB/6/2], 39, 0).
desire(desire2, 0, agent1, [blockB/1/1], 29, 0).
desire(desire3, 0, agent1, [blockB/6/1], 35, 0).
desire(desire4, 0, agent1, [blockE/2/1], 38, 0).
• Goals
goal(agent1, 0, [blockB/6/2/[desire1], blockE/2/1/[desire4]]).
Mental State Model - cont.
• Desired World
desired_world([blockC/3/1/[unused_block], blockA/1/1/[unused_block], blockD/6/1/[supporting], blockB/6/2/[desire1], blockE/2/1/[desire4]]).
• Intentions
intention(1, agent1, 0, that, intention_is_done(agent1,0), [blockB/2/1/7/1], 0, [towards_goals]).
intention(2, agent1, 0, to, intention_is_done(agent1,1), [blockD/4/1/6/1], 0, [supporting]).
intention(3, agent1, 0, that, intention_is_done(agent1,2), [blockB/7/1/6/2], 0, [desire1]).
intention(4, agent1, 0, to, intention_is_done(agent1,3), [blockE/5/1/2/1], 0, [desire4]).
Agent Infrastructure: Agent Life Cycle
(Figure: a cycle of stages)
1. First plan
2. Reading messages
3. Performing next intentions
4. Dealing with the agent’s own threats
5. Planning next steps
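Read as a control loop, the cycle above looks roughly like this (class and method names are assumed for illustration; the stage bodies are stubs, and a fixed cycle count stands in for ANA’s real termination condition):

class ANAAgent:
    def first_plan(self): ...                 # 1. first plan
    def read_messages(self): ...              # 2. negotiation and world-change aspects
    def perform_next_intentions(self): ...    # 3. intention-to, or send an argument
    def handle_own_threats(self): ...         # 4. detect, concretize, evaluate threats
    def plan_next_steps(self): ...            # 5. goals, desired world, intentions

    def run(self, cycles=3):
        self.first_plan()
        for _ in range(cycles):               # ANA would loop while the agent is alive
            self.read_messages()
            self.perform_next_intentions()
            self.handle_own_threats()
            self.plan_next_steps()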
The Agent Life Cycle: Reading Messages
(Figure 3.2 - Reading Messages Stage: a Read Message step feeds a negotiation aspect and a world-change aspect.)
• Types of messages
• Queue
• Waiting for answers
• Negotiation and world change aspects
• Inconsistency recovery
The Agent Life Cycle: Dealing with the Agent’s Own Threats
Two kinds: regular threats and abstract threats.
• Detection
• Make abstract threats concrete
• Execute evaluation
The Agent Life Cycle: Planning Next Step
(Figure: Goal Selection → Desired World Selection → Intentions Generator)
• Mental states usage
• Backtracking
• Better than current state
• New state or dead end
• Achievable plan
The Agent Life Cycle: Performing Next Intention
(Figure 3.5 - Perform next intention: either perform an intention-to or generate and send an argument.)
• Intention-to vs. intention-that
• Is the other agent listening?
• One argument per cycle.
Agent Definition Examples
• Agent Type
agent_type(robot_name, memory-less).
• Agent Capability
capable(robot, blockC/3/1/4/1).
• Agent Beliefs
b(first_robot, capable(second_robot, AnyAction), [[0, t]]).
• Agent Desires
desire(first_desire, 0, robot, [blockA/3/1], 15, 1).
Agent Infrastructure: Agent Parameters List
• Cooperativeness
• Reliability (promise keeping)
• Assertiveness
• Performance threshold (asynchronous action)
• Usage of first argument
• Argument direction
• Knowledge about the other agent’s desires
• Knowledge about the other agent’s capabilities
• Measurement of the other agent’s promise keeping
• Execution of threats by the other agent
A promise of a future reward
• Application conditions
— Opponent agent can perform the requested action.
— The reward action will help the opponent achieve a goal (requires knowledge of opponent desires).
— Argument not used in the near past.
• Implementation
— Generate opponent’s expected intentions.
— Offer one of the intentions as a reward:
   – A mutual intention which the opponent cannot perform by itself (requires knowledge of opponent capabilities).
   – An opponent’s intention which it cannot perform.
   – Any mutual intention.
   – Any opponent’s intention.
A threat
• Application conditions
— Opponent agent can perform the requested action.
— The threat action will interfere with the opponent’s achieving some goals (requires knowledge of opponent desires).
— Argument not used in the near past.
• Implementation
— Agent chooses the best cube (requires knowledge of opponent capabilities).
— Agent chooses the best desire.
— Agent chooses a threatening action:
   – Moving out.
   – Blocking.
   – Interfering.
Request Evaluation Mechanism - Parameters List
• DL (Doing Length)
• NDL (Not Doing Length)
• DTL (Doing That Length)
• NDTL (Not Doing That Length)
• PL (Punish Length)
• PTL (Punish That Length)
• DP (Doing Preference)
• NDP (Not Doing Preference)
Request Evaluation Mechanism - Agent Parameters
• CP: the agent’s cooperativeness.
• AS: the agent’s assertiveness.
• RL: the agent’s reliability.
• ORL: the other agent’s reliability in keeping promises.
• OTE: the other agent’s percentage of threats executed.
Request Evaluation Mechanism - The Formulas
• A simple request: the acceptance value combines the ratios (NDL+1)/(DL+1), (NDTL+1)/(DTL+1) and (DP+1)/(NDP+1), weighted by the agent’s cooperativeness CP.
• An appeal to past promise: the same combination, weighted by CP and the agent’s reliability RL.
• A promise of a future reward: as above, with the DL+1 and DTL+1 terms adjusted by the other agent’s reliability ORL and by RD; when an action is considered to be a reward, RD (Reward) is equal to 1, and 0 if not.
Request Evaluation Mechanism - The Formulas (cont.)
• A threat: evaluated like a simple request, but the not-doing lengths NDL and NDTL are shifted toward the punish lengths PL and PTL in proportion to OTE, a punishment-preference term PP enters alongside NDP, and assertiveness AS replaces CP.
• An abstract threat: evaluated exactly like a simple request, with assertiveness AS in place of cooperativeness CP.
Experiments Results
• Negotiating is better than not negotiating only where each agent has particular expertise.
• Negotiating is better than not negotiating only where the agents have complete information.
• Negotiating is better than not negotiating only for mutually cooperative agents, or for an aggressive agent with a cooperative opponent.
• The environment (game time, resources) affects the negotiation results.
Negotiations vs. no negotiations
• Agents that do not negotiate succeed in obtaining only 29.8% of their desires’ preference values, while negotiating agents obtain 40.4% on average (F=5.047, p<0.024, df=79).
Complete information vs. no information
• Agents that had no information achieved a success rate of only 30.8%, while agents that had full information achieved 40.4% on average (F=4.326, p<0.04, df=38).
Using the first argument vs. using the best found
• Agents that used the first argument achieved a success rate of only 34.8%, while agents that used the best argument achieved 40.4% on average, but this result is not significant (F=2.28, p<0.138, df=38).
Cooperative vs. Aggressive agent
23.5% : 41.4% (F=10.78, p<0.001, df=63).
Cooperative and Aggressive Agents vs. No Negotiations
38.3% : 29.8% (F=6.01, p<0.019, df=35).
No Negotiations vs. Aggressive Negotiation
38.3% : 20.8% (F=10.03, p<0.002, df=50).
Cooperative vs. aggressive: summary
• Aggressive vs. cooperative: 41.4%
• Two cooperatives: 38.3%
• No negotiation: 29.8%
• Cooperative vs. aggressive: 23.5%
• Two aggressives: 20.8%
Environment Constraints
• Number of desires.
• Time for game: 16.7% : 21.7% (F=2.41, p<0.122, df=139).
Is it worth it to use formal methods for Multi-Agent Systems in general and Negotiations in particular?
Game-Theory Based Frameworks (Non-cooperative Models)
• Strategic-negotiation model, based on Rubinstein’s alternating offers model.
• Applications:
— Data allocation (Schwartz & Kraus AAAI97)
— Resource allocation, task distribution (Kraus, Wilkenfeld & Zlotkin AIJ95; Kraus AMAI97)
— Hostage crisis (Kraus & Wilkenfeld TSMC93)
Advantages and Difficulties: Negotiation on Data Allocation
• Beneficial results; proved to be better than current methods; simple strategies.
• Problems:
— Need to develop utility functions;
— Finding possible actions: identifying optimal allocations is NP-complete;
— Incomplete information: game theory provides limited solutions.
Game-Theory Based Frameworks (Non-cooperative Models)
• Auctions. Applications:
— Data allocation (Schwartz & Kraus ATAL97)
— Electronic commerce
• Subcontracting, based on principal-agent models. Applications:
— Task allocation (Kraus AIJ96)
Advantages and Difficulties: Auctions for Data Allocation
• Beneficial results; proved to be better than current methods.
• Problems:
— Utility functions;
— Applicable only when a server is concerned only about the data stored locally;
— Difficult to find bids when there is incomplete information and the valuations are dependent on each other: no procedures.
Game-Theory Based Frameworks (Cooperative Models)
• Coalition theories. Applications:
— Group and team formation (Shehory & Kraus CI99)
• Benefits: well-defined concepts of stability; mechanisms to divide benefits.
• Difficulties: utility functions; no procedures for coalition formation; exponential problems.
• DPS model: combinatorial theories & operations research (Shehory & Kraus AIJ98).
Decision-Theory Based Frameworks
• Multi-attribute decision making. Application:
— Intentions reconciliation in SharedPlans (Grosz & Kraus 98).
• Benefits: using results of MADM, e.g., the specific method is not so important, standardization techniques.
• Problems: choosing attributes; assigning values; choosing weights.
Logical Models
• Modal logic: BDI models. Applications:
— Automated argumentation (Kraus, Sycara & Evenchik AIJ99).
— Specification of SharedPlans (Grosz & Kraus AIJ96).
— Bounded agents (Nirkhe, Kraus & Perlis JLC97).
— Agents reasoning about other agents (Kraus & Lehmann TCT88; Kraus & Subrahmanian IJIS95).
Advantages and Difficulties: Logical Models
• Formal models with well-studied properties: excellent for specification.
• Problems:
— Some assumptions are not valid (e.g., omniscience).
— Complexity problems.
— There are no procedures for actions: requires a lot of programming; decision making; developing preferences.
Physics Based Models
• Physical models of particle dynamics. Applications: cooperation in large-scale multi-agent systems, e.g., freight deliveries within a metropolitan area (Shehory & Kraus ECAI96; Shehory, Kraus & Yadgar ATAL98).
• Benefits: efficient; inherits the physics properties.
• Problems: adjustments; potential functions.
Summary
• Benefits: formal models which have already been studied; lead to efficient results. No need to reinvent the wheel.
• Problems:
— Restrictions and assumptions made by game theory are not valid in real-world MAS situations: extensions are needed.
— It is difficult to develop utility functions.
— Complexity problems.