1
Auctions
Automated Negotiation
2
Auction Protocols
  • English auctions
  • First-price sealed-bid auctions
  • Second-price sealed-bid auctions (Vickrey
    auctions); see the sketch below
  • Dutch auctions

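The payment rules differ across these protocols. Below is a minimal sketch, not taken from the presentation, of winner and payment determination for first-price versus second-price (Vickrey) sealed-bid auctions; the bidder names and values are purely illustrative.

```python
# Minimal sketch (not from the slides): winner and payment under
# first-price vs. second-price (Vickrey) sealed-bid rules.
# Bidder names and bid values are purely illustrative.

def sealed_bid_outcome(bids: dict[str, float], second_price: bool = False):
    """Return (winner, payment) for a sealed-bid auction."""
    # The highest bid wins under both protocols.
    winner = max(bids, key=bids.get)
    if second_price:
        # Vickrey: the winner pays the highest losing bid.
        payment = max(v for b, v in bids.items() if b != winner)
    else:
        # First price: the winner pays its own bid.
        payment = bids[winner]
    return winner, payment


if __name__ == "__main__":
    bids = {"agent1": 10.0, "agent2": 8.0, "agent3": 6.5}
    print(sealed_bid_outcome(bids))                     # ('agent1', 10.0)
    print(sealed_bid_outcome(bids, second_price=True))  # ('agent1', 8.0)
```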
3
(No Transcript)
4
The Contract Net
  • R. G. Smith and R. Davis

5
DPS System Characteristics and Consequences
  • Communication is slower than computation, so the
    system needs loose coupling, an efficient
    protocol, modular problems, and problems with
    large grain size

6
More DPS System Characteristics and Consequences
  • Any unique node is a potential bottleneck, so
    data and control should be distributed;
    organized behavior is hard to guarantee
    (since no one node has the complete picture)

7
The Contract Net
  • An approach to distributed problem solving,
    focusing on task distribution
  • Task distribution viewed as a kind of contract
    negotiation
  • Protocol specifies content of communication,
    not just form
  • Two-way transfer of information is a natural
    extension of transfer-of-control mechanisms

8
Four Phases to Solution, as Seen in the Contract Net
  • 1. Problem Decomposition
  • 2. Sub-problem distribution
  • 3. Sub-problem solution
  • 4. Answer synthesis

The contract net protocol deals with phase 2.
9
Contract Net
  • The collection of nodes is the contract net
  • Each node on the network can, at different times
    or for different tasks, be a manager or a
    contractor
  • When a node gets a composite task (or for any
    reason can't solve its present task), it breaks
    it into subtasks (if possible) and announces them
    (acting as a manager), receives bids from
    potential contractors, then awards the job, as
    sketched in the code below
    (example domain: network resource management,
    printers, ...)

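The announce-bid-award cycle just described can be summarized in a short sketch. This is an illustrative reconstruction, not code from the presentation: the message fields, node capabilities, and evaluation rules are assumptions.

```python
# Illustrative sketch of the contract net announce-bid-award cycle.
# Message fields and evaluation rules are placeholders, not the original protocol code.
from dataclasses import dataclass, field


@dataclass
class TaskAnnouncement:
    contract_id: int
    task_abstraction: str
    eligibility: str          # e.g. a required capability
    expiration_time: float


@dataclass
class Bid:
    contract_id: int
    bidder: str
    estimated_completion_time: float


@dataclass
class Node:
    name: str
    capabilities: set = field(default_factory=set)

    def evaluate_task(self, ann: TaskAnnouncement) -> Bid | None:
        # Domain-specific task evaluation: bid only if eligible.
        if ann.eligibility not in self.capabilities:
            return None
        return Bid(ann.contract_id, self.name,
                   estimated_completion_time=10.0)  # placeholder estimate


def award(bids: list[Bid]) -> Bid | None:
    # Domain-specific bid evaluation: here, the earliest completion wins.
    return min(bids, key=lambda b: b.estimated_completion_time, default=None)


if __name__ == "__main__":
    announcement = TaskAnnouncement(436, "Fourier Transform",
                                    "FFTBOX", expiration_time=1645.0)
    nodes = [Node("node7", {"FFTBOX"}), Node("node9", set())]
    bids = [b for n in nodes if (b := n.evaluate_task(announcement))]
    print(award(bids))   # the manager awards the contract to node7
```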
10
Node Issues Task Announcement
(Diagram: a manager node broadcasts a task announcement to the other nodes.)
11
Idle Node Listening to Task Announcements
(Diagram: an idle node, a potential contractor, listens to announcements from several managers.)
12
Node Submitting a Bid
(Diagram: a potential contractor submits a bid to a manager.)
13
Manager Listening to Bids
(Diagram: the manager collects bids from several potential contractors.)
14
Manager Making an Award
(Diagram: the manager sends an award to the selected contractor.)
15
Contract Established
(Diagram: a contract now links the manager and the contractor.)
16
Domain-Specific Evaluation
  • The task announcement message prompts potential
    contractors to use domain-specific task
    evaluation procedures; there is deliberation
    going on, not just selection, and perhaps no
    tasks are suitable at present
  • The manager considers submitted bids using a
    domain-specific bid evaluation procedure

17
Types of Messages
  • Task announcement
  • Bid
  • Award
  • Interim report (on progress)
  • Final report (including result description)
  • Termination message (if manager wants to
    terminate contract)

18
Efficiency Modifications
  • Focused addressing: when a general broadcast
    isn't required
  • Directed contracts: when the manager already knows
    which node is appropriate
  • Request-response mechanism: for simple
    transfer of information without the overhead of
    contracting
  • Node-available message: reverses the initiative of
    the negotiation process

19
Message Format
  • Task Announcement Slots: Eligibility
    specification, Task abstraction, Bid
    specification, Expiration time

20
Task Announcement Example (common internode
language)
  • To:
  • From: 25
  • Type: Task Announcement
  • Contract: 436
  • Eligibility Specification: Must-Have FFTBOX
  • Task Abstraction:
  • Task Type: Fourier Transform
  • Number-Points: 1024
  • Node Name: 25
  • Position: LAT 64N LONG 10W
  • Bid Specification: Completion-Time
  • Expiration Time: 29 1645Z NOV 1980
21
  • The existence of a common internode language
    allows new nodes to be added to the system
    modularly, without the need for explicit linking
    to others in the network (e.g., as needed in
    standard procedure calling).

22
Applications of the Contract Net
  • Sensing
  • Task Allocation (Malone)
  • Delivery companies (Sandholm)
  • Market-oriented programming (Wellman)

23
Bidding Mechanisms for Data Allocation
  • A user sends its query directly to the server
    where the needed document is stored.

24
Environment Description
(Diagram: a client sends a query for a document; server i serves area i and server j serves area j, and the two servers are separated by some distance.)
25
Utility Function
  • Each server is concerned only with whether a
    dataset is stored locally or remotely, and is
    indifferent among different remote locations of
    the dataset.

26
The Trading Mechanism
  • Bidding sessions are carried out during predefined
    time periods.
  • In each bidding session, the location of the new
    datasets is determined and the location of each
    old dataset can be changed.
  • Until a decision is reached, the new datasets are
    stored in a temporary buffer.

27
The Trading Mechanism - cont.
  • Each dataset has an initial owner (called
    contractor(ds)), according to the static
    allocation.
  • For an old dataset - the server which stores it.
  • For a new dataset - the server with the nearest
    topics (defined according to the topics of the
    datasets stored by this server).

28
The Bidding Steps
  • Each server broadcasts an announcement for each
    new dataset it owns, and also for some of its old
    local datasets.
  • For each such announcement, each server sends the
    price it is willing to pay in order to store the
    dataset locally.
  • The winner of each dataset is determined by its
    contractor. It broadcasts a message, including
    the winner, the price it has to pay, and the
    server which bids this price.

29
Cost of Reallocating Old Datasets
  • move_cost(ds,bidder): the cost to contractor(ds)
    of moving ds from its current location to
    bidder (for new datasets, move_cost = 0)
  • obtain_cost(ds,bidder): the cost to bidder of
    moving ds from its current location to bidder.

30
Protocol Details
  • winner(ds) denotes the winner of dataset ds.
  • winner(ds) = argmax over bidders with move(ds) = true of
    [price_suggested(bidder, ds) - move_cost(ds, bidder)],
    or none if there is no such bidder

31
Protocol Details - cont.
  • price(ds) denotes the price paid by the winner
    for dataset ds (see the sketch below).
  • price(ds) = second_max over bidder ∈ SERVERS of
    [price_suggested(bidder, ds) - move_cost(ds, bidder)]
    + move_cost(ds, winner(ds)).

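A small sketch of winner(ds) and price(ds) as reconstructed above. The bid and cost numbers are invented, and the final move_cost(ds, winner) term is read here as converting the second-highest net bid back into a gross price, which is an interpretation of the partially garbled slide formula.

```python
# Illustrative sketch of winner(ds) and price(ds); only the selection rule
# and the second-price computation follow the slides, the data is made up.

def winner_and_price(price_suggested: dict[str, float],
                     move_cost: dict[str, float],
                     move: bool):
    """price_suggested[b] and move_cost[b] are the bid and move cost per bidder b."""
    if not move or not price_suggested:
        return None, None
    # Net value of each bid to the contractor.
    net = {b: price_suggested[b] - move_cost[b] for b in price_suggested}
    ranked = sorted(net, key=net.get, reverse=True)
    winner = ranked[0]
    # Second-highest net bid (the winner's own net bid if it stands alone),
    # converted back to a price by adding the winner's move cost.
    second = net[ranked[1]] if len(ranked) > 1 else net[winner]
    return winner, second + move_cost[winner]


if __name__ == "__main__":
    bids = {"server1": 12.0, "server2": 9.0, "server3": 7.0}
    costs = {"server1": 3.0, "server2": 1.0, "server3": 2.0}
    print(winner_and_price(bids, costs, move=True))  # ('server1', 11.0)
```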
32
Bidding Strategies
  • Attribute: move(ds) = true if
    second_max over bidder ∈ SERVERS of
    [price_suggested(bidder, ds) - move_cost(ds, bidder)]
    ≥ U_contractor(ds)(ds, contractor(ds)).

33
Bidding Strategies - cont.
  • Lemma: If the winning server bids its true value
    of storing the dataset locally, then it has a
    nonnegative utility from obtaining it.
  • Lemma: Each server will bid its utility from
    obtaining the dataset:
    price_suggested(bidder, ds) = U_bidder(ds, bidder)
    - obtain_cost(ds, bidder) (see the sketch below).

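A minimal sketch of the bidding strategy described in the last two slides: each server bids truthfully (its utility from obtaining the dataset minus its obtain cost), and the contractor moves the dataset only when the second-highest net bid reaches its own utility from keeping it. Names and numbers are placeholders, not from the presentation.

```python
# Illustrative sketch of the move decision and the truthful bid.

def truthful_bid(utility_if_obtained: float, obtain_cost: float) -> float:
    # price_suggested(bidder, ds) = U_bidder(ds, bidder) - obtain_cost(ds, bidder)
    return utility_if_obtained - obtain_cost

def should_move(net_bids: list[float], contractor_keep_utility: float) -> bool:
    """net_bids[i] = price_suggested(i, ds) - move_cost(ds, i)."""
    if len(net_bids) < 2:
        return False  # second_max is undefined with fewer than two bidders
    second_max = sorted(net_bids, reverse=True)[1]
    return second_max >= contractor_keep_utility

if __name__ == "__main__":
    bids = [truthful_bid(20.0, 4.0), truthful_bid(15.0, 2.0)]   # 16.0, 13.0
    move_costs = [3.0, 1.0]
    net = [b - c for b, c in zip(bids, move_costs)]             # 13.0, 12.0
    print(should_move(net, contractor_keep_utility=10.0))       # True
```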
34
Bidding Strategies - cont.
  • Theorem: If announcing and bidding are free, then
    the allocation reached by the bidding protocol
    gives each server a utility greater than or equal
    to that of the static policy. The utility
    function is evaluated according to the expected
    profits of the server from the allocation.

35
Usage Estimation
  • Each server knows only the usage of datasets
    stored locally.
  • For new datasets and remote datasets, the server
    has no information about past usage.
  • It estimates the future usage of new and remote
    datasets, using the past usage of local datasets,
    which contain similar topics.

36
Queries Structure
  • We assume that a query sent to a server contains
    a list of required documents.
  • This is the situation if the search mechanism to
    find the required documents is installed locally
    by the client.
  • In this situation, the server has to generalize
    from the queries about its local documents to the
    expected usage of other documents, in order to
    decide whether it needs them or not.

37
Usage Prediction
  • We assume that a dataset contains several
    keywords (k1..kn).
  • For each local dataset ds and each server d, the
    server saves the past usage of ds by d in the
    last period.
  • Then, it has to predict the future usage of ds by
    d. It assumes the same behavior as in the past.

38
Usage Prediction - cont.
  • It is assumed that the users are interested in
    keywords, so the usage of a dataset is a function
    of the keywords it contains.
  • The simplest model is when a dataset's usage is
    the sum of the usage of each of its
    keywords. However, the relationship between the
    keywords and the dataset may be different.

39
Usage Prediction - cont.
  • The server has to learn about the usage of
    datasets not stored locally.
  • We suggest that it build a neural network
    for learning the usage template of each area.

40
What is a Neural Network?
  • A neural network is composed of a number of
    nodes, or units, connected by links.
  • Each link has a numeric weight associated with it.
  • The weights are modified so as to bring the
    network's input/output behavior more into line
    with that of the environment providing the input.

41
Neural Network - Cont.
(Diagram: a feed-forward network with an input layer of input units, a hidden layer, and an output layer with an output unit.)
42
Structure of the Neural Network
  • For each area d, we build a neural network.
  • Each dataset stored by the server in area d is
    one example for the neural network of d.
  • The inputs of an example indicate, for each
    possible keyword, whether it exists in this
    dataset or not.

43
Structure of the Neural Network - cont.
  • The output unit of the neural network for area d
    is d's past usage of this dataset.
  • In order to find the expected usage of another
    dataset, ds2, by d, we provide the network with
    the keywords of ds2.
  • The output of the network is the predicted usage
    of ds2 by area d.

44
Structure of the NN
  • Output unit: the usage of the dataset by a
    certain area.
  • Hidden layer.
  • Input units: for a given dataset, there is one
    input unit per keyword k, set to 1 if the dataset
    contains k and 0 otherwise (see the sketch below).
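A minimal sketch of the per-area usage-prediction network described above, using scikit-learn's MLPRegressor. The keyword vocabulary, the training data, and the network size are assumptions; only the input encoding (one unit per keyword) and the single usage output follow the slides.

```python
# Illustrative sketch of the per-area usage predictor: one-hot keyword inputs,
# a single hidden layer, one output (predicted usage). Not the authors' code.
from sklearn.neural_network import MLPRegressor

VOCAB = ["auction", "negotiation", "protocol", "network", "agent"]  # assumed keywords

def encode(keywords: set[str]) -> list[int]:
    """One input unit per keyword: 1 if the dataset contains it, else 0."""
    return [1 if k in keywords else 0 for k in VOCAB]

# Training examples: local datasets of this server and their past usage
# by area d (numbers are made up).
X = [encode({"auction", "protocol"}),
     encode({"negotiation", "agent"}),
     encode({"network"}),
     encode({"auction", "agent"})]
y = [30.0, 12.0, 5.0, 25.0]

model = MLPRegressor(hidden_layer_sizes=(4,), max_iter=5000, random_state=0)
model.fit(X, y)

# Predict the usage by area d of a dataset ds2 not stored locally,
# given only its keywords.
print(model.predict([encode({"auction", "negotiation"})]))
```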
45
Experimental Evaluation - Results Measurement
  • vcosts(alloc) - the variable cost of an
    allocation, which consists of the transmission
    costs due to the flow of queries.
  • vcost_ratio - the ratio of the variable costs when
    using the bidding mechanism to the variable
    costs of the static allocation.

46
Experimental Evaluation
  • Complete information concerning previous queries
    (there is still uncertainty):
  • The bidding mechanism reaches results close to
    those of the optimal allocation (reached by a
    central decision maker).
  • The bidding mechanism yields a lower standard
    deviation of the servers' utilities than the
    optimal allocation.
  • Incomplete information:
  • The results of the bidding mechanism are better
    than those of the static allocation.

47
Influence of Parameters: Complete Information, No
Movements of Old Datasets
  • As the standard deviation of the distances
    increases, vcost_ratio decreases.

48
Influence of Parameters - cont.
  • When increasing the number of servers and the
    number of datasets, vcost_ratio is not influenced.
  • query_price, answer_cost, storage_cost,
    dataset_size and retrieve_cost do not influence
    vcost_ratio.
  • usage, the std. of usage, and distance do not
    influence vcost_ratio.

49
Influence of Learning on the System
  • As epsilon decreases, vcost_ratio increases; the
    system behaves better.

50
Conclusion
  • We have considered the data allocation problem in
    a distributed environment.
  • We have presented the utility function of the
    servers, which expresses their preferences over
    the data allocation.
  • We have proposed using a bidding protocol for
    solving the problem.

51
Conclusion - cont.
  • We have considered complete as well as incomplete
    information.
  • For the complete information case, we have proved
    that the results obtained by the bidding
    mechanism are better than those of the static
    allocation, and close to the optimal results.

52
Conclusion - cont.
  • For the incomplete information environment, we
    have developed a neural-network-based learning
    mechanism.
  • For each area d, we build a neural network,
    trained by the server of d.
  • Using this network, we estimate the expected usage
    of other datasets, not currently stored by d.
  • We found, by simulation, that the results
    obtained are still significantly better than the
    static allocation.

53
Future Work
  • Datasets can be stored in more than one server.
  • Bounded rationality.
  • Repeated game.

54
Reaching Agreements Through Argumentation
  • Collaborators: Katia Sycara, Madhura Nirkhe,
    Amir Evenchik, and Ariel Stolman

55
Introduction
  • Argumentation: an iterative process emerging from
    exchanges among agents to persuade each other and
    bring about a change in intentions.
  • A logical model of the mental states of the
    agents: beliefs, desires, intentions, goals.
  • The logic is used to specify argument formulation
    and as a basis for an Automated Negotiation Agent.

56
Agents as Belief, Desire, Intention systems
  • Belief
  • information about the current world state
  • subjective
  • Desire
  • preferences over future world states
  • can be inconsistent (in contrast to goals)
  • Intentions
  • set of goals the agent is committed to achieve
  • the agent's runtime stack
  • Formal models: mostly modal logics with
    possible-worlds semantics

57
Logic Background
  • Modal logics: Kripke structures
  • Syntactic approaches
  • Bayesian networks

58
Modal Logics
  • Language: there is a set of n agents.
    Ki a: agent i knows a. Bi a: agent i believes a.
    P: a set of primitive propositions P, Q, ...
    Example: K1 a
  • Semantics: a Kripke structure consists of the
    following elements
  • A set of possible worlds W
  • pi(w): P → {True, False}, for each world w
  • n binary accessibility relations on the worlds:
    ~1, ~2, ...

59
Example of a Kripke Structure
  • W = {w1, w2, w3};  M, w1 ⊨ P ∧ ¬K1 P
    (see the sketch below)
(Diagram: world w1 satisfies {P, Q}, w2 satisfies {Q}, and w3 satisfies {P}; the worlds are connected by accessibility relations labeled 1 and 2.)
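A small sketch of how such a structure can be evaluated. The accessibility edges below are assumptions, since the diagram is only partly recoverable; the point is that P holds at w1 while K1 P fails, because agent 1 considers a world without P possible.

```python
# Illustrative Kripke-structure evaluation (not code from the talk).
# Worlds and valuations follow the example above; the accessibility
# edges are assumed.
worlds = {"w1", "w2", "w3"}
valuation = {"w1": {"P", "Q"}, "w2": {"Q"}, "w3": {"P"}}
# accessible[agent][world] = set of worlds the agent considers possible there.
accessible = {
    1: {"w1": {"w1", "w2"}, "w2": {"w1", "w2"}, "w3": {"w3"}},
    2: {"w1": {"w1", "w3"}, "w2": {"w2"}, "w3": {"w1", "w3"}},
}

def holds(prop: str, world: str) -> bool:
    """M, world |= prop for a primitive proposition."""
    return prop in valuation[world]

def knows(agent: int, prop: str, world: str) -> bool:
    """M, world |= K_agent prop: prop is true in every accessible world."""
    return all(holds(prop, w) for w in accessible[agent][world])

print(holds("P", "w1"))     # True:  P holds at w1
print(knows(1, "P", "w1"))  # False: agent 1 considers w2 possible, where P fails
```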
60
Axioms
  • Ki a ∧ Ki(a → b) → Ki b
  • If ⊢ a then ⊢ Ki a
  • Ki a → a
  • Ki a → Ki Ki a
  • ¬Ki a → Ki ¬Ki a
  • ¬Bi false
  • Each axiom can be associated with a condition on
    the binary relation (see the check below).

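As an illustration of the last point, the sketch below checks the correspondence between the T axiom (Ki a → a) and reflexivity of the accessibility relation on a tiny, assumed Kripke structure; it is a demonstration, not material from the slides.

```python
# Illustrative check: the T axiom holds when the accessibility relation is
# reflexive, and can fail otherwise. The structure below is assumed.
valuation = {"w1": {"P", "Q"}, "w2": {"Q"}, "w3": {"P"}}
reflexive = {"w1": {"w1", "w2"}, "w2": {"w2"}, "w3": {"w3"}}   # each w in R(w)
non_reflexive = {"w1": {"w2"}, "w2": {"w1"}, "w3": {"w1"}}     # no w in R(w)

def knows(rel, prop, world):
    """K prop holds at world iff prop is true in every accessible world."""
    return all(prop in valuation[w] for w in rel[world])

def t_axiom_holds(rel):
    """Check Ki a -> a at every world, for every primitive proposition."""
    props = {p for ps in valuation.values() for p in ps}
    return all((not knows(rel, p, w)) or (p in valuation[w])
               for w in valuation for p in props)

print(t_axiom_holds(reflexive))      # True: reflexivity validates Ki a -> a
print(t_axiom_holds(non_reflexive))  # False: e.g. at w2 the agent "knows" P, yet P is false there
```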
61
Problems in using Possible Worlds Semantics
  • Logical omniscience: the agent believes all the
    logical consequences of its beliefs.
  • The agent believes all tautologies.
  • Philosophers: possible worlds do not exist.

62
Minimal Models: a Partial Solution
  • The intension of a sentence: the set of possible
    worlds in which the sentence is satisfied.
  • Note: if two sentences have the same intension,
    then they are semantically equivalent.
  • A sentence is a belief at a given world if its
    intension is belief-accessible.
  • According to this definition, the agent's beliefs
    are not closed under inference; the agent may
    even believe in contradictions.

63
Minimal model example
(Diagram: a minimal-model example over the propositions P and Q.)
64
Beliefs, Desires, Goals and Intentions
  • We use time lines rather than possible worlds.
  • An agent's belief set includes beliefs concerning
    the world and beliefs concerning mental states of
    other agents.
  • An agent may be mistaken in both kinds of beliefs,
    and beliefs may be inconsistent.
  • The beliefs are used to generate arguments in the
    negotiations.
  • Desires may be inconsistent.
  • Goals: a consistent subset of the set of desires.
  • Intentions serve to contribute to one or more of
    the agent's desires.

65
Intentions
  • Two types: Intention-To and Intention-That
  • Intention-to refers to actions that are within
    the direct control of the agent.
  • Intention-that refers to propositions that are
    not directly within the agent's realm of
    control and for which it must rely on other
    agents; these can be achieved through
    argumentation.

66
Argumentation Types
  • A promise of a future reward.
  • A threat.
  • An appeal to past promise.
  • Appeal to precedents as counter example.
  • Appeal to prevailing practice.
  • Appeal to self-interests

67
Example 2 Robots
  • Two mobile robots on Mars each built to maximize
    its own utility.
  • R1 requests R2 to dig for a mineral. R2 refuses.
    R1 responds with a threat If you do not dig
    for me, I will break your antenna''. R2 needs to
    evaluate this threat.
  • Another possibility R1 promises a reward If
    you dig for me today, I will help you move your
    equipment tomorrow.'' R2 needs to evaluate the
    promise of future reward.

68
Usage of the logic
  • Specification for agent design: the model
    constrains certain planning and negotiation
    processes; axioms for argumentation types
  • The logic is used by the agents themselves: ANA
    (Automated Negotiation Agent)

69
ANA
  • Complies with the definition of an Agent Oriented
    Programming (AOP) system (Shoham)
  • The agent is represented using notions of mental
    states
  • The agent's actions depend on these mental
    states
  • The agent's mental state may change over time
  • Mental state changes are driven by inference
    rules.

70
The Block World Environment
(Diagram: a blocks-world grid with columns numbered 1 to 5 and blocks stacked on them.)
71
Mental State Model
  • Beliefs
  • b(agent1, world_state(blockE/5/1, blockD/4/1,
    blockC/3/1, blockB/2/1, blockA/1/1), 0, 2, t).
  • Desires
  • desire(desire1, 0, agent1, blockB/6/2, 39, 0).
  • desire(desire2, 0, agent1, blockB/1/1, 29, 0).
  • desire(desire3, 0, agent1, blockB/6/1, 35, 0).
  • desire(desire4, 0, agent1, blockE/2/1, 38, 0).
  • Goals
  • goal(agent1, 0, blockB/6/2/desire1,
    blockE/2/1/desire4)

72
Mental State Model
  • Desired World
  • desired_world(blockC/3/1/unused_block,
    blockA/1/1/unused_block, blockD/6/1/supporting,
    blockB/6/2/desire1, blockE/2/1/desire4).
  • Intentions
  • intention(1, agent1, 0, that,
    intention_is_done(agent1, 0), blockB/2/1/7/1, 0,
    towards_goals).
  • intention(2, agent1, 0, to,
    intention_is_done(agent1, 1), blockD/4/1/6/1, 0,
    supporting).
  • intention(3, agent1, 0, that,
    intention_is_done(agent1, 2), blockB/7/1/6/2, 0,
    desire1).
  • intention(4, agent1, 0, to,
    intention_is_done(agent1, 3), blockE/5/1/2/1, 0,
    desire4).

73
Agent Infrastructure: Agent Life Cycle
74
The Agent Life Cycle: Reading Messages
  • Types of messages
  • Queue
  • Waiting for answers
  • Negotiation and world change aspects
  • Inconsistency recovery

75
The Agent Life Cycle: Dealing with the Agent's
Own Threats
  • Detection
  • Make abstract threats concrete
  • Execute evaluation

76
The Agent Life Cycle: Planning the Next Step
  • Mental states usage
  • Backtracking
  • Better than current state
  • New state or dead end
  • Achievable plan

77
The Agent Life Cycle: Performing the Next Intention
  • Intention to - intention that
  • Other agent listening?
  • One argument per cycle.

78
Agent Definition Examples
  • Agent Type: agent_type(robot_name, memory-less).
  • Agent Capability: capable(robot, blockC/3/1/4/1).
  • Agent Beliefs: b(first_robot,
    capable(second_robot, AnyAction), 0, t).
  • Agent Desires: desire(first_desire, 0, robot,
    blockA/3/1, 15, 1).

79
Agent Infrastructure Agent Parameters List
  • Cooperativeness
  • Reliability (promise keeping)
  • Assertiveness
  • Performance threshold (asynchronous action)
  • Usage of the first argument
  • Argument direction
  • Knowledge about the other agent's desires
  • Knowledge about the other agent's capabilities
  • Measurement of the other agent's promise keeping
  • Execution of threats by the other agent

80
A promise of a future reward
  • Application conditions
  • The opponent agent can perform the requested action.
  • The reward action will help the opponent achieve
    a goal (requires knowledge of the opponent's
    desires).
  • The argument was not used in the near past.
  • Implementation (see the sketch after this list)
  • Generate the opponent's expected intentions.
  • Offer one of the intentions as a reward, preferring:
  • A mutual intention which the opponent cannot
    perform by itself (requires knowledge of the
    opponent's capabilities).
  • An opponent's intention which it cannot perform.
  • Any mutual intention.
  • Any of the opponent's intentions.

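The application conditions and preference order above can be put into a small sketch. This is an illustrative reconstruction, not ANA's actual code; the set-based representation of capabilities and intentions, and the requirement that the promiser itself be able to deliver the reward, are assumptions.

```python
# Illustrative sketch of the "promise of a future reward" argument.

def choose_reward_argument(requested_action,
                           opponent_capabilities: set,
                           own_capabilities: set,
                           opponent_intentions: set,
                           own_intentions: set,
                           recent_arguments: set):
    # Application conditions listed on the slide.
    if requested_action not in opponent_capabilities:
        return None                   # opponent cannot perform the request
    if "promise_reward" in recent_arguments:
        return None                   # argument already used in the near past
    mutual = opponent_intentions & own_intentions
    # Preference order for the offered reward, most preferred first.
    for candidates in (
        mutual - opponent_capabilities,               # mutual, opponent can't do it alone
        opponent_intentions - opponent_capabilities,  # opponent's, it can't perform
        mutual,                                       # any mutual intention
        opponent_intentions,                          # any opponent intention
    ):
        # Assumption: the promiser must itself be able to deliver the reward.
        doable = candidates & own_capabilities
        if doable:
            return ("promise_reward", requested_action, next(iter(doable)))
    return None


# Example in the spirit of the two-robots scenario above (values are made up).
print(choose_reward_argument(
    "dig_mineral",
    opponent_capabilities={"dig_mineral"},
    own_capabilities={"move_equipment"},
    opponent_intentions={"move_equipment"},
    own_intentions=set(),
    recent_arguments=set()))
# -> ('promise_reward', 'dig_mineral', 'move_equipment')
```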
81
A threat
  • Application conditions
  • The opponent agent can perform the requested action.
  • The threat action will interfere with the
    opponent's achieving some goals (requires
    knowledge of the opponent's desires).
  • The argument was not used in the near past.
  • Implementation
  • The agent chooses the best cube (requires
    knowledge of the opponent's capabilities).
  • The agent chooses the best desire.
  • The agent chooses a threatening action:
  • Moving out.
  • Blocking.
  • Interfering.

82
Request Evaluation Mechanism- Parameters List
  • DL (Doing Length)
  • NDL (Not Doing Length)
  • DTL (Doing That Length)
  • NDTL (Not Doing That Length)
  • PL (Punish Length)
  • PTL (Punish That Length)
  • DP (Doing Preference)
  • NDP (Not Doing Preference)

83
Request Evaluation Mechanism- Agent Parameters
  • CP: the agent's cooperativeness.
  • AS: the agent's assertiveness.
  • RL: the agent's reliability.
  • ORL: the other agent's reliability for keeping
    promises.
  • OTE: the other agent's percentage of threat
    execution.

84
Request Evaluation Mechanism- The Formulas
85
Request Evaluation Mechanism- The Formulas
86
Experiment Results
  • Negotiating is better than not negotiating only
    when each agent has particular expertise.
  • Negotiating is better than not negotiating only
    when the agents have complete information.
  • Negotiating is better than not negotiating only
    for mutually cooperative agents or for an
    aggressive agent with a cooperative opponent.
  • The environment (game time, resources) affects the
    negotiation results.

87
Negotiations vs. no negotiations
  • While agents that do not negotiate succeed in
    obtaining only 29.8% of their desires' preference
    values, the negotiating agents succeed in
    obtaining 40.4% on average. (F = 5.047, p < …)

88
Complete information vs. no information
  • Agents that had no information obtained a success
    rate of only 30.8%, while agents that had full
    information obtained 40.4% on average.
    (F = 4.326, p < …)

89
Using the first argument vs. using the best found
  • Agents that used the first argument obtained a
    success rate of only 34.8%, while agents that
    used the best argument obtained 40.4% on
    average, but this result is not significant.
    (F = 2.28, p < …)

90
Cooperative vs. Aggressive Agent
  • 23.5% vs. 41.4% (F = 10.78, p < …)
91
Cooperative and Aggressive Agents vs. No
Negotiations
  • 38.3% vs. 29.8% (F = 6.01, p < …)
92
No Negotiations vs. Aggressive Negotiation
  • 38.3% vs. 20.8% (F = 10.03, p < …)
93
Cooperative vs. Aggressive
  • Aggressive vs. cooperative: 41.4%
  • Two cooperative agents: 38.3%
  • No negotiation: 29.8%
  • Cooperative vs. aggressive: 23.5%
  • Two aggressive agents: 20.8%

94
Environment Constraints
(Chart: results as a function of the number of desires.)
95
Environment Constraints
  • Time for the game: 16.7% vs. 21.7% (F = 2.41, p < …)
96
Is it worth it to use formal methods for
Multi-Agent Systems in general and Negotiations
in particular?
97
Game-theory Based Frameworks (Non-cooperative
Models)
  • Strategic-negotiation model: based on the
    alternating-offers model of Rubinstein.
    Applications:
  • Data allocation (Schwartz and Kraus, AAAI-97),
  • Resource allocation, task distribution
    (Kraus, Wilkenfeld and Zlotkin, AIJ-95;
    Kraus, AMAI-97), hostage crisis (Kraus and
    Wilkenfeld, TSMC-93).

98
Advantages and Difficulties: Negotiation on Data
Allocation
  • Beneficial: results proved to be better than
    current methods; simple strategies.
  • Problems:
  • Need to develop utility functions
  • Finding possible actions; identifying optimal
    allocations is NP-complete
  • Incomplete information: game theory
    provides limited solutions.

99
Game-theory Based Frameworks (Non-cooperative
Models)
  • Auctions. Applications:
  • Data allocation (Schwartz and Kraus, ATAL-97),
  • Electronic commerce.
  • Subcontracting: based on principal-agent
    models. Applications:
  • Task allocation (Kraus, AIJ-96).

100
Advantages and Difficulties: Auctions for Data
Allocation
  • Beneficial: results proved to be better than
    current methods.
  • Problems:
  • Utility functions,
  • Applicable only when a server is concerned only
    about the data stored locally,
  • Difficult to find bidding strategies when there is
    incomplete information and the valuations are
    dependent on each other; no procedures.

101
Game-theory Based Frameworks (Cooperative Models)
  • Coalition theories. Applications:
  • Group and team formation (Shehory and Kraus, CI-99).
  • Benefits: well-defined concepts of stability;
    mechanisms to divide benefits.
  • Difficulties: utility functions, no procedures
    for coalition formation, exponential problems.
  • DPS model: combinatorial theories, operations
    research (Shehory and Kraus, AIJ-98).

102
Decision-theory Based Frameworks
  • Multi-attribute decision making. Application:
  • Intention reconciliation in SharedPlans
    (Grosz and Kraus, 98).
  • Benefits: using results of MADM, e.g., the
    specific method is not so important;
    standardization techniques.
  • Problems: choosing attributes, assigning values,
    choosing weights.

103
Logical Models
  • Modal logic BDI models. Applications:
  • Automated argumentation (Kraus, Sycara and
    Evenchik, AIJ-99).
  • Specification of SharedPlans (Grosz and Kraus,
    AIJ-96).
  • Bounded agents (Nirkhe, Kraus and Perlis, JLC-97).
  • Agents reasoning about other agents (Kraus and
    Lehmann, TCT-88; Kraus and Subrahmanian, IJIS-95).

104
Advantages and Difficulties: Logical Models
  • Formal models with well-studied properties;
    excellent for specification.
  • Problems:
  • Some assumptions are not valid (e.g.,
    omniscience).
  • Complexity problems.
  • There are no procedures for actions; a lot of
    programming, decision making, and development of
    preferences is required.

105
Physics Based Models
  • Physical models of particle dynamics.
    Applications: cooperation in large-scale
    multi-agent systems; freight deliveries within a
    metropolitan area (Shehory and Kraus, ECAI-96;
    Shehory, Kraus and Yadgar, ATAL-98).
  • Benefits: efficient; inherits the physics
    properties.
  • Problems: adjustments, potential functions

106
Summary
  • Benefits: formal models which have already been
    studied lead to efficient results; no need to
    reinvent the wheel.
  • Problems:
  • Restrictions and assumptions made by game theory
    are not valid in real-world MAS situations;
    extensions are needed.
  • It is difficult to develop utility functions.
  • Complexity problems.