1
Prospective Logic Agents
  • Luís Moniz Pereira
  • Gonçalo Lopes

2
Broad Outlook
  • Logic programming (LP) languages and semantics
    enabling a program to talk about its own
    evolution through time are already well
    established.

3
Broad Outlook
  • Deliberative agents already have powerful means
    for extracting inferences, given their knowledge
    at time t.
  • How can they take advantage of an evolving logic
    program's capabilities?
  • The idea is to logically model agents capable of
    making inferences based not only on their current
    knowledge, but on knowledge that they might
    expect to have at a later time t_n.

4
Broad Outlook
  • Agents can use abductive reasoning to produce
    hypothetical scenarios given partial observations
    from the environment and current knowledge,
    possibly subject to a set of integrity
    constraints.

5
Broad Outlook
  • The set of possible future scenaria can be
    exponentially large due to combinatorial
    explosion, so a means to efficiently prune the
    search space is required, a priori preferring
    some predictions over others.

6
Broad Outlook
  • Once possible scenaria are finally generated, a
    means to prefer a posteriori is also required, so
    that the agent can commit to a final choice of
    action.
  • The choices of the agent should of course be
    dynamically subject to revision, so that it may
    backtrack on previous assumptions and even change
    its preferences based on past experience.

7
Agent Cycle
8
Language
  • Let L be a first order language. A domain rule in
    L is of the form
  • A ← L1, ..., Lt   (t ≥ 0)
  • An integrity constraint is a rule of the form
  • ⊥ ← L1, ..., Lt   (t > 0)
  • A is a domain atom in L, and L1, ..., Lt are domain
    literals. ⊥ is a domain atom denoting falsity.

9
Language
  • A (logic) program P over L is a set of domain
    rules and integrity constraints, standing for all
    their ground instances.
  • Every program P has an associated set of
    abducibles A ⊆ L.
  • Abducibles are hypotheses which can be assumed to
    extend the theory, and therefore they do not
    appear in any rule heads of P.

10
Preferring Abducibles
  • An abducible can have preconditions governing
    when it may be considered, expressed by the
    following rule
  • consider(A) ← expect(A), not expect_not(A)
  • These preconditions represent domain-specific
    knowledge and are used a priori to constrain
    generation of abducibles.
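  • As a hedged illustration (the thirsty and
    late_evening atoms are hypothetical, not from the
    deck), such gating might look like
  • expect(drink_coffee) ← thirsty
  • expect_not(drink_coffee) ← late_evening
  • Here consider(drink_coffee) holds only in
    scenaria where thirsty holds and late_evening
    does not.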

11
Preferring Abducibles
  • Preferences between abducible literals can also
    be expressed via the binary relation is more
    relevant than, denoted by the operator ◁
  • A ◁ B ← L1, ..., Lt   (t ≥ 0)
  • This relation has been extended to sets of
    abducibles in [pla07].
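  • As a hedged illustration (atoms hypothetical), a
    preference rule might read
  • drink_water ◁ drink_coffee ← late_evening
  • Late in the evening, abducing drink_water is more
    relevant than abducing drink_coffee, so scenaria
    assuming only the less relevant alternative can
    be pruned a priori.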

12
Prospective Logic Agents
  • The problem of prospection can be stated in this
    framework as one of finding abductive extensions
    to the agent's current knowledge theory which are
    both
  • Relevant under the agent's current goals
  • Preferred extensions w.r.t. the preference rules
  • The basic problem can be arbitrarily complicated
    by introducing belief revision requirements,
    utility theory, etc.

13
Goals and Observations
14
Observations
  • An observation is expressed as the quaternary
    relation
  • observe(Observer,Reporter,Observation,Value)
  • Observations can stand for actions, goals or
    perceptions.
  • A distinction should be made between goals
    (intentions) and desires.

15
Goals and Desires
  • A goal can be represented as an observation from
    the program to itself which must be proven true.
  • A desire can be represented as a possibility to
    fulfill a goal. We represent these by
    on_observe/4 literals, with a structure analogous
    to the observe/4 literals.
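  • For instance, following the observe/4 structure
    (the goal name save_belongings is hypothetical),
    the desire that the program itself prove a goal
    true could be stated as
  • on_observe(program, program, save_belongings, true)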

16
Goals and Desires
  • In any single moment, an agent can have a
    multitude of desires, but only some of them will
    actually become intentions.
  • We represent evaluation of an intention by the
    following rules
  • try(G) ← G
  • try(G) ← not try_not(G)
  • try_not(G) ← not try(G)
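  • A worked sketch: the last two rules form an even
    loop through default negation, so for a desire
    such as save_belongings scenario generation
    yields one scenario containing
    try(save_belongings) and another containing
    try_not(save_belongings); preferences then decide
    which one the agent commits to.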

17
Example Tornado
  • Consider a scenario where weather forecasts have
    been transmitted foretelling the possibility of a
    tornado.
  • To prevent emergencies, it is necessary to act
    beforehand, proactively, so as to increase the
    chances of success.

18
Example Tornado
  • The following prospective logic program aims to
    deal with this scenario
  • ⊥ ← consider(tornado), not deal_with_emergency(tornado)
  • expect(tornado) ← weather_forecast(tornado)
  • deal_with_emergency(tornado) ← consider(decide_board_house)
  • expect(decide_board_house) ← consider(tornado)
  • ⊥ ← decide_board_house, not boards_at_home, not
    go_buy_boards

19
Example Tornado
  • The weather forecast implies that a tornado is
    expected and so the above program actually
    encodes two possible predictions about the
    future.
  • In one of the scenaria, the tornado is absent,
    but in the scenario where it is actually
    confirmed, the decision to board up the house
    follows as a necessity.
  • Scenaria generation can trigger goals, which in
    turn can trigger more scenaria generation.

20
Generating Scenaria
21
Generating Scenaria
  • Once the set of the agent's active goals is
    known, possible scenaria can be found by
    reasoning backwards from the goals into
    abducibles under consider/1 literals.
  • Each abducible represents a choice: it can be
    assumed either true or false, leading to a
    combinatorial explosion of possible abducible
    valuations in a program.

22
Generating Scenaria
  • In practice, the combinations are contained and
    made tractable by a number of factors.
  • First, we consider only the relevant part of the
    program for collecting considered abducibles.
  • A priori preference rules and preconditions also
    rule out a majority of latent hypotheses, thus
    pruning the search space efficiently, using
    domain-specific knowledge.

23
Top-down Consideration, Bottom-up Generation
  • Considered abducibles are found by reasoning
    backwards from the goals.
  • However, assuming an abducible as true or false
    may trigger unforeseen side-effects on the rest
    of the program.
  • For this reason, scenario generation is obtained
    by reasoning forwards from selected abducibles to
    find relevant consequences.
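  • One standard encoding of this choice under the
    Stable Model semantics (a sketch, not necessarily
    the authors' exact implementation) is an even
    loop over considered abducibles
  • abduce(A) ← consider(A), not abduce_not(A)
  • abduce_not(A) ← consider(A), not abduce(A)
  • Each stable model then fixes a truth value for
    every considered abducible and, being computed
    bottom-up, contains all forward consequences of
    that choice.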

24
Example Emergencies
  • Consider the emergency scenario in the London
    underground [kowalski06], where smoke is
    observed, and we want to be able to provide an
    explanation for this observation.
  • Smoke can be caused by fire, and the possibility
    of flames should be considered.
  • But smoke could also be caused by tear gas, in
    case of police intervention.

25
Example Emergencies
  • The tu literal stands for true or undefined.
  • smoke ← consider(fire)
  • flames ← consider(fire)
  • smoke ← consider(tear_gas)
  • eyes_cringing ← consider(tear_gas)
  • expect(fire)
  • expect(tear_gas)

26
Example Emergencies
  • ⊥ ← observation(smoke), not smoke
  • observation(smoke)
  • fire ◁ tear_gas
  • ⊥ ← flames, not observe(program, user, flames, tu)
  • ⊥ ← eyes_cringing, not observe(program, user,
    eyes_cringing, tu)
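  • A worked reading of the program: abducing fire
    yields a scenario with smoke and flames, while
    abducing tear_gas yields one with smoke and
    eyes_cringing. The last two constraints reject
    any scenario whose side-effect is not confirmed,
    at least as undefined, by an observation, and
    fire ◁ tear_gas prefers the fire explanation a
    priori when both survive.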

27
Preferring a posteriori
28
Quantitative Preferences
  • Once each scenario's model is known, there are a
    number of strategies which can be followed for
    choosing between them.
  • A possible way to achieve this is to use a
    utility theory to assign, in a domain-specific
    way, a numerical value to each scenario, which is
    computed during scenario generation and used as
    an element for choice a posteriori.
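  • A minimal sketch of such an assignment for the
    smoke example (the utility values are
    hypothetical, not from the deck)
  • utility(fire, 0.8)
  • utility(tear_gas, 0.2)
  • During scenario generation each abduced literal A
    contributes its utility(A, U) to a running total
    for the scenario; a posteriori, the agent commits
    to a scenario with maximal total.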

29
Qualitative Preferences
  • Numerical assessment of the value of each
    scenario can be effective in many situations, but
    there are occasions where a more qualitative
    expression of preference is desired.
  • This is the role of the moral theory presented in
    the figure. Related work [pereira07] explores
    this qualitative preference mechanism in more
    detail.

30
Exploiting Oracles
  • In both quantitative and qualitative cases, the
    possibility of acquiring additional information
    to make a choice is highly advantageous.
  • Prospective logic agents use the concept of
    oracles to access additional information from
    external systems (e.g. sensors, the user, etc.)

31
Exploiting Oracles
  • Queries to oracles are represented using the
    syntax for observations presented previously, in
    the form
  • observe(agent, oracle_name, query, Value) ←
    oracle, L1, ..., Lt   (t ≥ 0)
  • Since oracles can be expensive to query, a
    principle of parsimony is enforced via the oracle
    literal, which is used as a toggle to
    allow/disallow queries to oracles.
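  • A hedged instance for the smoke example (the
    ask_user interface literal is hypothetical)
  • observe(program, user, flames, Value) ← oracle,
    ask_user(flames, Value)
  • With the oracle toggle off this rule cannot fire,
    so the constraint on flames simply prunes the
    fire scenario; toggling oracle on allows the
    query, and the user's answer decides.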

32
Exploiting Oracles
  • Information obtained from the oracles can have
    side-effects in the rest of the program as well.
  • After the oracle step, it may be necessary to
    relaunch the procedure in order to reevaluate
    simulation conditions.

33
Consequences of Prospection
  • Even after all the strategies for choice have
    been used, more than a single desirable scenario
    may still remain.
  • In this case, we may have to iterate the
    procedure to incorporate additional information
    until we reach a fix-point.
  • Additionally, we may branch the simulation to
    consider a number of different possible scenarios
    in parallel.

34
Example Automated Diagnosis
  • Consider a robotic gripper immersed in a
    collaborative assembly-line environment.
  • Commands issued to the gripper from its
    controller are added as updates to its evolving
    knowledge base, as are regular readings from the
    sensor.
  • Diagnosis requests by the system are issued to
    the gripper's prospecting controller, in order to
    check for abnormal behaviour.

35
Example Automated Diagnosis
  • When the system is confronted with multiple
    possible diagnoses, experiments can be requested
    of the controller. The gripper can have three
    possible logical states: open, closed, or
    something intermediate. The available gripper
    commands are simply open and close.

36
Example Automated Diagnosis
  • open ← request_open, not consider(abnormal(gripper))
  • open ← sensor(open), not consider(abnormal(sensor))
  • intermediate ← request_close, manipulating_part,
    not consider(abnormal(gripper)), not
    consider(lost_part)
  • intermediate ← sensor(intermediate), not
    consider(abnormal(sensor))
  • closed ← request_close, not manipulating_part,
    not consider(abnormal(gripper))
  • closed ← sensor(closed), not consider(abnormal(sensor))
  • ⊥ ← open, intermediate
  • ⊥ ← open, closed
  • ⊥ ← closed, intermediate

37
Example Automated Diagnosis
  • expect(abnormal(gripper))
  • expect(lost_part) ← manipulating_part
  • expect(abnormal(sensor))
  • expect_not(abnormal(sensor)) ← manipulating_part,
    observe(system, gripper, ok(sensor), true)
  • observe(system, gripper, Experiment, Result) ←
    oracle, test_sensor(Experiment, Result)
  • abnormal(gripper) ◁ abnormal(sensor) ←
    request_open, not sensor(open), not
    sensor(closed)
  • lost_part ◁ abnormal(gripper) ←
    observe(system, gripper, ok(sensor), true),
    sensor(closed)
  • abnormal(gripper) ◁ lost_part ← not (lost_part ◁
    abnormal(gripper))
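  • Reading the preference rules: if opening was
    requested but the sensor reports neither open nor
    closed, a gripper fault is preferred to a sensor
    fault; if the sensor was confirmed ok by
    experiment yet still reads closed, losing the
    part is preferred to a gripper fault; and by
    default a gripper fault is preferred to a lost
    part, unless the opposite preference already
    holds.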

38
Example Automated Diagnosis
  • In this case, there is an available experiment to
    test whether the sensor is malfunctioning, but
    resorting to it should be avoided as much as
    possible, as it will imply occupying additional
    resources from the assembly-line coalition.

39
Implementation
  • The presented system has already been implemented
    using state-of-the-art logic programming
    frameworks.
  • XSB Prolog was used for the top-down abductive
    reasoning under the Well-Founded Semantics.
  • Smodels was used for the bottom-up scenaria
    generation under the Stable Model semantics.

40
Prospecting the Future
  • Both [kowalski06] and [poole00] represent
    candidate actions by abducibles and use logic
    programs to derive possible consequences, to help
    in deciding between them.
  • However, they do not derive consequences of
    abducibles that are not actions, such as
    observations for example. Nor do they consider
    how to determine the value of unknown conditions
    (e.g. by using an oracle).

41
Prospecting the Future
  • Compared with Poole and Kowalski, one of the most
    interesting features of our approach is the use
    of Smodels to perform a kind of forward reasoning
    to derive the consequences of candidate
    hypotheses.
  • These consequences may then lead to a further
    cycle of abductive exploration, intertwined with
    preferences for pruning and for directing search.

42
Prospecting the Future
  • A number of additional challenges still need to
    be addressed, however, for the system to scale up
    to scenarios of greater complexity.
  • Branching update sequences need to be extended to
    handle lookahead of arbitrary length into the
    future.
  • Preferences over observations are also desirable,
    so that agents can best select which oracles to
    query during prospection.

43
Prospecting the Future
  • Prospective agents could use abduction not only
    to find the means to further their own goals,
    but also to abduce the goals and intentions of
    other agents.
  • Prospection over the past is also of interest, so
    as to gain the ability to perform counterfactual
    reasoning in order to improve performance in
    future tasks.

44
Bibliography (excerpt)
  • [pereira07] L. M. Pereira and A. Saptawijaya,
    Modelling Morality with Prospective Logic, 2007.
  • [kowalski06] R. Kowalski, The Logical Way to be
    Artificially Intelligent, 2006.
  • [poole00] D. Poole, Abducing through Negation as
    Failure: Stable Models within the Independent
    Choice Logic, 2000.
  • [pla07] L. M. Pereira, G. Lopes and P. Dell'Acqua,
    Pre and Post Preferences over Abductive Models,
    2007.