ABM: Decision making

1
ABM: Decision making
  • Dr Andy Evans

2
Thinking in AI
  • Agent-based systems and other AI can contain
    standard maths etc.
  • But their real power comes from replicating how
    we act in the real world: assessing situations,
    reasoning about them, making decisions, and then
    applying rules.
  • Reasoning: if a café contains food, and food
    removes hunger, a café removes hunger.
  • Rules: if my hunger is high, I should go to a
    café.

3
Reasoning
  • Reasoning is the remit of 'brain in a box' AI.
  • Useful for:
  • Developing rulesets in AI.
  • Interpreting requests from people (Natural
    Language Processing).
  • Developing new knowledge and replicating
    sentience.

4
Reasoning
  • Programming languages developed in the late 60s
    / early 70s offered the promise of logical
    reasoning (Planner, Prolog).
  • These allow the manipulation of assertions about
    the world:
  • "man is mortal" and
  • "Socrates is a man" leads to
  • "Socrates is mortal".
  • Assertions are usually triples of
  • subject - predicate (relationship) - object.
  • There are interfaces for connecting Prolog and
    Java:
  • http://en.wikipedia.org/wiki/Prolog#Interfaces_to_
    other_languages
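
The Socrates syllogism above can be sketched as forward chaining over triples. This is a minimal illustration, not Prolog itself; the rule format and names are made up for the example.

```python
# A minimal sketch of Prolog-style reasoning over subject-predicate-object
# triples. The rule format and all names are illustrative.

facts = {("Socrates", "is_a", "man")}

# Each rule: if (?x, premise_pred, premise_obj) holds,
# then (?x, conclusion_pred, conclusion_obj) holds.
rules = [
    (("is_a", "man"), ("has_property", "mortal")),  # all men are mortal
]

def forward_chain(facts, rules):
    """Repeatedly apply rules to known facts until nothing new is derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (p_pred, p_obj), (c_pred, c_obj) in rules:
            for (subj, pred, obj) in list(derived):
                if pred == p_pred and obj == p_obj:
                    new_fact = (subj, c_pred, c_obj)
                    if new_fact not in derived:
                        derived.add(new_fact)
                        changed = True
    return derived

print(forward_chain(facts, rules))
# The derived set includes ("Socrates", "has_property", "mortal")
```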

5
Reasoning
  • This led to SHRDLU (MIT), which could do basic
    natural language parsing and responses, based on
    a limited knowledge of the world.
  • SHRDLU, however, was very limited.
  • Need more knowledge about the world.
  • Need, therefore, to get everyone involved in
    uploading knowledge.
  • Led to projects like Cyc / OpenCyc (1984;
    306,000 facts) and Open Mind Common Sense (1999;
    1,000,000 facts).

6
Reasoning
  • An obvious source for information is the web.
  • However, parsing natural language is not easy: a
    lot of the semantic content (meaning) relies on
    context.
  • It would be easier if language was embedded in an
    ontology (a structured collection of knowledge).
    For example, 'death' the event would be marked up
    as different from 'Death' the medieval figure.
  • Obvious similarities to metadata standards (e.g.
    Dublin Core), which led to work on structuring
    knowledge.

7
Semantic Web
  • 1990s: Tim Berners-Lee and others pushed for a
    Semantic Web, marked up for meaning in context.
  • 1999: Resource Description Framework (RDF), a way
    of marking up web content so it relates to a
    framework of triples defining the content. Very
    general.
  • Used, e.g., to identify initial Really Simple
    Syndication (RSS) feeds (now not RDF).
  • But it is not very specific; it needs another
    language to extend it.

8
DARPA Agent Markup Language
  • 1999: DARPA (US military research) wanted a
    machine-readable knowledge format for agents
    that could be used over the web.
  • Developed an RDF extension for this, which led to
    the Web Ontology Language (OWL).
  • This can be used to build more specific RDF
    ontologies.
  • http://www.schemaweb.info/schema/BrowseSchema.aspx
  • Example: Friend of a Friend (FOAF)

9
Issues
  • Easy to build self-contained ontologies, but what
    happens when the same term is used by different
    groups? How do we resolve conflicts?
  • Automated reasoning about knowledge is still not
    great, because ontologies don't capture the
    estimation and flexibility involved in real
    reasoning (e.g. accepting paradoxes).
  • By concentrating on formal definitions rather
    than experience, they tend to be circular.
    Alternative: folksonomies, but these are difficult
    to use in modelling.
  • Hard to move from knowledge to action. For this
    we need rules for when to act, and behaviours.

10
Thinking for agents
  • Building up rulesets is somewhat easier.
  • Though it is harder to represent them in a
    coherent fashion, as there's an infinite number of
    things one might do.
  • While there are many standards for representing
    agents' states, there are few for representing
    rules, and most are very limited.
  • Usually these are just encoded directly in a
    programming language.

11
Rulesets
  • Most rules are condition-state-action rules, like:
  • if hunger is high, go to café.
  • Normally there'd be a hunger state variable,
    given some value, and a series of thresholds.
  • A simple agent would look at the state variable
    and implement or not implement the associated
    rule.
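
The state variable, threshold, and rule described above can be sketched as a minimal agent. The threshold value and names are illustrative.

```python
# A minimal sketch of a condition-state-action agent: a hunger state
# variable, a threshold, and the associated rule. Values are illustrative.

class Agent:
    HUNGER_HIGH = 0.7    # threshold for 'hunger is high'

    def __init__(self, hunger=0.0):
        self.hunger = hunger          # state variable in [0, 1]
        self.location = "home"

    def step(self):
        # rule: if hunger is high, go to cafe (eating resets hunger)
        if self.hunger > self.HUNGER_HIGH:
            self.location = "cafe"
            self.hunger = 0.0

agent = Agent(hunger=0.9)
agent.step()
print(agent.location)  # cafe
```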

12
How do we decide actions?
  • OK, so we have condition-state-action rules like:
  • if hunger is high, go to café
  • and
  • if tiredness is high, go to bed.
  • But how do we decide which rule should be enacted
    if we have both?
  • How do real people choose?

13
Picking rules
  • One simple decision-making process is to choose
    randomly.
  • Another is to weight the rules and pick the rule
    with the highest weight.
  • Roulette wheel picking: weight the rules, then
    pick probabilistically based on the weights, using
    Monte Carlo sampling.
  • How do we pick the weights? Calibration? Do we
    adjust them with experience? For example, with a
    GA?
  • We may try to model specific cognitive biases:
  • http://en.wikipedia.org/wiki/List_of_cognitive_bi
    ases
  • Anchoring and adjustment: pick an educated or
    constrained guess at likelihoods or behaviour and
    adjust from that based on evidence.
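
Roulette wheel picking as described above can be sketched as follows: each rule gets a slice of the wheel proportional to its weight, and a single Monte Carlo draw selects a slice. Rule names and weights are illustrative.

```python
# A sketch of roulette wheel picking: each rule gets a slice of the wheel
# proportional to its weight, and a Monte Carlo draw picks the slice.

import random

def roulette_pick(rules, weights, rng=random):
    """Pick one rule with probability proportional to its weight."""
    spin = rng.uniform(0.0, sum(weights))
    cumulative = 0.0
    for rule, weight in zip(rules, weights):
        cumulative += weight
        if spin <= cumulative:
            return rule
    return rules[-1]    # guard against floating-point edge cases

rules = ["go to cafe", "go to bed"]
weights = [0.8, 0.2]
picks = [roulette_pick(rules, weights) for _ in range(10000)]
print(picks.count("go to cafe") / len(picks))   # roughly 0.8
```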

14
Reality is fuzzy
  • Alternatively, we may wish to hedge our bets and
    run several rules.
  • This is especially the case as rules tend to be
    binary (run / don't run), yet the world isn't
    always like this.
  • Say we have two rules:
  • if hot, open window
  • if cold, close window.
  • How hot is 'hot'? 30 degrees? 40 degrees?
  • Language isn't usually precise.
  • We often mix rules (e.g. open the window
    slightly).

15
Fuzzy Sets and Logic
  • Fuzzy Sets let us say something is 90% one
    thing and 10% another, without being
    illogical.
  • Fuzzy Logic then lets us use this in rules:
  • E.g. it's 90% right to do something, so I'll do
    it 90% - opening a window 90% of the way, for
    example.

16
Fuzzy Sets
  • We give things a degree of membership between 0
    and 1 in several sets (to a combined total of 1).
  • We then label these sets using human terms.
  • Encapsulates terms with no consensus definition,
    but we might use surveys to define them.

  [Figure: 'Cold' and 'Hot' membership functions
  plotted against temperature (0-40 degrees), with
  degree of membership on the vertical axis from 0
  to 1; the curves cross at a membership of 0.5.
  Example reading from the slide: 17 degrees is
  15% cold, 85% hot.]
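
Membership functions like those in the figure can be sketched in code. The linear shape and 0-40 degree range are read off the slide; the exact breakpoints here are illustrative, so the numbers produced will not match the slide's example exactly.

```python
# A sketch of complementary 'Hot'/'Cold' membership functions over a
# 0-40 degree range. Breakpoints are illustrative.

def hot_membership(temp, lo=0.0, hi=40.0):
    """Degree of membership in 'Hot': 0 at lo, rising linearly to 1 at hi."""
    if temp <= lo:
        return 0.0
    if temp >= hi:
        return 1.0
    return (temp - lo) / (hi - lo)

def cold_membership(temp, lo=0.0, hi=40.0):
    """Memberships sum to 1, as on the slide."""
    return 1.0 - hot_membership(temp, lo, hi)

print(hot_membership(20.0), cold_membership(20.0))  # 0.5 0.5
```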
17
Fuzzy Logic models
  • We give our variables membership functions, and
    express the variables as nouns (length,
    temperature) or adjectives (long, hot).
  • We can then build up linguistic equations (IF
    length IS long AND temperature IS hot THEN
    openWindow).
  • Actions are then based on schemes for converting
    from fuzzy memberships of inputs to membership
    functions of outputs.

18
Bayesian Networks
  • Of course, it may be that we see people in one
    state, and their actions, but have no way of
    replicating the rulesets in human language.
  • In this case, we can generate a Bayesian Network.
  • These give probabilities that states will occur
    together.
  • This can be interpreted as "if A then B".
  • They allow you to update the probabilities on new
    evidence.
  • They allow you to chain these rules together to
    make inferences.

19
Bayesian Networks
  • In a Bayesian Network the states are linked by
    probabilities, so:
  • if A then B; if B then C; if C then D.
  • Not only this, but these can be updated when an
    event A happens, propagating the new
    probabilities by using the new probability
    of B to recalculate the probability of C, etc.
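
The propagation step described above can be sketched with the law of total probability. The conditional probabilities here are illustrative, and real Bayesian network libraries do considerably more than this.

```python
# A sketch of propagating new evidence along a chain A -> B -> C using
# the law of total probability. Conditional probabilities are illustrative.

def propagate(p_parent, p_given_true, p_given_false):
    """P(child) = P(child|parent)P(parent) + P(child|~parent)P(~parent)."""
    return p_given_true * p_parent + p_given_false * (1.0 - p_parent)

p_a = 1.0                          # event A has just been observed
p_b = propagate(p_a, 0.9, 0.2)     # recalculate P(B)...
p_c = propagate(p_b, 0.8, 0.1)     # ...then use it to recalculate P(C)
print(p_b, p_c)                    # 0.9 and roughly 0.73
```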

20
Expert Systems
  • All these elements may be brought together in an
    Expert System.
  • These are decision trees in which rules and
    probabilities link states.
  • Forward chaining: you input states and the system
    runs through the rules to suggest the most useful
    course of action.
  • Backward chaining: you input goals, and the
    system tells you the states you need to achieve
    to get there.
  • They don't have to use Fuzzy Sets or Bayesian
    probabilities, but often do.
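
Backward chaining as described above can be sketched as a recursion from a goal down to the base states that must hold. The rules and state names are illustrative.

```python
# A sketch of backward chaining: given a goal, recurse through the rules
# to find the base states needed to achieve it. Rules are illustrative.

rules = {
    # conclusion: premises that must hold for it
    "not hungry": ["at cafe"],
    "at cafe": ["knows cafe location", "can walk there"],
}

def backward_chain(goal, rules):
    """Return the base states required to achieve `goal`."""
    if goal not in rules:
        return [goal]                  # no rule concludes it: a base state
    needed = []
    for premise in rules[goal]:
        needed.extend(backward_chain(premise, rules))
    return needed

print(backward_chain("not hungry", rules))
# ['knows cafe location', 'can walk there']
```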

21
How do we have confidence in our reasoning?
  • Expert systems may allow you to assess how
    confident you are that a rule should be applied,
    though it isn't always clear how confidences add
    up.
  • For example:
  • Man is mortal (confidence 0.99)
  • Socrates is a man (confidence 0.5)
  • Final confidence that Socrates is mortal: ?
  • Dempster-Shafer theory lets us deal with the
    confidence that an event is happening (it assigns
    a confidence to each potential event, totalling
    one across all possible events, and allows us to
    combine multiple groups of evidence).
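
Dempster's rule of combination can be sketched for the Socrates example. Turning each statement's confidence into a mass function, with the remainder assigned to "don't know" (the whole frame), is a modelling assumption made here, not something the slides specify.

```python
# A sketch of Dempster's rule of combination. The translation of the
# slide's confidences into mass functions is an assumption.

from itertools import product

THETA = frozenset({"mortal", "immortal"})   # the frame of discernment

def combine(m1, m2):
    """Combine two mass functions defined over frozenset hypotheses."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        overlap = a & b
        if overlap:
            combined[overlap] = combined.get(overlap, 0.0) + wa * wb
        else:
            conflict += wa * wb            # mass on contradictory pairs
    # renormalise by the non-conflicting mass
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

m1 = {frozenset({"mortal"}): 0.99, THETA: 0.01}  # man is mortal (0.99)
m2 = {frozenset({"mortal"}): 0.5, THETA: 0.5}    # Socrates is a man (0.5)
print(combine(m1, m2))
```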

22
Picking rules
  • However, ideally we want a cognitive framework to
    embed rule-choice within.
  • Something that embeds decision making within a
    wider model of thought and existence.

23
Belief-Desire-Intention
  • We need some kind of reasoning architecture that
    allows the agents to decide, or be driven to,
    decisions.
  • Most famous is the Belief-Desire-Intention (BDI)
    model.
  • Beliefs: facts about the world (which can be
    rules).
  • Desires: things the agent wants to do / happen.
  • Intentions: actions the agent has chosen,
    usually from a set of plans.
  • Driven by Events, which cascade a series of
    changes.

24
Decision making
  • BDI decisions are usually made by assuming a
    utility function. This might include:
  • whichever desire is most important wins;
  • whichever plan achieves most desires;
  • whichever plan is most likely to succeed;
  • whichever plan does the above, after testing in
    multiple situations;
  • whichever a community of agents decides on (e.g.
    by voting).
  • Desires are goals, rather than more dynamic
    drivers.
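
One of the utility functions listed above can be sketched: pick whichever plan best satisfies the most important desires. The plan and desire names, weights, and data layout are all illustrative.

```python
# A sketch of utility-based plan selection for a BDI-style agent.
# Names, weights, and the data layout are illustrative.

def choose_plan(plans, desires):
    """Pick the plan with the highest importance-weighted satisfaction."""
    def utility(plan):
        return sum(importance * plan["satisfies"].get(desire, 0.0)
                   for desire, importance in desires.items())
    return max(plans, key=utility)

desires = {"eat": 0.9, "rest": 0.4}   # importance weights per desire
plans = [
    {"name": "go to cafe", "satisfies": {"eat": 1.0}},
    {"name": "go to bed",  "satisfies": {"rest": 1.0}},
]
print(choose_plan(plans, desires)["name"])  # go to cafe
```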

25
The PECS model
  • A similar model is PECS; it is more sophisticated,
    as it includes internal drivers:
  • Physis: physical states
  • Emotional: emotional states
  • Cognitive: facts about the world
  • Social status: position within society etc.
  • On the basis of these, the agent plans and picks
    a behaviour.
  • Ultimately, though, these are decided between by
    a weighted utility function.

26
Thinking for agents
  • Ultimately we have to trade off complexity of
    reasoning against speed of processing.
  • On the one hand, behaviour developed by a GA/GP
    would be dumb, but fast (which is why it is used
    to control agents in games).
  • On the other, full cognitive architecture systems
    like Soar, CLARION, and Adaptive Control of
    Thought-Rational (ACT-R) are still not perfect,
    and take a great deal of time to set up.

27
Further reading
  • Michael Wooldridge (2009) An Introduction to
    MultiAgent Systems, Wiley (2nd Edition).
  • MAS architectures, BDI, etc.
  • Stuart Russell and Peter Norvig (2010)
    Artificial Intelligence: A Modern Approach,
    Prentice Hall (3rd Edition).
  • Neural nets, language processing, etc.