1
Intelligent Behaviors for Simulated Entities
I/ITSEC 2006 Tutorial
Presented by:
Ryan Houlette, Stottler Henke Associates, Inc.
houlette@stottlerhenke.com, 617-616-1293
Jeremy Ludwig, Stottler Henke Associates, Inc.
ludwig@stottlerhenke.com, 541-302-0929
2
Outline
  • Defining intelligent behavior
  • Authoring methodology
  • Technologies
  • Cognitive architectures
  • Behavioral approaches
  • Hybrid approaches
  • Conclusion
  • Questions

3
The Goal
  • Intelligent behavior
  • a.k.a. entities acting autonomously
  • generally replacements for humans
  • when humans are not available
  • scheduling issues
  • location
  • shortage of necessary expertise
  • simply not enough people
  • when humans are too costly
  • Defining Intelligent Behavior
  • Authoring Methodology
  • Technologies
  • Cognitive Architectures
  • Behavioral Approaches
  • Hybrid Approaches
  • Conclusion

4
Intelligent Behavior
  • Pretty vague!
  • General human-level AI not yet possible
  • computationally expensive
  • knowledge authoring bottleneck
  • Must pick your battles
  • what is most important for your application
  • what resources are available

5
Decision Factors
  • Entity skill set
  • Fidelity
  • Autonomy
  • Scalability
  • Authoring

6
Factor: Entity Skill Set
  • What does the entity need to be able to do?
  • follow a path
  • work with a team
  • perceive its environment
  • communicate with humans
  • exhibit emotion/social skills
  • etc.
  • Depends on purpose of simulation, type of
    scenario, echelon of entity

7
Factor: Fidelity
  • How accurate does the entity's behavior need to
    be?
  • correct execution of a task
  • correct selection of tasks
  • correct timing
  • variability/predictability
  • Again, depends on purpose of simulation and
    echelon
  • training → believability
  • analysis → correctness

8
Factor: Autonomy
  • How much direction does the entity need?
  • explicitly scripted
  • tactical objectives
  • strategic objectives
  • Behavior reusable across scenarios
  • Dynamic behavior → less brittle

9
Factor: Scalability
  • How many entities are needed?
  • computational overhead
  • knowledge/behavior authoring costs
  • Can be mitigated
  • aggregating entities
  • distributing entities

10
Factor: Authoring
  • Who is authoring the behaviors?
  • programmers
  • knowledge engineers
  • subject matter experts
  • end users / soldiers
  • Training/skills required for authoring
  • Quality of authoring tools
  • Ease of modifying/extending behaviors

11
Choosing an Approach
[diagram: weighing Skill Set, Fidelity, and Autonomy against Scalability and Ease of Authoring]
  • Also ease of integration with simulation....

12
Agent Technologies
  • Wide range of possible approaches
  • Will discuss the two extremes

[diagram: a spectrum from deliberative Cognitive Architectures (EPIC, ACT-R, Soar) to reactive Behavioral Approaches (scripting, FSMs)]
13
Authoring Methodologies
[diagram: a Behavior Model authored on top of an Agent Architecture, which connects to the Simulation]
  • Defining Intelligent Behavior
  • Authoring Methodology
  • Technologies
  • Cognitive Architectures
  • Behavioral Approaches
  • Hybrid Approaches
  • Conclusion

14
Basic Authoring Procedure
  1. Determine desired behavior
  2. Build behavior model
  3. Run simulation
  4. Evaluate entity behavior
  5. If the behavior is satisfactory, done; otherwise
     refine the behavior model and return to step 3
15
Iterative Authoring
  • Often useful to start with limited set of
    behaviors
  • particularly when learning new architecture
  • depth-first vs. breadth-first
  • Test early and often
  • Build initial model with revision in mind
  • good software design principles apply
    modularity, encapsulation, loose coupling
  • Determining why model behaved incorrectly can be
    difficult
  • some tools can help provide insight

16
The Knowledge Bottleneck
  • Model builder is not subject matter expert
  • Transferring knowledge is labor-intensive
  • For TacAir-Soar, 70-90% of model dev. time
  • To reduce the bottleneck
  • Repurpose existing models
  • Use SME-friendly modeling tools
  • Train SMEs in modeling skills
  • → Still an unsolved problem

17
The Simulation Interface
  • Simulation sets bounds of behavior
  • the primitive actions entities can perform
  • the information about the world that is available
    to entities
  • Can be useful to move interface up
  • if simulation interface is too low-level
  • abstract away simulation details
  • in wrapper around agent architecture
  • in library within the behavior model itself
  • enables the behavior model to be written in
    terms of meaningful units of behavior
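As a sketch of "moving the interface up," the Python below wraps a primitive simulation API in behavior-level actions. All class and method names here are hypothetical illustrations, not from the tutorial or any real simulation:

```python
class LowLevelSim:
    """Stand-in for a simulation exposing only primitive operations."""
    def __init__(self):
        self.pos = (0.0, 0.0)
    def set_position(self, x, y):
        self.pos = (x, y)
    def get_position(self):
        return self.pos

class SimInterface:
    """Wrapper that exposes meaningful units of behavior to the model."""
    def __init__(self, sim):
        self.sim = sim
    def move_to(self, waypoint):
        # A real wrapper would issue primitive commands over many ticks;
        # here we jump straight to the waypoint to keep the sketch short.
        self.sim.set_position(*waypoint)
    def at(self, waypoint, tol=0.5):
        x, y = self.sim.get_position()
        wx, wy = waypoint
        return abs(x - wx) <= tol and abs(y - wy) <= tol

sim = SimInterface(LowLevelSim())
sim.move_to((10.0, 5.0))
print(sim.at((10.0, 5.0)))   # True
```

The behavior model then talks only to `SimInterface`, so porting to another simulation means rewriting the wrapper, not the model.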

18
Cognitive Architectures
  • Overview
  • EPIC, ACT-R, Soar
  • Examples of Cognitive Models
  • Strengths / Weakness of Cognitive Architectures
  • Defining Intelligent Behavior
  • Authoring Methodology
  • Technologies
  • Cognitive Architectures
  • Behavioral Approaches
  • Hybrid Approaches
  • Conclusion

19
Introduction
  • What is a cognitive architecture?
  • "a broad theory of human cognition based on a
    wide selection of human experimental data and
    implemented as a running computer simulation"
    (Byrne, 2003)
  • Why cognitive architectures?
  • Advance psychological theories of cognition
  • Create accurate simulations of human behavior

20
Introduction
  • What is cognition?
  • Where does psychology fit in?

21
Cognitive Architecture Components
22
A Theory The Model Human Processor
  • Some principles of operation
  • Recognize-act cycle
  • Fitts' law
  • Power law of practice
  • Rationality principle
  • Problem space principle

(from Card, Moran, & Newell, 1983)
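Two of these principles have compact mathematical forms. The sketch below illustrates Fitts' law and the power law of practice; the coefficients `a`, `b`, and `alpha` are placeholder values, not empirical constants from Card, Moran, & Newell:

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.1):
    """Fitts' law: MT = a + b * log2(2D / W).
    Farther or smaller targets take longer to point at."""
    return a + b * math.log2(2 * distance / width)

def power_law_of_practice(t1, n, alpha=0.4):
    """Power law of practice: time on trial n is T_n = T_1 * n^(-alpha).
    Performance speeds up as a power function of practice trials."""
    return t1 * n ** (-alpha)

# Pointing at a farther target takes longer:
print(fitts_movement_time(200, 20) > fitts_movement_time(100, 20))   # True
# The hundredth trial is faster than the first:
print(power_law_of_practice(2.0, 100) < power_law_of_practice(2.0, 1))  # True
```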
23
Architecture
  • Definition
  • "a broad theory of human cognition based on a
    wide selection of human experimental data and
    implemented as a running computer simulation"
    (Byrne, 2003)
  • Two main components in modeling
  • Cognitive model programming language
  • Runtime Interpreter

24
EPIC Architecture
  • Processors
  • Cognitive
  • Perceptual
  • Motor
  • Operators
  • Cognitive
  • Perceptual
  • Motor
  • Knowledge Representation

(from Kieras, http://www.eecs.umich.edu/~kieras/epic.html)
25
Model
[diagram: a model comprises a Task Description and a Task Strategy written in the Architecture Language, executed by the Architecture Runtime within a Task Environment]
26
Task Description
  • There are two points on the screen A and B.
  • The task is to point to A with the right hand,
    and press the Z key with the left hand when it
    is reached.
  • Then point from A to B with the right hand and
    press the Z key with the left hand.
  • Finally point back to A again, and press the Z
    key again.

27
Task Environment
[diagram: a screen showing two points, A and B]
28
Task Strategy: EPIC Production Rules
29
EPIC Production Rule
  (Top_point_A
   IF
    ((Step Point AtA)
     (Motor Manual Modality Free)
     (Motor Ocular Modality Free)
     (Visual ?object Text My_Point_A))
   THEN
    ((Send_to_motor Manual Perform Ply Cursor ?object Right)
     (Delete (Step Point AtA))
     (Add (Step Click AtA))))

30
ACT-R and Soar
  • Motivations
  • Features
  • Models

31
Initial Motivations
  • ACT-R
  • Memory
  • Problem solving
  • Soar
  • Learning
  • Problem solving

32
ACT-R Architecture
(from Budiu, R., http://act-r.psy.cmu.edu/about/)
33
Some ACT-R Features
  • Declarative memory stored in chunks
  • Memory activation
  • Buffer size between modules is one chunk
  • One rule per cycle
  • Learning
  • Memory retrieval, production utilities
  • New productions, new chunks

34
ACT-R 6.0 IDE
35
Task Description
  • Simple Addition
  • 1 + 3 = 4
  • 2 + 2 = 4
  • Goal: mimic the performance of four-year-olds on
    simple addition tasks
  • This is a memory retrieval task, where each
    number is retrieved (e.g. 1 and 3) and then an
    addition fact is retrieved (1 + 3 = 4)
  • The task demonstrates partial matching of
    declarative memory items, and requires tweaking a
    number of parameters.
  • From the ACT-R tutorial, Unit 6
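As a toy illustration of partial matching in declarative retrieval: each chunk's activation is reduced by a penalty for every requested slot it fails to match, and the highest-scoring chunk wins. The activation values and penalty below are invented; real ACT-R adds noise, base-level learning, and latency equations:

```python
def retrieve(chunks, request, mismatch_penalty=1.0):
    """Return the chunk with the highest activation after penalizing
    each slot in the request that the chunk fails to match."""
    def score(chunk):
        mismatches = sum(1 for slot, value in request.items()
                         if chunk.get(slot) != value)
        return chunk["activation"] - mismatch_penalty * mismatches
    return max(chunks, key=score)

addition_facts = [
    {"arg1": 1, "arg2": 3, "sum": 4, "activation": 0.5},
    {"arg1": 2, "arg2": 2, "sum": 4, "activation": 0.4},
    {"arg1": 1, "arg2": 2, "sum": 3, "activation": 0.6},
]

# An exact match wins despite a lower base activation:
fact = retrieve(addition_facts, {"arg1": 1, "arg2": 3})
print(fact["sum"])  # 4
```

With a large enough mismatch penalty retrieval is effectively exact; lowering it lets near-miss chunks win occasionally, which is how the model reproduces children's addition errors.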

36
ACT-R 6.0 Production Rules
  (p retrieve-first-number
     =goal>
        isa     problem
        arg1    one
        state   nil
  ==>
     =goal>
        state   encoding-one
     +retrieval>
        isa     number
        name    one
  )

  (p encode-first-number
     =goal>
        isa     problem
        state   encoding-one
     =retrieval>
        isa     number
  ==>
     =goal>
        state   retrieve-two
        arg1    =retrieval
  )

37
Some Relevant ACT-R Models
  • Best, B., Lebiere, C., & Scarpinatto, C. (2002).
    A model of synthetic opponents in MOUT training
    simulations using the ACT-R cognitive
    architecture. In Proceedings of the Eleventh
    Conference on Computer Generated Forces and
    Behavior Representation. Orlando, FL.
  • Craig, K., Doyal, J., Brett, B., Lebiere, C.,
    Biefeld, E., & Martin, E. (2002). Development of
    a hybrid model of tactical fighter pilot behavior
    using IMPRINT task network model and ACT-R. In
    Proceedings of the Eleventh Conference on
    Computer Generated Forces and Behavior
    Representation. Orlando, FL.

38
Soar Architecture
  • Problem Space Based

39
Some Soar Features
  • Problem space based
  • Attribute/value hierarchy (WM) forms the current
    state
  • Productions (LTM) transform the current state to
    achieve goals by applying operators
  • Cycle
  • Input
  • Elaborations fired
  • All possible operators proposed
  • One selected
  • Operator applied
  • Output
  • Impasses & Learning
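The cycle above can be caricatured in a few lines of Python. This is an illustrative sketch, not Soar's actual algorithm; the wander/move rules are simplified from the tank example, and all names are invented:

```python
def decision_cycle(state, propose_rules, apply_rules, prefer):
    """One simplified propose/select/apply cycle."""
    # Propose: every rule whose conditions match adds candidate operators.
    candidates = []
    for rule in propose_rules:
        candidates.extend(rule(state))
    if not candidates:
        return None  # no operator: an impasse in real Soar
    # Select: a preference function picks exactly one operator.
    operator = prefer(candidates)
    # Apply: application rules change the state for the chosen operator.
    for rule in apply_rules:
        rule(state, operator)
    return operator

def propose_move(state):
    if not state["blocked_forward"]:
        return [{"name": "move", "direction": "forward"}]
    return [{"name": "turn", "direction": "left"}]

def apply_move(state, operator):
    if operator["name"] == "move":
        state["y"] += 1
    else:
        state["heading"] = operator["direction"]

state = {"blocked_forward": False, "y": 0, "heading": "north"}
op = decision_cycle(state, [propose_move], [apply_move],
                    prefer=lambda ops: ops[0])
print(op["name"], state["y"])  # move 1
```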

40
Soar 8.6.2 IDE
41
Task Description
  • Control the behavior of a Tank on the game board.
  • Each tank has a number of sensors (e.g. radar) to
    find enemies, missiles to launch at enemies, and
    limited resources
  • From the Soar Tutorial

42
Propose Moves
  sp {propose*move
     (state <s> ^name wander
                ^io.input-link.blocked.forward no)
  -->
     (<s> ^operator <o> +)
     (<o> ^name move
          ^actions.move.direction forward)}

  sp {propose*turn
     (state <s> ^name wander
                ^io.input-link.blocked <b>)
     (<b> ^forward yes
          ^{ << left right >> <direction> } no)
  -->
     (<s> ^operator <o> +)
     (<o> ^name turn
          ^actions <a>)
     (<a> ^rotate.direction <direction>
          ^radar.switch on)}

  sp {propose*turn*backward
     (state <s> ^name wander
                ^io.input-link.blocked <b>)
     (<b> ^forward yes ^left yes ^right yes)
  -->
     (<s> ^operator <o> +)
     (<o> ^name turn
          ^actions.rotate.direction left)}

43
Prefer Moves
  sp {select*radar-off*move
     (state <s> ^name wander
                ^operator <o1> +
                ^operator <o2> +)
     (<o1> ^name radar-off)
     (<o2> ^name << turn move >>)
  -->
     (<s> ^operator <o1> > <o2>)}

44
Apply Move
  sp {apply*move
     (state <s> ^operator <o>
                ^io.output-link <out>)
     (<o> ^direction <direction>
          ^name move)
  -->
     (<out> ^move.direction <direction>)}

45
Elaborations
  sp {elaborate*state*missiles*low
     (state <s> ^name tanksoar
                ^io.input-link.missiles 0)
  -->
     (<s> ^missiles-energy low)}

  sp {elaborate*state*energy*low
     (state <s> ^name tanksoar
                ^io.input-link.energy < 200)
  -->
     (<s> ^missiles-energy low)}

46
Some Relevant Soar Models
  • Wray, R. E., Laird, J. E., Nuxoll, A., Stokes,
    D., & Kerfoot, A. (2005). Synthetic adversaries
    for urban combat training. AI Magazine, 26(3),
    82-92.
  • Jones, R. M., Laird, J. E., Nielsen, P. E.,
    Coulter, K. J., Kenny, P., & Koss, F. V. (1999).
    Automated intelligent pilots for combat flight
    simulation. AI Magazine, 20(1), 27-41.

47
Strengths / Weaknesses of Cognitive Architectures
  • Strengths
  • Supports aspects of intelligent behavior, such as
    learning, memory, and problem solving, not
    supported by other types of architectures
  • Can be used to accurately model human behavior,
    especially human-computer interaction, at small
    grain sizes (measured in ms)
  • Weaknesses
  • Can be difficult to author, modify, and debug
    complicated sets of production rules; mitigations
    include high-level modeling languages (e.g.
    CogTool, Herbal, the High Level Symbolic
    Representation language) and automated model
    generation (e.g. Könik & Laird, 2006)
  • Computational issues when scaling to large
    numbers of entities

48
Behavioral Approaches
  • Focus is on externally-observable behavior
  • no explicit modeling of knowledge/cognition
  • instead, behavior is explicitly specified
  • e.g. "Go to destination X, then attack enemy."
  • Often a natural mapping from doctrine to behavior
    specifications
  • Defining Intelligent Behavior
  • Authoring Methodology
  • Technologies
  • Cognitive Architectures
  • Behavioral Approaches
  • Hybrid Approaches
  • Conclusion

49
Hard-coding Behaviors
  • Simplest approach is to write behaviors directly
    in C/Java
  • MoveTo(location_X)
  • AcquireTarget(target)
  • FireAt(target)
  • Don't do this!
  • Can only be modified by programmers
  • Hard to update and extend
  • Behavior models not easily portable

50
Scripting Behaviors
  • Write behaviors in scripting language
  • UnrealScript
  • Avoids many problems of hard-coding
  • not tightly coupled to simulation code
  • more portable
  • often simplified to be easier to learn and use
  • Fine for linear sequences of actions, but scripts
    do not scale well to complex behavior

51
Finite State Machine (FSM)
  • Specifies a sequence of decisions and actions
  • Basic form is essentially a flowchart

[diagram: flowchart fragment with decision nodes X? and Z? and yes/no branches]
52
An FSM Example
  • A basic Patrol behavior
  • implemented for bots in Counter-Strike
  • built in SimBionic visual editor
  • Simulation interface
  • Primitive actions
  • FollowPath, TurnTo, Shoot, Reload
  • Sensory inputs
  • AtDestination, Hear, SeeEnemy, OutOfAmmo, IsDead
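A minimal Python sketch of such a Patrol FSM, using the primitive actions and sensory inputs listed above. The state names and transition logic are a plausible reading of the slides, not the actual SimBionic model:

```python
def patrol_step(state, percepts):
    """One FSM update: return (next_state, action) given the current
    state and a dict of sensory inputs."""
    if percepts.get("is_dead"):
        return "Dead", None
    if state == "Patrol":
        if percepts.get("see_enemy"):
            return "Attack", "TurnTo"      # face the enemy first
        return "Patrol", "FollowPath"      # keep walking the route
    if state == "Attack":
        if percepts.get("out_of_ammo"):
            return "Reload", "Reload"
        if not percepts.get("see_enemy"):
            return "Patrol", "FollowPath"  # lost the enemy, resume patrol
        return "Attack", "Shoot"
    if state == "Reload":
        return "Patrol", "FollowPath"
    return state, None

state = "Patrol"
state, action = patrol_step(state, {"see_enemy": False})
print(state, action)  # Patrol FollowPath
state, action = patrol_step(state, {"see_enemy": True})
print(state, action)  # Attack TurnTo
```

Each call is one tick of the machine: the simulation supplies percepts, the FSM returns the next state and a primitive action to execute.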

53
An FSM Example (2)
54
An FSM Example (3)
55
An FSM Example (4)
56
FSMs: Advantages
  • Very commonly-used technique
  • Easy to implement
  • Efficient
  • Intuitive visual representation
  • Accessible to SMEs
  • Maintainable
  • Variety of tools available

57
FSMs: Disadvantages
  • Have difficulty accurately modeling
  • behavior at small grain sizes
  • human-entity interaction
  • Lack of planning and learning capabilities
  • → brittleness (can't cope with situations
    unforeseen by the modeler)
  • Tend to scale ungracefully

58
Hierarchical FSMs
  • An FSM can delegate to another FSM
  • SearchBuilding → ClearRoom
  • Allows modularization of behavior
  • Reduces complexity
  • Encourages reuse of model components
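A sketch of the delegation idea, with each FSM collapsed to a plain function for brevity. The room-clearing logic is an invented illustration; only the SearchBuilding → ClearRoom structure comes from the slide:

```python
def clear_room(rooms, room):
    """Child 'FSM', collapsed to a function: clears a single room and
    reports completion back to its parent."""
    rooms[room] = "cleared"
    return "done"

def search_building(rooms):
    """Parent 'FSM': visits each room and delegates the detailed work
    to the ClearRoom child, so room-clearing logic lives in one place."""
    for room in rooms:
        if rooms[room] != "cleared":
            result = clear_room(rooms, room)   # delegation step
            assert result == "done"
    return "building-clear"

rooms = {"lobby": "unknown", "office": "unknown"}
print(search_building(rooms))                        # building-clear
print(all(v == "cleared" for v in rooms.values()))   # True
```

Because ClearRoom is self-contained, any other parent behavior (e.g. a raid or rescue FSM) could reuse it unchanged, which is the modularity payoff the slide describes.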

59
Hierarchical FSMs (2)
60
Hierarchical FSMs (3)
61
Hybrid Architectures
  • Combine cognitive approaches
  • EASE: elements of ACT-R, EPIC, and Soar (Chong &
    Wray, 2005)
  • Combine behavioral and cognitive approaches
  • IMPRINT / ACT-R (Craig et al., 2002)
  • SimBionic / Soar
  • Defining Intelligent Behavior
  • Authoring Methodology
  • Technologies
  • Cognitive Architectures
  • Behavioral Approaches
  • Hybrid Approaches
  • Conclusion

62
Hybrid Architectures
  • Combine cognitive and behavioral approaches

[diagram: goals feed the Cognitive Layer, which invokes behaviors in the Behavioral Layer, which sends actions to the Simulation]
  • Pros
  • More scalable
  • Easier to author
  • More flexible
  • Cons
  • Architecture is more complex
63
Hybrid Example: HTN Planner + FSMs
  • Hierarchical Task Network (HTN) Planner
  • Inputs
  • goals
  • library of plan fragments (HTNs)
  • Outputs
  • High-level plan achieving those goals
  • Each plan step is an FSM in the Behavior Layer
  • Not really a cognitive architecture, but adds
    goal-driven capability to system
  • Plan fragments represent codified sequences of
    behavior
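A toy decomposition illustrating the idea (all task names invented): goals expand through a library of plan fragments until only primitive steps remain, each of which names an FSM in the behavioral layer:

```python
# Hypothetical plan-fragment library: a compound task maps to an
# ordered list of subtasks; anything not listed is primitive and is
# assumed to correspond to an FSM in the behavioral layer.
PLAN_LIBRARY = {
    "secure-area": ["move-to-area", "search-building"],
    "search-building": ["enter", "clear-room", "report"],
}

def plan(task):
    """Depth-first expansion of a task into primitive FSM names."""
    if task not in PLAN_LIBRARY:
        return [task]  # primitive: hand off to an FSM
    steps = []
    for subtask in PLAN_LIBRARY[task]:
        steps.extend(plan(subtask))
    return steps

print(plan("secure-area"))
# ['move-to-area', 'enter', 'clear-room', 'report']
```

A real HTN planner also checks preconditions and chooses among alternative decompositions; this sketch shows only the hierarchy-flattening step that yields the high-level plan.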

64
Conclusion
  • Factors affecting choice of architecture
  • Entity capabilities
  • Behavior fidelity
  • Level of autonomy
  • Number of entities
  • Authoring resources
  • Two main paradigms
  • cognitive architectures
  • behavioral approaches

65
Conclusion (2)
  • Recommend iterative model development
  • Build
  • Test
  • Refine
  • Be aware of the knowledge bottleneck

66
Resources
  • EPIC
  • http://www.eecs.umich.edu/~kieras/epic.html
  • ACT-R
  • http://act-r.psy.cmu.edu/
  • SOAR
  • http://sitemaker.umich.edu/soar
  • SimBionic
  • http://www.simbionic.com/

67
Questions?
68
References
  • Anderson, J. R., Bothell, D., Byrne, M. D.,
    Douglass, S., Lebiere, C., & Qin, Y. (2004). An
    integrated theory of the mind. Psychological
    Review, 111(4), 1036-1060.
  • Card, S. K., Moran, T. P., & Newell, A. (1983).
    The psychology of human-computer interaction.
    Hillsdale, NJ: L. Erlbaum Associates.
  • Chong, R. S., & Wray, R. E. (2005). Inheriting
    constraint in hybrid cognitive architectures:
    Applying the EASE architecture to performance and
    learning in a simplified air traffic control
    task. In K. A. Gluck & R. W. Pew (Eds.), Modeling
    Human Behavior with Integrated Cognitive
    Architectures: Comparison, Evaluation, and
    Validation (pp. 237-304). Lawrence Erlbaum
    Associates.
  • Craig, K., Doyal, J., Brett, B., Lebiere, C.,
    Biefeld, E., & Martin, E. A. (2002). Development
    of a hybrid model of tactical fighter pilot
    behavior using IMPRINT task network modeling and
    the adaptive control of thought - rational
    (ACT-R). Paper presented at the 11th Conference
    on Computer Generated Forces and Behavior
    Representation.
  • Douglass. (2003). Modeling of Cognitive Agents.
    Retrieved May 22, 2005, from
    http://actr.psy.cmu.edu/douglass/Douglass/Agents/15-396.html
  • Fu, D., Houlette, R. (2003). The ultimate guide
    to FSMs in games. In S. Rabin (Ed.), AI Game
    Programming Wisdom 2.
  • Fu, D., Houlette, R., Jensen, R., Bascara, O.
    (2003). A Visual, Object-Oriented Approach to
    Simulation Behavior Authoring. Paper presented at
    the Industry/Interservice, Training, Simulation
    Education Conference.
  • Gray, W. D., & Altmann, E. M. (1999). Cognitive
    modeling and human-computer interaction. In W.
    Karwowski (Ed.), International Encyclopedia of
    Ergonomics and Human Factors (pp. 387-391). New
    York: Taylor & Francis, Ltd.
  • Jones, R. M., Laird, J. E., Nielsen, P. E.,
    Coulter, K. J., Kenny, P., & Koss, F. V. (1999).
    Automated intelligent pilots for combat flight
    simulation. AI Magazine, 20(1), 27-41.
  • Kieras, D. E. (2003). Model-based evaluation. In
    J. A. Jacko & A. Sears (Eds.), The Human-Computer
    Interaction Handbook: Fundamentals, Evolving
    Technologies and Emerging Applications (pp.
    1139-1151). Mahwah, NJ: Lawrence Erlbaum
    Associates.
  • Könik, T., & Laird, J. (2006). Learning goal
    hierarchies from structured observations and
    expert annotations. Machine Learning. In press.

69
References (2)
  • Laird, J. E. (2004). The Soar 8 Tutorial.
    Retrieved August 16, 2004, from
    http://sitemaker.umich.edu/soar/soar_software_downloads
  • Laird, J. E., & Congdon, C. B. (2004). The Soar
    User's Manual: Version 8.5, Edition 1. Retrieved
    August 16, 2004, from
    http://sitemaker.umich.edu/soar/soar_software_downloads
  • Lehman, J. F., Laird, J. E., & Rosenbloom, P. S.
    (1998). A gentle introduction to Soar: An
    architecture for human cognition. In D.
    Scarborough & S. Sternberg (Eds.), (2nd ed., Vol.
    4, pp. 212-249). Cambridge, MA: MIT Press.
  • Pearson, D., & Laird, J. E. (2004). Redux:
    Example-driven diagrammatic tools for rapid
    knowledge acquisition. Paper presented at
    Behavior Representation in Modeling and
    Simulation, Washington, D.C.
  • Pew, R. W., & Mavor, A. S. (1998). Modeling human
    and organizational behavior: Application to
    military simulations. Washington, D.C.: National
    Academy Press.
  • Unit 7: Production Rule Learning. Retrieved
    October 31, 2004, from
    http://actr.psy.cmu.edu/tutorials/unit7.htm
  • Ritter, F. E., Haynes, S. R., Cohen, M., Howes,
    A., John, B., Best, B., Lebiere, C., Lewis, R.
    L., St. Amant, R., McBride, S. P., Urbas, L.,
    Leuchter, S., & Vera, A. (2006). High-level
    behavior representation languages revisited. In
    Proceedings of ICCM 2006, Seventh International
    Conference on Cognitive Modeling (Trieste, Italy,
    April 5-8, 2006).
  • Wallace, S. A., & Laird, J. E. (2003). Comparing
    agents and humans using behavioral bounding.
    Paper presented at the International Joint
    Conference on Artificial Intelligence.
  • Wray, R. E., van Lent, M., Beard, J., & Brobst,
    P. (2005). The design space of control options
    for AIs in computer games. Paper presented at the
    International Joint Conference on Artificial
    Intelligence.