Lessons Learned From Integrating STOW Technology into an Operational Naval Environment

1
Lessons Learned From Integrating STOW Technology
into an Operational Naval Environment
  • Patrick G. Kenny
  • Randolph M. Jones
  • 317 North First St.
  • Ann Arbor MI 48103
  • www.soartech.com

2
Soar Intelligent Agents
  • Developed intelligent agents for the Air Force
    and Navy air domains using the Soar cognitive
    architecture.
  • 6,000 rules over 7 years of work at The
    University of Michigan and Soar Technology.
  • Intelligent, realistic behaviors based on
    knowledge acquisition (KA) with real human
    pilots.
  • New BFTT requirements to use Soar Technology's
    intelligent agents in the Navy's shipboard
    training of ATC personnel.

3
BFTT Air Management Node
  • Principal goals in developing the AMN were to
    improve on the ATC and limited AIC training
    capabilities of the BFTT Combat Simulation Test
    System (CSTS) for CV and LHA/LHD ship types.
  • ATC training involved Launch, Marshal, and
    Recovery procedures.
  • Aircraft modeling included all relevant Navy and
    USMC FWA and RWA (and VSTOL). This included an
    improved HCI for operator control of the
    aircraft.
  • Used DARPA's Synthetic Theater of War (STOW)
    technology to provide a robust and realistic
    simulation environment that integrates with
    existing systems.

4
User and Customer Centered Design
  • Designing for the customer or the user? Need to
    do both.
  • Requirements come from the people paying for the
    system, however significant research shows that
    the systems improve when the users are involved
    in the design and testing process.
  • Be careful! With multiple sources of input there
    is potential for miscommunication, conflict, and
    ill-defined or incorrect requirements.
  • E.g., FWA vs. RWA priorities.
  • Global requirements were clear, but the details
    were missing. This is important because of the
    need for realism (and particularly important for
    accurate knowledge engineering).
  • Did not have easy access to the ATCs that would
    use the system to clear up the details.
  • Would have been extremely useful to involve users
    throughout the implementation and testing.

5
Team Implementation and Integration
  • AMN effort involved a variety of subsystems,
    intelligent behavior, distributed simulation,
    interfaces between subsystems and human computer
    interfaces.
  • Developed well-defined interfaces between the
    subsystems, then glued it all together during
    week-long integration periods.
  • The downside to integration periods is that if
    one subsystem is not working, the other teams
    can end up sitting around waiting for it to be
    fixed.
  • Some may use this to argue for a single
    developer, but as the system gets more complex,
    it requires the specialized expertise of small
    engineering teams and forces the modular
    development of a larger system.
  • Smoothing out integration periods requires
    better tools for remote access, version-control
    software, and off-site debugging and testing.
  • Rigorous, distributed version control would
    probably have been the biggest payoff.

6
Integrating Intelligent Behavior Sets
  • AMN required the integration of both Soar
    Technology's FWA and USC ISI's RWA intelligent
    behavior systems.
  • Both systems grew from the same starting point,
    but diverged substantially.
  • Ideally it would have been nice to build a single
    architecture for both FWA and RWA that included
    modules for different behavior. Realistically
    this was beyond the budget and time frame for
    this project.
  • The main lesson is that modular behavior has a
    high payoff. Never write the same piece of code
    twice.
  • In intelligent systems, reusable software
    translates into reusable knowledge.
  • Reusable knowledge makes the agent behavior more
    robust, flexible, and human-like.
  • E.g., a "Flying a Route to Somewhere" behavior
    can be used for carrier approaches, flying to a
    waypoint, and marshalling.
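
The reuse point above can be sketched as a single, parameterized route-following behavior shared by several missions. This is purely illustrative; the names (FlyRoute, marshal_pattern, etc.) are assumptions for the example, not the AMN implementation.

```python
# Hypothetical sketch of "never write the same piece of code twice":
# one route-following behavior module, reused by carrier approach,
# waypoint flight, and marshalling instead of three separate copies.

class FlyRoute:
    """Reusable behavior: fly a sequence of waypoints in order."""

    def __init__(self, waypoints):
        self.waypoints = list(waypoints)
        self.index = 0

    def step(self, position):
        """Return the current target waypoint, advancing when reached."""
        if self.done():
            return None
        if position == self.waypoints[self.index]:  # simplified "reached" check
            self.index += 1
        return None if self.done() else self.waypoints[self.index]

    def done(self):
        return self.index >= len(self.waypoints)


# Three distinct missions built from the same behavior module:
carrier_approach = FlyRoute([("marshal_fix",), ("platform",), ("deck",)])
waypoint_leg     = FlyRoute([("wp1",), ("wp2",)])
marshal_pattern  = FlyRoute([("stack_entry",), ("holding_fix",)])
```

Because each mission differs only in its waypoint list, a fix or improvement to route-following benefits all three behaviors at once, which is the knowledge-reuse payoff the slide describes.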

7
Adapting Agent Architecture to BFTT
  • Intelligent agents consist of two parts: the
    agent architecture and the agent knowledge for
    behaviors.
  • Soar agents sit on top of JSAF and get input and
    output through an interface called the SMI. We
    had to add new interfaces for BFTT, e.g., TACAN.
  • In prior STOW applications the agents lived a
    full life in the exercise.
  • Mission brief, take off, fly mission, land.
  • For BFTT we needed to be able to dynamically
    create agents on the fly at any location with
    little mission specification.
  • This was an important change because intelligent
    behavior relies heavily on situational awareness,
    environmental context, and the agent's personal
    history.
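
The shift from full-lifecycle agents to on-the-fly creation can be sketched as a factory that must synthesize the context a mission brief would normally supply. This is an illustrative assumption, not the actual SMI/JSAF API; all names here are hypothetical.

```python
# Hedged sketch: in STOW, an agent accumulated context through brief,
# takeoff, and flight. For BFTT-style dynamic creation, the factory has
# to default that context so behavior rules still have state to match on.

def spawn_agent(location, aircraft_type, heading=0.0):
    """Create an agent mid-exercise from a minimal specification."""
    return {
        "type": aircraft_type,
        "position": location,
        "heading": heading,
        # Context a STOW agent would have built up before this point,
        # defaulted here because there was no brief or takeoff:
        "mission": "controlled-flight",  # awaits ATC direction
        "history": [],                   # no personal history yet
        "autonomy": "passive",
    }

# Drop an aircraft into the exercise at an arbitrary location:
agent = spawn_agent(location=(36.9, -76.0), aircraft_type="F/A-18")
```

The defaults make the design issue concrete: everything the slide lists as missing (situational awareness, environmental context, personal history) shows up as a field the factory must invent rather than inherit.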

8
Adapting Agent Behavior to BFTT
  • Incorporated specific new types of knowledge for
    BFTT.
  • Used existing AIC behaviors, but ended up
    including only a portion of the existing
    behaviors for ATC.
  • Some of the assumptions in BFTT contradict the
    assumptions in STOW.
  • In STOW the agents got a full mission brief
    before launch.
  • In STOW the agents had a high degree of autonomy.

9
Adapting Agent Behavior to BFTT
  • In BFTT, training demands were quite different.
    BFTT agents needed to be totally controllable by
    the ATC. If agents started autonomously flying
    new routes this would confuse the controller.
  • There was no facility set up for the ATC to query
    the agents for their goals or actions.
  • The conflicting requirements created a tension
    between when it was appropriate to be highly
    autonomous and when to be passive.
  • Naturally these requirements would vary across
    training domains, suggesting even more need for
    modular, plug-in behaviors.
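
One way to picture the autonomy tension is a per-agent mode that gates which behaviors may fire. This is a minimal sketch under assumed names (the mode values and behavior names are invented for illustration, not taken from BFTT).

```python
# Hypothetical sketch: resolve "when to be smart" with an explicit
# autonomy mode. In passive (ATC-controlled) mode, suppress behaviors
# that would change the flight plan without controller direction.

AUTONOMOUS_ONLY = {"replan_route", "evade_threat"}

def select_behavior(proposed, mode):
    """Filter proposed behaviors by the agent's autonomy mode."""
    if mode == "passive":
        return [b for b in proposed if b not in AUTONOMOUS_ONLY]
    return list(proposed)

# An ATC-controlled agent never replans on its own:
passive_choices = select_behavior(["fly_route", "replan_route"], "passive")
```

Making the mode a plug-in parameter rather than hard-coding it is one way to get the cross-domain flexibility the slide calls for: a STOW-style exercise runs the same agents with the mode set to autonomous.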

10
Robust and Long Term Execution
  • The primary constraint was how long the system
    had to stay up, in terms of time, state changes,
    and agent creation and deletion.
  • Need to eliminate system failures for complex
    software.
  • Agents need to be robust in their interaction
    with the simulation.
  • Memory leaks and other system glitches need to be
    tracked down.
  • Having multiple SAFs allowed some agents to
    survive if one simulator went down.
  • We didn't, at first, appreciate what shipboard
    deployment meant.
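
The multiple-SAF survivability point can be sketched as partitioning agents across simulator hosts and reassigning the orphans when one host fails. This is entirely illustrative; it assumes surviving hosts can adopt agents, which the slide does not claim the AMN actually did.

```python
# Hedged sketch: agents spread across several SAF hosts so that one
# crash loses only a fraction of them; here, surviving hosts adopt the
# orphans round-robin. Assumes at least one host survives.

def redistribute(assignments, failed_host):
    """Reassign agents from a failed SAF host to the surviving hosts."""
    survivors = [h for h in assignments if h != failed_host]
    orphans = assignments.get(failed_host, [])
    for i, agent in enumerate(orphans):
        assignments[survivors[i % len(survivors)]].append(agent)
    assignments.pop(failed_host, None)
    return assignments

hosts = {"saf1": ["a1", "a2"], "saf2": ["a3"], "saf3": ["a4"]}
hosts = redistribute(hosts, "saf2")  # a3 survives on another host
```

Even this toy version shows why robustness has to be designed in: without some host-level redundancy, a single memory leak or simulator crash takes the whole training session down with it.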

11
Lessons to be learned
  • NOTE: This is a somewhat critical review, but of
    a system that is, in general, large, robust, and
    successful.
  • Who will be using the system? Design it for them.
  • Collections of small, independent teams work, but
    make sure your tools are appropriate for
    distributed work.

12
Lessons to be learned
  • Never write the same piece of code twice.
    Modularize the behaviors.
  • Our vision of totally autonomous agents has been
    altered slightly to include semi-autonomous
    agents for training. Be smart about when to be
    smart.
  • Robust and flexible behavior will allow for fewer
    system failures and more training.
  • AMN is a valuable test of our ability to
    integrate DARPA's STOW technology with deployed,
    operational systems.