Modeling of Operator/Automation Authority
LARSS
Related Work
Improving airplane cockpit safety with formal
methods and human factors.
  • SPIDER: SPIDER is a family of fault-tolerant,
    reconfigurable architectures that provide
    powerful mechanisms for integrating
    interdependent applications of differing
    criticalities. These applications communicate via
    a Reliable Optical Bus (ROBUS), a TDMA bus that
    provides all of the basic fault-tolerance
    mechanisms: clock synchronization, group
    membership, and interactive consistency. These
    mechanisms have been formally proved correct
    using a theorem-proving system, which provides an
    unprecedented level of assurance that SPIDER is
    ultra-dependable.
  • Taken from http://shemesh.larc.nasa.gov/fm/spider/
  • Situational Awareness (SA): SA is a concept that
    was first defined in the field of aviation but
    has since become much more general. There are
    many definitions in the literature, but the one
    accepted for use in this project is, as defined
    by Endsley, "the perception of elements in the
    environment within a volume of time and space,
    the comprehension of their meaning, and the
    projection of their status in the near future."
    Thus, the analyses created for this project
    define and use the following three levels of
    situational awareness:
  • Level 1: Perception of elements in the
    environment
  • Level 2: Comprehension of the current situation
  • Level 3: Projection of future status
  • Hazardous Awareness States: As considered by
    Pope, among others, there are three primary
    factors that contribute to conditions which may
    lead to pilot error: preoccupation, vigilance,
    and excessive absorption. Preoccupation is mainly
    characterized by thought unrelated to matters in
    the current situation; vigilance relates to a
    person's overall attention within a situation;
    and excessive absorption is defined by the
    exclusion of all but a few elements in the
    current environment. These three categories can
    aid in finely distinguishing hazard scenarios to
    the level required for formal analysis (see the
    sketch after this list).
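
As a rough illustration of how these taxonomies could be codified, the Python sketch below models the SA levels and hazardous awareness states as enumerations. The mapping from each hazardous state to the SA levels it degrades is a hypothetical assumption made for illustration, not a claim drawn from the human factors literature.

from enum import Enum, auto

class SALevel(Enum):
    """Endsley's three levels of situational awareness."""
    PERCEPTION = 1      # Level 1: perception of elements in the environment
    COMPREHENSION = 2   # Level 2: comprehension of the current situation
    PROJECTION = 3      # Level 3: projection of future status

class HazardousState(Enum):
    """Pope's three primary contributors to pilot error."""
    PREOCCUPATION = auto()         # thought unrelated to the current situation
    LOW_VIGILANCE = auto()         # degraded overall attention
    EXCESSIVE_ABSORPTION = auto()  # attention narrowed to a few elements

def degraded_sa_levels(state: HazardousState) -> set:
    # Hypothetical mapping (an assumption of this sketch): which SA levels
    # each hazardous awareness state plausibly degrades.
    if state is HazardousState.PREOCCUPATION:
        # Attention is elsewhere entirely, so all three levels suffer.
        return {SALevel.PERCEPTION, SALevel.COMPREHENSION, SALevel.PROJECTION}
    if state is HazardousState.LOW_VIGILANCE:
        # Elements go unnoticed; perception degrades first.
        return {SALevel.PERCEPTION}
    # Excessive absorption: elements outside the narrow focus are missed,
    # so comprehension and projection of the full situation degrade.
    return {SALevel.COMPREHENSION, SALevel.PROJECTION}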

The Intelligent Integrated Flight Deck (IIFD)
component of the Aviation Safety Program is
NASA's attempt to improve safety in aircraft and
aircraft systems as their complexity increases
greatly in the information age. Because of its
mission to respond to the public interest without
regard to a specific product or profit margin,
NASA is well-positioned to lead the way in this
research by seeding ideas and attacking technical
challenges with its existing extensive experience
in avionics. Our efforts specifically address
overall avionics system design as it relates to
the pilots in the cockpit. We are attempting to
develop a quantitative, verifiable model of human
capabilities and failures in the cockpit,
leveraging our experience with formally-verified
fault-tolerant distributed systems like SPIDER
(Scalable Processor-Independent Design for
Extended Reliability), developed at NASA Langley.
Our overall goal is to provide for
dynamically-allocated human/automation task
responsibility in the robust, integrated, and
intelligent cockpit of the future. In this
pursuit, I have perused existing human factors
literature on pilot concentration and interaction
in the cockpit, focusing in particular on the
idea of situational awareness. I have also
developed a model of cockpit communications and
pilot capabilities that gives a clearer picture
of what capabilities must be present in the
aircraft's automation in order to provide a fully
capable backup system for the pilots, including
both sensors and processing equipment (see the
sketch following this summary). In an attempt
to find minimal principles in support of
fault-tolerance in the cockpit, I have also
developed a fairly extensive model of human
misbehavior. This work provides a firm basis for
more detailed and quantitative research in a
number of the topics investigated at a high level
for this project.
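
To make the communications model concrete, here is a minimal Python sketch; the agent and channel names are assumptions for illustration, not the actual model prepared for this project. It represents cockpit communication as a labeled directed graph and queries it for human-to-human channels that the automation cannot currently observe, i.e. candidate holes in automation capability.

# Assumed channel inventory: edges are (sender, receiver) pairs labeled as
# explicit (spoken, displayed, keyed in) or implicit (observed gestures,
# control movements).
channels = {
    ("captain", "first_officer"): "implicit",  # gesture, control movement
    ("first_officer", "captain"): "implicit",
    ("captain", "automation"): "explicit",     # mode selections, inputs
    ("automation", "captain"): "explicit",     # alerts, annunciations
    ("captain", "atc"): "explicit",            # voice radio
    ("atc", "captain"): "explicit",
}

def unobserved_by_automation(channels):
    # Channels with no automation endpoint: to serve as a full backup, the
    # automation would need added sensors (e.g., audio monitoring) here.
    return [(s, r) for (s, r) in channels if "automation" not in (s, r)]

print(unobserved_by_automation(channels))
# [('captain', 'first_officer'), ('first_officer', 'captain'),
#  ('captain', 'atc'), ('atc', 'captain')]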
NASA Missions
Introduction
Distributed fault-tolerant architectures for
complex information processing and
decision-making have been a recent focus of
research in the quest for ultra-dependable
safety-critical systems. SPIDER, developed at
NASA Langley, is one such architecture. A large
library of theories created for the analysis of
this and other architectures already exists and
has been verified by the formal verification
system PVS and the model-checker SAL. We propose
to extend this library to include the
human-in-the-loop. The necessity for such work
can clearly be seen in the field of aeronautics.
Recent crash and incident data gathered by the
NTSB indicates that pilot incapacitation or
distraction has led to many recent accidents, at
least one of which resulted in fatalities. In
many of these cases, the automation present on
board these aircraft was not authorized to
override the pilot's commands and plot a safe
course of action (e.g., the Predator incident and
American Airlines Flight 1420). As flight deck
regulations and flight system architectures are
currently designed, the pilot is the final
arbiter in the cockpit: the automation may advise
against the pilot's decisions, but it cannot
override them. We propose a formalization of the
link between human and automation, as well as
between humans and other humans using knowledge
from both Human Factors and Formal Methods. This
effort includes analysis of human behavior as a
node in systems engineering. Initial design
considerations included the extreme burden of
proof required before automation may take
command, as well as the differences between
humans and machines: human mistakes are
intermittent and more frequent but more easily
recovered from, and the two differ in sensing
capabilities, implicit versus explicit
communication, downtime, and attention
capacity. Our primary goal is to support
dynamic function shifting between automation and
human by defining an appropriate fault model for
human interaction. Our expertise in
fault-tolerant systems and formal methods will
aid in this endeavor. Some information about a
human being's state can be drawn from his or her
interaction with automation and with other
humans, along with more explicit state monitors
that are assumed to exist to a reasonable extent
for the purposes of our analysis. In our formal model,
the human(s) in the loop are treated as oracles,
or black boxes, from which ideas and decisions
are drawn relating to the situation at hand. The
overall goal is thus to extract quantitative
principles that can be used as safety
sanity-checks for any proposed new intelligent
avionics architecture that attempts to
incorporate human behavior in its system analysis.
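
To make the oracle view concrete, the following Python sketch treats the human as a black box that is only queried for decisions; every name in it is an illustrative assumption, not part of the project's PVS formalization. The surrounding model observes only whether a decision arrives, matching the current rule that the pilot remains the final arbiter.

import random
from abc import ABC, abstractmethod
from typing import Optional

class DecisionOracle(ABC):
    # A node whose internals are opaque: the model sees only its outputs.
    @abstractmethod
    def decide(self, situation: str) -> Optional[str]:
        """Return a proposed course of action, or None (no response)."""

class HumanPilot(DecisionOracle):
    # Black-box human: no claim about how a decision is produced, only
    # about observable behavior such as failing to respond at all.
    def __init__(self, p_no_response: float = 0.01):
        self.p_no_response = p_no_response  # assumed distraction/downtime rate

    def decide(self, situation: str) -> Optional[str]:
        if random.random() < self.p_no_response:
            return None  # incapacitation or distraction: an omissive fault
        return "pilot action for " + situation  # opaque decision

def arbitrate(pilot: DecisionOracle, automation_fallback: str,
              situation: str) -> str:
    # Under current regulations the pilot is the final arbiter; in this
    # sketch the automation acts only when the oracle yields no decision.
    decision = pilot.decide(situation)
    return decision if decision is not None else automation_fallback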
Analysis
Overview of Human Misbehavior
Conclusions / Future Research
Due to the expansive nature of the topic under
consideration, and because formal analysis is one
of the most exacting and intensive techniques
that can be performed on a system, our work has
necessarily been an overview of topics that
should be studied in more depth. There are many
avenues of research to be pursued, from the
examination of fault principles to further
modeling of communication, basic sensor research,
feasibility studies, and policy-level
issues. We have certainly raised more questions
than we have answered, but that is to be expected
in fundamental research. To be sure, the
feasibility of a safe and reliable implementation
of our work is still in question. But the work
itself, supporting the transfer of duties between
human and automation, is clearly important
enough to pursue to see where it leads. Our work
thus far is preliminary and will most likely not
be seen in commercial aviation for several years
(if at all). However, it has the potential to
save lives and provide for safer, more dependable
air transportation. Our task is pursued in the
best spirit of NASA's ongoing mission to research
and develop aeronautical technologies for safe
and reliable aviation systems.
  • Develop an accurate, descriptive model of human
    communication in the cockpit to identify current
    gaps in automation capabilities.
  • Create an accurate, quantitative hierarchy of
    human misbehavior using knowledge and terminology
    developed for other work in distributed
    fault-tolerant avionics architectures
    (specifically SPIDER).
  • Codify the above into appropriate mathematical
    structures, including algorithms, sets,
    multisets, and graphs.
  • Develop and verify a minimal set of safety and
    liveness properties for human beings that are
    appropriate and complete for the continued safe
    operation of an aircraft.
  • Expand the existing fault-tolerance libraries
    within the PVS Specification and Verification
    System to accommodate our new theory (a sketch of
    the intended fault vocabulary follows this list).
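
As a first illustration of how a SPIDER-style fault vocabulary might carry over to a human node, the Python sketch below uses the hybrid fault classes (good, benign, symmetric, asymmetric) that appear in SPIDER-related analyses. The mapping from human failure modes to fault classes is a hypothetical assumption of this sketch, and the bound checked is the classical Byzantine requirement n > 3f, treating every fault as worst-case, rather than the tighter hybrid bound used for ROBUS, which this poster does not restate.

from collections import Counter
from enum import Enum, auto

class FaultClass(Enum):
    # Hybrid fault classes used in SPIDER-related analyses.
    GOOD = auto()        # behaves correctly
    BENIGN = auto()      # fault detectable by all observers (e.g., silence)
    SYMMETRIC = auto()   # sends the same wrong value to all observers
    ASYMMETRIC = auto()  # Byzantine: may send different values to
                         # different observers

# Hypothetical analogy (an assumption of this sketch, not an established
# result) between human failure modes and the hybrid fault classes:
HUMAN_FAULT_ANALOGY = {
    "incapacitation": FaultClass.BENIGN,           # no output: detectable
    "fixation_error": FaultClass.SYMMETRIC,        # consistently wrong action
    "inconsistent_reports": FaultClass.ASYMMETRIC, # conflicting communications
}

def satisfies_classical_byzantine_bound(nodes):
    # Conservative check of n > 3f, with every non-GOOD node counted as a
    # worst-case (asymmetric) fault. Hybrid fault models admit tighter
    # bounds that credit benign and symmetric faults separately.
    counts = Counter(nodes)
    n = sum(counts.values())
    f = n - counts[FaultClass.GOOD]
    return n > 3 * f

Framed this way, the first question listed under Progress, whether equivalent definitions exist for humans, becomes the question of whether observed pilot behavior can be soundly assigned to these classes.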

Of course, any combination of hazard scenarios 2,
3, and 4 can occur with lesser or greater
severity. Hazard scenario 1 was detailed
separately from the others because of its extreme
nature and criticality. It is likely to be the
area most amenable to the application of dynamic
function shifting.
Progress
Non-NASA Sources
In researching this problem, we chose to pursue
a detailed qualitative understanding in order to
assess the feasibility of incorporating our newly
gathered knowledge into our existing formal
models. I began my effort by becoming familiar
with the verification system PVS, the language
within which most of SPIDER is formalized, as
well as becoming acquainted with the ideas of
formal methods as a whole. I then undertook a
detailed survey of existing pilot-automation
interaction, fault-tolerance, and specific
SPIDER-related literature. To codify knowledge
gained from these sources, I prepared various
documents and diagrams which can be translated
into specific mathematical structures. This work
includes a detailed study of pilot communication,
general human capabilities with respect to
automation, and modes of failure. The following
questions have been discovered, analyzed, and are
currently seen as extremely relevant to our
continued investigation:
  • Do there exist equivalent definitions for humans
    to those developed in the current hybrid fault
    model?
  • What does automation need in order to detect
    human failure modes within a mathematical bound
    of certainty?
  • What is sufficient evidence to preempt control
    from the pilots?
  • Can aircraft automation be fully redundant for a
    human pilot? Can it be sufficiently redundant?
  • Can humans interacting with each other be modeled
    formally?
Student, School: Ari Wilson, Vanderbilt University
Dr. Paul Miner and Jeff Maddalon, Research and
Technology Directorate, Safety-Critical Avionics
Systems Branch
Applications of National Priority: Aircraft
Safety, Transportation Safety
Potential Customers: Other airplane and car
manufacturers