1
Studying Causal Chain Ontology
2
Some URIs
  • Original:
    http://www.mindswap.org/tw7/work/profiling/alan.rector/3%20simple%20chain%20and%20situations/Having_experiment-with-causal-chain-and-situation.owl
  • Modularized:
    http://www.mindswap.org/tw7/work/profiling/alan.rector/3%20simple%20chain%20and%20situations/Having_experiment-with-causal-chain-and-situation-modules/

3
Studying The Causal Chain Ontology
  • Symptoms
  • Pellet cannot classify the ontology in a reasonable
    amount of time (Alan reported 30 min); a reproduction
    sketch follows this slide
  • Clues
  • 48 classes (2 main subclass trees)
  • Expressivity ALCI
  • 2 roles and their inverses
  • No apparent GCIs
  • Question
  • Why does it take so long?
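
  To reproduce the symptom, something along the following lines should work.
  This is a minimal sketch assuming Pellet 2.x with its OWL API 3 bindings;
  the file name stands in for a local copy of the ontology from slide 2.

    import java.io.File;
    import org.semanticweb.owlapi.apibinding.OWLManager;
    import org.semanticweb.owlapi.model.OWLOntology;
    import org.semanticweb.owlapi.model.OWLOntologyManager;
    import org.semanticweb.owlapi.reasoner.InferenceType;
    import org.semanticweb.owlapi.reasoner.OWLReasoner;
    import com.clarkparsia.pellet.owlapiv3.PelletReasonerFactory;

    public class ClassifyCausalChain {
        public static void main(String[] args) throws Exception {
            // Local copy of the causal-chain ontology (stand-in path).
            File source = new File("Having_experiment-with-causal-chain-and-situation.owl");
            OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
            OWLOntology ontology = manager.loadOntologyFromOntologyDocument(source);

            // Pellet reasoner over the loaded ontology.
            OWLReasoner reasoner = PelletReasonerFactory.getInstance().createReasoner(ontology);

            // Classification is where the reported 30-minute blow-up shows up.
            long start = System.currentTimeMillis();
            reasoner.precomputeInferences(InferenceType.CLASS_HIERARCHY);
            System.out.println("Classification took "
                    + (System.currentTimeMillis() - start) + " ms");
        }
    }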

4
Attempts, Trials, Findings (1)
  • Syntactic examination revealed that one branch
    ('condition') is simple (asserted subclasses, no
    interesting constructors used)
  • The other half of the tree ('situation') seems
    more problematic
  • Equivalence and subclass axioms involving
    existentials with roles that have inverses
  • Each concept is chained to the next by a causal
    role (∃causes); the pattern is sketched after this
    slide
  • Partitioning via e-connection gives this insight
  • The two main branches are e-connected by link
    properties
  • So let's concentrate on the problematic branch
    first
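
  Concretely, the tail of the chain looks roughly like this, using class and
  role names that appear later in the deck (an illustrative reconstruction;
  the exact axiom set is in the source ontology, and both roles have declared
  inverses):

    Having_H ≡ ∃has.H_Condition      Having_H ⊑ ∃causes.Having_I
    Having_I ≡ ∃has.I_Condition      Having_I ⊑ ∃causes.Having_J
    Having_J ≡ ∃has.J_Condition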

5
Attempts, Trials, Findings (2)
  • Modularize the problematic branch with respect to
    each of its classes (looking at each part of the
    chain individually)
  • Overview
  • The deeper into the chain, the harder it gets
    (Having_J.owl is easier than I, and then H, G, F,
    E, ...); see the timing sketch after this slide
  • The completion graph for the hardest class gets
    bigger
  • The completion graph for the class
    Having_Condition also gets bigger
  • Having_B.owl is unprocessable (OOM after 11 hours
    of processing with maximum memory) because of the
    class Having_Condition (which takes 5 hours); every
    other class is processable
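
  The per-class, per-module measurements above can be reproduced with a loop
  of individual sat checks, roughly as follows (a sketch assuming Pellet's
  OWL API 3 bindings; Having_J.owl stands in for whichever module is being
  profiled):

    import java.io.File;
    import org.semanticweb.owlapi.apibinding.OWLManager;
    import org.semanticweb.owlapi.model.OWLClass;
    import org.semanticweb.owlapi.model.OWLOntology;
    import org.semanticweb.owlapi.model.OWLOntologyManager;
    import org.semanticweb.owlapi.reasoner.OWLReasoner;
    import com.clarkparsia.pellet.owlapiv3.PelletReasonerFactory;

    public class PerClassSatTiming {
        public static void main(String[] args) throws Exception {
            OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
            OWLOntology module = manager.loadOntologyFromOntologyDocument(new File("Having_J.owl"));
            OWLReasoner reasoner = PelletReasonerFactory.getInstance().createReasoner(module);

            // Time an individual satisfiability check for every class in the module.
            for (OWLClass cls : module.getClassesInSignature()) {
                long start = System.currentTimeMillis();
                boolean sat = reasoner.isSatisfiable(cls);
                long elapsed = System.currentTimeMillis() - start;
                System.out.println(cls.getIRI().getFragment()
                        + ": sat=" + sat + " (" + elapsed + " ms)");
            }
        }
    }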

6
Attempts, Trials, Findings (3)
  • Studying the labels of Having_Condition's
    completion graph during the sat check revealed some
    labels that I did not expect to see
  • Debug (ABox) prints from Pellet revealed that
    tableau rules for domain axioms are fired
  • Role Absorption is being used on the GCIs
  • Where do these GCIs come from?
  • Equivalence axioms and subclass axioms used
    concurrently in the same concept: Having_I ≡
    ∃has.I_Condition, Having_I ⊑ ∃causes.Having_J
    (worked through after this slide)
  • Equivalence axioms on concepts that have parents
  • Changing the equivalence axioms to subclass
    axioms makes the sat checks trivial
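
  To see concretely where these GCIs come from (the standard reading of
  definition splitting and internalization, not a claim about Pellet's exact
  internals):

    Having_I ≡ ∃has.I_Condition   splits into
        Having_I ⊑ ∃has.I_Condition        (usable as a lazily unfolded definition)
        ∃has.I_Condition ⊑ Having_I        (complex left-hand side, i.e. a GCI)

  The second axiom cannot be unfolded; if it is not absorbed (e.g. into a
  domain-style constraint on has, which fits the observed domain-rule
  firings), it is internalized as

    ⊤ ⊑ ¬∃has.I_Condition ⊔ Having_I

  i.e. a disjunction, and hence a choice point, added to every node label.
  Asserting only Having_I ⊑ ∃has.I_Condition avoids the GCI direction
  entirely, which is consistent with the sat checks becoming trivial.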

7
Attempts, Trials, Findings (4)
  • Removing inverse roles also makes the sat checks
    trivial
  • Though I'm not entirely sure why; a sketch of this
    experiment follows this slide
  • Modularity doesn't give the full picture
  • Knowing that a module is easy doesn't mean it's
    easy in the original ontology: the Having_Condition
    module is easy, but gets harder deeper in the chain
  • In this ontology, single-run profiling is pretty
    representative, though a compare/contrast of CGs
    in find-all-models would be nice
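
  One way to run the inverse-role experiment mentioned above (a sketch; the
  file name is a stand-in, and it only handles inverses declared through
  InverseObjectProperties axioms, which matches the "2 roles and their
  inverses" setup here):

    import java.io.File;
    import java.util.Set;
    import org.semanticweb.owlapi.apibinding.OWLManager;
    import org.semanticweb.owlapi.model.AxiomType;
    import org.semanticweb.owlapi.model.OWLInverseObjectPropertiesAxiom;
    import org.semanticweb.owlapi.model.OWLOntology;
    import org.semanticweb.owlapi.model.OWLOntologyManager;

    public class DropInverses {
        public static void main(String[] args) throws Exception {
            OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
            OWLOntology ontology = manager.loadOntologyFromOntologyDocument(new File("Having_B.owl"));

            // Remove every InverseObjectProperties axiom, then rerun the
            // per-class sat checks (e.g. with the earlier timing loop).
            Set<OWLInverseObjectPropertiesAxiom> inverses =
                    ontology.getAxioms(AxiomType.INVERSE_OBJECT_PROPERTIES);
            manager.removeAxioms(ontology, inverses);
            System.out.println("Removed " + inverses.size() + " inverse-property axioms");
        }
    }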

8
Things that were helpful
  • Identifying possible problems, noticing something
    unexpected
  • Modularized ontologies (reduced problem size,
    which corresponds to problem hardness in this
    case because of the chaining effect)
  • Detailed inspection of the CG
  • Pellet debug output
  • Understanding why they are the way they are
  • Knowledge of Role Absorption
  • Knowledge of badness of GCIs (and seeing the
    number of choice points in labels)
  • Coming up with a possible solution
  • Replacing equivalences with subclassOf (sketched
    after this slide)
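
  A sketch of the "replace equivalences with subclassOf" fix over the OWL API.
  It only rewrites the pattern seen in this ontology, i.e. a named class made
  equivalent to a single complex expression, keeping just the
  named-class-under-expression direction; the output file name is a stand-in.

    import java.io.File;
    import org.semanticweb.owlapi.apibinding.OWLManager;
    import org.semanticweb.owlapi.model.*;

    public class WeakenEquivalences {
        public static void main(String[] args) throws Exception {
            OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
            OWLDataFactory factory = manager.getOWLDataFactory();
            OWLOntology ontology = manager.loadOntologyFromOntologyDocument(
                    new File("Having_experiment-with-causal-chain-and-situation.owl"));

            for (OWLEquivalentClassesAxiom ax :
                    ontology.getAxioms(AxiomType.EQUIVALENT_CLASSES)) {
                if (ax.getClassExpressions().size() != 2) {
                    continue;  // only handle simple two-sided equivalences
                }
                OWLClass named = null;
                OWLClassExpression complex = null;
                for (OWLClassExpression ce : ax.getClassExpressions()) {
                    if (ce.isAnonymous()) {
                        complex = ce;
                    } else {
                        named = ce.asOWLClass();
                    }
                }
                if (named == null || complex == null) {
                    continue;  // not the named-vs-complex pattern
                }
                // Keep only the "named ⊑ complex" direction; the reverse
                // direction is the one that turns into a GCI.
                manager.removeAxiom(ontology, ax);
                manager.addAxiom(ontology, factory.getOWLSubClassOfAxiom(named, complex));
            }
            manager.saveOntology(ontology, IRI.create(new File("weakened.owl").toURI()));
        }
    }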

9
Some thoughts
  • Is detecting, removing/rewriting GCIs (or other
    choice points) a general approach to fixing?
  • Which kinds of fixes are better than others? How do
    we evaluate/compare the fixes?
  • Possible Methodology (a cataloging sketch follows
    this slide)
  • Find choice points and catalog them
  • Analyze these choice points to give impact
    analyses
  • Estimate how bad these choices are
  • What axioms contribute to these choice points?
  • What are the effects of removing the axioms that
    contribute to the choice points?
  • How do choice points interact with each other?
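
  As a starting point for the cataloging step, a rough syntactic census can be
  done over the OWL API. This sketch only counts the obvious syntactic sources
  of choice points (explicit disjunctions, GCI-shaped SubClassOf axioms, and
  equivalences that hide a GCI direction); it says nothing yet about impact.

    import java.io.File;
    import org.semanticweb.owlapi.apibinding.OWLManager;
    import org.semanticweb.owlapi.model.*;

    public class CatalogChoicePoints {
        public static void main(String[] args) throws Exception {
            OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
            OWLOntology ontology = manager.loadOntologyFromOntologyDocument(
                    new File("Having_experiment-with-causal-chain-and-situation.owl"));

            // Explicit disjunctions anywhere in the ontology's class expressions.
            int unions = 0;
            for (OWLClassExpression ce : ontology.getNestedClassExpressions()) {
                if (ce instanceof OWLObjectUnionOf) {
                    unions++;
                }
            }

            // SubClassOf axioms whose left-hand side is complex, i.e. GCIs.
            int gciShaped = 0;
            for (OWLSubClassOfAxiom ax : ontology.getAxioms(AxiomType.SUBCLASS_OF)) {
                if (ax.getSubClass().isAnonymous()) {
                    gciShaped++;
                }
            }

            // Equivalences: each one carries a "complex ⊑ named" direction.
            int equivalences = ontology.getAxioms(AxiomType.EQUIVALENT_CLASSES).size();

            System.out.println("unions=" + unions
                    + " gciShaped=" + gciShaped
                    + " equivalences=" + equivalences);
        }
    }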

10
Oddities
  • Some equivalence axioms are italicized as though
    they were inferred. They are italicized because of
    how the equivalence is written in the source, for
    example ∃has.J_condition ≡ Having_J. When the LHS
    and RHS are reversed, it is displayed correctly,
    without italics.
  • This representation also seems to play a role in
    which labels are chosen in the root individual for
    Having_Condition
  • When in GCI form (see Having_G.owl, the smallest
    CG for Having_Condition (2 nodes)), it has
    (∃has.G_Condition ⊔ Having_G), does not have
    (∃has.G_Condition ⊔ ∃causes.Having_H), and does
    not have (∃has.G ⊔ Situation)
  • When not in GCI form (see Having_H.owl, the
    smallest CG for Having_Condition (2 nodes)), it
    has only (∃has.H_Condition ⊔ ∃causes.Having_I)
  • The original ontology has the non-italicized
    version; the reversed form was probably introduced
    by the modularity code