MATCHING STRUCTURAL DESCRIPTIONS - PowerPoint PPT Presentation
Provided by: kimlb

Transcript and Presenter's Notes

1
MATCHING STRUCTURAL DESCRIPTIONS
Motivation:
  Stereopsis
  Object Recognition
  Scene Analysis
Exact Matching -- we will begin with this: a more formal statement of our last graph-matching discussion.
A match between two graphs (structural descriptions) is described by a mapping function between members of the two sets of nodes (primitives).
VIP: The goal of this mapping function is to label the elements of one set of primitives with the names of their counterparts in the other set such that the interrelationships among them remain satisfied. A consistent labeling problem!
2
Simple Example
Neglecting the dotted arc from B to D in Graph 2, the following mapping preserves all the relations in the graphs:
  b -> E
  c -> A
  d -> B (or D)
  a -> D (or B)
  e -> C
If we include the dotted line, all relations in Graph 1 are preserved in Graph 2, but not vice versa. There is a copy of Graph 1 embedded within this version of Graph 2.
3
Let's Get Mathematical
To make things precise, consider a single N-ary relation over a set of primitives:
  R ⊆ P x P x ... x P = P^N
R is, therefore, a set of N-tuples drawn from P.
Let h be a mapping function taking elements of set P into another set Q:
  h : P -> Q
Without loss of generality, let's simplify our notation as follows:
  h(p) = q
Then, we define the composition of R with the mapping function h as:
  R ∘ h = { (q1, ..., qN) : qi = h(pi) for some (p1, ..., pN) in R }
The mapping of individual elements of P into elements of Q is used to take N-tuples of p's into corresponding N-tuples of q's.
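The composition above is simple enough to sketch in a few lines of code. This is a minimal illustration, not from the slides; the names `compose`, `R`, and `h` are mine.

```python
# Sketch of the composition R ∘ h: apply the mapping h to every element
# of every N-tuple in R. Relations are sets of tuples; h is a dict.

def compose(R, h):
    """Return R ∘ h: the tuples of R with each element mapped through h."""
    return {tuple(h[p] for p in t) for t in R}

# Toy binary relation over P = {a, b, c} and a mapping into Q = {1, 2, 3}.
R = {("a", "b"), ("b", "c")}
h = {"a": 1, "b": 2, "c": 3}
assert compose(R, h) == {(1, 2), (2, 3)}
```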
4
Relational Homomorphism
Now, let S be a second N-ary relation, defined over the primitive set Q:
  S ⊆ Q^N
Now, a relational homomorphism from R to S is a mapping
  h : P -> Q
that satisfies the condition
  R ∘ h ⊆ S
It is a mapping from elements of P to (a subset of) elements of Q having all the interrelationships that the corresponding elements of P had. The elements of Q may, in fact, have more (in S) -- but they at least must have those that the elements of P have (in R).
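With relations as sets of tuples, the homomorphism condition R ∘ h ⊆ S is a one-line subset test. A minimal sketch; the function names are illustrative, not from the slides.

```python
# A relational homomorphism h from R to S requires R ∘ h ⊆ S.

def compose(R, h):
    return {tuple(h[p] for p in t) for t in R}

def is_homomorphism(h, R, S):
    """True iff mapping every tuple of R through h lands inside S."""
    return compose(R, h) <= S

# Graph-as-binary-relation example: R's single edge must map onto an edge of S.
R = {("a", "b")}
S = {(1, 2), (2, 3)}
assert is_homomorphism({"a": 1, "b": 2}, R, S)      # (1, 2) is in S
assert not is_homomorphism({"a": 2, "b": 1}, R, S)  # (2, 1) is not in S
```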
5
Classes of Exact Matches
We can have the following forms of match, each of which is an exact match.
  Homomorphism: The general case, as we've described. No real constraints on the mapping function.
  Monomorphism: A 1:1 mapping. May only use a subset of Q.
  Isomorphism: An invertible monomorphism. The inverse of h is also a monomorphism from S back to R.
A relational isomorphism is a symmetric match and is the strongest possible. The cardinalities of the two sets of primitives are equal, and the mapping is 1:1 and onto:
  No extra parts
  No missing parts
  All parts in the correct relationships.
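The stricter classes can be found by brute force on small examples: enumerate the injective (1:1) mappings and keep one satisfying R ∘ h ⊆ S. This is an exponential-time sketch under assumed names, useful only for tiny relations, not a practical matcher.

```python
# Brute-force search over 1:1 mappings for a relational monomorphism
# (an injective h with R ∘ h ⊆ S). When |P| = |Q| and the inverse also
# satisfies the condition, the match is an isomorphism.
from itertools import permutations

def compose(R, h):
    return {tuple(h[p] for p in t) for t in R}

def find_monomorphism(P, Q, R, S):
    P = sorted(P)
    for image in permutations(sorted(Q), len(P)):  # injective target choices
        h = dict(zip(P, image))
        if compose(R, h) <= S:
            return h
    return None  # no exact match of this class exists

# Two 3-cycles: an isomorphism exists, so a monomorphism is found.
R = {("a", "b"), ("b", "c"), ("c", "a")}
S = {(1, 2), (2, 3), (3, 1)}
h = find_monomorphism({"a", "b", "c"}, {1, 2, 3}, R, S)
assert h is not None and compose(R, h) <= S
```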
6
Exact Match between two Structural Descriptions
To identify an exact match between two SDs, we need to construct a mapping between the two sets of primitives such that a certain class of relational homomorphism (homo-, mono-, isomorphism) is achieved for all corresponding relations. Corresponding relations? Those having the same name.
Then, with two descriptions D1 and D2,
  D1 = (P, {R1, ..., RK})    D2 = (Q, {S1, ..., SK})
we say that D1 matches D2 if there exists a mapping h such that the corresponding primitives possess the same attribute-value pairs (or a subset of our choosing)
AND
  Rk ∘ h ⊆ Sk for every corresponding pair of relations (Rk, Sk).
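Both halves of the test can be sketched together: attribute agreement on chosen keys AND a homomorphism for each relation matched by name. The data layout (attribute dicts, name-keyed relation dicts) is an assumption of mine, not the slides' notation.

```python
# Sketch of the SD match test: corresponding primitives share the chosen
# attribute-value pairs AND every same-named relation satisfies Rk ∘ h ⊆ Sk.

def compose(R, h):
    return {tuple(h[p] for p in t) for t in R}

def sd_match(h, attrs1, attrs2, rels1, rels2, keys):
    # Corresponding primitives must agree on the selected attributes...
    if any(attrs1[p].get(k) != attrs2[h[p]].get(k) for p in h for k in keys):
        return False
    # ...AND each relation, matched by name, must be a homomorphism.
    return all(compose(R, h) <= rels2.get(name, set())
               for name, R in rels1.items())

attrs1 = {"a": {"shape": "round"}, "b": {"shape": "square"}}
attrs2 = {"x": {"shape": "round"}, "y": {"shape": "square"}}
rels1 = {"left-of": {("a", "b")}}
rels2 = {"left-of": {("x", "y")}}
assert sd_match({"a": "x", "b": "y"}, attrs1, attrs2, rels1, rels2, ["shape"])
```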
7
Problems with this Approach
  Results require exactness
  Matches are crisp -- that is, GO/NO-GO.
  Relations are crisp
  No room for uncertainty
How might we deal with noise, distortion, and other perturbations?
One idea (Linda Shapiro): Define a weighted prototype structural description (to model an object) in which:
  Primitives are assigned weights according to their importance
  N-tuples in each relation are also weighted
Then, the primitive A-V pairs are compared to thresholds, and a count is maintained of the relational inconsistencies (violations of the homomorphism constraint).
  It is difficult to codify this type of knowledge in a meaningful, reliable way.
  Still a crisp test, no measure of optimality or match quality.
8
An Alternative View
We can capture notions of uncertainty by combining concepts from probability theory (decision theory, Bayes) with our graph-theoretical basis for structural descriptions.
Let me make the problem a little more focused. Let's talk about object recognition in what is commonly known as model-based vision. We first extend the structural description in two ways:
  Parametric structural description (PSD): We now allow the individual tuples in each relation to be assigned a parameter (scalar, vector, logical). This is analogous to the attribute-value description of the individual primitives.
  Random PSD (RPSD): We now introduce randomness into the description:
    Each primitive has a probability of being null
    The attributes are random variables
    Each tuple in a relation has a (conditional) probability of being null
    The parameters are random variables
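One plausible way to encode the RPSD extensions as data follows directly from the bullet list above. All field and class names here are my own assumptions, not notation from the slides.

```python
# Data sketch of an RPSD element: each primitive carries a null probability
# and random attributes; each relation tuple carries a conditional null
# probability and random parameters.
from dataclasses import dataclass, field

@dataclass
class RandomPrimitive:
    p_null: float                               # probability of being null
    attrs: dict = field(default_factory=dict)   # attribute -> distribution spec

@dataclass
class RandomTuple:
    parts: tuple                                # primitives it relates
    p_null: float                               # conditional null probability
    params: dict = field(default_factory=dict)  # parameter -> distribution spec

# A tiny RPSD fragment: one likely-present primitive and one tuple over it.
prim = RandomPrimitive(p_null=0.1, attrs={"length": ("gaussian", 5.0, 1.0)})
tup = RandomTuple(parts=("p1", "p2"), p_null=0.2)
assert 0.0 <= prim.p_null <= 1.0 and 0.0 <= tup.p_null <= 1.0
```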
9
We will use these extensions of the structural description as follows:
1. Model observations as PSDs. These are SDs built from the image(s) at hand. You run your edge detectors, segment, group, and so on, and then construct a description of what is seen.
2. Model objects in the modelbase as RPSDs. These are built from prior knowledge of the objects themselves and include randomness to reflect:
   The probability of missing or extra parts from occlusion or segmentation errors.
   Variations in primitive description owing to viewpoint variations, or differences among individual members of a class.
   Variations in tuple parameter values owing to viewpoint variations, articulations, and so on.
10
Outcome Probability
To recognize objects, we need to construct the mapping function from the primitives of the observation to those of the model:
  G = (P, {Rk})      -- deterministic observation
  G~ = (Q, {Sk})     -- random model
  h : P -> Q ∪ {ø}   -- mapping from observation to model, including a null primitive ø for extra parts
Under this mapping function, the probability that the model G~ appears in the scene as observation G can be written Pr(G | G~, h).
For notational convenience, let's assume that the primitives are indexed such that h(pi) = qi.
Note that the inverse mapping may be incomplete: h^-1(qp) may be undefined for some p, indicating a missing part.
11
The probability of observing a given nonnull outcome primitive is then given by
  Pr(qi) = Pr(qi ≠ ø) ∏j Pr(Aij)
where Aij denotes the event that the random primitive qi and the observed primitive pi agree in the value of the j-th attribute.
The probability of the entire set of primitives as a joint event is the product of the probabilities of the null and nonnull outcomes...
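Under the (assumed) independence of the attribute-agreement events, the nonnull-outcome probability is just a product. A minimal numeric sketch; the function name and inputs are illustrative.

```python
# Sketch of the nonnull-outcome probability: the chance the primitive is
# present times the product of per-attribute agreement probabilities,
# assuming the agreement events are independent.
from math import prod

def primitive_prob(p_present, attr_agree_probs):
    """P(nonnull outcome) = P(present) * prod_j P(agree on attribute j)."""
    return p_present * prod(attr_agree_probs)

# Present with probability 0.9; two attributes agree with prob 0.8 and 0.5.
p = primitive_prob(0.9, [0.8, 0.5])
assert abs(p - 0.36) < 1e-12
```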
12
To compute the probability of the observed relations Rk as an outcome of the model's random relations Sk, we form the composition (as we did above) and retain the parameter value assigned to each corresponding observation tuple. Then, a calculation similar to the one we just did for the primitives gives us the conditional probability of the observed tuple set. The notation is messy, so I'll skip it....
We now can define the dissimilarity between a PSD and an RPSD as the information content in the most likely interprimitive mapping, normalized by the number of nonnull primitives and nonnull relation tuples.
Why this?
  Penalize mapping functions that only use a few primitives. We can't model the extra parts.
  Information theory turns products into sums. Good. It also lets us talk about the entropy of random SDs.
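The "products into sums" point can be made concrete: information content is -log probability, so the joint product becomes a sum of per-outcome terms, normalized by the outcome count. A sketch under assumed names, not the slides' exact formula.

```python
# Sketch of the dissimilarity measure: the information content (-log p) of
# the nonnull outcomes under a mapping, normalized by their count. Lower
# means a less surprising (better) match.
from math import log

def dissimilarity(prim_probs, tuple_probs):
    """Sum of -log p over nonnull primitive and tuple outcomes, per outcome."""
    probs = prim_probs + tuple_probs
    if not probs:
        return float("inf")  # a mapping that explains nothing is worthless
    return sum(-log(p) for p in probs) / len(probs)

# A confident match scores lower than a poor one.
assert dissimilarity([0.9, 0.9], [0.8]) < dissimilarity([0.5, 0.4], [0.2])
```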