Title: Edwin R. Hancock
1The University of York
Pattern Analysis with Graphs, with applications
from computer vision
Edwin R. Hancock, with help from Richard Wilson,
Bai Xiao, Bin Luo, Antonio Robles-Kelly and Andrea
Torsello. University of York, Computer Science
Department, York YO10 5DD, UK. erh_at_cs.york.ac.uk
2Outline
- Motivation Background
- Graphs from images
- Matching graphs
- Spectral methods
- Pattern spaces for sets of graphs
- Embedding and characterising graphs
3Motivation
4Problem
In computer vision, graph structures are used to
abstract image structure. However, the algorithms
used to segment the image primitives are not
reliable. As a result there are both additional
and missing nodes (due to segmentation error) and
variations in edge structure. Hence image
matching and recognition cannot be reduced to a
graph isomorphism, or even a subgraph isomorphism,
problem. Instead, inexact graph-matching methods
are needed.
5Measuring similarity of graphs
- Early work on graph matching in vision (Barrow
and Popplestone) introduced the association graph and
showed how it could be used to locate the maximum
common subgraph.
- Work on syntactic and structural pattern
recognition in the 1980s unearthed problems with
inexact matching (Sanfeliu; Eshera and Fu;
Haralick and Shapiro; Wong, etc.) and extended the
concept of edit distance from strings to graphs.
- Recent work has aimed to develop probability
distributions for graph matching (Christmas,
Kittler and Petrou; Wilson and Hancock; Serratosa
and Sanfeliu) and to match using advanced
optimisation methods (Simic; Gold and Rangarajan).
- Renewed interest in placing classical methods
such as edit distance (Bunke) and max-clique
(Pelillo) on a more rigorous footing.
6Viewed from the perspective of learning
This work has shown how to measure the similarity
of graphs. It can be used to locate inexact
matches when significant levels of structural
error are present. It may also provide a means by
which modes of structural variation can be
assessed.
7Learning with graphs
- Learn class structure: Assign graphs to classes.
Need a distance measure. Central clustering is
difficult since the number of nodes and edges varies
and correspondences are not known. It is easier to
perform pairwise clustering (Bunke, Buhmann).
- Embed graphs in a low-dimensional space:
Correspondences are again needed, but spectral
methods may offer a solution. Standard statistical
and geometric learning methods can then be applied
to graph-vectors.
- Learn modes of structural variation: Understand
how edge (connectivity) structure varies for
graphs belonging to the same class
(Dickinson, Williams).
- Build generative models: Borrow ideas from
graphical models (Langley, Friedman, Koller).
8Why is structural learning difficult?
- Graphs are not vectors: There is no natural
ordering of nodes and edges. Correspondences must
be used to establish order.
- Structural variations: Numbers of nodes and
edges are not fixed. They can vary due to
segmentation error.
- Not easily summarised: Since graphs do not reside
in a vector space, the mean and covariance are hard
to characterise.
9Eigenvector methods for learning in vision
- Eigenvectors of the image covariance matrix:
eigenfaces (Turk and Pentland), parametric
eigenspaces (Murase and Nayar).
- Point distribution models: apply PCA to the landmark
position covariance matrix (Cootes and Taylor).
- Kernel PCA (Tipping and Bishop).
- Many, many others.
10Spectral Embedding
Graphs reside on a manifold whose
Laplace-Beltrami operator is the Laplacian of the
graph.
11Spectral Methods
Use the eigenvalues and eigenvectors of the adjacency
matrix (or Laplacian matrix) - Biggs, Cvetkovic,
Fan Chung.
- Singular value methods for exact graph matching
and point-set alignment (Umeyama).
- Singular value methods for point-set
correspondence (Scott and Longuet-Higgins,
Shapiro and Brady).
- Use of eigenvalues for image segmentation (Shi
and Malik) and for perceptual grouping (Freeman
and Perona, Sarkar and Boyer).
- Graph-spectral methods for indexing shock trees
(Dickinson and Shokoufandeh).
12Spectral Graph Theory
- Read the books by Cvetkovic, Biggs and Chung.
- Good web resources (and papers!) provided by Jon
Kleinberg (Cornell) and Mark Jerrum (Edinburgh).
- Used in routing problems, Googlebot and many
other practical applications.
13A taster
- Eigenvector expansion of the adjacency matrix.
- The leading eigenvector gives the steady-state of a random walk
on the graph.
- Number of paths of length L on the graph (a code sketch of these
facts follows below).
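A minimal sketch (mine, not from the slides) that makes the taster facts concrete with numpy; the example graph and variable names are illustrative.

```python
import numpy as np

# Adjacency matrix of a small undirected example graph (illustrative).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [1, 1, 1, 0]], dtype=float)

# Eigenvector expansion of the adjacency matrix: A = sum_i lambda_i phi_i phi_i^T.
evals, evecs = np.linalg.eigh(A)
A_rebuilt = sum(lam * np.outer(phi, phi) for lam, phi in zip(evals, evecs.T))
assert np.allclose(A, A_rebuilt)

# Leading eigenvector (largest eigenvalue): the Perron-Frobenius component
# associated with the long-run behaviour of walks on the graph.
leading = evecs[:, np.argmax(evals)]

# Number of walks of length L between nodes u and v is (A^L)[u, v].
L = 3
walks = np.linalg.matrix_power(A.astype(int), L)
print(walks[0, 2])  # walks of length 3 from node 0 to node 2
```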
14Aims
Combine probabilistic modelling with spectral
graph theory to develop methods for
correspondence matching and learning. Apply these to
shape analysis problems furnished by computer
vision (2D shape). Look at three problems:
- Discover shape categories
- Embed graphs in a pattern space
- Learn modes of structural variation
15What this talk is about
- Develop a probabilistic error model that can be
used to find node correspondences and measure
the similarity of graphs.
- This is potentially cumbersome, so recast the problem
in a matrix setting and develop a robust spectral
matching method.
- Use this model to learn structural variations for
sets of graphs.
16Graphs in computer vision
- Structural representations of shape
17Graph (structural) representations of shape
- Region adjacency graphs (Popplestone et al.,
Worthington, Pizlo, Rosenfeld)
- View graphs (Freeman, Ponce)
- Aspect graphs (Dickinson)
- Trees (Forsyth, Geiger)
- Shock graphs (Siddiqi, Zucker, Kimia)
The idea is to segment shape primitives from image
data and to abstract them using a graph. Shape
recognition then becomes a problem of graph matching.
However, statistical learning of modes of shape
variation becomes difficult since the available
methodology is limited.
18Delaunay Graph
19CMU Sequence
20(No Transcript)
21(No Transcript)
22(No Transcript)
23Constrained Delaunay Triangulation
24Gabriel Graph
25Relative Neighbourhood Graph
26Shock graphs
Type 1 shock(monotonically increasing radius)
Type 2 shock(minimum radius)
Type 3 shock(constant radius)
Type 4 shock(maximum radius)
27Measuring the similarity of graphs
- Develop a probabilistic measure of graph errors and
use this to measure similarity.
28Starting point
- Probabilistic Framework for Graph Matching
29Probabilistic modelling
Errors due to misplacement of correspondence labels
are corrected by re-configuring the match f.
Errors due to the addition of extraneous nodes and
edges are corrected by modifying the node set V and
the edge set E.
30Relational graph matching
Graph G = (V, E) with node set V and edge set E.
Find the state of match f: D -> M between a data graph
and a model graph.
Use the constraints provided by the edges of the two
graphs.
31Distribution of matching errors
Expand the probability over a dictionary of consistent
matching configurations.
Assume individual matching errors are independent
and memoryless.
Two-component error model for assignment errors
and structural errors.
32Probability distribution for matching errors
Three-component model: depends on the Hamming distance
(number of assignment errors), the size difference
(number of structural errors) and the number of
connecting edges.
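The equation itself did not survive this transcript, so the following is only a plausible reconstruction of such an exponential configuration probability (after Wilson and Hancock, PAMI 1997); the symbols k_e, k_s, H and Psi and the normalisation are my notation, not the slide's.

```latex
P(\Gamma_u \mid \Gamma_v) \;\propto\;
  \exp\!\Big[-\big(k_e\, H(\Gamma_u, S) \;+\; k_s\, \Psi(\Gamma_u, \Gamma_v)\big)\Big]
```

Here H is the Hamming distance (assignment errors) and Psi the size difference (structural errors).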
33Uses
- Optimisation: gradient ascent, naïve mean field,
genetic search, meta-heuristics (tabu search).
- Problem: computational complexity grows
exponentially with the number of dummy-node
insertions.
- Solution 1: treat neighbourhoods as strings and
use Dijkstra's algorithm to compute edit distance.
- Solution 2: adopt a graph-spectral approach.
34Edit Distance
Avoid the exponential complexity of dictionary
padding by treating configurations as strings and
comparing them using string edit distance
(a dynamic-programming sketch follows below).
Use the edit distance to compute the probability of a
configuration of correspondences.
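For concreteness, a standard dynamic-programming computation of the Levenshtein distance matrix between two strings; the unit edit costs here are illustrative, whereas the slides derive probabilistic costs.

```python
def levenshtein(a, b, ins=1.0, dele=1.0, sub=1.0):
    """Minimum-cost edit path between sequences a and b (unit costs by default)."""
    m, n = len(a), len(b)
    # D[i][j] = cost of editing a[:i] into b[:j] (the Levenshtein distance matrix).
    D = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        D[i][0] = i * dele
    for j in range(1, n + 1):
        D[0][j] = j * ins
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0.0 if a[i - 1] == b[j - 1] else sub
            D[i][j] = min(D[i - 1][j] + dele,       # delete a[i-1]
                          D[i][j - 1] + ins,        # insert b[j-1]
                          D[i - 1][j - 1] + cost)   # substitute or match
    return D[m][n]

print(levenshtein("kitten", "sitting"))  # 3.0
```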
35Literature
- Bayesian framework (Wilson and Hancock, IEEE PAMI, July 1997)
- Graph editing (Wilson, Cross and Hancock, CVIU, 1998)
- Soft-assign (Finch, Wilson and Hancock, Neural Computation, 1998)
- Edit distance (Myers, Wilson and Hancock, IEEE PAMI, June 2000)
- Dual-step EM algorithm (Cross and Hancock, IEEE PAMI, Nov 1998)
- Image retrieval (Huet and Hancock, IEEE PAMI, Dec 1999)
- Factorisation (Carcassoni and Hancock, CVPR 2000; Luo and Hancock, ICPR 2000)
36Spectral Correspondence Matching
- Use the EM algorithm to develop an iterative form of
Umeyama's algorithm that is robust to differences
in graph size and to structural error.
37Graph-spectral Methods for Correspondence
- Develop a probabilistic approach to graph
matching by matrix factorisation.
- The idea is to match using the eigenvectors of the graph
adjacency matrix.
- Several authors have used algorithms based on
the singular vectors for graphs and point-sets
(Umeyama, Scott and Longuet-Higgins, Shapiro and
Brady).
- These methods can be viewed as drawing their
inspiration from spectral graph theory (Chung).
- They are highly fragile to differences in
graph structure.
- The aim here is to show how the EM algorithm can
be used to provide an iterative version of Umeyama's
method that is robust to size differences and
structural error.
38Umeyama's Algorithm
- Find the permutation matrix S which minimises the
Frobenius norm ||D - Transpose(S) M S||.
- Perform singular value decompositions on the
adjacency matrices: M = U . Diag . Transpose(U) and
D = V . Diag . Transpose(V).
- Here Diag is a diagonal matrix of singular
values, and U and V are orthogonal matrices.
- The optimal correspondence matrix is given by the
singular vectors of the two adjacency matrices:
S = V Transpose(U).
- Does not work when D and M are of different size
(an illustrative sketch follows below).
39Approach
- If we consider the model graph node to data graph
node assignments as a set of hidden variables,
then we can apply the expectation-maximization
(EM) algorithm to compute the optimal assignment
that maximizes the likelihood.
40Matrix Representation
Matrix of assignment variables
Data graph connection matrix
Model graph connection matrix
41Problem Formulation
Correspondence variables, S, are hidden variables
that arise through a noisy observation process.
If we assume that any data node can be
generated from any model node, we can set up a
mixture model, yielding the following likelihood
function. But how do we model the observation
density p(x_a | y_a, S)?
42Maximum Likelihood Framework
Find the maximum-likelihood pattern of
correspondences which satisfies the condition.
Construct a mixture model over the model-graph labels
and assume a factorial distribution over the data-graph
nodes.
43Factorial observation density
Assume that data-graph nodes and model-graph nodes
are conditionally independent given the
correspondence indicators.
44Bernoulli Distribution for Correspondences
The random variable is an edge-consistency indicator,
assumed to follow a Bernoulli distribution.
45Multiple Edge Constraints
Diagram: multiple edge constraints between a data-graph node x_a and a model-graph node y_a.
46Probability distribution for correspondences
After algebra, obtain the following exponential
distribution for the correspondence indicators
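The equation is not reproduced in this transcript; the following is a reconstruction of the structure from Luo and Hancock's EM/SVD formulation, with P_e the Bernoulli error probability, D and M the data- and model-graph connection matrices and s_{b beta} the correspondence indicators. The normalisation constant Z is my notation.

```latex
p(x_a \mid y_\alpha, S) \;=\; \frac{1}{Z}\,
  \exp\!\Big[\, \mu \sum_{b \in V_D} \sum_{\beta \in V_M}
                D_{ab}\, M_{\alpha\beta}\, s_{b\beta} \Big],
\qquad
\mu \;=\; \ln\frac{1 - P_e}{P_e}
```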
47Log-likelihood function
Log-likelihood function that needs to be
maximised with respect to the correspondence
indicators S is
48Expected log-likelihood
Correspondences are hidden, so work instead with the
expected log-likelihood.
- Expand using the Bernoulli model,
where the weighting term
is the a posteriori correspondence
probability.
49EM Algorithm
- Maximisation step: find the maximum-likelihood
correspondences.
- Expectation step: find the probabilities of the
correspondence indicators.
50Maximisation Step
New configuration of correspondence indicators
maximises the expected log-likelihood subject to
previously available correspondence indicators
51Matrix Representation
Cast the expected log-likelihood function into a
matrix representation using the assignment matrix and
connection matrices.
Maximum likelihood correspondence matrix
satisfies the condition
52Maximisation using SVD
Locate intermediate matrix which satisfies the
condition
Scott and Longuet-Higgins showed that this matrix
can be found using the singular value
decomposition
The correspondences are located by selecting the
elements which are both row and column maxima
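A small sketch of the Scott and Longuet-Higgins style step referred to above: given an intermediate (affinity) matrix between the two node sets, replace its singular values by ones and keep the elements that are simultaneously row and column maxima. The affinity matrix Q is assumed to be supplied; names are illustrative.

```python
import numpy as np

def svd_correspondences(Q):
    """Q[i, j]: affinity between data node i and model node j."""
    U, _, Vt = np.linalg.svd(Q, full_matrices=False)
    P = U @ Vt                            # singular values replaced by ones
    matches = []
    for i in range(P.shape[0]):
        j = int(np.argmax(P[i]))          # column index of the row maximum
        if int(np.argmax(P[:, j])) == i:  # also the row index of the column maximum
            matches.append((i, j))
    return matches
```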
53Expectation step
Update the a posteriori correspondence probabilities
using Bayes' rule.
54CMU House
55Distortions
56Luo EMSVD
57Luo
58Luo
59Luo
60Luo
61Results Summary
62Umeyama
63Shapiro and Brady
64Correspondences
65Convergence
66Edge Errors (Size constant)
67Edge Errors versus Positional Jitter
68Summary
- We have used the EM algorithm to develop an SVD method
for graph matching which works with graphs of
different size and edge structure, i.e. it copes with
the inexact case.
- Subsequent work has shown how to convert graphs
to strings using the leading eigenvector, and how
their similarity can be measured using string edit
distance (Robles-Kelly and Hancock, IEEE PAMI, 2005).
69Graph seriation
- Convert graphs to strings using eigenvector
methods and find correspondences using string
matching methods.
70Motivation
- The theoretical framework for graph edit distance is much
less developed and rigorous than that for string
edit distance.
- The aim here is to use seriation of the adjacency matrix
to convert graphs to strings.
- Use a Bayes model of string matching to compute
edit costs.
- Match by finding the minimum-cost path through the
Levenshtein distance matrix.
71Context and contribution
Graph-spectral methods can be used to convert
graphs to strings - graph seriation (Robles-Kelly
and Hancock, IEEE PAMI, 2005).
72Example
Panels: eigenvector components; original graph; seriation path.
73Semidefinite programming
- Optimisation over positive semi-definite matrices.
- Linear cost function and linear constraints.
- Convex solutions.
74Spectral seriation problem definition
- Our aim is to use a permutation p of the nodes to
find a path sequence so that the edge-weight
matrix W decreases as the path is traversed.
- This is governed by a penalty function g(p).
- Unfortunately, minimising g(p) is NP-complete.
75Approximate solution relaxed version
- Instead a relaxed solution is sought: the permutation is
replaced by a continuous vector, and the penalty function is
minimised under normalisation constraints.
76Graph Laplacian
- Combinatorial Laplacian matrix L = D - A, where D is the
diagonal degree matrix and A the adjacency matrix.
- Eigenspectrum of the Laplacian matrix: eigenvalues and
eigenvectors.
77Spectral seriation Fiedler vector
- Atkins, Boman and Hendrickson showed that the
solution to this minimisation problem is given by
the Fiedler vector.
78...this is not surprising
- Both the continuous- and discrete-time random
walks on a graph are determined by the Fiedler
vector of the Laplacian (see the review by Lovász).
- Unfortunately random walks teleport and do
not preserve edge-connectivity constraints.
79Improved use of edge connectivity
Robles-Kelly and Hancock (IEEE PAMI, 2005).
- Seek a relaxed vector
- that minimises the path length
- under normalisation constraints.
80Solution
- The relaxed solution is the leading eigenvector of the
matrix concerned, found by extremising the Rayleigh
quotient.
81Rank order of nodes gives seriation order
- Sort the nodes according to their rank order in the
eigenvector.
- The seriation path is given by the rank order of the
components of the leading eigenvector of the transition
probability matrix (a sketch follows below).
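A minimal sketch of ordering nodes by eigenvector rank. Here the Fiedler vector of the combinatorial Laplacian L = D - A is used for simplicity; the refinement on this slide instead uses the leading eigenvector of a transition-probability matrix.

```python
import numpy as np

def seriation_order(A):
    """Return a node ordering (string order) from the Fiedler vector of L = D - A."""
    d = A.sum(axis=1)
    Lap = np.diag(d) - A                 # combinatorial Laplacian
    evals, evecs = np.linalg.eigh(Lap)   # eigenvalues in ascending order
    fiedler = evecs[:, 1]                # eigenvector of the second-smallest eigenvalue
    return np.argsort(fiedler)           # rank order of components = seriation path

# Example: a path graph 0-1-2-3 should be recovered (up to reversal).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(seriation_order(A))
```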
82Edit Distance
83- The edit path is a sequence of states.
- The optimal edit path is the one that satisfies the
optimality condition
- under pairwise dependence of states.
84Edit Distance
- Log posterior probability of edit path
- Elementary transition cost
85Model
- Transition probabilities (Bayes treatment)
- Posterior state probabilities
86Comparative Study
87Retrieval Experiments
We have used a data-set of Delaunay graphs
obtained using the corners extracted from the
first ten frames of the CMU and MOVI image
sequences.
88CMU Sequence
89MOVI Sequence
90Example adjacency matrices and matrix y
Final edit distance matrix
91Distance matrices and clustering
92MDS Thumbnails
93Learning
94Why is structural learning difficult?
- Graphs are not vectors: There is no natural
ordering of nodes and edges. Correspondences must
be used to establish order.
- Structural variations: Numbers of nodes and
edges are not fixed. They can vary due to
segmentation error.
- Not easily summarised: Since graphs do not reside
in a vector space, the mean and covariance are hard
to characterise.
95Structural Variations
96Learning with graphs
- Learn class structure: Assign graphs to classes.
Need a distance measure. Central clustering is
difficult since the number of nodes and edges varies
and correspondences are not known. It is easier to
perform pairwise clustering (Bunke; Buhmann;
Luo, Torsello and Robles-Kelly).
- Embed graphs in a low-dimensional space:
Correspondences are again needed, but spectral
methods may offer a solution. Standard statistical
and geometric learning methods can then be applied to
graph-vectors (Luo, Wilson).
- Learn modes of structural variation: Understand
how edge (connectivity) structure varies for
graphs belonging to the same class
(Dickinson, Williams, Torsello).
- Build generative models: Borrow ideas from
graphical models (Langley, Friedman,
Koller, Torsello, Xiao).
97Pattern spaces sets of graphs
98Preliminary Study (ACCV 2002)
- Known correspondences OR weighted graphs
99Aim of this paper
Investigate whether it is possible to generate
view-based eigenspaces using relational graphs.
Image features: objects in images are first represented
by extracted corners.
Graphs: objects are then represented by the relational
graphs of Delaunay triangulations.
Pattern-space embedding: PCA, MDS.
100Graph Representation
Adjacency matrices
Vector representation
The order of the components of the vector is determined by
known correspondences; the graphs are of the same size,
with no node errors but variation in edge
structure.
101Embedding using PCA
Embed graphs in a pattern space by
storing each image as a feature vector,
calculating the covariance matrix of the vectors,
finding the eigenvalues and eigenvectors using
PCA, and projecting the images onto the leading
principal directions (Murase and Nayar, 1994).
Vector matrix
Covariance matrix
Eigenvalue equation
Eigenvector equation
Eigenspace projection
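The equations behind the labels above did not survive the transcript; a standard reconstruction (my notation) of the Murase and Nayar style eigenspace construction is:

```latex
\bar{x} = \frac{1}{N}\sum_{i=1}^{N} x_i, \qquad
X = \big[\,x_1 - \bar{x} \;\; \cdots \;\; x_N - \bar{x}\,\big]
\quad\text{(vector matrix)} \\[6pt]
\Sigma = \frac{1}{N}\, X X^{T}
\quad\text{(covariance matrix)} \qquad
\Sigma\, u_k = \lambda_k\, u_k
\quad\text{(eigenvalue/eigenvector equation)} \\[6pt]
y_i = \big[\,u_1 \; \cdots \; u_d\,\big]^{T} (x_i - \bar{x})
\quad\text{(eigenspace projection)}
```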
102Synthetic Sequence
103PCA Eigenspaces Synthetic Sequence - 1
Left column: eigenspaces. Right column: graph
distances in eigenspace. Top row: unweighted graph.
Middle row: weighted graph (proximity weights).
Bottom row: fully connected weighted graph (point
proximity matrix).
This won't work in practice: the components of the vectors
are ordered using ground-truth correspondences,
and there are no size differences.
104Algebraic graph theory (PAMI 2005)
- Use symmetric polynomials to construct
permutation invariants from spectral matrix
105Spectral Representation
- Compute the Laplacian matrix L = D - A, where A is the
adjacency matrix and D is the diagonal matrix of
node degrees.
- Perform a spectral decomposition of the Laplacian
matrix.
- Construct the spectral matrix (see the sketch below).
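A reconstruction (my notation) of the decomposition and spectral matrix named in the bullets; the scaling of the eigenvectors by the square roots of the eigenvalues follows the PAMI 2005 construction referred to above.

```latex
L = D - A = \Phi \Lambda \Phi^{T} = \sum_{i=1}^{|V|} \lambda_i\, \phi_i \phi_i^{T},
\qquad
\hat{\Phi} = \big(\sqrt{\lambda_1}\,\phi_1 \;\big|\; \sqrt{\lambda_2}\,\phi_2
             \;\big|\; \cdots \;\big|\; \sqrt{\lambda_{|V|}}\,\phi_{|V|}\big)
```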
106Properties of the Laplacian
- The eigenvalues are non-negative and the smallest
eigenvalue is zero.
- The multiplicity of the zero eigenvalue is the number of
connected components of the graph.
- The zero eigenvalue is associated with the all-ones
vector.
- The eigenvector associated with the second-smallest
eigenvalue is the Fiedler vector.
- The Fiedler vector can be used to cluster the nodes
of the graph by recursive bisection.
107Eigenvalue spectrum
The vector of ordered eigenvalues is permutation
invariant.
108Eigenvalues are invariant to permutations of the
Laplacian.
- ...we would like to construct a family of permutation
invariants from the full spectral matrix.
109Why?
- According to perturbation analysis, eigenvalues
are relatively stable to noise.
- Eigenvectors are not stable to noise and undergo
large rotations for small additions of noise.
110Symmetric polynomials on spectral matrix
- Symmetric polynomials
- Power symmetric polynomials
- Newton-Girard formula
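For reference, the standard definitions behind these bullets, written for arguments v_1, ..., v_n (e.g. the entries of a spectral-matrix column):

```latex
e_r(v_1,\dots,v_n) = \sum_{i_1 < i_2 < \dots < i_r} v_{i_1} v_{i_2} \cdots v_{i_r}
\quad\text{(elementary symmetric polynomials)} \\[6pt]
p_r(v_1,\dots,v_n) = \sum_{i=1}^{n} v_i^{\,r}
\quad\text{(power-sum symmetric polynomials)} \\[6pt]
e_r = \frac{1}{r}\sum_{k=1}^{r} (-1)^{k+1}\, e_{r-k}\, p_k, \qquad e_0 = 1
\quad\text{(Newton--Girard recurrence)}
```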
111Spectral Feature Vector
- Construct a matrix F of permutation invariants by
applying the symmetric polynomials to the elements in the
columns of the spectral matrix. Use an entropy
measure to flatten the distribution.
- Stack the columns of F to form a long-vector B.
- A set of graphs is then represented by a data matrix.
112...extend to weighted attributed graphs.
113Complex Representation
- Encode attributes as complex numbers.
- Off-diagonal elements: edge weights (W) as
modulus and normalised attributes as phase (y).
- Diagonal elements encode node attributes (x) and
ensure that H is positive semi-definite.
114Spectral analysis
- Perform a spectral analysis on H: real eigenvalues
and complex eigenvectors.
- Construct a spectral matrix of scaled complex
eigenvectors.
- Complex Laplacian.
115Manifold learning methods
- ISOMAP: construct a neighbourhood graph, compute pairwise
geodesic distances between data-points, and obtain a
low-distortion embedding by applying MDS to the weighted
graph (Tenenbaum).
- Locally linear embedding: apply a variant of PCA to the
data (Roweis and Saul).
- Locally linear projection: use inter-point
distances to compute a weighted covariance matrix,
and apply PCA (He and Niyogi).
116Pattern Spaces
- PCA: project long-vectors onto the leading
eigenvectors of the covariance matrix.
- MDS: embed graphs in a low-dimensional space
spanned by the eigenvectors of the distance matrix
(a sketch of classical MDS follows below).
- LLP: locally linear projection (Niyogi) - perform an
eigenvector analysis on a weighted covariance
matrix (a PCA/MDS hybrid).
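A minimal classical (metric) MDS sketch for the second bullet, written in numpy; the pairwise distance matrix would come from the graph feature vectors or edit distances, and the function name is mine.

```python
import numpy as np

def classical_mds(Dist, dim=2):
    """Embed points with pairwise distance matrix Dist into `dim` dimensions."""
    n = Dist.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centring matrix
    B = -0.5 * J @ (Dist ** 2) @ J            # double-centred squared distances
    evals, evecs = np.linalg.eigh(B)
    idx = np.argsort(evals)[::-1][:dim]       # keep the largest eigenvalues
    return evecs[:, idx] * np.sqrt(np.maximum(evals[idx], 0.0))
```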
117Separation under structural error
Mahalanobis distance between the feature vectors of a
noise-corrupted graph and the remaining graphs.
Distance between a graph and its edge-edited variants.
Distance between a graph and random graphs of the same
size and edge density.
118Distribution of spectral features
119Variation under structural error (MDS)
MDS applied to Mahalanobis distances between
feature vectors.
120CMU Sequence
121MOVI Sequence
122YORK Sequence
123Comparison
Comparison of the embeddings (PCA, MDS, LLP) for each
feature type: Laplacian eigenvalues, adjacency-matrix
polynomials and Laplacian polynomials.
124Visualisation (LLP, Laplacian polynomials)
125View Trajectories (MOVI)
Adjacency-matrix polynomials (top) versus
Laplacian polynomials (bottom); left to right:
PCA, MDS, LLP.
126View Trajectories (Chalet)
Adjacency-matrix polynomials (top) versus
Laplacian polynomials (bottom); left to right:
PCA, MDS, LLP.
127Shock graphs
Type 1 shock(monotonically increasing radius)
Type 2 shock(minimum radius)
Type 3 shock(constant radius)
Type 4 shock(maximum radius)
128Shock Graph Attributes
- Nodes are skeletal branches. Edges indicate the
existence of a junction between a pair of
branches.
- Edge attributes are the angles between branches.
- The node attribute is the average rate of change of
boundary length with skeleton length along a
branch. This is related to the rate of change of the
bitangent radius with skeleton distance.
129MDS
130PCA
131LDA
132Geometric characterisation of graphs
133Idea
- Kernel PCA applied to the heat kernel of the graph.
- Embed the nodes of the graph into a vector space using
the kernel mapping.
- Characterise the graph using the geometry of the
kernel embedding.
134Characterisations
- Heat-kernel eigenvalues (eigenvalues of the
covariance matrix for the embedded node
coordinates).
- Moments of the point-set distribution.
- Sectional curvatures of the edges associated with the
embedding.
135Graph spectra
- ...some introductory material
136Normalised Laplacian and its Spectrum
- Adjacency matrix
- Degree matrix
- Normalised Laplacian
137Normalised Laplacian and its Spectrum
- Spectral Decomposition of Laplacian
- Element-wise
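A reconstruction (my notation) of the decomposition referred to by these bullets:

```latex
\hat{L} = D^{-1/2} (D - A)\, D^{-1/2} = \Phi \Lambda \Phi^{T},
\qquad
\hat{L}(u,v) = \sum_{i=1}^{|V|} \lambda_i\, \phi_i(u)\, \phi_i(v)
```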
138Heat Kernel Trace
Plot: the heat-kernel trace as a function of time t.
139Moments of the heat-kernel trace
- ...can we characterise a graph by the shape of its
heat-kernel trace function?
140Zeta function
- Definition of zeta function
141Zeta function and heat-kernel moments
- Mellin transform
- Trace and number of connected components
- Zeta function
C is the multiplicity of the zero eigenvalue, i.e. the
number of connected components of the graph.
The zeta function is related to the moments of the
heat-kernel trace (see the sketch below).
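A reconstruction (my notation) of the relationships named above, with Tr[h_t] = sum_i exp(-lambda_i t) the heat-kernel trace and C the number of zero eigenvalues:

```latex
\zeta(s) = \sum_{\lambda_i \neq 0} \lambda_i^{-s},
\qquad
\lambda^{-s} = \frac{1}{\Gamma(s)} \int_0^{\infty} t^{\,s-1} e^{-\lambda t}\, dt
\quad\Longrightarrow\quad
\zeta(s) = \frac{1}{\Gamma(s)} \int_0^{\infty} t^{\,s-1}
           \big(\operatorname{Tr}[h_t] - C\big)\, dt
```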
142Heat kernel moments as a function of view
143Clusters
144Spectral Clustering
145Zeta derivative and Laplacian determinant
- Zeta function in terms of natural exponential
- Derivative
- Derivative at origin
- Torsion
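The derivative bullets can be written out as follows (my notation); the consequence is that exp(-zeta'(0)) equals the product of the non-zero Laplacian eigenvalues, i.e. a pseudo-determinant of the Laplacian:

```latex
\zeta(s) = \sum_{\lambda_i \neq 0} e^{-s \ln \lambda_i},
\qquad
\zeta'(s) = -\sum_{\lambda_i \neq 0} (\ln \lambda_i)\, e^{-s \ln \lambda_i},
\qquad
\zeta'(0) = -\ln\!\!\prod_{\lambda_i \neq 0}\!\! \lambda_i
```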
146...works quite well as a feature
147Heat-kernels and random walks
148 Heat Kernels
- Solution of the heat equation; measures
information flow across the edges of the graph with time.
- The solution is found by exponentiating the Laplacian
eigensystem.
149 Heat kernel is distribution of path lengths
- In terms of number of paths of length
k from node u to node v - Geodesic distance
150Green's function
- Definition
- Spectral representation
- Meaning: pseudo-inverse of the Laplacian
151Commute Time
- Commute time
- Hitting time and the Green's function
- Commute time and the Laplacian eigen-spectrum (see the
sketch below)
- For a regular graph
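One standard statement of the quantities listed above, in terms of the normalised-Laplacian eigenpairs (lambda_i, phi_i), the node degrees d_u and the graph volume vol = sum_u d_u; this is my notation and may differ in constants from the slides.

```latex
G(u,v) = \sum_{i:\,\lambda_i \neq 0} \frac{1}{\lambda_i}\, \phi_i(u)\, \phi_i(v)
\quad\text{(Green's function)} \\[6pt]
CT(u,v) = \mathrm{vol} \sum_{i:\,\lambda_i \neq 0} \frac{1}{\lambda_i}
          \left( \frac{\phi_i(u)}{\sqrt{d_u}} - \frac{\phi_i(v)}{\sqrt{d_v}} \right)^{2}
\quad\text{(commute time)}
```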
152Heat kernel and random walk
- State vector of continuous time random walk
satisfies the differential equation - Solution
153(No Transcript)
154Kernel Mapping
- The heat kernel h_t is a Gram matrix for the points
embedded in the manifold.
- It determines the point positions up to isometry.
- The coordinate matrix Y is given by the Young-Householder
decomposition.
- The kernel mapping maps each graph vertex to the
corresponding column of Y.
- Each vertex is a point in a |V|-dimensional space
(a sketch follows below).
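A sketch of the kernel mapping described above: exponentiate the normalised Laplacian to obtain the heat kernel h_t, then recover node coordinates Y from a Young-Householder decomposition h_t = Y^T Y. Variable names are illustrative, and the graph is assumed to have no isolated nodes.

```python
import numpy as np

def heat_kernel_embedding(A, t=1.0):
    """Return the heat kernel h_t and node coordinates Y with h_t = Y^T Y."""
    d = A.sum(axis=1)                              # node degrees (assumed > 0)
    Dm12 = np.diag(1.0 / np.sqrt(d))
    L_hat = np.eye(len(d)) - Dm12 @ A @ Dm12       # normalised Laplacian
    evals, evecs = np.linalg.eigh(L_hat)
    H = evecs @ np.diag(np.exp(-t * evals)) @ evecs.T   # heat kernel h_t
    # Young-Householder: Y = exp(-t Lambda / 2) Phi^T, so that Y^T Y = h_t.
    Y = np.diag(np.exp(-0.5 * t * evals)) @ evecs.T
    return H, Y                                    # columns of Y are the embedded nodes
```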
155Covariance structure
- Uses the covariance matrix of the point-set from
the kernel mapping.
- Eigenvalues of the covariance matrix.
- Characterise the graph using the vector of eigenvalues.
156Kernel Mapping
- Euclidean distance related to kernel
157Graph clustering COIL database
158Stability of mapping
- Projection onto the eigenspace spanned by the leading
two heat-kernel eigenvectors.
- Ellipsoids fitted to the points closest to the reference-graph
nodes (the reference graph is the largest in the
set).
159Empirical observations
- Points corresponding to nodes are relatively stable.
- Could construct a generative model that describes the
point distribution associated with the nodes.
- Use either a mixture of Gaussians or a linear
point distribution model.
160Linear generative model
Establish correspondences between the reference graph
(the largest in the set) and each remaining graph in the
training set.
- Mean embedded point position
- Covariance matrix
- Eigendecomposition
- Projection of the graph onto the eigenspace
161Example of projection
162Sectional curvature
- We have the geodesic distance from the properties of
the random walk and the Euclidean distance from the
embedding, so could we compute curvatures?
163Approach
- Idea: the nodes of the graph reside on a manifold in a
low-dimensional space.
- Edges are geodesics on the manifold.
- The sectional curvature of an edge depends on the
difference between its geodesic and Euclidean
lengths.
- Characterise the graph using the histogram of
sectional curvatures for the edges.
164Idea
165Spectral Geometry
- Spectral geometry: characterise the differential
geometry of a manifold using the information flow
dictated by the heat equation on the manifold; the solution
is provided by the eigenspectrum of the Laplace-Beltrami
operator (Yau, Gilkey).
166Spectral Geometry of the Laplacian of Manifold
- Heat kernel trace
- Polynomial co-efficients
- Volume of manifold
- Gauss curvature
- Ricci curvature
167Way forward
...too hard. Computing the coefficients is time-consuming;
theoretical physicists involved in
brane theory are struggling to go beyond the
fifth term in the series. Instead we use the
heat kernel to estimate the Euclidean and
geodesic distances of an implicit embedding, and
make numerical estimates of the geodesic
distances associated with edges.
168Approximating sectional curvature
Euclidean distance
Geodesic distance
169Sectional Curvature
- Maclaurin series of the Euclidean distance
- Substitute from the geodesic distance
- Approximate sectional curvature
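One way to arrive at an approximation of this kind (my derivation, assuming locally constant curvature kappa; the constants may differ from the slides): expand the Euclidean chord length d_E of a geodesic of length d_G as a Maclaurin series and solve for kappa.

```latex
d_E = \frac{2}{\sqrt{\kappa}} \sin\!\Big(\frac{\sqrt{\kappa}}{2}\, d_G\Big)
    \;\approx\; d_G - \frac{\kappa}{24}\, d_G^{3} + \cdots
\quad\Longrightarrow\quad
\kappa(u,v) \;\approx\; \frac{24\,\big(d_G(u,v) - d_E(u,v)\big)}{d_G(u,v)^{3}}
```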
170Sectional Curvature for Graph Clustering
- Histograms: construct a normalised histogram of the sectional
curvatures over the edge-set of the graph.
- Use the normalised bin contents as the components
of a feature vector. Apply PCA to the vectors for a
set of graphs.
171Histograms of sectional curvature
172Clustering Sectional Curvature Histograms
173Clustering using the Laplacian spectrum
174Clustering using geodesic distance histograms
175(No Transcript)
176Conclusions
- We have shown how spectral features can be used to
construct pattern spaces for sets of graphs.
- We have shown how the heat-kernel embedding can be used to
characterise graphs in a geometric manner, and how the
characterisation can be used for clustering.
- Future plans revolve around using the embedding to
construct a generative model of graph structure.