Kevin H Knuth - PowerPoint PPT Presentation

Title: Single-Trial Characterization of Evoked Responses
Author: Dr. Kevin Knuth
Created Date: 2/18/2004 4:11:06 AM
Slides: 46

Transcript and Presenter's Notes

Title: Kevin H Knuth

1
Automating the Processes of Inference and Inquiry
Kevin H. Knuth, University at Albany
2
  • Describing the World

3
States
4
Statements (States of Knowledge)
The powerset of the set of states forms the lattice of statements; statements describe potential states.
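As a concrete sketch (the three-state system and the Python rendering are illustrative additions, not from the slides), the statements are exactly the subsets of the set of states:

```python
from itertools import combinations

def powerset(states):
    """Every subset of the state set is a statement (a state of knowledge)."""
    return [frozenset(c) for r in range(len(states) + 1)
            for c in combinations(sorted(states), r)]

states = {"a", "b", "c"}
statements = powerset(states)
print(len(statements))  # 2^3 = 8 statements, from the absurdity {} to the truism {a, b, c}
```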
5
Implication
(diagram: one statement implies another)
The lattice ordering encodes implication.
6
Inference
Quantify to what degree knowing that the system is in one of three states a, b, c implies knowing that it is in some other set of states.
Inference works backwards.
7
Quantification
8
Quantification
Quantify the partial order: assign real numbers to the elements.
Any quantification must be consistent with the lattice structure. Otherwise, it does not quantify the partial order!
9
Local Consistency
Any general rule must hold for special cases. Look at special cases to constrain the general rule: we enforce local consistency.
(diagram: x ∨ y covering x and y)
This implies that the valuation of the join is determined by the valuations of its parts: v(x ∨ y) = v(x) ⊕ v(y) for some operator ⊕.
10
Associativity of Join ∨
Write the same element two different ways: (x ∨ y) ∨ z = x ∨ (y ∨ z).
This implies that (v(x) ⊕ v(y)) ⊕ v(z) = v(x) ⊕ (v(y) ⊕ v(z)).
11
Associativity of Join ∨
Write the same element two different ways.
This implies the associativity equation for ⊕. The general solution (Aczél) is additivity up to regraduation: v(x ∨ y) = v(x) + v(y) for disjoint x and y.
DERIVATION OF THE SUMMATION AXIOM IN MEASURE THEORY!
(Knuth, 2003, 2009)
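A numerical illustration (an assumption of this transcript, not Knuth's own code) of the Aczél step: any associative combination operator can be regraduated into ordinary addition. Here the operator f(u, v) = u * v is linearized by theta = log:

```python
import math

def f(u, v):
    """An illustrative associative combination operator."""
    return u * v

u, v, w = 0.2, 0.5, 0.8

# Associativity, the constraint inherited from associativity of join:
assert math.isclose(f(f(u, v), w), f(u, f(v, w)))

# Aczel's general solution: a monotonic regraduation theta turns f into a sum,
# theta(f(u, v)) = theta(u) + theta(v). For f = product, theta = log works.
theta = math.log
assert math.isclose(theta(f(u, v)), theta(u) + theta(v))
print("regraduated combination is additive")
```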
12
Valuation
VALUATION: v assigns a real number to each lattice element.
(diagram: x ∨ y covering x and y)
13
General Case
(diagram: x ∨ y covering x and y, with their meet x ∧ y and z the part of y disjoint from x)
14
General Case
(animation step: the special case gives v(x ∨ y) = v(x) + v(z))
15
General Case
(animation step: it also gives v(y) = v(x ∧ y) + v(z))
16
General Case
(animation step: eliminating v(z) yields v(x ∨ y) = v(x) + v(y) - v(x ∧ y))
17
SUM RULE
v(x ∨ y) + v(x ∧ y) = v(x) + v(y)   (symmetric form, self-dual)
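A minimal check of the symmetric sum rule, assuming a hypothetical three-state system with join = union, meet = intersection, and an additive valuation over atoms (the atom weights are illustrative):

```python
import math
from itertools import combinations

# Symmetric sum rule: v(x ∨ y) + v(x ∧ y) = v(x) + v(y),
# checked on every pair of statements in the powerset lattice.
weights = {"a": 0.5, "b": 0.3, "c": 0.2}   # illustrative atom weights

def v(s):
    return sum(weights[e] for e in s)

universe = set(weights)
subsets = [frozenset(c) for r in range(len(universe) + 1)
           for c in combinations(sorted(universe), r)]

for x in subsets:
    for y in subsets:
        assert math.isclose(v(x | y) + v(x & y), v(x) + v(y), abs_tol=1e-12)
print("sum rule holds on all", len(subsets) ** 2, "pairs")
```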
18
Lattice Products
(diagram: direct product of two lattices)
Direct (Cartesian) product of two spaces
19
DIRECT PRODUCT RULE
The lattice product is associative. After the sum rule, the only freedom left is rescaling, which gives v(x × y) = v(x) v(y).
20
Context and Bi-Valuations
BI-VALUATION
Valuation: v(x), where the context i is implicit.
Bi-valuation: w(x | i), the measure of x with respect to context i, with the context explicit.
Bi-valuations generalize lattice inclusion to degrees of inclusion.
21
Context Explicit
Sum Rule: w(x ∨ y | i) = w(x | i) + w(y | i) - w(x ∧ y | i)
Direct Product Rule: w(x × y | i × j) = w(x | i) w(y | j)
22
Associativity of Context

23
CHAIN RULE
(diagram: chain a ≤ b ≤ c)
w(a | c) = w(a | b) w(b | c)
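A small sketch of the chain rule on a chain a ≤ b ≤ c; the nested sets and the size-based bi-valuation are illustrative assumptions:

```python
import math

# Bi-valuation w(x | y) = v(x ∧ y) / v(y), built here from the
# illustrative additive valuation v = set size, on nested sets a ⊆ b ⊆ c.
a = frozenset({1})
b = frozenset({1, 2, 3})
c = frozenset({1, 2, 3, 4, 5, 6})

def w(x, y):
    return len(x & y) / len(y)

# Chain rule for a ≤ b ≤ c: w(a | c) = w(a | b) * w(b | c)
assert math.isclose(w(a, c), w(a, b) * w(b, c))
print(w(a, c))  # 1/6 ≈ 0.1667
```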
24
Extending the Chain Rule
Since x ≤ x and x ≤ x ∨ y, w(x | x) = 1 and w(x ∨ y | x) = 1.
25
Extending the Chain Rule
(diagram: lattice generated by x, y, z, with joins x ∨ y, y ∨ z, and x ∨ y ∨ z)
26
Extending the Chain Rule
(animation step on the same diagram)
27
Extending the Chain Rule
(animation step on the same diagram)
28
Extending the Chain Rule
29
Extending the Chain Rule
(animation step on the same diagram)
30
Constraint Equations
(Knuth, MaxEnt 2009)
Sum Rule: w(x ∨ y | i) = w(x | i) + w(y | i) - w(x ∧ y | i)
Direct Product Rule: w(x × y | i × j) = w(x | i) w(y | j)
Product Rule: w(x ∧ y | i) = w(x | i) w(y | x ∧ i)
31
Commutativity
Commutativity of the meet, x ∧ y = y ∧ x, leads to Bayes' Theorem:
w(x | y ∧ i) = w(x | i) w(y | x ∧ i) / w(y | i)
Bayes' Theorem involves a change of context: from context i to context y ∧ i.
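A minimal Bayes update sketch; the two models and their numbers are hypothetical placeholders, not from the presentation:

```python
prior = {"M1": 0.5, "M2": 0.5}        # P(M | T): initial state of knowledge
likelihood = {"M1": 0.8, "M2": 0.2}   # P(D | M ∧ T): data under each model

evidence = sum(prior[m] * likelihood[m] for m in prior)              # P(D | T)
posterior = {m: prior[m] * likelihood[m] / evidence for m in prior}  # P(M | D ∧ T)
print(posterior)  # {'M1': 0.8, 'M2': 0.2}
```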
32
  • Automated Learning

33
Application to Statements
Applied to the lattice of statements, our bi-valuation quantifies degrees of implication.
M represents a statement about our MODEL.
D represents a statement about our observed DATA.
T is the TRUISM (what we assume to be true).
34
Change of Context Learning
Re-arranging the terms highlights the learning process:
P(M | D ∧ T) = P(M | T) × [ P(D | M ∧ T) / P(D | T) ]
P(M | D ∧ T) is the updated state of knowledge about the MODEL.
P(D | M ∧ T) / P(D | T) is the DATA-dependent term.
P(M | T) is the initial state of knowledge about the MODEL.
35
  • Information Gain

36
Predict a Measurement Value
Predict the measurement value De we would expect to obtain by measuring at some position (xe, ye). We rely on our previous data D and hypothesized model M.
Using the product rule, the prediction marginalizes over models:
P(De | D ∧ T) = Σ_M P(De | M ∧ D ∧ T) P(M | D ∧ T)
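A sketch of the predictive marginalization; the model posterior and the per-model predictive distributions are illustrative placeholders:

```python
# Predict a measurement by averaging each model's prediction,
# weighted by the model posterior:
#   P(De | D) = sum_M P(De | M ∧ D) * P(M | D)
posterior = {"M1": 0.8, "M2": 0.2}                 # P(M | D): model posterior
pred_given_model = {                                # P(De | M ∧ D), De in {black, white}
    "M1": {"black": 0.9, "white": 0.1},
    "M2": {"black": 0.3, "white": 0.7},
}

predictive = {
    de: sum(posterior[m] * pred_given_model[m][de] for m in posterior)
    for de in ("black", "white")
}
print(predictive)  # black ≈ 0.78, white ≈ 0.22
```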
37
Select an Experiment
Probability theory is not sufficient to select an optimal experiment. Instead, we rely on decision theory, choosing the experiment that maximizes expected utility, where U(·) is a utility function.
We use the Shannon information as the utility function.
38
Maximum Information Gain
By writing the joint entropy of the model M and the predicted measurement De two different ways, one can show (Loredo 2003) that maximizing the expected information gain is equivalent to maximizing the entropy of the distribution of predicted measurements.
We choose the experiment that maximizes the entropy of the distribution of predicted measurements. Other cost functions will lead to other results (GOOD FOR ROBOTICS!)
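A sketch of entropy-guided experiment selection; the candidate positions and their predictive distributions are made-up placeholders. Compute the Shannon entropy of each predicted-measurement distribution and measure where it is largest:

```python
import math

# Candidate measurement positions with predicted-measurement distributions
# (illustrative values, not from the robot experiment).
candidates = {
    (0, 0): {"black": 0.95, "white": 0.05},   # nearly certain outcome: low entropy
    (1, 2): {"black": 0.50, "white": 0.50},   # maximally uncertain: high entropy
    (3, 1): {"black": 0.70, "white": 0.30},
}

def entropy(dist):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Measure where the prediction is least certain.
best = max(candidates, key=lambda pos: entropy(candidates[pos]))
print(best)  # (1, 2)
```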
39
Robotic Scientists
This robot is equipped with a light sensor. It is
to locate and characterize a white circle on a
black playing field with as few measurements as
possible.
40
Initial Stage
BLUE: the Inference Engine generates samples from the space of polygons/circles.
COPPER: the Inquiry Engine computes an entropy map of predicted measurement results.
With little data, the hypothesized shapes are extremely varied, and it is good to look just about anywhere.
41
After Several Black Measurements
With several black measurements, the hypothesized
shapes become smaller. Exploration is naturally
focused on unexplored regions
42
After One White Measurement
A positive result naturally focuses exploration
around promising region
43
After Two White Measurements
A second positive result naturally focuses
exploration around the edges
44
After Many Measurements
Edge exploration becomes more pronounced as data
accumulates. This is all handled naturally by the
entropy!
45
Special Thanks to
John Skilling, Janos Aczél, Ariel Caticha, Keith Earle, Philip Goyal, Steve Gull, Jeffrey Jewell, Carlos Rodriguez, Phil Erner, Scott Frasso, Rotem Gutman, Nabin Malakar, A.J. Mesiti