1
Confidence Intervals: Lecture 2
  • First ICFA Instrumentation School/Workshop
  • At Morelia, Mexico, November 18-29, 2002
  • Harrison B. Prosper
  • Florida State University

2
Recap of Lecture 1
  • To interpret a confidence level as a relative
    frequency requires the concept of a set of
    ensembles of experiments.
  • Each ensemble has a coverage probability, which
    is simply the fraction of experiments in the
    ensemble with intervals that contain the value of
    the parameter pertaining to that ensemble.
  • The confidence level is the minimum coverage
    probability over the set of ensembles, as
    formalized below.
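
In symbols (a compact restatement of the recap, with D the data of one
experiment and [L(D), U(D)] the interval it reports):

    C(\theta) = P\bigl( L(D) \le \theta \le U(D) \mid \theta \bigr)
        \qquad \text{coverage probability of the ensemble with parameter value } \theta

    \mathrm{CL} = \min_{\theta} C(\theta)
        \qquad \text{confidence level: the minimum coverage over the set of ensembles}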

3
Confidence Interval: General Algorithm
[Figure: interval construction in the plane of parameter space versus
observed count; the construction is sketched below.]
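
A sketch of the algorithm the figure presumably illustrated, assuming the
usual construction for a counting experiment with observed count N:

    \text{For each } \theta \text{ in the parameter space, choose an acceptance set } A(\theta)
    \text{ of counts with } \sum_{n \in A(\theta)} P(n \mid \theta) \ge \mathrm{CL}.

    \text{Report } \{\theta : N \in A(\theta)\} \text{ as the confidence interval.}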
4
Coverage Probabilities
[Figure: coverage probability versus the parameter value for four interval
constructions: Central, Feldman-Cousins, Mode-Centered, and NvN.]
5
What's Wrong With Ensembles?
  • Nothing, if they are objectively real, such as:
  • The people in Morelia between the ages of 16 and
    26
  • Daily temperature data in Morelia during the last
    decade
  • But the ensembles used in data analysis typically
    are not
  • There was only a single instance of Run I of DØ
    and CDF. But to determine confidence levels with
    a frequency interpretation we must embed the two
    experiments into ensembles, that is, we must
    decide what constitutes a repetition of the
    experiments.
  • The problem is that reasonable people sometimes
    disagree about the choice of ensembles, but,
    because the ensembles are not real, there is
    generally no simple way to resolve disagreements.

6
Outline
  • Deductive Reasoning
  • Inductive Reasoning
  • Probability
  • Bayes' Theorem
  • Example
  • Summary

7
Deductive Reasoning
Consider the propositions:
  A: this is a baby
  B: she cries a lot

Major premise: If A is TRUE, then B is TRUE
Minor premise: A is TRUE
Conclusion: Therefore, B is TRUE

Major premise: If A is TRUE, then B is TRUE
Minor premise: B is FALSE
Conclusion: Therefore, A is FALSE

Aristotle, 350 BC
8
Deductive Reasoning - ii
A: this is a baby;  B: she cries a lot

Major premise: If A is TRUE, then B is TRUE
Minor premise: A is FALSE
Conclusion: Therefore, B is ?

Major premise: If A is TRUE, then B is TRUE
Minor premise: B is TRUE
Conclusion: Therefore, A is ?
9
Inductive Reasoning
A: this is a baby;  B: she cries a lot

Major premise: If A is TRUE, then B is TRUE
Minor premise: B is TRUE
Conclusion: Therefore, A is more plausible

Major premise: If A is TRUE, then B is TRUE
Minor premise: A is FALSE
Conclusion: Therefore, B is less plausible

Can "plausible" be made precise?
10
Yes!
Bayes (1763), Laplace (1774), Boole (1854), Jeffreys (1939),
Cox (1946), Polya (1946), Jaynes (1957)

In 1946, the physicist Richard Cox showed that inductive reasoning
follows rules that are isomorphic to those of probability theory.
11
The Rules of Inductive Reasoning: Boolean Algebra and Probability
[The slide's table pairing the rules of Boolean algebra with the
corresponding probability rules did not survive the conversion.]
12
Probability
Conditional probability; a theorem. (The slide's formulas did not
survive the conversion; standard definitions are sketched below.)
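
The standard definitions presumably shown here (I denotes the prior
information):

    0 \le P(A \mid I) \le 1, \qquad P(A \mid I) + P(\bar{A} \mid I) = 1

    \text{conditional probability:}\quad
    P(A \mid B, I) \equiv \frac{P(AB \mid I)}{P(B \mid I)}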
13
Probability - ii
The product rule and Bayes' theorem (sketched below).
These rules, together with Boolean algebra, are the
foundation of Bayesian probability theory.
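
A sketch of the two rules named above:

    \text{Product rule:}\quad
    P(AB \mid I) = P(A \mid B, I)\, P(B \mid I) = P(B \mid A, I)\, P(A \mid I)

    \text{Bayes' theorem (divide through by } P(A \mid I)\text{):}\quad
    P(B \mid A, I) = \frac{P(A \mid B, I)\, P(B \mid I)}{P(A \mid I)}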
14
Bayes' Theorem
We can sum over propositions that are of no interest;
this is called marginalization (see the sketch below).
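
For mutually exclusive, exhaustive propositions B_1, B_2, ...,
marginalization reads:

    P(A \mid I) = \sum_{k} P(A B_k \mid I)
                = \sum_{k} P(A \mid B_k, I)\, P(B_k \mid I)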
15
Bayes' Theorem: Example 1
  • Signal/background discrimination
  • S = Signal
  • B = Background
  • The probability P(S|Data) of an event being a
    signal, given some event Data, can be approximated
    in several ways, for example with a feed-forward
    neural network (see the sketch below).
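
A sketch of the probability in question (standard form): a network
trained to output 1 for signal events and 0 for background events
approximates this function, assuming the training mixture reflects
the prior probabilities P(S) and P(B).

    P(S \mid \mathrm{Data}) =
        \frac{P(\mathrm{Data} \mid S)\, P(S)}
             {P(\mathrm{Data} \mid S)\, P(S) + P(\mathrm{Data} \mid B)\, P(B)}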

16
Bayes' Theorem: Continuous Propositions
posterior ∝ likelihood × prior, where I is the prior
information; parameters of no interest are removed by
marginalization. (A sketch is given below.)
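
In symbols, for a continuous parameter θ and data x:

    p(\theta \mid x, I)
        = \frac{p(x \mid \theta, I)\, p(\theta \mid I)}
               {\int p(x \mid \theta, I)\, p(\theta \mid I)\, d\theta}
    \qquad \text{posterior} = \frac{\text{likelihood} \times \text{prior}}{\text{evidence}}

    \text{marginalization:}\quad
    p(\theta_1 \mid x, I) = \int p(\theta_1, \theta_2 \mid x, I)\, d\theta_2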
17
Bayes' Theorem: Model Comparison
For each model M (for example, the Standard Model or
one of the SUSY models), we integrate over the
parameters of the theory. P(M|x,I) is the probability
of model M given data x. (A sketch is given below.)
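
A sketch of the model-comparison formula, with θ_M the parameters of
model M:

    P(M \mid x, I) = \frac{p(x \mid M, I)\, P(M \mid I)}
                          {\sum_{M'} p(x \mid M', I)\, P(M' \mid I)},
    \qquad
    p(x \mid M, I) = \int p(x \mid \theta_M, M, I)\, p(\theta_M \mid M, I)\, d\theta_M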
18
Bayesian measures of uncertainty
The variance of the posterior density, and the Bayesian
confidence interval, also known as a credible interval.
(The slide's formulas did not survive; standard forms
are sketched below.)
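
Standard forms of these measures (a sketch; β denotes the chosen
probability content):

    \bar{\theta} = \int \theta\, p(\theta \mid x, I)\, d\theta, \qquad
    V = \int (\theta - \bar{\theta})^{2}\, p(\theta \mid x, I)\, d\theta

    \text{credible interval } [\theta_L, \theta_U]:\quad
    \int_{\theta_L}^{\theta_U} p(\theta \mid x, I)\, d\theta = \beta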
19
Example 1: Counting Experiment
  • Experiment: to measure the mean rate λ of UHECRs
    above 10^20 eV per unit solid angle.
  • Assume the probability of N events to be given by
    a Poisson distribution (written out below).
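
Written out (λ is used here for the mean, which appears as a garbled
character in the transcript):

    P(N \mid \lambda, I) = \frac{e^{-\lambda}\, \lambda^{N}}{N!}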

20
Example 1: Counting Experiment - ii
  • Apply Bayes' theorem (sketched below).
  • What should we take for the prior probability
    P(λ|I)?
  • How about P(λ|I) dλ ∝ dλ (a flat prior)?
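
A sketch of the result, assuming the flat prior suggested above:

    p(\lambda \mid N, I)
        = \frac{P(N \mid \lambda, I)\, p(\lambda \mid I)}
               {\int_0^\infty P(N \mid \lambda, I)\, p(\lambda \mid I)\, d\lambda}
        \;\;\overset{p(\lambda \mid I)\,\propto\,1}{=}\;\;
        \frac{e^{-\lambda}\, \lambda^{N}}{N!}

that is, a Gamma(N + 1, 1) density.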

21
But why choose P(λ|I) dλ ∝ dλ?
  • Why not a prior flat in λ², P(λ²|I) dλ²?
  • Or one flat in tan(λ), P(tan λ|I) d(tan λ)?
  • Choosing a prior probability in the absence of
    useful prior information is a difficult and
    controversial problem.
  • The difficulty is to determine the variable in
    terms of which the prior probability density is
    constant (see the change-of-variables sketch below).
  • In practice one considers a class of prior
    probabilities and uses it to assess the
    sensitivity of the results to the choice of
    prior.
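
The change of variables behind the difficulty: a density that is flat
in some function g(λ) is generally not flat in λ.

    p(\lambda \mid I)\, d\lambda = p(g \mid I)\, dg
        \;\Rightarrow\; p(\lambda \mid I) \propto \left| \frac{dg}{d\lambda} \right|
        \quad \text{for a prior flat in } g

    g = \lambda^2 \Rightarrow p(\lambda \mid I) \propto 2\lambda, \qquad
    g = \tan\lambda \Rightarrow p(\lambda \mid I) \propto \sec^2\lambda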

22
Credible Intervals with the Flat Prior P(λ|I) dλ
[Figure: comparison of the Bayesian (flat-prior) credible intervals
with the NvN intervals.]
23
Credible Interval Widths
[Figure: comparison of the widths of the Bayesian and NvN intervals.]
24
Coverage Probabilities of the Credible Intervals
[Figure: coverage probability of the Bayesian (flat-prior) intervals
versus λ.]
Even though these are Bayesian intervals, nothing prevents us from
studying their frequency behavior with respect to some ensemble!
(A simulation sketch is given below.)
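
A minimal simulation sketch of this kind of coverage study (not from
the lecture): with the flat prior, the posterior for λ given N counts
is a Gamma(N + 1, 1) density, so a central credible interval (one
common choice) can be read off its quantiles; the coverage is the
fraction of simulated experiments whose interval contains the true λ.

    # Sketch: frequentist coverage of flat-prior Bayesian central credible
    # intervals for a Poisson mean, estimated by simulation.
    import numpy as np
    from scipy.stats import gamma

    def central_credible_interval(n_obs, cl=0.683):
        """Central credible interval from the flat-prior posterior Gamma(n_obs + 1, 1)."""
        alpha = (1.0 - cl) / 2.0
        post = gamma(n_obs + 1)          # works elementwise if n_obs is an array
        return post.ppf(alpha), post.ppf(1.0 - alpha)

    def coverage(true_lambda, cl=0.683, n_experiments=100_000, seed=1):
        """Fraction of simulated experiments whose interval contains true_lambda."""
        rng = np.random.default_rng(seed)
        counts = rng.poisson(true_lambda, size=n_experiments)
        lo, hi = central_credible_interval(counts, cl)
        return np.mean((lo <= true_lambda) & (true_lambda <= hi))

    for lam in (0.5, 1.0, 3.0, 10.0):
        print(f"lambda = {lam:5.1f}   coverage of 68.3% interval = {coverage(lam):.3f}")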
25
Bayes' Theorem: Measuring a Cross-Section
Model: l is the efficiency times branching fraction
times integrated luminosity.
Data: the observed count.
Prior information: estimates of l and of the background.
Likelihood: sketched below.
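
A minimal sketch of the likelihood described here, assuming (the
slide's own formulas did not survive) the standard counting model in
which the observed count N is Poisson-distributed with mean lσ + b,
b being the background:

    P(N \mid \sigma, l, b, I) = \frac{e^{-(l\sigma + b)}\, (l\sigma + b)^{N}}{N!}

with priors p(l, b | I) encoding the estimates of l and of the
background.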
26
Measuring a Cross-section - ii
Apply Bayes' theorem: posterior ∝ likelihood × prior.
Then marginalize, that is, integrate over the nuisance
parameters (see the sketch below).
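
In the notation assumed above, with l and b the nuisance parameters:

    p(\sigma \mid N, I) \;\propto\; p(\sigma \mid I)
        \int\!\!\int P(N \mid \sigma, l, b, I)\, p(l, b \mid I)\, dl\, db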
27
What Prior?
We can usually write down something sensible for the
luminosity and background priors. But, again, what to
write for the cross-section prior is controversial. In
DØ and CDF, as a matter of convention, we set
P(σ|I) dσ ∝ dσ, a flat prior in the cross-section.
Rule: Always report the prior you have used!
28
Summary
  • Confidence levels are probabilities; as such,
    their interpretation depends on the
    interpretation of probability adopted.
  • If one adopts a frequency interpretation, the
    confidence level is tied to the set of ensembles
    one uses and is a property of that set.
    Typically, however, these ensembles do not
    objectively exist.
  • If one adopts a degree-of-belief interpretation,
    the confidence level is a property of the
    interval calculated. A set of ensembles is not
    required for interpretation. However, the results
    necessarily depend on the choice of prior.