The most common likelihood principle: the gaussian case
G. Bonvicini, Wayne State University

1
The most common likelihood principle: the
gaussian case. G. Bonvicini, Wayne State University
  • With many thanks to Louis
  • Talk based on a paper to be published in NIM A
  • For once, this is a talk where data
    configurations can be considered as smoothly
    distributed

2
The principle
  • P(a|L) = P(L|a)/G
  • (L → f(L) is a possibility), where
  • L is the likelihood, L(y|a)
  • a are the fit parameters
  • G is the generalized goodness-of-fit parameter,
    and equals P(L)
  • Defined in function space (typically not a
    Hilbert space)
  • Clearly on a collision course with the maximum
    likelihood principle
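As a toy illustration of the principle (my own sketch, not the paper's implementation), one can discretize the fit parameter a on a grid, evaluate P(L|a) for each candidate, and normalize by G = P(L). The Gaussian form of P(L|a), the flat prior, and the scalar "observed" likelihood summary are all assumptions made for this example:

```python
import math

def gauss(x, mu, sigma):
    """Unit-normalized Gaussian density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Hypothetical setup: a grid of candidate fit parameters with a flat prior.
a_grid = [i * 0.1 for i in range(-50, 51)]
prior = 1.0 / len(a_grid)
observed = 0.7  # assumed observed likelihood summary

p_L_given_a = [gauss(observed, a, 1.0) for a in a_grid]
G = sum(p * prior for p in p_L_given_a)   # generalized GOF parameter, P(L)
posterior = [p * prior / G for p in p_L_given_a]

print(round(sum(posterior), 6))  # normalizes to 1 by construction
```

Dividing by G is what distinguishes this from plain maximum likelihood: a data set whose overall P(L) is poor is penalized even if some parameter value maximizes L.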

3
History and method
  • Based on the likelihood theorem (Birnbaum 1963),
    and wishing to use all moments of the likelihood
    in the definition of G. The unbinned likelihood
    is avoided due to known problems there - only
    binned data are used.
  • In practice one chooses a suitable set of
    moments or derivatives and develops the
    likelihood L → (χ² = -2lnLmax, m, σ, ...) (more on
    this later). I choose derivatives and also develop
    f(L) = lnL.
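The "set of moments" step can be sketched as a small helper that condenses the per-bin pulls of a binned fit into a (χ², m, s) summary. This is my own minimal reading of the slide, not the paper's code:

```python
import math

# Sketch (assumption, not the paper's implementation): summarize a binned
# fit by a few likelihood moments, as in the mapping L -> (chi2, m, s).
def likelihood_summary(pulls):
    """Return (chi-square, mean pull, pull spread) for a list of pulls."""
    n = len(pulls)
    chi2 = sum(p ** 2 for p in pulls)
    m = sum(pulls) / n
    s = math.sqrt(sum((p - m) ** 2 for p in pulls) / n)
    return chi2, m, s

print(likelihood_summary([1.0, -1.0, 2.0, 0.0]))  # (6.0, 0.5, ~1.118)
```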

4
Example: all these four distributions fit to the same
(χ², m, σ) with Lu (J. Heinrich, PHYSTAT2003)
5
Properties of the method
  • The method works if strong correlation is
    observed at higher moments/derivatives
  • The plots correlating two different moments tend
    to share a broadly common shape

6
Chi-square versus δ = m - a
  • Under the unbiased hypothesis, one finds that
    P(χ²|δ = 0) peaks at χ² ≈ Nbins, and
  • Very low χ² values correspond to large δ and/or
    large pulls
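The first point above can be checked numerically. In this sketch (an assumed setup, not taken from the paper) the pulls of an unbiased fit are unit Gaussians, so χ² = Σ rᵢ² concentrates near the number of bins:

```python
import random
import statistics

def chi2_unbiased(n_bins, rng):
    """Chi-square of an unbiased fit: sum of squared unit-Gaussian pulls."""
    return sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(n_bins))

rng = random.Random(0)
n_bins = 20
samples = [chi2_unbiased(n_bins, rng) for _ in range(20000)]
print(statistics.fmean(samples))  # concentrates near n_bins
```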

7
Sigma versus δ = m - a
  • For Poissonian and Gaussian statistics the
    distributions P(m, σ|a) do not merge smoothly into
    one another

8
The method in the (m, σ) plane
  • A mapping takes place from the original
    likelihood (m, σ) to the highest-probability ones.
    The longer the distance traveled, the worse the
    goodness of fit

9
Likelihood with Gaussian statistics (with
undergraduate Brandon Willard)
  • -2lnL = Σi ((yi - fi(x,a))/σi)²
  • with the substitutions
  • ri = (yi - fi(x,a))/σi
  • f(n)i → f(n)i/σi
  • δ = m - a
  • and
  • P(a) = Gaussian with E(a) = 0 and D(a) = Σi ai²
  • P(a,b) = correlated Gaussian with
    ρ²(a,b) = (Σi aibi)²/(D(a)D(b))
  • -2lnL = Σ ri² - 2δ Σ f′i ri + δ² Σ (f′i² - f″i ri)
    + δ³ Σ (3 f′i f″i - f‴i ri)/3 + ...
    ≡ χ² - 2δa + δ²b + δ³c + ...
  • To second order, P(L|a) ≈ P(a, b|a)
    = P(χ² - 2δa + δ²b, a, b|a)
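The δ-expansion above can be checked numerically. The coefficient sums below follow my reading of the slide's formulas (a = Σ f′ᵢrᵢ, b = Σ (f′ᵢ² - f″ᵢrᵢ), c = Σ (3f′ᵢf″ᵢ - f‴ᵢrᵢ)/3); the arrays f1, f2, f3 stand for the scaled derivatives f⁽ⁿ⁾ᵢ/σᵢ and are hypothetical inputs:

```python
# Sketch of the delta-expansion of -2 ln L (my reconstruction, not the
# paper's code). r holds the pulls; f1, f2, f3 the scaled derivatives.
def expansion_coeffs(r, f1, f2, f3):
    a_ = sum(fi * ri for fi, ri in zip(f1, r))
    b_ = sum(fi ** 2 - gi * ri for fi, gi, ri in zip(f1, f2, r))
    c_ = sum(3.0 * fi * gi - hi * ri
             for fi, gi, hi, ri in zip(f1, f2, f3, r)) / 3.0
    return a_, b_, c_

def minus_two_lnL(r, f1, f2, f3, delta):
    chi2 = sum(ri ** 2 for ri in r)       # zeroth-order term
    a_, b_, c_ = expansion_coeffs(r, f1, f2, f3)
    return chi2 - 2.0 * delta * a_ + delta ** 2 * b_ + delta ** 3 * c_

print(minus_two_lnL([1.0, -0.5, 0.2], [0.3, 0.4, 0.1],
                    [0.0] * 3, [0.0] * 3, 0.7))
```

For a model linear in the parameter (f2 = f3 = 0) the truncated series reproduces Σ (rᵢ - δ f′ᵢ)² exactly, which gives a cheap consistency check.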

10
A basic point about robustness: correlation to the
pull of the quantity you are trying to maximize
is paramount
11
Conclusions
  • A new method to fit data is being developed. It
    seeks to overcome the limitations of the
    likelihood in order to improve robustness. It
    provides a much stronger, generalized GOF
    parameter. Smaller corrections to peak locations
    and parameter uncertainties are also expected