Title: Efficiency of Cuts in the Inverted Analysis
1. Efficiency of Cuts in the Inverted Analysis
Ndirc > 13 (number of direct hits)
Ldirb > 170 (track length in meters)
Smootallphit < 0.250 (smoothness of hits along track)
Medres < 4 (median resolution in degrees)
Likelihood ratio vs. zenith: horizontal events must be > 27.5 and vertical events must be greater than 65.7 (linear function in between)
2. How I calculated the efficiency:
1) Made an N-1 plot of the selected parameter (applied all cuts at my cut level except for the cut on the parameter I am studying).
2) Counted the number of events that passed and failed each cut.
3) Efficiency = events that pass the cut / total number of events.
(Plots will follow the numbers; a sketch of the calculation is given below.)
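A minimal sketch of this N-1 efficiency calculation, using toy arrays and a hypothetical helper (only the cut values from slide 1 are taken from the analysis; nothing here is the actual analysis code):

import numpy as np

def n_minus_one_efficiency(events, cuts, study_param):
    # events: dict of parameter name -> array (one value per event)
    # cuts: dict of parameter name -> function returning a boolean pass mask
    # Build the N-1 sample: apply every cut except the one under study.
    n_minus_one = np.ones(len(events[study_param]), dtype=bool)
    for name, passes in cuts.items():
        if name != study_param:
            n_minus_one &= passes(events[name])
    total = np.count_nonzero(n_minus_one)
    passed = np.count_nonzero(n_minus_one & cuts[study_param](events[study_param]))
    # Efficiency = events that pass the studied cut / total events in the N-1 sample.
    return passed / total if total else float("nan")

# Toy data standing in for the real event sample.
rng = np.random.default_rng(0)
events = {
    "Ndirc": rng.poisson(18, 10000),
    "Ldirb": rng.normal(220.0, 60.0, 10000),
    "Medres": rng.normal(3.0, 1.5, 10000),
}
cuts = {
    "Ndirc": lambda x: x > 13,    # number of direct hits
    "Ldirb": lambda x: x > 170,   # track length in meters
    "Medres": lambda x: x < 4,    # median resolution in degrees
}
print(n_minus_one_efficiency(events, cuts, "Ndirc"))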
7. The next pages contain the N-1 plots for each parameter. Each page contains 4 plots:
Nch > 100
Nch < 100
Nch > 100 with the dCorsika normalized to have the same number of events as the data
Nch < 100 with the dCorsika normalized to have the same number of events as the data
The normalization factor needed is approximately 1.25.
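As a rough sketch, the normalization factor is just the ratio of data to dCorsika event counts in the comparison region (the counts below are placeholders; the slide only quotes the resulting factor of about 1.25):

# Hypothetical event counts in the region used for the comparison.
n_data = 1250.0       # events observed in data (placeholder)
n_dcorsika = 1000.0   # events predicted by dCorsika (placeholder)

# Scale the MC so it contains the same number of events as the data.
norm_factor = n_data / n_dcorsika   # ~1.25 for these placeholder counts
# Every dCorsika histogram entry (or event weight) is then multiplied by norm_factor.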
8-12. INVERTED N-1 plots, one slide per cut parameter. Panels on each slide: Nch < 100, Nch > 100, normalized Nch < 100, normalized Nch > 100.
13. Now, take a look at the comparable plots for the upgoing analysis. (Sorry that the histograms don't have identical binning... I can do them again if critical.)
14-17. UPGOING N-1 plots, one slide per cut parameter. Panels on each slide: Nch < 100 and normalized Nch < 100.
18. What I am working on: if we are cutting on distributions that don't agree, then we are likely to get the normalization for low-Nch events wrong. What would happen to the normalization if we had gotten the Monte Carlo distribution incorrect? Right now, I see two ways to approach this.
19. 1) We could try to shift the MC to match the data. Using different ice models, for instance, could shift the Ndirc into better agreement. We decided this was a bad idea because it would send parameters like Nch out of agreement.
2) We could shift the Monte Carlo cut (but keep the data cut). Then we could see how this changes the overall normalization.
(Plot labels: cut/keep regions, atms cut, data cut.)
20. If the Ndirc peak is off by 20%, you can shift it higher (or shift the cut lower) and see the effect on the normalization. For the MC: 1.2 × Ndirc > 13. This is the same as shifting the cut: Ndirc > 13 / 1.2, i.e. Ndirc > 10.83, which becomes Ndirc > 11 since it is discrete. We can compare what happens to the normalization at low Nch if we pretend that we are working at an entirely different quality cut level. (A sketch of this check follows below.)
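A minimal sketch of this cut-shifting check, using toy Ndirc distributions (only the Ndirc > 13 cut, the 20% scale factor, and the shifted value of 11 come from the slides):

import numpy as np

rng = np.random.default_rng(1)
ndirc_data = rng.poisson(19, 5000)   # toy data Ndirc distribution
ndirc_mc = rng.poisson(16, 5000)     # toy MC Ndirc distribution, peaked lower

nominal_cut = 13   # cut kept on the data (slide 1)
shifted_cut = 11   # 13 / 1.2 = 10.83, quoted as 11 since Ndirc is discrete

n_data = np.count_nonzero(ndirc_data > nominal_cut)

# Shifting the MC cut while keeping the data cut mimics a ~20% shift of the
# MC Ndirc peak; compare the resulting normalizations in the two cases.
norm_nominal = n_data / np.count_nonzero(ndirc_mc > nominal_cut)
norm_shifted = n_data / np.count_nonzero(ndirc_mc > shifted_cut)
print(norm_nominal, norm_shifted)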
21. Ignore the data for a moment and pretend that the Level 7 central Bartol distribution is the truth for atmospheric neutrinos. Count the number of events above and below the Nch cut for other quality levels. Since I work at Level 7, consider Levels 5, 6, 7, 8, and 9.
(Plot legend: Bartol Min, Bartol Central, Bartol Max.)
22. (Plot legend: Signal, Bartol Max, Bartol Central, Bartol Min.)
Assuming the Bartol Central Level 7 is the truth, you can find the low-Nch normalization factor for each scenario: 5 levels × 3 fluxes = 15 scenarios. For each, you can then calculate a normalized number of background and signal events.
Example (Bartol Max, Level 5):
normalization = 533.8 / 912.6 = 0.585
normalized background = 0.585 × 17.9 = 10.5
normalized signal = 0.585 × 82.6 = 48.3
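A minimal sketch of this scenario scan (the only numbers taken from the slide are those in the Bartol Max, Level 5 example; the remaining scenarios would be filled in the same way, and the table layout is hypothetical):

# Each scenario is scaled so that its number of events below the Nch cut
# matches the assumed truth (Bartol Central, Level 7); the same scale is
# then applied to its background and signal predictions above the cut.
truth_low_nch = 533.8   # numerator of the slide's example ratio

# (flux, level) -> (events below Nch cut, background above cut, signal above cut)
scenarios = {
    ("Bartol Max", 5): (912.6, 17.9, 82.6),   # example values from the slide
    # ... the other 14 of the 5 levels x 3 fluxes scenarios go here ...
}

for (flux, level), (low_nch, bgd, sig) in scenarios.items():
    norm = truth_low_nch / low_nch              # 533.8 / 912.6 = 0.585
    print(flux, level, norm * bgd, norm * sig)  # -> 10.5 background, 48.3 signal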
23. This may appear somewhat random, but the pattern is evident on the next slide.
24. Assuming Bartol Central Level 7 is the truth...
(Plot labels: Lv. 5, Lv. 6, Lv. 7, Lv. 8, Lv. 9; Bartol Min, Bartol Central, Bartol Max.)
25. You start with a single prediction of the background and signal for the final sample.
(Plot axes: signal vs. bgd; point shown: Bartol Central.)
26. Uncertainties in the theoretical prediction of the atmospheric neutrino flux lead to a spread in background values predicted in the final sample.
(Plot axes: normalized signal vs. normalized bgd; points shown: Bartol Min, Bartol Central, Bartol Max.)
27. Normalization to low-Nch events: despite the low normalization factor, Bartol Max will still predict the highest normalized background. However, it will predict the lowest signal.
(Plot axes: normalized signal vs. normalized bgd.)
28. Assume there is a non-uniform, energy-dependent scale factor. The signal and background may be shifted by different amounts (shown by the different sizes of the arrows).
(Plot axes: normalized signal vs. normalized bgd.)
29. Cut levels 5, 6, and 7 (the circled region, with Level 7 being the blue line) show similar behavior. Because of the large gap, it seems that the cuts tighten dramatically between Levels 7 and 8. If I wanted to, I could add a cut level in that region. I hope that our distributions (data vs. MC) are not in as large a disagreement as Level 7 MC to Level 9 MC. If the data and MC show a disagreement that is similar to the disagreement between Level 6 MC and Level 7 MC (for instance), then it seems that we can constrain the range of signal and background.
35. Using the 2003 files with modified OM sensitivity, I made this plot of normalized background vs. normalized signal. Everything is normalized assuming that Bartol Central with 100% OM sensitivity is the truth.
(Plot legend: Bartol Min, Bartol Central, Bartol Max; OM sensitivities 70%, 100%, 130%.)
36. Albrecht asked me to check the space angle difference between the true and reconstructed tracks of the muons near the horizon in the inverted analysis. Although my statistics are low (not as good as Newt's), I find that events that pass my final quality cuts (minus the Nch cut) are well reconstructed. The difference between the true angle and the reconstructed angle is usually within 4 to 5 degrees.
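A minimal sketch of the space-angle comparison, with hypothetical true and reconstructed directions (the formula is the standard opening angle between two unit vectors; none of the inputs are the actual reconstruction output):

import numpy as np

def space_angle_deg(zen1, azi1, zen2, azi2):
    # Opening angle between two directions given as zenith/azimuth in radians.
    cos_psi = (np.sin(zen1) * np.sin(zen2) * np.cos(azi1 - azi2)
               + np.cos(zen1) * np.cos(zen2))
    return np.degrees(np.arccos(np.clip(cos_psi, -1.0, 1.0)))

# Toy sample of events near the horizon with a few degrees of angular smearing.
rng = np.random.default_rng(2)
true_zen = np.radians(rng.uniform(80.0, 100.0, 107))
true_azi = rng.uniform(0.0, 2.0 * np.pi, 107)
reco_zen = true_zen + np.radians(rng.normal(0.0, 3.0, 107))
reco_azi = true_azi + np.radians(rng.normal(0.0, 3.0, 107))

psi = space_angle_deg(true_zen, true_azi, reco_zen, reco_azi)
print(np.median(psi))   # the slide quotes differences typically within 4 to 5 degrees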
37. Obviously, my statistics are low. Unweighted, there are 107 events in this plot, but they are weighted up to be comparable in numbers to the 4-year data.