Title: STATISTICAL INFERENCE PART II: SOME PROPERTIES OF ESTIMATORS
1. STATISTICAL INFERENCE PART II: SOME PROPERTIES OF ESTIMATORS
2. SOME PROPERTIES OF ESTIMATORS
- θ: an unknown parameter of interest.
- Previously, we found good(?) estimators for θ or its function g(θ).
- Goal: check how good these estimators are, or whether they are good at all.
- If more than one good estimator is available, which one is better?
3. SOME PROPERTIES OF ESTIMATORS
- UNBIASED ESTIMATOR (UE): An estimator $\hat{\theta}$ is an UE of the unknown parameter θ if
  $E(\hat{\theta}) = \theta$ for all $\theta \in \Omega$.
  Otherwise, it is a biased estimator of θ.
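As a worked check of this definition (a standard result, not shown on the slide), the sample mean is an UE of the population mean μ whenever $E(X_i) = \mu$ exists:

```latex
E(\bar{X}) = E\!\left(\frac{1}{n}\sum_{i=1}^{n} X_i\right)
           = \frac{1}{n}\sum_{i=1}^{n} E(X_i)
           = \frac{1}{n}\, n\mu = \mu
           \qquad\text{for all } \mu,
```

so $Bias(\bar{X}) = E(\bar{X}) - \mu = 0$.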
4. SOME PROPERTIES OF ESTIMATORS
- ASYMPTOTICALLY UNBIASED ESTIMATOR (AUE): An estimator $\hat{\theta}_n$ is an AUE of the unknown parameter θ if
  $\lim_{n \to \infty} E(\hat{\theta}_n) = \theta$.
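A standard illustration (not worked on the slide): for a r.s. with finite variance σ², the uncorrected sample variance is biased but asymptotically unbiased:

```latex
\hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n}(X_i - \bar{X})^2,
\qquad
E(\hat{\sigma}^2) = \frac{n-1}{n}\,\sigma^2 \;\longrightarrow\; \sigma^2
\quad\text{as } n \to \infty,
```

so $\hat{\sigma}^2$ is an AUE, though not an UE, of σ².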
5. SOME PROPERTIES OF ESTIMATORS
- CONSISTENT ESTIMATOR (CE): An estimator $\hat{\theta}_n$ which converges in probability to the unknown parameter θ, i.e., $\hat{\theta}_n \xrightarrow{p} \theta$ for all $\theta \in \Omega$, is called a CE of θ.
- For large n, a CE tends to be close to the unknown population parameter.
6. EXAMPLE
- For a r.s. of size n with finite mean $E(X_i) = \mu$, by the WLLN,
  $\bar{X}_n \xrightarrow{p} \mu$,
  so the sample mean is a CE of μ.
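A quick simulation sketch of this convergence (illustrative only; the exponential distribution and the sample sizes are arbitrary choices, not from the slide):

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 2.0  # true population mean (arbitrary for the demo)

# By the WLLN, the sample mean should concentrate around mu as n grows.
for n in [10, 100, 1_000, 10_000, 100_000]:
    xbar = rng.exponential(scale=mu, size=n).mean()
    print(f"n = {n:6d}   xbar = {xbar:.4f}   |xbar - mu| = {abs(xbar - mu):.4f}")
```

Typical output shows |x̄ − μ| shrinking toward 0 as n grows, which is exactly what consistency promises.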
7. MEAN SQUARED ERROR (MSE)
- The Mean Squared Error (MSE) of an estimator $\hat{\theta}$ for estimating θ is
  $MSE(\hat{\theta}) = E(\hat{\theta} - \theta)^2$.
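The usual decomposition follows by adding and subtracting $E(\hat{\theta})$ inside the square; the cross term has expectation zero:

```latex
MSE(\hat{\theta})
 = E\big[(\hat{\theta} - E(\hat{\theta})) + (E(\hat{\theta}) - \theta)\big]^2
 = Var(\hat{\theta}) + \big[Bias(\hat{\theta})\big]^2 .
```

In particular, for an UE the MSE reduces to the variance.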
8. MEAN SQUARED ERROR CONSISTENCY
- $T_n$ is called mean squared error consistent (or consistent in quadratic mean) if $E(T_n - \theta)^2 \to 0$ as $n \to \infty$.
- Theorem: $T_n$ is consistent in MSE iff
  i) $Var(T_n) \to 0$ as $n \to \infty$, and
  ii) $Bias(T_n) = E(T_n) - \theta \to 0$ as $n \to \infty$.
- If $E(T_n - \theta)^2 \to 0$ as $n \to \infty$, then $T_n$ is also a CE of θ.
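The last claim is a one-line consequence of Markov's inequality applied to $(T_n - \theta)^2$: for any ε > 0,

```latex
P\big(|T_n - \theta| \ge \varepsilon\big)
 \;\le\; \frac{E(T_n - \theta)^2}{\varepsilon^2}
 \;\longrightarrow\; 0 \quad\text{as } n \to \infty,
```

so MSE consistency implies convergence in probability, i.e., $T_n$ is a CE of θ. (The converse does not hold in general.)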
9. EXAMPLES
- X ~ Exp(θ), θ > 0. For a r.s. of size n, consider the estimators of θ given on the slide, and discuss their bias and consistency.
- Which estimator is better?
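The particular estimators on the original slide are not recoverable here. A typical pair for this exercise (an assumption for illustration, using the mean-θ parameterization E(X) = θ) is $\hat{\theta}_1 = \bar{X}$ and $\hat{\theta}_2 = nX_{(1)}$:

```latex
E(\bar{X}) = \theta,\qquad
Var(\bar{X}) = \frac{\theta^2}{n} \to 0
\;\Rightarrow\; \bar{X}\ \text{is an UE and MSE consistent (hence a CE);}
\\[4pt]
X_{(1)} \sim Exp(\theta/n)
\;\Rightarrow\;
E\big(nX_{(1)}\big) = \theta,\qquad
Var\big(nX_{(1)}\big) = \theta^2 \not\to 0 .
```

In fact $nX_{(1)} \sim Exp(\theta)$ exactly for every n, so it never concentrates around θ; under these assumptions, $\bar{X}$ is the better estimator.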
10. SUFFICIENT STATISTICS
- $Y = u(X_1, X_2, \dots, X_n)$ is a statistic.
- A sufficient statistic, Y, is a statistic which contains all the information for the estimation of θ.
11. SUFFICIENT STATISTICS
- Given the value of Y, the sample contains no further information for the estimation of θ.
- Y is a sufficient statistic (ss) for θ if the conditional distribution $h(x_1, x_2, \dots, x_n \mid y)$ does not depend on θ for every given $Y = y$.
- If Y is a ss for θ, then any 1-1 transformation of Y, say $Y_1 = g(Y)$, is also a ss for θ.
12. SUFFICIENT STATISTICS
- The conditional distribution of the sample rvs given the value y of Y is defined as
  $h(x_1, \dots, x_n \mid y) = \dfrac{f(x_1, \dots, x_n; \theta)}{f_Y(y; \theta)}$.
- If this ratio does not depend on θ for every given y (it may involve y or constants), then Y is a ss for θ.
- Also, the conditional range of the $X_i$ given y must not depend on θ.
13. SUFFICIENT STATISTICS
- EXAMPLE: X ~ Ber(p). For a r.s. of size n, show that $Y = \sum_{i=1}^{n} X_i$ is a ss for p.
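A worked sketch using the conditional-distribution definition (with $Y = \sum X_i$, the standard statistic for this example): $Y \sim Bin(n, p)$, and for any $(x_1, \dots, x_n)$ with $\sum x_i = y$,

```latex
h(x_1, \dots, x_n \mid y)
 = \frac{p^{\sum x_i}(1-p)^{n - \sum x_i}}
        {\binom{n}{y}\, p^{y}(1-p)^{n-y}}
 = \frac{1}{\binom{n}{y}},
```

which is free of p for every given y, so Y is a ss for p.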
14. SUFFICIENT STATISTICS
- Neyman's Factorization Theorem: Y is a ss for θ iff the likelihood function factors as
  $L(\theta) = f(x_1, \dots, x_n; \theta) = k_1(y; \theta)\, k_2(x_1, \dots, x_n)$,
  where $k_1$ and $k_2$ are non-negative functions, $k_1$ depends on the $x_i$ only through y, and $k_2$ does not depend on θ (also in the range of the $x_i$).
15. EXAMPLES
- 1. X ~ Ber(p). For a r.s. of size n, find a ss for p if one exists.
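A sketch via the factorization theorem:

```latex
L(p) = \prod_{i=1}^{n} p^{x_i}(1-p)^{1-x_i}
     = \underbrace{p^{y}(1-p)^{n-y}}_{k_1(y;\,p)}
       \cdot \underbrace{1}_{k_2(x_1, \dots, x_n)},
\qquad y = \sum_{i=1}^{n} x_i,
```

so $Y = \sum X_i$ is a ss for p, in agreement with the conditional-distribution argument above.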
16. EXAMPLES
- 2. X ~ Beta(θ, 2). For a r.s. of size n, find a ss for θ.
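A sketch of the factorization, using the Beta(θ, 2) density $f(x; \theta) = \theta(\theta + 1)\, x^{\theta - 1}(1 - x)$ for 0 < x < 1:

```latex
L(\theta) = \prod_{i=1}^{n} \theta(\theta+1)\, x_i^{\theta-1}(1-x_i)
 = \underbrace{\big[\theta(\theta+1)\big]^{n}
      \Big(\prod_{i=1}^{n} x_i\Big)^{\theta-1}}_{k_1\left(\prod x_i;\,\theta\right)}
   \cdot
   \underbrace{\prod_{i=1}^{n} (1-x_i)}_{k_2(x_1, \dots, x_n)},
```

so $Y = \prod X_i$ (equivalently $\sum \ln X_i$) is a ss for θ.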
17. SUFFICIENT STATISTICS
- A ss may not exist.
- Jointly ss $(Y_1, Y_2, \dots, Y_k)$ may be needed. Example: Example 10.2.5 in Bain and Engelhardt (page 342 in 2nd edition): X(1) and X(n) are jointly ss for θ.
- If the MLE of θ exists and is unique, and if a ss for θ exists, then the MLE is a function of the ss for θ.
18. EXAMPLE
- X ~ N(μ, σ²). For a r.s. of size n, find jointly ss for μ and σ².
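A sketch via factorization (a standard result):

```latex
L(\mu, \sigma^2)
 = (2\pi\sigma^2)^{-n/2}
   \exp\!\left\{-\frac{1}{2\sigma^2}\sum_{i=1}^{n}(x_i - \mu)^2\right\}
 = (2\pi\sigma^2)^{-n/2}
   \exp\!\left\{-\frac{\sum x_i^2 - 2\mu \sum x_i + n\mu^2}{2\sigma^2}\right\},
```

which depends on the data only through $\left(\sum x_i, \sum x_i^2\right)$; hence $\left(\sum X_i, \sum X_i^2\right)$, or equivalently $(\bar{X}, S^2)$, are jointly ss for (μ, σ²).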
19. MINIMAL SUFFICIENT STATISTICS
- If $Y = u(X_1, \dots, X_n)$ is a ss for θ, then the whole sample $(X_1, X_2, \dots, X_n)$ is trivially also a ss for θ. But the first one does a better job of data reduction. A minimal ss achieves the greatest possible reduction.
20. MINIMAL SUFFICIENT STATISTICS
- A ss $T(X)$ is called a minimal ss if, for any other ss $T'(X)$, $T(x)$ is a function of $T'(x)$.
- THEOREM: Let $f(x; \theta)$ be the pmf or pdf of a sample $X_1, X_2, \dots, X_n$. Suppose there exists a function $T(x)$ such that, for any two sample points $(x_1, \dots, x_n)$ and $(y_1, \dots, y_n)$, the ratio $f(x; \theta)/f(y; \theta)$ is constant with respect to θ iff $T(x) = T(y)$. Then $T(X)$ is a minimal sufficient statistic for θ.
21. EXAMPLE
- X ~ N(μ, σ²) where σ² is known. For a r.s. of size n, find a minimal ss for μ.
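A sketch of the ratio criterion for this example (σ² known):

```latex
\frac{f(\mathbf{x}; \mu)}{f(\mathbf{y}; \mu)}
 = \exp\!\left\{-\frac{\sum x_i^2 - \sum y_i^2}{2\sigma^2}
      + \frac{\mu}{\sigma^2}\Big(\sum x_i - \sum y_i\Big)\right\},
```

which is constant in μ iff $\sum x_i = \sum y_i$; so $T(X) = \sum X_i$ (equivalently $\bar{X}$) is a minimal ss for μ.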
Note: A minimal ss is also not unique; any 1-to-1 function of a minimal ss is again a minimal ss.
22. RAO-BLACKWELL THEOREM
- Let $X_1, X_2, \dots, X_n$ have joint pdf or pmf $f(x_1, x_2, \dots, x_n; \theta)$ and let $S = (S_1, S_2, \dots, S_k)$ be a vector of jointly ss for θ. If T is an UE of $\tau(\theta)$ and $\phi(T) = E(T \mid S)$, then
- $\phi(T)$ is an UE of $\tau(\theta)$;
- $\phi(T)$ is a function of S, so it is itself a statistic based on the jointly ss for θ (sufficiency of S guarantees it does not involve θ);
- $Var(\phi(T)) \le Var(T)$ for all $\theta \in \Omega$.
- $\phi(T)$ is a uniformly better unbiased estimator of $\tau(\theta)$.
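The variance claim is the law of total variance in disguise:

```latex
Var(T) = E\big[Var(T \mid S)\big] + Var\big(E(T \mid S)\big)
       \;\ge\; Var\big(E(T \mid S)\big) = Var\big(\phi(T)\big),
```

with equality only when T is already a function of S.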
23. RAO-BLACKWELL THEOREM
- Notes:
- $\phi(T) = E(T \mid S)$ is at least as good as T.
- For finding the best UE, it is enough to consider UEs that are functions of a ss, because all such estimators are at least as good as the remaining UEs.
24. Example
- Hogg & Craig, Exercise 10.10
- $X_1, X_2 \sim Exp(\theta)$.
- Find the joint p.d.f. of the ss $Y_1 = X_1 + X_2$ for θ and $Y_2 = X_2$.
- Show that $Y_2$ is an UE of θ with variance θ².
- Find $\phi(y_1) = E(Y_2 \mid Y_1)$ and the variance of $\phi(Y_1)$.
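A solution sketch, assuming the mean-θ parameterization $f(x; \theta) = \theta^{-1} e^{-x/\theta}$, x > 0:

```latex
g(y_1, y_2; \theta) = \theta^{-2} e^{-y_1/\theta}, \quad 0 < y_2 < y_1
\qquad\Rightarrow\qquad
Y_2 \mid Y_1 = y_1 \;\sim\; U(0, y_1),
```

so $\phi(Y_1) = E(Y_2 \mid Y_1) = Y_1/2 = \bar{X}$, with $Var(Y_1/2) = \tfrac{1}{4}(2\theta^2) = \theta^2/2 < \theta^2 = Var(Y_2)$, exactly as the Rao-Blackwell Theorem promises.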
25. ANCILLARY STATISTIC
- A statistic $S(X)$ whose distribution does not depend on the parameter θ is called an ancillary statistic.
- Unlike a ss, an ancillary statistic contains no information about θ.
26. Example
- Example 6.1.8 in Casella & Berger, page 257.
- Let $X_i \sim Unif(\theta, \theta + 1)$ for $i = 1, 2, \dots, n$.
- Then the range $R = X_{(n)} - X_{(1)}$ is an ancillary statistic because its pdf does not depend on θ.
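The key step: write $X_i = \theta + U_i$ with $U_i \sim Unif(0, 1)$; then

```latex
R = X_{(n)} - X_{(1)}
  = \big(\theta + U_{(n)}\big) - \big(\theta + U_{(1)}\big)
  = U_{(n)} - U_{(1)},
```

whose distribution involves only the $U_i$ and is therefore free of θ.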
27. COMPLETENESS
- Let $\{f(x; \theta),\ \theta \in \Omega\}$ be a family of pdfs (or pmfs) and let $U(x)$ be an arbitrary function of x not depending on θ. If
  $E_\theta[U(X)] = 0$ for all $\theta \in \Omega$
  requires that the function itself equal 0 for all possible values of x, then we say that this family is a complete family of pdfs (or pmfs).
- i.e., the only unbiased estimator of 0 is 0 itself.
28. EXAMPLES
- 1. Show that the family $\{Bin(n = 2, \theta);\ 0 < \theta < 1\}$ is complete.
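A worked sketch: for $Y \sim Bin(2, \theta)$,

```latex
E_\theta[U(Y)]
 = U(0)(1-\theta)^2 + 2U(1)\,\theta(1-\theta) + U(2)\,\theta^2
 = 0 \quad\text{for all } 0 < \theta < 1
```

is a polynomial in θ vanishing on an interval, so every coefficient is zero: $U(0) = 0$, then $2U(1) - 2U(0) = 0$ gives $U(1) = 0$, and finally $U(2) = 0$. Hence U ≡ 0 and the family is complete.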
29. EXAMPLES
- 2. X ~ Uniform(−θ, θ). Show that the family $\{f(x; \theta),\ \theta > 0\}$ is not complete.
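A sketch: take $U(x) = x$. By the symmetry of $f(x; \theta) = 1/(2\theta)$ on $(-\theta, \theta)$,

```latex
E_\theta[U(X)] = \int_{-\theta}^{\theta} \frac{x}{2\theta}\, dx = 0
\quad\text{for all } \theta > 0,
```

yet $U(x) = x$ is not identically zero, so the family is not complete.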
30. COMPLETE AND SUFFICIENT STATISTICS (css)
- Y is a complete and sufficient statistic (css) for θ if Y is a ss for θ and the family $\{f_Y(y; \theta);\ \theta \in \Omega\}$ (the pdf of Y) is complete.
- That is:
  1) Y is a ss for θ.
  2) For an arbitrary function u(Y) of Y, $E(u(Y)) = 0$ for all $\theta \in \Omega$ implies $u(y) = 0$ for all possible values $Y = y$.
31. BASU THEOREM
- If $T(X)$ is a complete and minimal sufficient statistic, then $T(X)$ is independent of every ancillary statistic.
- e.g., for a normal r.s. with σ² known, $\bar{X}$ is a complete minimal ss for μ while S² is an ancillary statistic for μ, so $\bar{X}$ and S² are independent.
32. BASU THEOREM
- Example: $X_1, X_2 \sim N(\mu, \sigma^2)$, independent, σ² known.
- Let $T = X_1 + X_2$ and $U = X_1 - X_2$.
- We know that T is a complete minimal ss for μ.
- $U \sim N(0, 2\sigma^2)$, a distribution free of μ, so U is ancillary.
- Therefore T and U are independent by Basu's Theorem.
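A direct check that agrees with Basu's conclusion: (T, U) is bivariate normal with

```latex
Cov(T, U) = Cov(X_1 + X_2,\; X_1 - X_2)
          = Var(X_1) - Var(X_2) = 0,
```

and zero covariance implies independence under joint normality.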
33. THE MINIMUM VARIANCE UNBIASED ESTIMATOR
- Rao-Blackwell Theorem: If T is an unbiased estimator of θ and S is a css for θ, then $\phi(T) = E(T \mid S)$ is
- an UE of θ, i.e., $E[\phi(T)] = E[E(T \mid S)] = \theta$, and
- the MVUE of θ.
34. LEHMANN-SCHEFFE THEOREM
- Let Y be a css for θ. If there is a function of Y which is an UE of θ, then this function is the unique Minimum Variance Unbiased Estimator (UMVUE) of θ.
- If T(Y) is such a function, then T(Y) is the UMVUE of θ.
- So, it is the best estimator of θ.
35. THE MINIMUM VARIANCE UNBIASED ESTIMATOR
- Let Y be a css for θ. Since Y is complete, there can be only one function of Y which is an UE of θ.
- Let $U_1(Y)$ and $U_2(Y)$ be two functions of Y. If both are UEs, then $E(U_1(Y) - U_2(Y)) = 0$ for all $\theta \in \Omega$, and completeness implies $W(Y) = U_1(Y) - U_2(Y) = 0$ for all possible values of Y. Therefore, $U_1(Y) = U_2(Y)$ for all Y.
36. Example
- Let $X_1, X_2, \dots, X_n \sim Poi(\mu)$. Find the UMVUE of μ.
- Solution steps:
- Show that $S = \sum_{i=1}^{n} X_i$ is a css for μ.
- Find a statistic that is an UE of μ and a function of S, e.g., $\bar{X} = S/n$.
- Then $\bar{X}$ is the UMVUE of μ by the Lehmann-Scheffe Theorem.
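A sketch of the two facts being used (standard exponential-family results):

```latex
S = \sum_{i=1}^{n} X_i \sim Poi(n\mu)
\quad\text{(ss by factorization; the } Poi(n\mu) \text{ family is complete)},
\qquad
E(\bar{X}) = \frac{E(S)}{n} = \mu,
```

so $\bar{X}$ is an UE of μ that is a function of the css S, and the Lehmann-Scheffe Theorem makes it the UMVUE.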
37. Note
- The estimator found via the Rao-Blackwell Theorem may not be unique, but the estimator found via the Lehmann-Scheffe Theorem is unique.