Inference about a Mean - PowerPoint PPT Presentation

Provided by: irene8
Learn more at: http://elsa.berkeley.edu
1
Inference about a Mean
  • Lecture 6

2
Today's Plan
  • Start to explore more concrete ideas of
    statistical inference
  • Look at the process of generalizing from the
    sample mean to the population value
  • Consider properties of the sample mean as a
    point estimator
  • Properties of an estimator: BLUE (Best Linear
    Unbiased Estimator)

3
Sample and Population Differences
  • So far we've seen how weights connect a sample
    and a population
  • But what if all we have is a sample without any
    weights?
  • What can the estimation of the mean tell us?
  • We need the sample to be composed of
    independently and identically distributed (i.i.d.)
    observations

4
Estimating the Expected Value
  • We've dealt with the expected value µ_y = E(Y) and
    the variance V(Y) = σ²
  • Previously, our estimator of the expected value
    was the sample mean Ȳ = (1/n)Σᵢ Yᵢ
  • But this is only a good estimate of the true
    expected value if the sample is an unbiased
    representation of the population
  • What does the actual estimator tell us?
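The sample-mean estimator can be sketched in code; the population parameters (µ = 5, σ = 2) and the sample size below are illustrative, not from the slides:

```python
import random
import statistics

# Draw an i.i.d. sample from a population with mean mu = 5 (illustrative)
random.seed(0)
sample = [random.gauss(5, 2) for _ in range(1000)]

# The sample-mean estimator: Y-bar = (1/n) * sum(Y_i)
n = len(sample)
y_bar = sum(sample) / n

# statistics.mean computes the same quantity
assert abs(y_bar - statistics.mean(sample)) < 1e-9
```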

5
BLUE
  • We need to consider the properties of Ȳ as a
    point estimator of µ
  • Three properties of an estimator: BLUE
  • Best (efficiency)
  • Linearity
  • Unbiasedness
  • (also Consistency)
  • We'll look at linearity first, then unbiasedness
    and efficiency

6
BLUE: Linearity
  • Ȳ is a linear function of the sample observations
  • The values of Y are added up in a linear fashion
    such that all Y values appear with equal weight:
    Ȳ = Σᵢ cᵢYᵢ with cᵢ = 1/n

7
BLUE: Unbiasedness
  • Proving that Ȳ is an unbiased estimator of µ
  • We can rewrite the equation for Ȳ as Ȳ = Σᵢ cᵢYᵢ
    with cᵢ = 1/n
  • This expression says that each Y has an equal
    weight of 1/n
  • Since cᵢ is a constant, the expectation of Ȳ is
    E(Ȳ) = Σᵢ cᵢE(Yᵢ) = n(1/n)µ = µ
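Unbiasedness can be checked by simulation: averaging Ȳ over many repeated samples should recover µ. The values of µ, σ, n, and the number of repetitions below are illustrative:

```python
import random

# Average Y-bar over many repeated samples and compare with the true mu
random.seed(1)
mu, sigma, n, reps = 10.0, 3.0, 25, 4000

means = []
for _ in range(reps):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    means.append(sum(sample) / n)      # Y-bar with equal weights c_i = 1/n

# E(Y-bar) = sum_i c_i * E(Y_i) = n * (1/n) * mu = mu
grand_mean = sum(means) / reps
assert abs(grand_mean - mu) < 0.1
```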

8
Proving Unbiasedness
  • Let's examine an estimator that is biased and
    inefficient
  • We can define some other estimator m as
    m = Σᵢ cᵢYᵢ, where the weights cᵢ need not equal 1/n
  • We can then plug the equation for cᵢ into the
    equation for m and take its expectation
  • The expected value of this new estimator m is
    biased if Σᵢ cᵢ ≠ 1, since then E(m) = µΣᵢ cᵢ ≠ µ
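The slide's exact weights are not preserved in this transcript, so as an illustrative choice take cᵢ = 1/(n + 5), which makes Σᵢ cᵢ = n/(n + 5) ≠ 1 and hence a biased m:

```python
import random

# A deliberately biased estimator (illustrative weights):
# m = sum_i c_i * Y_i with c_i = 1/(n + 5), so E(m) = mu * n/(n+5) != mu
random.seed(2)
mu, sigma, n, reps = 10.0, 3.0, 20, 5000

m_values = []
for _ in range(reps):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    m_values.append(sum(sample) / (n + 5))

avg_m = sum(m_values) / reps
# Expected value is mu * n / (n + 5) = 8.0, a bias of -2.0
assert abs(avg_m - mu * n / (n + 5)) < 0.1
```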

9
BLUE: Best (Efficiency)
  • To look at efficiency, we want to consider the
    variance of Ȳ
  • We can redefine Ȳ as Ȳ = Σᵢ cᵢYᵢ with cᵢ = 1/n
  • Our variance can be written as
    V(Ȳ) = Σᵢ cᵢ²V(Yᵢ) + Σₕ Σᵢ≠ₕ cₕcᵢC(Yₕ, Yᵢ)
  • Where the last term is the covariance term
  • The covariance term cancels out because we are
    assuming that the sample was constructed under
    independence, so there should be no covariance
    between the Y values
  • Note: we'll see later in the semester that the
    covariance will not always be zero

10
BLUE: Best (Efficiency) (2)
  • So how did we get the equation for the variance
    of Ȳ?
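The derivation, reconstructed from the definitions on the preceding slides (equal weights cᵢ = 1/n and zero covariance under independent sampling):

```latex
V(\bar{Y}) = V\Big(\sum_i c_i Y_i\Big)
           = \sum_i c_i^2\, V(Y_i)
           = n \cdot \frac{1}{n^2}\,\sigma^2
           = \frac{\sigma^2}{n}
```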

11
Variance
  • Our expression for the variance shows that the
    variance of Ȳ, V(Ȳ) = σ²/n, depends on the sample
    size n
  • How is this different from the variance of Y?

12
Variance (2)
  • Before, when we were considering the distribution
    around µ_y, we were considering the distribution of
    Y
  • Now we are considering Ȳ as a point estimator
    for µ_y
  • The estimator Ȳ will have its own probability
    distribution, much like Y has its own
  • The difference is that the distribution of Ȳ
    has a variance of σ²/n, whereas Y has a variance
    of σ²
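The two variances can be compared by simulation; µ, σ, n, and the number of repetitions below are illustrative:

```python
import random
import statistics

# Compare the spread of individual draws Y (variance sigma^2) with the
# spread of sample means Y-bar (variance sigma^2 / n)
random.seed(3)
mu, sigma, n, reps = 0.0, 2.0, 16, 5000

draws = [random.gauss(mu, sigma) for _ in range(reps)]
means = [sum(random.gauss(mu, sigma) for _ in range(n)) / n
         for _ in range(reps)]

var_y = statistics.pvariance(draws)      # close to sigma^2 = 4
var_ybar = statistics.pvariance(means)   # close to sigma^2 / n = 0.25
assert abs(var_y - sigma**2) < 0.3
assert abs(var_ybar - sigma**2 / n) < 0.05
```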

13
Proving Efficiency
  • The variance of m looks like this:
  • V(m) = Σᵢ cᵢ²V(Yᵢ) + Σₕ Σᵢ≠ₕ cₕcᵢC(Yₕ, Yᵢ)
  • Why is this not the most efficient estimate?
  • We have an inefficient estimator if we use
    anything other than cᵢ = 1/n for the weights

14
Consistency
  • This isn't directly a part of BLUE
  • The idea is that an optimal estimator is best,
    linear, and unbiased
  • But an estimator can be biased or unbiased and
    still be consistent
  • Consistency means that with repeated sampling,
    the estimator tends to the true value µ as the
    sample size grows

15
Consistency (2)
  • We write our estimator of µ as Ȳ = (1/n)Σᵢ Yᵢ
  • We can write a second estimator of µ
  • The expected value of this second estimator
    differs from µ in any finite sample

16
Consistency (3)
  • If n is small, say 10,
  • the second estimator will be biased for µ
  • But it will still be a consistent estimator
  • So as n approaches infinity it becomes an
    unbiased estimator of µ
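The slides' second estimator is not preserved in this transcript, so the sketch below uses a stand-in, m = ΣᵢYᵢ/(n + 1): its expectation nµ/(n + 1) differs from µ for finite n, yet it converges to µ as n grows:

```python
import random

# Illustrative biased-but-consistent estimator: m = sum(Y_i) / (n + 1)
random.seed(4)
mu, sigma = 5.0, 2.0

def m_estimator(n):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    return sum(sample) / (n + 1)

# Small n: the average of m over many samples sits below mu (bias)
small = [m_estimator(10) for _ in range(4000)]
avg_small = sum(small) / len(small)
assert abs(avg_small - 10 * mu / 11) < 0.1   # E(m) = n*mu/(n+1), ~4.55

# Large n: a single estimate already lands close to mu (consistency)
big = m_estimator(200_000)
assert abs(big - mu) < 0.05
```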

17
Law of Large Numbers
  • Think of this picture

As you draw samples of larger and larger size,
the law of large numbers says that your
estimate of the sample mean will become a
better approximation of µ_y.
The law only holds if you are drawing random
samples.
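The law can be illustrated by comparing the typical error of the sample mean at two sample sizes; µ, σ, and the repetition count below are illustrative:

```python
import random

# The typical error |Y-bar - mu| shrinks as the sample size grows
random.seed(5)
mu, sigma, reps = 3.0, 4.0, 300

def avg_abs_error(n):
    """Average |Y-bar - mu| over repeated random samples of size n."""
    total = 0.0
    for _ in range(reps):
        sample = [random.gauss(mu, sigma) for _ in range(n)]
        total += abs(sum(sample) / n - mu)
    return total / reps

e_small, e_large = avg_abs_error(10), avg_abs_error(1000)
assert e_large < e_small     # larger samples approximate mu better
assert e_large < 0.2
```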
18
Central Limit Theorem
  • Even if the underlying population is not normally
    distributed, the sampling distribution of the
    mean tends to normality as sample size increases
  • This is an important result if n < 30
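A quick sketch of the theorem: sample means drawn from a skewed, non-normal population (here exponential with mean 1; n and the repetition count are illustrative) already look approximately normal:

```python
import random
import statistics

# Sample means of an exponential population, checked against a normal
# benchmark: ~68.3% of a normal's mass lies within one standard
# deviation of its mean
random.seed(6)
n, reps = 30, 6000

means = [sum(random.expovariate(1.0) for _ in range(n)) / n
         for _ in range(reps)]

mu_hat = statistics.mean(means)
sd_hat = statistics.stdev(means)
within_1sd = sum(abs(m - mu_hat) <= sd_hat for m in means) / reps
assert 0.63 < within_1sd < 0.73
```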

19
What have we done today?
  • Examined the properties of an estimator.
  • The estimator was for estimating the value of an
    unknown population mean.
  • The desirable properties are BLUE: Best Linear
    Unbiased Estimator.
  • We should also include consistency.