1
Classical Regression
  • Lecture 7

2
Today's Plan
  • For the next few lectures we'll be talking about the classical regression model
  • Looking at both estimators for a and b
  • Inferences on what a and b actually tell us
  • Today: how to operationalize the model
  • Looking at BLUE for the bivariate model
  • Next lecture: inference and hypothesis tests using the t, F, and χ² distributions
  • Examples of linear regressions using Excel

3
Estimating coefficients
  • Our model: Y = a + bX + e
  • Two things to keep in mind about this model:
  • 1) It is linear in both variables and parameters
  • Examples of non-linearity in variables: Y = a + bX² or Y = a + be^X
  • Example of non-linearity in parameters: Y = a + b²X
  • OLS can cope with non-linearity in variables but not in parameters (see the sketch below)
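
A minimal sketch of this distinction, assuming numpy and simulated data (the parameter values are illustrative): a model that is non-linear in the variable X can still be fit by OLS after transforming X, while non-linearity in a parameter cannot be cast as a linear least-squares problem.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 10, 100)
    Y = 2.0 + 0.5 * X**2 + rng.normal(0, 1, 100)  # linear in parameters, non-linear in X

    # OLS copes: regress Y on the transformed variable X^2
    Z = np.column_stack([np.ones_like(X), X**2])
    a_hat, b_hat = np.linalg.lstsq(Z, Y, rcond=None)[0]
    print(a_hat, b_hat)  # close to the true 2.0 and 0.5

    # By contrast, Y = a + b^2 X is non-linear in the parameter b:
    # no transformation of X makes it linear in b, so it needs
    # non-linear estimation methods rather than OLS.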

4
Estimating coefficients (3)
  • 2) Notation: we're not estimating a and b anymore
  • We are estimating coefficients, which are estimates of the parameters a and b
  • We will denote the coefficients as â (or α̂) and b̂ (or β̂)
  • We are dealing with a sample size of n
  • For each sample we will get a different (â, b̂) pair

5
Estimating coefficients (4)
  • In the same way that you can take a sample to get an estimate of µY, you can take a sample to get an estimate of the regression line, that is, of α and β

6
The independent variable
  • We also have a given variable X whose values are known
  • This is called the independent variable
  • Again, the expectation of Y given X is E(Y|X) = a + bX
  • With constant variance V(Y) = σ²
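
A minimal numpy sketch of this data-generating process (parameter values and sample size are illustrative assumptions):

    import numpy as np

    rng = np.random.default_rng(1)
    n = 200
    a, b, sigma = 1.0, 2.0, 1.5   # illustrative true parameters

    X = np.linspace(0, 10, n)     # X is given (fixed), not random
    e = rng.normal(0, sigma, n)   # errors: mean 0, constant variance sigma^2
    Y = a + b * X + e             # so E(Y|X) = a + bX and V(Y) = sigma^2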

7
A graph of the model
[Figure: scatterplot with Y on the vertical axis and X on the horizontal axis, showing data points such as (Y1, X1) around the regression line]
8
What does the error term do?
  • The error term gives us the test statistics and tells us how well the model Y = a + bX + e fits the data
  • The error term represents:
  • 1) Given that Y is a random variable, e is also random, since e is a function of Y
  • 2) Variables not included in the model
  • 3) Random behavior of people
  • 4) Measurement error
  • 5) It enables a model to remain parsimonious - you don't want all possible variables in the model if some have little or no influence

9
Rewriting beta
  • Our complete model is Y = a + bX + e
  • We will never know the true value of the error e, so we will estimate the following equation: Y = â + b̂X + ê
  • For our known values of X we have the estimates â, b̂, and ê
  • So how do we know that our OLS estimators give us the BLUE estimate?
  • To determine this we want to know the expected value of b̂ as an estimator of b, which is the population parameter

10
Rewriting beta (2)
  • To operationalize, we want to think of what we know
  • We know from lecture 2 that there should be no correlation between the errors and the independent variables: Cov(e, X) = 0
  • We also know that E(e) = E(e|X) = 0
  • Now we have that E(Y|X) = a + bX + E(e|X) = a + bX
  • The variance of Y given X is V(Y) = σ², so V(e|X) = σ²

11
Rewriting beta (3)
  • Rewriting b̂:
  • In lecture 2 we found the following estimator for b̂:
    b̂ = Σ(Xi - X̄)(Yi - Ȳ) / Σ(Xi - X̄)²
  • Using some definitions we can show E(b̂) = b

12
Rewriting beta (4)
  • We have definitions that we can use: xi = Xi - X̄ and yi = Yi - Ȳ, so that Σxi = Σyi = 0
  • Using the definitions for yi and xi we can rewrite b̂ as
    b̂ = Σxiyi / Σxi²
  • We can also write b̂ = ΣxiYi / Σxi², since Σxi = 0 makes the Ȳ term vanish (checked numerically below)
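
A short numpy check of these equivalent formulas, on simulated data (the true parameters are illustrative):

    import numpy as np

    rng = np.random.default_rng(2)
    X = rng.uniform(0, 10, 50)
    Y = 1.0 + 2.0 * X + rng.normal(0, 1, 50)

    x = X - X.mean()   # xi = Xi - X-bar
    y = Y - Y.mean()   # yi = Yi - Y-bar

    b1 = (x * y).sum() / (x ** 2).sum()   # b-hat = Σxiyi / Σxi²
    b2 = (x * Y).sum() / (x ** 2).sum()   # Σxi = 0 removes the Y-bar term
    print(b1, b2)                         # identical, both near the true b = 2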

13
Rewriting beta (5)
  • We can rewrite b̂ as b̂ = ΣciYi, where ci = xi / Σxj²
  • The properties of ci: Σci = 0, ΣciXi = 1, and Σci² = 1 / Σxi² (verified numerically below)
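
A quick numerical verification of these properties (the X values are an arbitrary illustrative choice):

    import numpy as np

    X = np.array([1.0, 2.0, 4.0, 7.0, 9.0])    # any fixed X values
    x = X - X.mean()
    c = x / (x ** 2).sum()                     # ci = xi / Σxj²

    print(c.sum())                             # Σci = 0
    print((c * X).sum())                       # ΣciXi = 1
    print((c ** 2).sum(), 1 / (x ** 2).sum())  # Σci² = 1/Σxi²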

14
Showing unbiasedness
  • What do we know about the expected value of beta? E(b̂) = E(ΣciYi)
  • We can rewrite this as E(b̂) = E(Σci(a + bXi + ei))
  • Multiplying the brackets out we get E(b̂) = aΣci + bΣciXi + ΣciE(ei)
  • Since b is constant, it passes through the expectation, as do a and the ci

15
Showing unbiasedness (2)
  • Looking back at the properties for ci we know that Σci = 0 and ΣciXi = 1
  • Now we can write this as E(b̂) = a·0 + b·1 + ΣciE(ei) = b, since E(ei) = 0
  • We can conclude that the expected value of b̂ is b and that b̂ is an unbiased estimator of b
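
A Monte Carlo illustration of unbiasedness (true parameters and replication count are illustrative): across repeated samples drawn with the same fixed X, the average of b̂ settles near b.

    import numpy as np

    rng = np.random.default_rng(3)
    a, b, sigma = 1.0, 2.0, 1.0
    X = np.linspace(0, 10, 50)     # X held fixed across samples
    x = X - X.mean()

    b_hats = []
    for _ in range(10_000):        # many repeated samples
        Y = a + b * X + rng.normal(0, sigma, X.size)
        b_hats.append((x * Y).sum() / (x ** 2).sum())

    print(np.mean(b_hats))         # close to 2.0, i.e. E(b-hat) = b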

16
Gauss Markov Theorem
  • We can now ask: is b̂ an efficient estimator?
  • The variance of b̂ is V(b̂) = σ²Σci², where ci = xi / Σxj², so V(b̂) = σ² / Σxi²
  • How do we know that OLS is the most efficient estimator?
  • The Gauss-Markov Theorem

17
Gauss Markov Theorem (2)
  • Similar to our proof on the estimator for µY
  • Suppose we use a new weight ci* = ci + di, giving the estimator b̃ = Σci*Yi
  • We can take the expected value E(b̃) = E(Σci*Yi)

18
Gauss Markov Theorem (3)
  • We know that E(b̃) = aΣ(ci + di) + bΣ(ci + di)Xi
  • For b̃ to be unbiased, the following must be true: Σdi = 0 and ΣdiXi = 0

19
Gauss Markov Theorem (4)
  • Efficiency (best)?
  • We have V(b̃) = σ²Σ(ci*)², where ci* = ci + di
  • Therefore the variance of this new b̃ is V(b̃) = σ²(Σci² + Σdi² + 2Σcidi)
  • The unbiasedness conditions Σdi = 0 and ΣdiXi = 0 imply Σcidi = 0, so V(b̃) = V(b̂) + σ²Σdi²
  • If any di ≠ 0, so that ci* ≠ ci, then V(b̃) > V(b̂)
  • So when we use the weights ci* we have an inefficient estimator (see the simulation below)
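
A small simulation of this comparison. The perturbation di is an arbitrary illustrative choice constructed to satisfy Σdi = 0 and ΣdiXi = 0, so the alternative estimator stays unbiased but loses efficiency.

    import numpy as np

    rng = np.random.default_rng(4)
    a, b, sigma = 1.0, 2.0, 1.0
    X = np.linspace(0, 10, 50)
    x = X - X.mean()
    c = x / (x ** 2).sum()            # OLS weights ci

    # Build di with Σdi = 0 and ΣdiXi = 0 by projecting a random
    # vector off the span of (1, X)
    Z = np.column_stack([np.ones_like(X), X])
    v = rng.normal(size=X.size)
    d = v - Z @ np.linalg.lstsq(Z, v, rcond=None)[0]
    d *= 0.5 * np.abs(c).mean() / np.abs(d).mean()  # keep perturbation modest

    ols, alt = [], []
    for _ in range(10_000):
        Y = a + b * X + rng.normal(0, sigma, X.size)
        ols.append((c * Y).sum())        # b-hat with OLS weights ci
        alt.append(((c + d) * Y).sum())  # b-tilde with weights ci* = ci + di

    print(np.mean(ols), np.mean(alt))    # both close to 2.0: unbiased
    print(np.var(ols), np.var(alt))      # but Var(b-tilde) > Var(b-hat)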

20
Gauss Markov Theorem (5)
  • We can conclude that b̂ = ΣciYi is BLUE (the best linear unbiased estimator)

21
Wrap up
  • What did we cover today?
  • Introduced the classical linear regression model (CLRM)
  • Assumptions under the CLRM:
  • 1) Xi is nonrandom (it's given)
  • 2) E(ei) = E(ei|Xi) = 0
  • 3) V(ei) = V(ei|Xi) = σ²
  • 4) Cov(ei, ej) = 0 for i ≠ j
  • Talked about estimating coefficients
  • Defined the properties of the error term
  • Proof by contradiction for the Gauss-Markov Theorem