Chapter 5, Sections 7–8: Multivariate Distributions

Transcript and Presenter's Notes

Slide 1
Chapter 5, Sections 7–8: Multivariate Distributions
Covariance; Expected Value of Functions of Random Variables
© John J. Currano, 03/30/2009

Slide 2
Definitions. Let Y1 and Y2 be random variables with means μ1 and μ2, respectively. We define the covariance of Y1 and Y2 to be
Cov(Y1, Y2) = E[(Y1 − μ1)(Y2 − μ2)],
and we define the correlation coefficient, ρ, of Y1 and Y2 to be
ρ = Cov(Y1, Y2) / (σ1 σ2),
where σ1 and σ2 are the standard deviations of Y1 and Y2.
Theorem 5.10. (p. 266) Cov(Y1, Y2) = E(Y1Y2) − E(Y1) E(Y2). This theorem is proved in the text. The proof uses only the properties of expectation from Section 5.6 (Theorems 5.6–5.8).

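As a quick numerical check of Theorem 5.10 (not part of the original slides), here is a minimal Monte Carlo sketch; the joint distribution of (Y1, Y2) is made up purely for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical correlated pair: Y2 = Y1 + noise, chosen only for illustration.
y1 = rng.normal(size=1_000_000)
y2 = y1 + rng.normal(size=1_000_000)

# Definition: Cov(Y1, Y2) = E[(Y1 - mu1)(Y2 - mu2)].
cov_def = np.mean((y1 - y1.mean()) * (y2 - y2.mean()))

# Theorem 5.10's computing formula: E(Y1 Y2) - E(Y1) E(Y2).
cov_formula = np.mean(y1 * y2) - y1.mean() * y2.mean()

print(cov_def, cov_formula)  # both approximately 1.0, equal up to rounding
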
Slide 3
Theorem 5.10. Cov(Y1, Y2) = E(Y1Y2) − E(Y1) E(Y2).
Example. Suppose [setup omitted in transcript]

Slide 4
Theorem 5.10. Cov(Y1, Y2) = E(Y1Y2) − E(Y1) E(Y2).
Example (continued). [calculation omitted in transcript]

Slide 5
f(y1, y2) = 6(1 − y2) for 0 ≤ y1 ≤ y2 ≤ 1
[remainder of calculation omitted in transcript]

Slide 6
Since Cov(Y1, Y2) = E(Y1Y2) − E(Y1) E(Y2), and since E(Y1Y2) = E(Y1) E(Y2) when Y1 and Y2 are independent, we have the following theorem.
Theorem. If Y1 and Y2 are independent random variables, then Cov(Y1, Y2) = 0 and Corr(Y1, Y2) = ρ = 0.
The converse is false, as the next example shows.

Slide 7
Example. Suppose X ~ U(−1, 1) and Y = X². What are Cov(X, Y) and Corr(X, Y)?
Using the computing formula for covariance, Cov(X, Y) = E(XY) − E(X) E(Y).
Since X ~ U(−1, 1), E(X) = 0.
Since Y = X², E(Y) = E(X²) = ∫ from −1 to 1 of x²(1/2) dx = 1/3.
Since XY = X³, E(XY) = E(X³) = 0, by the symmetry of the density about 0.
So Cov(X, Y) = E(XY) − E(X) E(Y) = 0 − 0 · (1/3) = 0, and Corr(X, Y) = ρ = 0.
Thus X and Y are uncorrelated even though Y = X²!

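A simulation sketch (not from the slides) illustrates the same point: the sample covariance is near 0, yet Y is a deterministic function of X, so the two are clearly dependent.

import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, size=1_000_000)
y = x**2

# Sample version of Cov(X, Y) = E(XY) - E(X)E(Y): approximately 0.
print(np.mean(x * y) - x.mean() * y.mean())

# Dependence: P(Y > 1/4 | X > 1/2) = 1, while P(Y > 1/4) = 1/2.
print(np.mean(y[x > 0.5] > 0.25), np.mean(y > 0.25))
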
Slide 8
Properties of the Covariance Operator. Recall that Cov(Y1, Y2) = E[(Y1 − μ1)(Y2 − μ2)].
Theorem 1. Cov(Y1, Y2) = Cov(Y2, Y1).
Theorem 2. V(Y) = Cov(Y, Y).
Theorem 3. Cov(Y1, Y2) = E(Y1Y2) − E(Y1) E(Y2).
Theorem 4. Cov(a1Y1, a2Y2) = a1a2 Cov(Y1, Y2).
Theorem 5. Cov(X, Y1 + Y2) = Cov(X, Y1) + Cov(X, Y2), and Cov(Y1 + Y2, X) = Cov(Y1, X) + Cov(Y2, X).
Theorem 6. (Bilinearity of the Covariance Operator; stated on the next slide.)

Slide 9
Theorem 6. (Bilinearity of the Covariance Operator)
Cov(a1X1 + ... + amXm, b1Y1 + ... + bnYn) = Σi Σj ai bj Cov(Xi, Yj).
Theorem 7. (Variance of a linear combination of random variables)
V(a1Y1 + ... + anYn) = Σi ai² V(Yi) + 2 Σi<j ai aj Cov(Yi, Yj).
Theorem 8. (Variance of a linear combination of independent random variables) If Y1, Y2, ..., Yn are independent random variables, then
V(a1Y1 + ... + anYn) = Σi ai² V(Yi).
Note: Theorems 6 and 7 are parts of Theorem 5.12 on page 271 of the text.

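A short simulation (not from the slides) checks Theorem 7 for two correlated variables; the variances and covariance below are made up for illustration.

import numpy as np

rng = np.random.default_rng(2)

# Hypothetical pair with V(Y1) = 2, V(Y2) = 1, Cov(Y1, Y2) = 0.5.
y = rng.multivariate_normal([0.0, 0.0], [[2.0, 0.5], [0.5, 1.0]],
                            size=1_000_000)
y1, y2 = y[:, 0], y[:, 1]
a1, a2 = 2.0, -3.0

# Direct sample variance of the linear combination ...
lhs = np.var(a1 * y1 + a2 * y2)

# ... versus Theorem 7: a1^2 V(Y1) + a2^2 V(Y2) + 2 a1 a2 Cov(Y1, Y2).
rhs = (a1**2 * np.var(y1) + a2**2 * np.var(y2)
       + 2 * a1 * a2 * np.cov(y1, y2)[0, 1])

print(lhs, rhs)  # both approximately 4(2) + 9(1) - 12(0.5) = 11
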
Slide 10
The correlation coefficient provides a measure of both the sign and the degree of the linear association between Y1 and Y2. Since ρ = Cov(Y1, Y2) / (σ1 σ2):
  • ρ > 0 means Y1 and Y2 tend to be above average together and below average together: (Y1 − μ1) and (Y2 − μ2) tend to have the same sign.
  • ρ < 0 indicates that one of Y1 and Y2 tends to be above average when the other is below average: (Y1 − μ1) and (Y2 − μ2) tend to have opposite signs.
  • The absolute value of ρ (between 0 and 1) is a measure of the strength of the association.

Slide 11
The correlation coefficient ρ has the following properties:
  • −1 ≤ ρ ≤ 1.
  • If Y2 = aY1 + b (so that Y1 and Y2 are linearly related), then ρ = 1 if a > 0 and ρ = −1 if a < 0, as the sketch after this list verifies numerically.
  • If ρ = ±1, then Y1 and Y2 are linearly related, with a > 0 if ρ = 1 and a < 0 if ρ = −1.
  • ρ = 0 ⟺ Cov(Y1, Y2) = 0 ⟺ E(Y1Y2) = E(Y1) E(Y2).
  • If ρ = 0, we say that Y1 and Y2 are uncorrelated. They may or may not be independent.

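The linear-relation property can be checked directly; this sketch (not from the slides) uses made-up slopes and intercepts.

import numpy as np

rng = np.random.default_rng(3)
y1 = rng.normal(size=100_000)

# For Y2 = a*Y1 + b, rho should be +1 when a > 0 and -1 when a < 0.
for a, b in [(2.5, 1.0), (-0.7, 4.0)]:
    y2 = a * y1 + b
    rho = np.corrcoef(y1, y2)[0, 1]
    print(a, round(rho, 6))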

Slide 12
Uncorrelated random variables (ρ = 0) need not be independent, although independent random variables are uncorrelated: Theorem 5.9 implies that E(Y1Y2) = E(Y1) E(Y2) when Y1 and Y2 are independent, and then property 4 on the previous slide implies that ρ = 0. The following is an example of two uncorrelated random variables which are not independent.
[joint probability table omitted in transcript; it is similar to Table 5.3 on page 267]
Then E(Y1) = E(Y2) = E(Y1Y2) = 0, but Y1 and Y2 are not independent, as can be seen either because the joint support is not a rectangle, or because p(−1, 0) = 0 while p1(−1) p2(0) > 0.

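Since the slide's table is not reproduced in the transcript, the sketch below substitutes a hypothetical joint pmf with the same stated features (E(Y1) = E(Y2) = E(Y1Y2) = 0, and p(−1, 0) = 0 even though both marginals are positive there); it is not the table from the slides.

import numpy as np

# Hypothetical joint pmf p(y1, y2); rows index y1 in {-1, 0, 1},
# columns index y2 in {-1, 0, 1}. Chosen to match the slide's claims.
vals = np.array([-1, 0, 1])
p = np.array([[1/8, 0.0, 1/8],
              [0.0, 1/2, 0.0],
              [1/8, 0.0, 1/8]])

e_y1 = np.sum(vals[:, None] * p)           # E(Y1) = 0
e_y2 = np.sum(vals[None, :] * p)           # E(Y2) = 0
e_y1y2 = np.sum(np.outer(vals, vals) * p)  # E(Y1 Y2) = 0, so Cov = 0
print(e_y1, e_y2, e_y1y2)

# Not independent: p(-1, 0) = 0, but p1(-1) * p2(0) = (1/4)(1/2) = 1/8 > 0.
p1, p2 = p.sum(axis=1), p.sum(axis=0)
print(p[0, 1], p1[0] * p2[1])
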
Slide 13
Example. Consider again the joint density from Slide 5, f(y1, y2) = 6(1 − y2) for 0 ≤ y1 ≤ y2 ≤ 1. We have already calculated E(Y1) = 1/4, E(Y2) = 1/2, and Cov(Y1, Y2) = 1/40.
Find (a) E(2Y1 − 3Y2) and (b) V(2Y1 − 3Y2).

Slide 14
Example (continued). We know E(Y1) = 1/4 and E(Y2) = 1/2.
Find (a) E(2Y1 − 3Y2):
  • E(2Y1 − 3Y2) = 2 E(Y1) − 3 E(Y2)
  • = 2(1/4) − 3(1/2) = −1

Slide 15
Example (continued). We know Cov(Y1, Y2) = 1/40 (from Slide 13).
Find (b) V(2Y1 − 3Y2):
V(2Y1 − 3Y2) = 2² V(Y1) + (−3)² V(Y2) + 2(2)(−3) Cov(Y1, Y2)
= 4 V(Y1) + 9 V(Y2) − 12 Cov(Y1, Y2).
We need to find V(Y1) and V(Y2).

Slide 16
Example (continued). We know E(Y1) = 1/4 and E(Y2) = 1/2.
We need to find V(Y1) and V(Y2):
V(Y1) = E(Y1²) − [E(Y1)]² = 1/10 − 1/16 = 3/80;
V(Y2) = E(Y2²) − [E(Y2)]² = 3/10 − 1/4 = 1/20.

Slide 17
Example (continued). We have found V(Y1) = 3/80, V(Y2) = 1/20, and Cov(Y1, Y2) = 1/40.
Thus, V(2Y1 − 3Y2) = 4 V(Y1) + 9 V(Y2) − 12 Cov(Y1, Y2)
= 4(3/80) + 9(1/20) − 12(1/40) = 3/20 + 9/20 − 6/20 = 3/10.

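The whole running example can be checked by numerical integration; this sketch (not from the slides) assumes the Slide 5 density is the one used in the example.

from scipy import integrate

# Joint density from Slide 5, on the support 0 <= y1 <= y2 <= 1.
f = lambda y1, y2: 6 * (1 - y2)

def E(g):
    # E[g(Y1, Y2)]: inner integral over y1 in [0, y2], outer over y2 in [0, 1].
    val, _ = integrate.dblquad(lambda y1, y2: g(y1, y2) * f(y1, y2),
                               0, 1, lambda y2: 0.0, lambda y2: y2)
    return val

m1, m2 = E(lambda y1, y2: y1), E(lambda y1, y2: y2)
v1 = E(lambda y1, y2: y1**2) - m1**2
v2 = E(lambda y1, y2: y2**2) - m2**2
cov = E(lambda y1, y2: y1 * y2) - m1 * m2

print(m1, m2, v1, v2, cov)         # 0.25, 0.5, 0.0375 (=3/80), 0.05, 0.025
print(2 * m1 - 3 * m2)             # E(2Y1 - 3Y2) = -1
print(4 * v1 + 9 * v2 - 12 * cov)  # V(2Y1 - 3Y2) = 0.3 = 3/10
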
Slide 18
Read Examples 5.27–5.29 (pages 274–276). These present three important results.
Example 5.27. If Y1, Y2, ..., Yn are independent random variables with common mean μ and common variance σ², and if Ȳ = (1/n)(Y1 + Y2 + ... + Yn), then E(Ȳ) = μ and V(Ȳ) = σ²/n.
Where this is used: Y1, Y2, ..., Yn are the outcomes of n independent trials of an experiment, called a random sample; Ȳ is called the sample mean.

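A simulation sketch (not from the slides; the values of μ, σ, and n are made up) illustrates E(Ȳ) = μ and V(Ȳ) = σ²/n.

import numpy as np

rng = np.random.default_rng(4)
mu, sigma, n = 3.0, 2.0, 25

# 100,000 random samples of size n; take the sample mean of each.
ybar = rng.normal(mu, sigma, size=(100_000, n)).mean(axis=1)

print(ybar.mean())  # approximately mu = 3.0
print(ybar.var())   # approximately sigma^2 / n = 4/25 = 0.16
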
Slide 19
Example 5.28. If
  • p = P(success) in a sequence of independent binomial trials,
  • Y ~ binomial(n, p),
  • p̂ = Y/n is an estimate of p,
then E(p̂) = p and V(p̂) = p(1 − p)/n.
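Another quick simulation sketch (not from the slides; n and p are made up):

import numpy as np

rng = np.random.default_rng(5)
n, p = 40, 0.3

# 100,000 binomial counts Y; the estimator is p-hat = Y / n.
p_hat = rng.binomial(n, p, size=100_000) / n

print(p_hat.mean())  # approximately p = 0.3
print(p_hat.var())   # approximately p(1 - p)/n = 0.21/40 = 0.00525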

Slide 20
Example 5.29. Y, the number of red balls in a sample of size n selected at random without replacement from an urn with r red and (N − r) black balls, is a hypergeometric random variable. On the way to calculating the mean and variance of Y, the authors show that the probability that a red ball is chosen on any given draw is r/N. This is obvious for the first draw, but not for the rest.

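The claim that every draw, not just the first, has probability r/N of being red can be checked by simulation; this sketch (not from the slides) uses a made-up urn.

import numpy as np

rng = np.random.default_rng(6)
N, r, n = 10, 4, 5  # urn: N = 10 balls, r = 4 red; sample n = 5 draws

urn = np.array([1] * r + [0] * (N - r))  # 1 = red, 0 = black
# 200,000 independent shuffles; the first n positions are the ordered draws.
draws = rng.permuted(np.tile(urn, (200_000, 1)), axis=1)[:, :n]

print(draws.mean(axis=0))  # each draw's red frequency is approximately r/N = 0.4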