Transcript and Presenter's Notes

Title: Analysis of Variance


1
Chapter 13
  • Analysis of Variance

2
We've talked about how one evaluates the difference between two sample means by conducting a t test. In many instances, however, we might have three, four, or more sample means to compare. When that is true, we will want to conduct an analysis of variance.
3
An Example
Suppose we were interested in the effect of a particular drug on activity level. Rather than comparing a group that received the drug with one that did not, perhaps we manipulated the drug dose level, resulting in the following four groups: control (no drug), low dose, medium dose, and high dose.
4
Possible Outcome
Average activity level as a function of drug
dosage
5
We might analyze the results of this experiment by comparing pairs of means with t tests. To compare all possible pairs of means, however, it would take six t tests. Since each t test has a 5% chance of producing a type I error, the chance of getting at least one type I error would be quite high.
6
Experimentwise Error
Experimentwise error is the overall probability
of making a type I error in the analysis of the
experiment. If we conducted six t tests, the
experimentwise error would be too high.
7
The solution to the problem is to conduct an
analysis of variance.
8
Logic of Analysis of Variance (ANOVA)
The logic of analysis of variance is to compare
the variance observed among the sample means to
the variance one would expect due to chance or
error.
9
If the variance among the groups is greater than
we would have expected by chance, then we reject
the null hypothesis.
10
The Null and Alternative Hypotheses
H0: All means are equal, there are no treatment effects, the differences are not significant, etc.
HA: Not all means are equal, there are treatment effects, the differences are significant, etc.
11
To determine if the variance among the groups is
more than we would expect by chance, we compute
an F score.
12
The F score
The F score is a ratio of two variances
13
The observed variance among the groups potentially has two sources: variance due to error and variance due to treatment effects.
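Stated as a ratio (a conceptual restatement, not text from the slide itself):

F = (variance due to treatment effects + variance due to error) / (variance due to error)

so the larger the treatment effects, the larger F becomes.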
14
(No Transcript)
15
If the null hypothesis is true, and there are no treatment effects, then the variance among the groups reflects only error, and the F score should be approximately 1.
16
If the null hypothesis is not true, and there are treatment effects, then the variance among the groups reflects treatment effects in addition to error, and the F score should be greater than 1.
17
To decide if we should reject the null hypothesis, we have to determine if the F score is so large that the likelihood of having obtained it when the null hypothesis is true is less than 5% (i.e., alpha = .05).
18
We determine that likelihood by comparing our calculated value of F with the critical value that defines the rejection region.
19
Formulas for Computing F
We can see what needs to be computed by working backwards from the F score. MSA refers to the observed variance among the groups, and MSE refers to the estimated variance due to error.
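Written out (the slide shows this as an image; this is the standard one-way ANOVA form):

F = MSA / MSE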
20
Computing Mean Squares (MS) and degrees of
freedom (df)
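The definitions behind this slide, written in plain text (standard one-way ANOVA definitions, supplied here because the slide itself is an image):

MSA = SSA / dfA        dfA = k - 1   (k = number of groups)
MSE = SSE / dfE        dfE = N - k   (N = total number of scores)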
21
Computing sum of squares (SS)
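Again the slide itself is an image; the standard computational definitions are:

SStot = sum over every score of (score - grand mean)^2
SSA   = sum over the k groups of (group size) x (group mean - grand mean)^2
SSE   = SStot - SSA   (equivalently, the sum of squared deviations of each score from its own group mean)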
22
Steps for Computing F
  • Compute Sums of Squares
  • SStot
  • SSA
  • SSE
  • Compute degrees of freedom
  • dfA
  • dfE

23
Steps for Computing F (cont'd)
  • Compute Mean Squares
  • MSA
  • MSE
  • Compute F (a short worked sketch in code follows below)
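To make the steps concrete, here is a minimal Python sketch that follows them directly for four hypothetical groups (the numbers are invented for illustration and are not the data used in the example slides that follow):

import numpy as np

# Hypothetical activity scores for four dose groups (illustration only)
groups = {
    "control": [3.0, 5.0, 4.0, 6.0],
    "low":     [6.0, 8.0, 7.0, 9.0],
    "medium":  [9.0, 11.0, 10.0, 12.0],
    "high":    [12.0, 14.0, 13.0, 15.0],
}

scores = np.concatenate([np.asarray(g) for g in groups.values()])
grand_mean = scores.mean()
N = scores.size          # total number of scores
k = len(groups)          # number of groups

# Step 1: sums of squares
ss_tot = ((scores - grand_mean) ** 2).sum()
ss_a = sum(len(g) * (np.mean(g) - grand_mean) ** 2 for g in groups.values())
ss_e = ss_tot - ss_a

# Step 2: degrees of freedom
df_a, df_e = k - 1, N - k

# Step 3: mean squares; Step 4: F
ms_a, ms_e = ss_a / df_a, ss_e / df_e
F = ms_a / ms_e
print(f"SSA = {ss_a:.2f}, SSE = {ss_e:.2f}, dfA = {df_a}, dfE = {df_e}, F = {F:.2f}")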

24
Example
25
Sum of Squares Total
26
Sum of Squares Among
27
Sum of Squares Error
28
Degrees of Freedom
29
Mean Squares
30
F score
31
Interpreting the F score
Since the F score is the ratio of variance among
the groups and the estimated variance due to
error, we can say that in this example the
observed variance was roughly 162 times the
variance we would expect due to error.
32
This should lead us to suspect that we will probably reject the null hypothesis. We don't know, however, until we compare our calculated value of F with the critical value found in the table.
33
Looking up the critical value of F
The shape of the F distribution changes as a
function of the degrees of freedom (as the t
distribution did). Therefore, to look up the
critical value of F, we have to take into account
both the number of degrees of freedom associated
with the numerator and the number of degrees of
freedom associated with the denominator.
34
To find the critical value of F in table B.4 (The F Distribution), one must find the column for the degrees of freedom associated with the numerator and the row for the degrees of freedom associated with the denominator. The intersection of the column and row lists the critical values of F. The F in light print indicates the critical value of F when alpha = .05; the F in dark print indicates the value when alpha = .01.
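The same critical values can also be obtained in software rather than from table B.4; for example, with SciPy's F distribution (a sketch, assuming SciPy is installed; the degrees of freedom below are placeholders, not the values from this example):

from scipy.stats import f

df_num, df_den = 3, 16               # placeholder degrees of freedom (numerator, denominator)
print(f.ppf(0.95, df_num, df_den))   # critical value of F for alpha = .05
print(f.ppf(0.99, df_num, df_den))   # critical value of F for alpha = .01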
35
If the calculated value exceeds the critical
value of F, we reject the null hypothesis. If it
does not exceed the critical value, we fail to
reject the null hypothesis.
36
In this example, the calculated value of F exceeds both the critical value of F when alpha = .05 and when alpha = .01. Therefore, we reject the null hypothesis and indicate that the probability of a type I error is less than 1% (i.e., p < .01).
37
Post Hoc (after F) tests
Rejecting the null hypothesis tells us that there is more variation among the means than one would expect on the basis of chance. It does not tell us which means differ from each other (except that we know the two most extreme means must differ).
38
To determine which means differ, we would have to conduct a post hoc test. Post hoc tests are conducted when a significant F score is obtained. There are a variety of post hoc tests; most control for the experimentwise error rate.
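One widely used post hoc procedure of this kind is Tukey's HSD, which tests every pair of means while controlling the experimentwise error rate. A minimal sketch using SciPy (version 1.8 or later; the scores are hypothetical):

from scipy.stats import tukey_hsd

# Hypothetical activity scores for the four dose groups
control = [3, 5, 4, 6]
low     = [6, 8, 7, 9]
medium  = [9, 11, 10, 12]
high    = [12, 14, 13, 15]

result = tukey_hsd(control, low, medium, high)
print(result)   # pairwise comparisons with adjusted p-values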
39
Other Issues Related to Analysis of Variance
40
As with t tests, the computational strategy will
vary depending on the relationship among the
groups.
41
One-way ANOVA
When the samples are independent (i.e., the
variable has been manipulated between subjects),
a one-way analysis of variance is conducted. The
preceding example illustrates a one-way ANOVA.
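For independent samples like these, the whole one-way ANOVA can also be run in one call with SciPy (a sketch; the scores are hypothetical, not the data from the earlier example):

from scipy.stats import f_oneway

# Hypothetical activity scores for the four dose groups
control = [3, 5, 4, 6]
low     = [6, 8, 7, 9]
medium  = [9, 11, 10, 12]
high    = [12, 14, 13, 15]

F, p = f_oneway(control, low, medium, high)
print(f"F = {F:.2f}, p = {p:.4f}")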
42
Repeated Measures ANOVA
When the samples are related (i.e., the variable
has been manipulated within subjects), a repeated
measures analysis of variance is conducted. The
logic is the same, but the computational
procedures change.
43
Higher Order Factorial Designs
Frequently, experiments will have more than one
independent variable.
44
For example, in addition to wanting to know how drug dosage influences behavior, we might also be interested in whether the effects were the same for males and females. In this case we would have two independent variables: drug dosage and gender.
45
Experimental Design
46
In this example, we would conduct a two-way analysis of variance. We would compute separate F scores for the main effect of drug dosage, for the main effect of gender, and for the interaction between drug dosage and gender.
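A two-way ANOVA of this kind can be run, for example, with statsmodels (a sketch, assuming pandas and statsmodels are available; the data and the column names activity, dose, and gender are hypothetical):

import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical data: one row per subject
data = pd.DataFrame({
    "activity": [3, 5, 6, 8, 9, 11, 12, 14, 4, 6, 7, 9, 10, 12, 13, 15],
    "dose":     ["control", "control", "low", "low",
                 "medium", "medium", "high", "high"] * 2,
    "gender":   ["male"] * 8 + ["female"] * 8,
})

# Main effects of dose and gender plus their interaction
model = ols("activity ~ C(dose) * C(gender)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))   # one F (and p) per source of variance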
47
The main effects refer to the general, overall
effect of a variable without regard to other
variables. An interaction refers to instances
where the effect of one variable depends upon a
second variable.
48
These results suggest that there is an interaction between drug dosage and gender.
49
Some experiments will have three or more
independent variables. An F score must be
calculated for each source (i.e., main effect and
interaction). Each F score will indicate whether
the source of variance is significant or not.