1
More About Hypothesis Test
  • Chapter 21

2
The Null Hypothesis
  • To perform a hypothesis test, the null must be a
    statement about the value of a parameter for a
    model.
  • How do we choose the null hypothesis? The
    appropriate null arises directly from the context
    of the problem; it is not dictated by the data,
    but instead by the situation.
  • A good way to identify both the null and
    alternative hypotheses is to think about the
    "why" of the situation.

3
The Null Hypothesis (cont.)
  • There is a temptation to state your claim as the
    null hypothesis.
  • However, you cannot prove a null hypothesis true.
  • So, it makes more sense to use what you want to
    show as the alternative.
  • This way, when you reject the null, you are left
    with what you want to show.

4
Review of Necessary Conditions
We must state the assumptions and check the
corresponding conditions to determine whether we
can model the sampling distribution of the
proportion with a Normal model. Conditions to
check (the same conditions used for confidence
intervals):
  1. Random Sampling Condition
  2. 10% Condition
  3. Success/Failure Condition
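The three checks above are mechanical enough to sketch in code. This is an illustrative helper, not part of the slides; the sample size, null value, and population size in the example call are hypothetical.

```python
def check_conditions(n, p0, population_size, is_random_sample):
    """Check the three conditions for using a Normal model for the
    sampling distribution of a sample proportion (illustrative sketch)."""
    return {
        # Random Sampling Condition: the data come from a random sample
        "Random Sampling": is_random_sample,
        # 10% Condition: the sample is at most 10% of the population
        "10% Condition": n <= 0.10 * population_size,
        # Success/Failure Condition: expect at least 10 successes and
        # at least 10 failures under the null value p0
        "Success/Failure": n * p0 >= 10 and n * (1 - p0) >= 10,
    }

# Hypothetical survey: n = 400 from a city of 100,000, testing p0 = 0.20
print(check_conditions(400, 0.20, 100_000, True))
```

If any entry comes back `False`, the Normal model should not be used for the test.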
5
How to Think about P-Values
  • A P-value is a conditional probability: it tells
    us the probability of the observed statistic
    given that the null hypothesis is true.
  • P-value = P(observed statistic value (or even
    more extreme) | H0)
  • The P-value is not the probability that the
    null hypothesis is true.
  • The smaller the P-value, the more confident we
    can be in declaring that we doubt the null
    hypothesis.
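The definition above can be turned into a short computation for a one-proportion test. This is a sketch assuming a one-sided alternative (Ha: p > p0) and made-up data (96 successes in 400 trials); the function name is mine, not from the slides.

```python
from math import sqrt
from statistics import NormalDist

def one_prop_p_value(x, n, p0):
    """One-sided P-value for H0: p = p0 vs Ha: p > p0,
    using the Normal model for the sampling distribution of p-hat."""
    p_hat = x / n
    se = sqrt(p0 * (1 - p0) / n)   # standard error computed under H0
    z = (p_hat - p0) / se
    # P(observed statistic value or more extreme | H0 is true)
    return 1 - NormalDist().cdf(z)

# Hypothetical data: 96 successes in 400 trials, testing p0 = 0.20
print(round(one_prop_p_value(96, 400, 0.20), 4))  # → 0.0228
```

Note that the standard error uses p0, not p-hat, because the P-value is computed under the assumption that H0 is true.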
6
Alpha Levels
  • We can define "rare event" arbitrarily by setting
    a threshold for our P-value.
  • If our P-value falls below that point, we'll
    reject H0. We call such results statistically
    significant.
  • The threshold is called an alpha level, denoted
    by α.

7
Alpha Levels (cont.)
  • Common alpha levels are 0.10, 0.05, and 0.01.
  • You have the option (almost the obligation) to
    consider your alpha level carefully and choose an
    appropriate one for the situation.
  • The alpha level is also called the significance
    level.
  • When we reject the null hypothesis, we say that
    the test is significant at that level.

8
Alpha Levels (cont.)
  • What can you say if the P-value does not fall
    below α?
  • You should say that "the data have failed to
    provide sufficient evidence to reject the null
    hypothesis."
  • Don't say that you "accept" the null hypothesis.

9
Making Errors
When testing a null hypothesis, we make a
decision either to reject it or to fail to reject
it. Our conclusions are sometimes correct and
sometimes wrong (even if we do everything
correctly). There are two types of errors that
can be made:
  • Type I error: the mistake of rejecting the null
    hypothesis when it is actually true.
  • Type II error: the mistake of failing to reject
    the null hypothesis when it is actually false.
10
Making Errors (cont.)
Here's an illustration of the four situations in
a hypothesis test:

                     H0 is true        H0 is false
Reject H0            Type I error      Correct decision
Fail to reject H0    Correct decision  Type II error

Which type of error is more serious depends on
the situation at hand. In other words, the
gravity of the error is context dependent.
11
Making Errors (cont.)
How can we remember which error is Type I and
which is Type II? Let's try a mnemonic device:
"ROUTINE FOR FUN." Using only the consonants from
those words (RouTiNe FoR FuN), we can easily
remember that a Type I error is RTN: reject a
true null (hypothesis), whereas a Type II error
is FRFN: failure to reject a false null
(hypothesis).
12
Making Errors (cont.)
  • How often will a Type I error occur?
  • Since a Type I error is rejecting a true null
    hypothesis, the probability of a Type I error is
    our α level.
  • When H0 is false and we reject it, we have done
    the right thing.
  • A test's ability to detect a false null
    hypothesis is called the power of the test.
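
The claim that P(Type I error) = α can be checked by simulation. This is an illustrative sketch, not from the slides: we generate many samples with H0 actually true, run a two-sided one-proportion z-test on each, and count how often we (wrongly) reject. The seed, sample size, and trial count are all hypothetical choices.

```python
import random
from math import sqrt
from statistics import NormalDist

random.seed(7)  # hypothetical seed, for reproducibility only

P0, N, ALPHA, TRIALS = 0.5, 100, 0.05, 4000
rejections = 0
for _ in range(TRIALS):
    # Generate data with H0 actually true (the true p equals P0)
    successes = sum(random.random() < P0 for _ in range(N))
    z = (successes / N - P0) / sqrt(P0 * (1 - P0) / N)
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided test
    if p_value < ALPHA:
        rejections += 1  # a Type I error: rejecting a true null

print(rejections / TRIALS)  # should land close to ALPHA
```

The observed rejection rate hovers near α (it is not exactly 0.05, both because of simulation noise and because counts are discrete while the Normal model is continuous).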

13
Making Errors (cont.)
  • When H0 is false and we fail to reject it, we
    have made a Type II error.
  • We assign the letter β to the probability of
    this mistake.
  • It's harder to assess the value of β because we
    don't know what the value of the parameter really
    is.
  • There is no single value for β; we can think of
    a whole collection of β's, one for each incorrect
    parameter value.

14
Making Errors (cont.)
  • One way to focus our attention on a particular β
    is to think about the effect size.
  • Ask "How big a difference would matter?"
  • We could reduce β for all alternative parameter
    values by increasing α.
  • This would reduce β but increase the chance of a
    Type I error.
  • This tension between Type I and Type II errors
    is inevitable.
  • The only way to reduce both types of errors is
    to collect more data. Otherwise, we just wind up
    trading off one kind of error against the other.

15
Power
  • The power of a test is the probability that it
    correctly rejects a false null hypothesis.
  • The power of a test is 1 − β.
  • The value of the power depends on how far the
    truth lies from the null hypothesis value.
  • The distance between the null hypothesis value,
    p0, and the truth, p, is called the effect size.
  • Power depends directly on effect size.
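
The dependence of power on effect size can be made concrete for a one-proportion test. This is a sketch under the Normal model, assuming a one-sided alternative (Ha: p > p0); the function name and the example values of p_true are mine, not from the slides.

```python
from math import sqrt
from statistics import NormalDist

def power(p0, p_true, n, alpha=0.05):
    """Power of the one-sided test H0: p = p0 vs Ha: p > p0,
    under the Normal model (illustrative sketch)."""
    z_crit = NormalDist().inv_cdf(1 - alpha)
    # Smallest sample proportion that leads to rejecting H0
    cutoff = p0 + z_crit * sqrt(p0 * (1 - p0) / n)
    # Probability of landing past the cutoff when the truth is p_true
    se_true = sqrt(p_true * (1 - p_true) / n)
    return 1 - NormalDist().cdf((cutoff - p_true) / se_true)

# A larger effect size gives more power (hypothetical numbers):
# true p farther from p0 = 0.20 is easier to detect with n = 400
print(round(power(0.20, 0.22, 400), 3))
print(round(power(0.20, 0.25, 400), 3))
```

With the null value p0 = 0.20 and n = 400, moving the true proportion from 0.22 to 0.25 roughly triples the power.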

16
Controlling Type I and Type II Errors
  • α, β, and the sample size (n) are all related,
    so when you choose or determine any two of them,
    the third is automatically determined.
  • Try to use the largest α that you can tolerate.
    However, for Type I errors with more serious
    consequences, select smaller values of α.
  • Then choose a sample size as large as is
    reasonable, based on considerations of time,
    cost, and other relevant factors.

17
Controlling Type I and Type II Errors (cont.)
  • For any fixed sample size n, a decrease in α
    will cause an increase in β.
  • For any fixed α, an increase in the sample size
    n will cause a decrease in β.
  • To decrease both α and β, increase the sample
    size.
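
Both relationships can be verified numerically for a one-proportion test. This sketch assumes a one-sided alternative under the Normal model; the scenario values (p0 = 0.20, true p = 0.25) are hypothetical.

```python
from math import sqrt
from statistics import NormalDist

def beta(p0, p_true, n, alpha):
    """Type II error probability (beta) for the one-sided test
    H0: p = p0 vs Ha: p > p0, under the Normal model (sketch)."""
    z_crit = NormalDist().inv_cdf(1 - alpha)
    cutoff = p0 + z_crit * sqrt(p0 * (1 - p0) / n)   # rejection cutoff
    se_true = sqrt(p_true * (1 - p_true) / n)
    # Fail to reject when p-hat falls below the cutoff
    return NormalDist().cdf((cutoff - p_true) / se_true)

# Hypothetical scenario: p0 = 0.20, true p = 0.25
print(round(beta(0.20, 0.25, 400, 0.05), 3))  # baseline
print(round(beta(0.20, 0.25, 400, 0.01), 3))  # smaller alpha -> larger beta
print(round(beta(0.20, 0.25, 800, 0.05), 3))  # larger n -> smaller beta
```

Holding n fixed at 400, tightening α from 0.05 to 0.01 roughly doubles β; holding α fixed at 0.05, doubling n to 800 cuts β sharply.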

18
Example - Radio Ads
A company is willing to renew its advertising
contract with a local radio station only if the
station can prove that more than 20% of the
residents of the city have heard the ad and
recognize the company's product. The radio
station conducts a random phone survey of 400
people.
  A.) What are the hypotheses?
  B.) What would a Type I error be?
  C.) What would a Type II error be?
19
Example - Radio Ads (cont.)
  D.) The station plans to conduct this test using
      a 10% level of significance, but the company
      wants the significance level lowered to 5%.
      Why?
  E.) What is meant by the power of the test?
  F.) For which level of significance will the
      power of this test be higher? Why?
  G.) They finally agree to use α = 0.05, but the
      company proposes that the station call 600
      people instead of the 400 initially proposed.
      Will that make the risk of a Type II error
      higher or lower? Explain.
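
The effect of the larger sample in part G can be sketched numerically. Here H0: p = 0.20 and Ha: p > 0.20 as in the example; the true recognition rate of 0.25 is an assumed effect size (the problem does not specify one), so the numbers below are illustrative only.

```python
from math import sqrt
from statistics import NormalDist

def power(p0, p_true, n, alpha=0.05):
    """Power of the one-sided test H0: p = p0 vs Ha: p > p0 (sketch)."""
    z_crit = NormalDist().inv_cdf(1 - alpha)
    cutoff = p0 + z_crit * sqrt(p0 * (1 - p0) / n)
    se_true = sqrt(p_true * (1 - p_true) / n)
    return 1 - NormalDist().cdf((cutoff - p_true) / se_true)

# Suppose the true recognition rate were 0.25 (assumed, not given)
for n in (400, 600):
    type2_risk = 1 - power(0.20, 0.25, n)
    print(n, round(type2_risk, 3))
```

Whatever effect size is assumed, the Type II risk comes out lower at n = 600 than at n = 400, which is the qualitative answer to part G.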
20
Assignment
  • Read Chapter 23
  • Try the following problems from Ch. 21
  • 1, 3, 13, 15, 19, 21, and 25