Experiments, Good and Bad - PowerPoint PPT Presentation
1
Chapter 5
  • Experiments, Good and Bad
  • Statistics, David S. Moore

2
Experimental Studies
  • Experiments are active data production.
  • Components of an experiment:
  • Response (dependent) variable: a variable that
    measures the outcome of the experiment (i.e., the
    Y variable in an equation)
  • Explanatory (independent) variable: the variable
    that we think predicts changes in the
    response variable (i.e., the X variable in an
    equation)
  • Subjects: the individuals studied in an experiment
  • Treatment: a specific experimental condition
    applied to the subjects. A treatment can be a
    combination of explanatory variables.

3
EXAMPLE
  • Does nicotine gum reduce cigarette smoking?
  • Response variable
  • Explanatory variable

4
Hidden Components
  • Confounding: when the effects of predictor
    variables cannot be separated from the effects of
    uncontrolled variables.
  • What problems can confounding cause?
  • Lurking variable: essentially an unnoticed
    confounding variable
  • Is confounding present here? How?
  • The statistics department has instituted a new
    tracking system for students in STA 200 to
    prevent students from failing the class. The
    system is said to be very effective because, in
    the first year after its introduction, the
    failure rate in STA 200 was lower than in the
    previous year.

5
Example
  • A study is done to compare the progress of
    students taking a course online versus taking the
    course in the classroom. In order to measure the
    progress of students, a test is given to the
    students after taking the course and the test
    scores are compared.
  • Explanatory variable
  • Response variable
  • Lurking variable
  • Confounded variables

6
How can we improve experiments?
  • Randomized comparative experiments: a method to
    compare two or more treatments
  • Helps eliminate confounding and lurking variables
  • You may be familiar with this method; medical
    studies often compare multiple treatments (e.g.,
    control groups vs. treatment groups).
  • Randomization uses chance to assign subjects to
    the treatments.
  • Randomization means randomly selecting which
    subjects receive which treatment.
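A minimal sketch of randomization in Python. The subject IDs and group sizes here are made up for illustration; the point is that chance, not the experimenter, decides who gets which treatment:

```python
import random

# Hypothetical subject IDs (placeholders, not from the slides).
subjects = ["S01", "S02", "S03", "S04", "S05", "S06", "S07", "S08"]

random.seed(42)           # fixed seed so the sketch is reproducible
random.shuffle(subjects)  # chance orders the subjects

# Split the shuffled list in half: first half gets the treatment,
# second half serves as the control group.
half = len(subjects) // 2
treatment_group = subjects[:half]
control_group = subjects[half:]
```

Shuffling then splitting guarantees every subject had the same chance of landing in either group.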

7
Randomized Experimental Design
  • In an experiment we want to make conclusions
    about the effects of our explanatory variable.
  • Randomization is important so that the groups are
    not biased before we apply the treatments.
  • The comparative design of the experiment allows
    us to eliminate some of the confounding effects
    that may be present. This allows us to make
    conclusions about the effects of the explanatory
    variables on the response variables.
  • SIZE MATTERS (sample size)! Use a large enough
    sample!

8
Randomized comparative experiments
  • The best way to conduct an experiment is to
    compare two or more treatments.
  • Reason: this shows us the effect of the
    explanatory variable(s) on the response
    variable(s) by eliminating a great deal of
    confounding.
  • This is exactly what randomized comparative
    experiments do.
  • To conduct a randomized comparative experiment,
    we use random assignment to divide our subjects
    into groups. Each group receives a different
    treatment. (We should try to keep our groups
    about the same size.)
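The same idea extends to more than two treatments. A sketch with three hypothetical treatment labels and twelve made-up subjects, divided into equal-sized groups by random assignment:

```python
import random

random.seed(0)
subjects = [f"S{i:02d}" for i in range(1, 13)]   # 12 hypothetical subjects
treatments = ["Treatment A", "Treatment B", "Treatment C"]

# Shuffle once, then deal the subjects out into equal-sized groups.
random.shuffle(subjects)
group_size = len(subjects) // len(treatments)
groups = {t: subjects[i * group_size:(i + 1) * group_size]
          for i, t in enumerate(treatments)}
```

Each treatment ends up with the same number of subjects, and no subject's characteristics influenced which group they joined.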

9
Randomized comparative experiments (cont'd)
  • Simplest randomized comparative experiment:
    divide subjects into two groups.
  • Control group: usually the group receiving the
    placebo
  • Treatment group: the group actually receiving
    the treatment
  • The designed experiment above should be done in
    this way.
  • Additional types of randomized comparative
    experiments:
  • Type 1: use an existing treatment as the control
    group. Compare this group with a group receiving
    a new treatment.
  • Type 2: compare multiple new treatments by using
    multiple groups. (No control group.)

10
Most Common Experiment
  • Clinical trials
  • Clinical trials: experiments that study the
    effectiveness of medical treatments on actual
    patients
  • Placebo: a dummy treatment that is made to look
    exactly like the true treatment (there are no
    active ingredients in a placebo)
  • (If the placebo is a pill, it is sometimes
    referred to as a sugar pill.)
  • Main reason to use a placebo: to eliminate the
    placebo effect
  • Placebo effect: a positive response to a dummy
    treatment.
  • The idea of receiving a treatment can affect the
    outcome of a clinical trial.
  • If ethical, don't tell the placebo group they're
    getting a placebo ("blind").

11
Example
  • In order to study the effectiveness of vitamin C
    in preventing colds, a researcher recruited 200
    volunteers. She randomly assigned 100 of them to
    take vitamin C for 10 weeks and the remaining 100
    to take nothing. The 200 participants recorded
    how many colds they had during the 10 weeks. The
    two groups were compared, and the researcher
    announced that taking vitamin C reduces the
    frequency of colds.

12
Logic of experimental design
  • Randomized comparative experiments are one of the
    most important ideas in statistics.
  • Reason: they give us the opportunity to draw
    cause-and-effect conclusions.
  • How do they accomplish this goal?
  • By using randomization, we are able to form
    groups that are similar in all respects before we
    apply the treatment.
  • By using comparative designs, influences other
    than the experimental treatment(s) are the same
    in all groups.
  • By using this type of design, it is easy to see
    that any differences in the response variables
    must be due to the effects of the treatments.

13
Principles of design of experiments
  • Control
  • The effects of lurking variables on the responses
    can be avoided by comparing two or more
    treatments.
  • Randomize
  • Use enough subjects
  • This reduces the possibility that variation in
    the results was due to chance.

14
Careful!
  • Control refers to the overall effort to minimize
    variability in the way the experimental units are
    obtained and treated.

15
Drawing Conclusions
  • In an experiment we want to draw conclusions by
    comparing results of the treatments.
  • To conclude that treatments are different, the
    differences between the treatments must be large
    enough that they cannot be explained by the
    variation that is always present in an
    experiment.
  • Statistically significant: when an observed
    effect is so large that it would rarely occur by
    chance.
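One common way to quantify "would rarely occur by chance" is a re-randomization (permutation) test, sketched below with made-up cold counts for two groups (not data from any real study). The simulation asks: if the treatment labels had been assigned purely by chance, how often would a difference at least as large as the observed one appear?

```python
import random

random.seed(1)
# Hypothetical cold counts per subject (fabricated for illustration).
treatment = [2, 1, 3, 2, 1, 2]
control   = [4, 3, 5, 2, 4, 3]

observed = sum(control) / len(control) - sum(treatment) / len(treatment)

# Repeatedly reshuffle the pooled data into two fake "groups" and
# count how often the chance difference matches or beats the observed one.
pooled = treatment + control
hits = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    fake_treatment = pooled[:len(treatment)]
    fake_control = pooled[len(treatment):]
    diff = (sum(fake_control) / len(fake_control)
            - sum(fake_treatment) / len(fake_treatment))
    if diff >= observed:
        hits += 1

p_value = hits / trials  # small p-value: the effect rarely occurs by chance
```

A small p-value means a difference this large is rare under chance alone, i.e., the result is statistically significant.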

16
Block Design Experiment
  • Block design: separating experimental units or
    subjects into groups (blocks) based on their
    similarity. Then treatments are randomly
    assigned to the units within each block.
  • Example: men and women are different! Block by
    gender, then randomly assign the treatments
    within the male group and within the female
    group.
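A sketch of the blocking-by-gender example in Python, with hypothetical unit IDs and treatment names. The key step is that the shuffle-and-split happens separately inside each block:

```python
import random

random.seed(7)
# Hypothetical units, already separated into blocks by gender.
blocks = {
    "male":   ["M1", "M2", "M3", "M4"],
    "female": ["F1", "F2", "F3", "F4"],
}
treatments = ["cream A", "cream B"]   # placeholder treatment names

assignment = {}
for block_name, units in blocks.items():
    random.shuffle(units)             # randomize *within* each block
    half = len(units) // 2
    assignment[block_name] = {
        treatments[0]: units[:half],
        treatments[1]: units[half:],
    }
```

Each block contributes subjects to both treatments, so treatment comparisons are never confounded with the blocking variable.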

17
Why Block Design?
  • Another form of control: by making the subjects
    in the blocks similar, you control an amount of
    variability that can occur between dissimilar
    people. (Example: rash cream)
  • Comparing male strength vs. female strength would
    return poor, inaccurate data and would increase
    the variability of the data.
  • A wise experimenter forms blocks based on the
    most important unavoidable sources of variability
    among experimental units.

18
Matched Pairs Design
  • A matched-pairs design is an experimental design
    in which the experimental units are somehow
    related -- they can be the same person before and
    after a treatment, twins, husband and wife, units
    from the same geographical location, etc.
  • There are only two treatments in a matched-pairs
    design.

19
Matched Pairs Design
  • Example: measure a response variable on a unit
    before the treatment is applied, and then measure
    the response variable on the same unit after the
    treatment is applied.
  • Why matched pairs? It reduces variation among the
    subjects being tested.
  • Matched pairs is a form of block design.
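The before/after example can be sketched in a few lines of Python; the scores below are invented for illustration. The analysis works on the per-subject differences rather than the two raw lists, because each subject serves as their own control:

```python
# Hypothetical before/after scores for the same five subjects.
before = [72, 65, 80, 59, 70]
after  = [78, 70, 85, 60, 76]

# Per-subject differences remove subject-to-subject variation:
# only the change within each matched pair remains.
diffs = [a - b for b, a in zip(before, after)]
mean_diff = sum(diffs) / len(diffs)
```

Because the pairing cancels out each subject's baseline, the mean difference reflects the treatment's effect with far less noise than comparing two unrelated groups.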