Transcript and Presenter's Notes

Title: Sampling


1
Sampling Experimental Control
  • Psych 231 Research Methods in Psychology

2
Sampling
  • Why do we use sampling methods?
  • Typically we don't have the resources to test
    everybody, so we test a subset
  • Goals of good sampling
  • Maximize Representativeness
  • the extent to which the characteristics of those in
    the sample reflect those in the population
  • Reduce Bias
  • a systematic difference between those in the
    sample and those in the population

3
Sampling
  • Population: everybody that the research is targeted
    to be about
  • Sample: the subset of the population that actually
    participates in the research
4
Sampling
5
Sampling Methods
  • Probability sampling
  • Use some form of random sampling
  • Non-probability sampling
  • Don't use random sampling
  • These are typically not considered as good as
    probability methods

6
Simple random sampling
  • Every individual has an equal and independent
    chance of being selected from the population
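A minimal sketch of simple random sampling, assuming a hypothetical population represented as a list of ID numbers:

```python
import random

# Hypothetical population of 1,000 people, identified by ID number
population = list(range(1, 1001))

# Simple random sample: every individual has an equal and
# independent chance of being selected
sample = random.sample(population, k=50)
print(sample[:10])
```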

7
Systematic sampling
  • Selecting every nth person
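A minimal sketch of systematic sampling (every nth person), again assuming a hypothetical list of population IDs; starting from a random point in the first interval is a common refinement:

```python
import random

population = list(range(1, 1001))   # hypothetical population of 1,000 people
n = 10                              # sample every 10th person

start = random.randrange(n)         # random starting point within the first interval
sample = population[start::n]
print(len(sample))                  # 1000 / 10 = 100 people
```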

8
Stratified sampling
  • Step 1: identify groups (strata)
  • Step 2: randomly select from each group
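A minimal sketch of the two steps of stratified sampling; the strata names and the number drawn per stratum are made up for illustration:

```python
import random

# Step 1: identify the groups (strata)
strata = {
    "freshman":  ["F1", "F2", "F3", "F4", "F5", "F6"],
    "sophomore": ["S1", "S2", "S3", "S4", "S5"],
    "junior":    ["J1", "J2", "J3", "J4"],
}

# Step 2: randomly select from each group
sample = []
for group, members in strata.items():
    sample.extend(random.sample(members, k=2))   # e.g., 2 from each stratum

print(sample)
```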

9
Convenience sampling
  • Use the participants who are easy to get

10
Quota sampling
  • Step 1: identify the specific subgroups
  • Step 2: take from each group until the desired number
    of individuals is reached
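A minimal sketch of quota sampling; the subgroups, quotas, and stream of volunteers are hypothetical. Unlike stratified sampling, the people taken are whoever shows up first, not a random draw:

```python
# Hypothetical volunteers in the order they show up (not random)
volunteers = [("Ann", "smoker"), ("Bob", "non-smoker"), ("Cal", "smoker"),
              ("Dee", "smoker"), ("Eve", "non-smoker"), ("Fay", "non-smoker")]

# Step 1: identify the specific subgroups and the desired number from each
quotas = {"smoker": 2, "non-smoker": 2}

# Step 2: take from each subgroup until its quota is filled
sample = []
for person, subgroup in volunteers:
    if quotas.get(subgroup, 0) > 0:
        sample.append(person)
        quotas[subgroup] -= 1

print(sample)   # ['Ann', 'Bob', 'Cal', 'Eve']
```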

11
Experimental Control
  • Our goal
  • to test for a possible relationship between the
    variability in our IV and changes in our DV.
  • Control is used to minimize excessive
    variability
  • and to reduce the potential for confounds.

12
Sources of variability (noise)
  • Sources of Total (T) Variability
  • T = NonRandom(exp) + NonRandom(other) + Random
  • Nonrandom (NR) Variability - systematic variation
  • A. (NRexp) manipulated independent variables (IV)
  • i. our hypothesis is that changes in the IV will
    result in changes in the DV

13
Sources of variability (noise)
  • Sources of Total (T) Variability
  • T = NonRandom(exp) + NonRandom(other) + Random
  • Nonrandom (NR) Variability - systematic variation
  • B. (NRother) extraneous variables (EV) which
    covary with the IV
  • i. other variables that also vary along with the
    changes in the IV, which may in turn influence
    changes in the DV (confounds)

14
Sources of variability (noise)
  • Sources of Total (T) Variability
  • T = NonRandom(exp) + NonRandom(other) + Random
  • Non-systematic variation
  • C. Random (R) Variability
  • imprecision in manipulation (IV) and/or
    measurement (DV)
  • randomly varying extraneous variables (EV)

15
Sources of variability (noise)
  • Sources of Total (T) Variability
  • T = NRexp + NRother + R
  • Goal: reduce R and NRother so that we can
    detect NRexp.
  • That is, so we can see the changes in the DV that
    are due to the changes in the independent
    variable(s).
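A small simulation sketch of the decomposition T = NRexp + NRother + R; the numbers used here (baseline, true effect, confound size, noise level) are made up to show why reducing R and NRother matters for detecting NRexp:

```python
import random

random.seed(231)

def observed_difference(n, nr_exp, nr_other, noise_sd):
    """Each DV score = baseline + NRexp (treatment only) + NRother (confound) + R (noise)."""
    treatment = [50 + nr_exp + nr_other + random.gauss(0, noise_sd) for _ in range(n)]
    control   = [50 + random.gauss(0, noise_sd) for _ in range(n)]
    return sum(treatment) / n - sum(control) / n

# Small NRother and R: the observed difference is close to the true IV effect (5)
print(observed_difference(n=100, nr_exp=5, nr_other=0, noise_sd=2))

# Large NRother and R: the observed difference no longer reflects NRexp alone
print(observed_difference(n=100, nr_exp=5, nr_other=8, noise_sd=20))
```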

16
Weight analogy
  • Imagine the different sources of variability as
    weights

(Diagram: sources of variability shown as weights for the Treatment group vs. the control group)
17
Weight analogy
  • If NRother and R are large relative to NRexp then
    detecting a difference may be difficult

18
Weight analogy
  • But if we reduce the size of NRother and R
    relative to NRexp, then detection gets easier

19
Using control to reduce problems
  • Potential Problems
  • Excessive random variability
  • Confounding
  • Dissimulation

20
Potential Problems
  • Excessive random variability
  • If control procedures are not applied, then the R
    component of the data will be excessively large and
    may make NRexp undetectable
  • So try to minimize this by using good measures of
    the DV, good manipulations of the IV, etc.

21
Excessive random variability
Hard to detect the effect of NRexp
22
Potential Problems
  • Confounding
  • If a relevant EV co-varies with the IV, then the NR
    component of the data will be "significantly" large,
    and may lead to misattribution of the effect to the IV

(Diagram: the EV co-varies with the IV and may itself influence the DV)
23
Confounding
Hard to detect the effect of NRexp because the
effect looks like it could be from NRexp but is
really (mostly) due to NRother
24
Potential Problems
  • Potential problem caused by experimental control
  • Dissimulation
  • If an EV that interacts with the IV is held constant,
    then the effect of the IV is known only for that level
    of the EV, which may lead to overgeneralization of the
    IV effect
  • This is a potential problem that affects
    external validity

25
Methods of Controlling Variability
  • Comparison
  • Production
  • Constancy/Randomization

26
Methods of Controlling Variability
  • Comparison
  • An experiment always makes a comparison, so it
    must have at least two groups
  • Sometimes there are control groups
  • This is typically the absence of the treatment
  • Without control groups it is harder to see what
    is really happening in the experiment
  • it is easier to be swayed by plausibility or
    inappropriate comparisons
  • Sometimes there is just a range of values of the
    IV

27
Methods of Controlling Variability
  • Production
  • The experimenter selects the specific values of
    the Independent Variables
  • (as opposed to allowing the levels to freely vary
    as in observational studies)
  • Need to do this carefully
  • Suppose that you don't find a difference in the
    DV across your different groups
  • Is this because the IV and DV aren't related?
  • Or is it because your levels of the IV weren't
    different enough?

28
Methods of Controlling Variability
  • Constancy/Randomization
  • If there is a variable that may be related to the
    DV that you can't (or don't want to) manipulate
  • you should either hold it constant (control
    variable)
  • or let it vary randomly across all of the
    experimental conditions (random variable)
  • But beware of confounds, variables that are related
    to both the IV and DV but aren't controlled

29
Poorly designed experiments
  • Example: Does standing close to somebody cause
    them to move?
  • So you stand close to people and see how long it
    takes before they move
  • Problem: no control group to establish a
    comparison (this design is sometimes called the
    one-shot case study design)

30
Poorly designed experiments
  • Does a relaxation program decrease the urge to
    smoke?
  • One group pretest-posttest design
  • Pre-test (desire to smoke) → give relaxation program
    → post-test (desire to smoke)

31
Poorly designed experiments
  • One group pretest-posttest design
  • Problems include history, maturation, testing,
    instrument decay, statistical regression, and more

(Design diagram: participants → Pre-test measure (Dependent Variable) → Training group (Independent Variable) → Post-test measure (Dependent Variable))
32
Poorly designed experiments
  • Example: the smoking example again, but with two
    groups. The subjects get to choose which group
    (relaxation or no program) to be in
  • Non-equivalent control groups
  • Problem: selection bias for the two groups; need
    to do random assignment to groups
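A minimal sketch of random assignment (as opposed to self-selection), assuming hypothetical participant IDs:

```python
import random

participants = [f"P{i}" for i in range(1, 21)]   # hypothetical participant IDs

# Random assignment: shuffle, then split in half, so group membership
# does not depend on the participants' own choice
random.shuffle(participants)
half = len(participants) // 2
relaxation_group = participants[:half]
control_group    = participants[half:]

print(relaxation_group)
print(control_group)
```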

33
Poorly designed experiments
  • Non-equivalent control groups

(Design diagram: participants self-assign to the Training group or the No training (Control) group (Independent Variable), then each group is measured (Dependent Variable))
34
Well designed experiments
  • Post-test only designs

(Design diagram: participants are randomly assigned to the Experimental group or the Control group (Independent Variable), then each group is measured (Dependent Variable))
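A sketch of the post-test only design as a small simulation; the baseline (50), treatment effect, and noise level are hypothetical values chosen for illustration:

```python
import random

def run_posttest_only(participants, treatment_effect=5.0):
    """Randomly assign, apply the treatment to one group, then measure each group once."""
    random.shuffle(participants)
    half = len(participants) // 2
    experimental, control = participants[:half], participants[half:]

    # Hypothetical post-test scores: a common baseline plus the treatment effect
    exp_scores  = [random.gauss(50 + treatment_effect, 5) for _ in experimental]
    ctrl_scores = [random.gauss(50, 5) for _ in control]

    # The comparison is between the two post-test means
    return sum(exp_scores) / len(exp_scores) - sum(ctrl_scores) / len(ctrl_scores)

print(run_posttest_only([f"P{i}" for i in range(40)]))
```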
35
Well designed experiments
  • Pretest-posttest design

(Design diagram: participants are randomly assigned to the Experimental group or the Control group; each group is measured before and after the Independent Variable (Dependent Variable measured at pre-test and post-test))
36
Next time
  • Read Chapter 8