The British Crime Survey

2
Overview
  • Why carry out a survey?
  • Pros and cons of different approaches
  • Selecting a sample
  • Generalising our findings
  • Comparing results from different surveys

3
Why carry out a survey?
  • To find out something about our population of
    interest which we don't already know, e.g. the
    percentage who
  • think there is a problem with ASB (anti-social
    behaviour)
  • are satisfied with public services
  • ... and to monitor performance and inform policy
    development
  • Cheaper and quicker than asking everyone

4
Pros and cons of different approaches
  • Interviewer administered
  • The pros
  • Allows more complex questionnaires
  • Obtain higher quality data
  • Achieve higher response rates
  • The cons
  • More costly, sometimes prohibitively so
  • May be impractical
  • Issues around sensitive questions

5
Pros and cons of different approaches
  • Self-administered
  • The pros
  • Much cheaper if done by mail/email or the web
  • Good for asking sensitive questions
  • The cons
  • More constrained in length and complexity of
    questionnaire
  • Data quality concerns (e.g. lack of control over
    who fills it in)
  • Tend to get lower response rates

6
Selecting a sample
  • Identify population of interest
  • Non-probability methods (e.g. quota sampling)
  • Lack of control over who is selected
  • Risk that those who are easier to contact/more
    willing to participate will be over-represented
  • Can't use methods of statistical inference to
    generalise findings
  • Higher quality obtained by using probability
    sampling
  • All units have a known and non-zero chance of
    selection (see the sketch below)
  • Can use statistical methods to infer from our
    findings...
  • ... and take some account of possible
    non-response bias
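
A minimal sketch, assuming Python, of what a known and non-zero chance
of selection means in practice for a simple random sample; the sampling
frame and sample size below are hypothetical, not BCS figures.

    import random

    def simple_random_sample(frame, n, seed=None):
        """Draw n units without replacement; each has probability n/len(frame)."""
        rng = random.Random(seed)
        return rng.sample(frame, n)

    frame = [f"address_{i}" for i in range(50000)]   # hypothetical sampling frame
    sample = simple_random_sample(frame, 1000, seed=1)
    print(len(sample), "selected; selection probability =", 1000 / len(frame))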

7
Response rates
  • Vary by mode of interview
  • Face-to-face surveys typically achieve around 70%
    (e.g. BCS currently 75%)
  • Telephone surveys 60% plus
  • Mail surveys 30% to 50%
  • Web surveys 10% to 20%
  • Real concern is non-response bias
  • Extent to which our responding sample
    systematically differs from non-respondents
  • Response rates are an indication of risk but not
    necessarily a good measure of it

8
What are confidence intervals?
  • A (probability) sample survey produces
  • an estimate of the true population value that we
    are trying to measure ...
  • ... and it may differ from the actual figure we
    would have obtained if the whole population had
    been interviewed
  • repeat surveys will produce estimates that
    cluster around the real population value (see the
    sketch below)
  • We can calculate the level of precision of our
    estimate at different levels of confidence
    (chance of being wrong)
  • Conventional to use 95% confidence intervals,
    i.e. we are prepared to accept that we may be
    wrong 5% of the time (1 time in 20).
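
A minimal sketch, assuming Python and purely illustrative numbers, of
the repeated-sampling idea above: each simulated survey yields a
slightly different estimate, the estimates cluster around the true
value, and roughly 95% of the 95% confidence intervals contain it.

    import math
    import random

    TRUE_P = 0.20    # hypothetical true population proportion (20%)
    N = 1000         # sample size per simulated survey
    SURVEYS = 1000   # number of repeated surveys

    random.seed(42)
    covered = 0
    for _ in range(SURVEYS):
        hits = sum(random.random() < TRUE_P for _ in range(N))
        p_hat = hits / N                              # the survey estimate
        se = math.sqrt(p_hat * (1 - p_hat) / N)       # standard error
        low, high = p_hat - 1.96 * se, p_hat + 1.96 * se
        covered += low <= TRUE_P <= high

    print(f"{covered / SURVEYS:.1%} of the 95% CIs contained the true value")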

9
More on confidence intervals
  • Conventional to express confidence intervals in
    terms of a margin of error around our estimate,
    e.g. our estimate of X is 60% +/- 3
  • in other words we are 95% confident that the true
    population value lies between 57% and 63%
  • What determines the size of the confidence
    interval?
  • size of the sample (generally the most important
    factor)
  • size of the estimate
  • design of the survey (e.g. clustered samples v
    simple random samples)

10
How to calculate CIs
  • For an SRS, the 95% CI is around 2 times (1.96)
    the value of the standard error of the estimate
  • The standard error is the square root of
    (p x q) / N
  • p = the estimate, q = 100 - the estimate
  • N = the whole sample asked this question
  • Example: a 20% estimate for a problem with ASB
    from a sample of 1,000 (worked through in the
    sketch below)
  • (20 x 80) = 1,600
  • 1,600 / 1,000 = 1.6
  • Square root of 1.6 = 1.26
  • Multiply 1.96 by 1.26 = 2.47
  • Therefore our estimate ranges from roughly 17.5%
    to 22.5%
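
A minimal sketch, assuming Python, that reproduces the worked example
above (a 20% estimate from a simple random sample of 1,000); the
function name is illustrative, not from the presentation.

    import math

    def ci_95(p, n, z=1.96):
        """95% CI for a percentage estimate p from an SRS of size n."""
        q = 100 - p                   # q = 100 - the estimate
        se = math.sqrt((p * q) / n)   # standard error, in percentage points
        margin = z * se               # roughly 2 times the standard error
        return p - margin, p + margin

    low, high = ci_95(p=20, n=1000)
    print(f"estimate 20%, 95% CI roughly {low:.1f}% to {high:.1f}%")
    # prints: estimate 20%, 95% CI roughly 17.5% to 22.5%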

11
Illustrations using SRS
12
Comparing results
  • As previously mentioned, any one survey produces
    an estimate that may differ from the figures that
    would have been obtained if the whole population
    had been interviewed
  • Thus, changes in estimates between different
    rounds of a survey may occur by chance. In other
    words, the difference may simply be due to which
    adults were randomly selected for interview.
  • Again, we can measure whether this is likely to
    be the case using statistical tests, e.g. if the
    CIs overlap then the difference is unlikely to be
    real (see the sketch below)
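
A minimal sketch, assuming Python and a standard two-proportion z-test
(not necessarily the exact test used for the BCS), of checking whether
a change between two survey rounds is larger than chance alone would
explain; the figures are hypothetical.

    import math

    def change_is_significant(p1, n1, p2, n2, z_crit=1.96):
        """Compare two percentage estimates from independent SRS rounds."""
        se_diff = math.sqrt(p1 * (100 - p1) / n1 + p2 * (100 - p2) / n2)
        z = (p2 - p1) / se_diff
        return abs(z) > z_crit    # True -> change unlikely to be chance alone

    # Hypothetical figures: 20% in round 1, 17% in round 2, 1,000 adults each.
    print(change_is_significant(20, 1000, 17, 1000))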

13
Comparing results: other issues
  • Different methodologies are a problem, e.g. there
    are known mode effects
  • Issues with non-response bias
  • Question wording and order are important
  • Context effects can have an impact

14
Other issues for perception surveys?
  • More difficult to measure perceptions than facts
  • "How old are you?" is easier to answer than
    "Thinking about your local area, how much of a
    problem do you think noisy neighbours or loud
    parties are?"
  • Why?
  • the first asks people to retrieve an easily
    remembered objective fact
  • the second requires people to define a "problem"
    (or remember a given definition) and consider how
    much of a problem two different elements are, with
    no time frame (last week, last month, last year?)

15
Other issues for perception surveys?
  • May be systematic differences in the way people
    understand or interpret terms
  • by age, sex, ethnicity or socio-economic group
  • What influences perceptions?
  • Direct experience
  • Indirect experience (e.g. friends, families and
    neighbours)
  • Media