Transcript and Presenter's Notes

Title: Analyzing and Interpreting Vehicle Stop Data


1
Analyzing and Interpreting Vehicle Stop Data

2
Purpose of workshop
  • Provide a conceptual understanding of the purpose
    and challenges of benchmarking
  • Provide a scheme for assessing benchmark quality
  • Describe the early decisions a department will
    need to make when initiating data collection and
    analysis
  • Provide an overview of various benchmarking
    methods and assessments of their quality
  • Discuss how to analyze search and
    stop-disposition data.

3
Team Members
  • Lorie A. Fridell, Ph.D.
  • Director of Research, PERF
  • Geoff Alpert, Ph.D.
  • Professor/Chair, Criminology/CJ, University of
    South Carolina
  • Comprehensive analysis in Miami-Dade County,
    including one of the first uses of crash data
  • Robin Engel, Ph.D.
  • Associate Professor, CJ, University of Cincinnati
  • A number of published articles on the topic;
    analyzing data for the Pennsylvania State Police

4
Team (cont.)
  • Amy Farrell, Ph.D.
  • Associate Director, Institute on Race and
    Justice, Northeastern University
  • Co-author of the first resource guide for
    departments on data collection; analyzing data in
    Massachusetts, Rhode Island, and Seattle
  • David A. Harris, J.D.
  • Professor, College of Law, University of Toledo
  • Author of Profiles in Injustice: Why Racial
    Profiling Cannot Work; analyses conducted in Ohio
    counties

5
Team (Cont.)
  • John Lamberth, Ph.D.
  • President, Lamberth Consulting
  • Conducted seminal studies in New Jersey and MD
  • Multiple studies nationwide
  • Father of the observation methodology.

6
Doing some of the best work!
  • Don't be daunted.
  • They will describe top-notch research.
  • Not all departments can implement the strongest
    models.
  • If you can't implement the best of the best...
  • You must interpret responsibly.
  • Also note: We don't all agree on everything you
    will hear.

7
Resources
  • Final version of PowerPoint slides
  • PERF web site (www.policeforum.org), Racially
    Biased Policing, Supplemental Materials and
    Resources for Ch. VIII
  • By the Numbers: A Guide to Analyzing Race Data
    from Vehicle Stops
  • A how-to guide on analysis/interpretation of
    vehicle stop data, geared toward those doing the
    analyses
  • PERF report funded by COPS
  • Also on PERF web site (December).

8
Resources (cont.)
  • Forthcoming, 2004
  • Understanding Race Data from Vehicle Stops: A
    Stakeholder's Guide
  • Denominator II: How to Correctly Collect Traffic
    Stop Data (Your Reputation Depends on It)

9
The Benchmarking Challenge
  • Lorie A. Fridell
  • Director of Research
  • PERF

10
What are we trying to do?
  • Social science research question: Are officers
    inappropriately using race/ethnicity when making
    law enforcement decisions?
  • Tough stuff: Trying to measure what officers are
    thinking when they make stops, conduct searches,
    etc.

11
The Challenge of Analysis: An example based on
gender
  • The police-citizen contact data for most
    jurisdictions show that males are stopped more
    than females.

12
Male/Female Stops with residential population
added
13
Is this proof of gender-biased policing?
  • What OTHER THAN BIAS might account for the
    difference in the extent to which males and
    females are stopped by police?

14
Possible explanations for disproportionate stops
of males
  • Gender-biased policing OR maybe
  • Other factors at work, related to
  • Driving quantity
  • Driving quality
  • Driving location.

15
People are not at equal risk of being stopped by
police (even in unbiased world)
  • The more they drive the more at-risk they are
  • The more they violate traffic laws the more
    at-risk they are
  • The more they drive in areas with high police
    stopping activity the more at-risk they are.

16
This example shows
  • When we look at the police-citizen contact data
  • We need to take into consideration the factors
    OTHER THAN BIAS that might explain police
    behavior.

17
  • We need to RULE OUT (through our research
    methods) the impact of those other factors
  • in order to determine if bias impacts on police
    behavior.

18
Basic Conceptual Model: What we want to know
  Driver Race/Ethnicity --> Police Stops
 
19
Conceptual Model for Analysis
  Driver Race/Ethnicity --> Police Stops
  Also feeding into Police Stops: Driving Quality,
  Driving Quantity, Driving Location
20
The quest of analysis of police stops
  • To determine whether there is a causal
    relationship between citizen race/ethnicity and
    police behavior
  • Again: To do this we must rule out the possible
    causal impact of ALTERNATIVE, LEGITIMATE factors
    associated with police action
  • driving quantity, driving quality, location.
  • Again, the model:

21
Conceptual Model for Analysis
  Driver Race/Ethnicity --> Police Stops
  Also feeding into Police Stops: Driving Quality,
  Driving Quantity, Driving Location
22
How do we control for these alternative factors
that might impact on police behavior?
  • Agencies try to develop COMPARISON GROUPS that
    reflect the demographic makeup of groups AT RISK
    of being stopped by police, absent bias.
  • Terminology: Benchmarking the stops.

23
Benchmarking
  • We find that 25% of traffic stops within a
    jurisdiction are of Hispanics
  • Benchmarking question: To what do we compare the
    25%?
  • What percentage would indicate racially biased
    policing?
  • What percentage of Hispanics would be stopped if
    bias were absent?

24
Benchmark Quality
  • Benchmarks vary considerably in the extent to
    which they encompass or control for the
    legitimate factors that impact on police
    behavior.
  • Driving Quantity
  • Driving Quality
  • Location

25
I know what you are thinking!
  • Those aren't ALL the factors that might impact on
    who is stopped in a jurisdiction.
  • Not even close!
  • This is a reality of social science and is not
    specific to attempts to measure RBP.

26
(Constraints of Social Science)
  • In virtually all social science research there
    are factors that aren't identified or can't be
    reasonably measured.
  • This is why we don't (indeed CAN'T) draw firm,
    unequivocal conclusions.

27
  • The stronger the benchmark, the stronger the
    conclusions that can be drawn.
  • But we never prove anything in social science.
  • In the forthcoming By the Numbers, we describe even
    the weak benchmarks
  • but tell the reader what conclusions can and
    cannot be drawn in a report based on those
    benchmarks.

28
How can we consider location, driving
quantity/quality?

29
How can we consider location (to address police
activity)?
  • Easiest: Conduct analysis within sub-areas
    of the city.

30
That is,
  • Analyze the data within sub-areas such that all
    people in that area are exposed to the same
    amount of policing, e.g.,
  • Heavy police activity
  • Medium police activity
  • Low police activity.

31
Analysis Area A: High level of police stopping
activity
32
Analysis Subarea B: Low level of police
stopping activity
33
If we are concerned that variation in Variable X
will distort our findings
  • Remove the variation from the analysis.
  • By conducting analysis within subareas,
  • we've removed the variation.

34
  • The fact that Area A and Area B have different
    levels of stopping behavior.
  • Will not impact on our analysis,
  • because we compare Area A stops to Area A
    benchmarks
  • And Area B stops to Area B benchmarks.
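
A minimal sketch, in Python, of the within-subarea comparison described
above; the subarea names, stop shares, and benchmark shares below are
hypothetical illustrations, not data from any agency.

    # Compare stop percentages to benchmark percentages within each subarea,
    # so that differences in stopping activity across areas cannot distort
    # the comparison. All figures are hypothetical.
    subareas = {
        # area: (minority share of stops, minority share of area benchmark)
        "Area A (heavy police activity)": (0.42, 0.40),
        "Area B (low police activity)": (0.18, 0.15),
    }

    for area, (stop_share, benchmark_share) in subareas.items():
        ratio = stop_share / benchmark_share  # 1.0 means stops mirror the benchmark
        print(f"{area}: stops {stop_share:.0%} vs. benchmark "
              f"{benchmark_share:.0%} (ratio {ratio:.2f})")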

35
How to consider driving quantity and/or quality.
  • Observation is a solid method for this.

36
Observation Method
  • Researchers identify through observation
  • the demographics
  • of drivers (or violators)
  • in the jurisdiction.

37
Observation (Cont.)
  • Observers placed at various locations around
    jurisdiction
  • Trained observers note/count race/ethnicity of
    drivers.
  • Produces racial/ethnic profile of who is
    driving.

38
Observation (Cont.)
  • Compare demographics of drivers stopped by police
    to demographics of who is driving
  • This covers Driving Quantity.
  • Researcher has determined who is on the road
    behind the wheel.

39
Sample Results
40
Some observe/count who is violating
  • to incorporate a measure of Driving Quality.

41
What if a Benchmark does NOT Account for Driving
Quality/Quantity?
  • Let's say the agency finds disproportionate
    representation of Group B among people stopped
  • compared to their representation in the
    benchmark
  • That is, the results show disparity.

42
Hypothetical Data Shows Disparity
43
Possible explanations
  • Biased policing OR
  • Disproportionate representation of Group B on
    jurisdiction roads (compared to residential pop.):
    Driving quantity
  • Disproportionate representation of Group B as law
    violators: Driving quality
  • Disproportionate representation of Group B in places
    where police are most likely to be: Location.

44
Disparity has been found, but
  • The CAUSE or CAUSES of that disparity cannot be
    determined.

45
Benchmarking Myths
  • Answers to Fact or Fiction from Monday.

46
Myth 1 No racial/ethnic disparity means no RBP
  • Some claim that a weak benchmark (e.g., census
    benchmarking) cannot prove the existence of RBP,
    but CAN prove it DOES NOT EXIST.
  • That is: If disparities are indicated, we cannot
    determine if this is bias.
  • BUT, in contrast: If disparities are not
    indicated, this means no bias.
  • Wrong! If a method is faulty, it is faulty for
    all results.

47
Myth 2 Results from a weak benchmark become
strong if replicated in multiple geographic areas
  • Some say: The method is faulty, but if the same
    disparity (e.g., over-representation of minorities)
    is found in district after district, it must be RBP.
  • Wrong! Finding disparity over and over does not
    magically isolate the CAUSE of that disparity.

48
Myth 3 Results from a weak benchmark become
more worthy over time
  • Some say: It is a faulty measure, but it can be
    used as a baseline for future data.
  • Wrong! If you don't know what construct you are
    measuring the first year, what are you looking
    for in the second?

49
First year as Baseline
  • You can only identify/track disparity; you
    won't know the CAUSE of that disparity.
  • The cause won't emerge over time with the same
    weak benchmark.

50
Summary
  • In benchmarking we try to develop a comparison
    group that represents the people at risk of being
    stopped by police, absent bias.
  • Driving quantity, quality and location impact on
    risk of being stopped.
  • Benchmarks vary significantly with regard to the
    extent to which they represent the people at risk
    of being stopped by police

51
  • That is, they vary with regard to the extent to
    which they encompass the risk-increasing factors.
  • The more risk-increasing factors a benchmark
    encompasses
  • The stronger it is.

52
Getting Started Collecting Data: Early Decisions
  • Amy Farrell,
  • Northeastern University
  • With John Lamberth, Lamberth Consulting

53
Analyzing and Interpreting Vehicle Stop Data
  • Early Expectations of Data Analysis
  • 1. Providing Hard Facts
  • Hard data was expected to help move police and
    community groups away from anecdotes and accounts
    toward more rational discussion about police
    practices and the appropriate allocation of
    resources. Unfortunately, when results of
    studies were unclear and no common framework with
    which to evaluate data was established, accounts
    often did not change.

54
Analyzing and Interpreting Vehicle Stop Data
  • 2. Evaluate Effectiveness
  • Although numerous legal scholars (e.g. Randall
    Kennedy) argued that the use of race in traffic
    enforcement, though potentially effective, was
    unconstitutional, professional police
    organizations and advocacy groups began to
    question whether such practices might be both
    unconstitutional and ineffective.

55
Analyzing and Interpreting Vehicle Stop Data
  • Why Did Departments Begin Collecting Data?
  • 1. Voluntary
  • 2. State Legislation
  • 3. Court Order/Legal Settlement

56
Analyzing and Interpreting Vehicle Stop Data
  • Common Challenges to Data Collection
  • 1. How can officers determine the race or
    ethnicity of the citizens they stop in the least
    obtrusive manner and without increasing the
    intrusiveness of the stop?

57
Analyzing and Interpreting Vehicle Stop Data
  • 2. What budgetary, time, and paperwork burdens
    will data collection impose on police
    departments?
  • 3. Will data collection procedures result in
    police disengagement by leading police to
    scale down the number of legitimate stops and
    searches they conduct?

58
Analyzing and Interpreting Vehicle Stop Data
  • 4. How can departments ensure the accuracy of
    data collection procedures and be certain that
    reporting requirements are not circumvented by
    officers who fail to file required reports or who
    report erroneous information?
  • 5. Will the data be analyzed and compared to an
    appropriate measure of the statistically correct
    representative population? How do you ascertain
    and define the parameters of that population?

59
Analyzing and Interpreting Vehicle Stop Data
  • How is data being collected?
  • 1. Development of New Collection Procedures
  • Scantron Cards
  • Fill-In Sheets
  • On-Line Data Form/Handheld PDA

60
Analyzing and Interpreting Vehicle Stop Data
  • 2. Modified Use of Existing Data Collection
    Systems
  • CAD
  • MDT

61
Analyzing and Interpreting Vehicle Stop Data
  • What you collect has implications for the
    questions you can answer.
  • Balance necessary information with ease of data
    collection.

62
Analyzing and Interpreting Vehicle Stop Data
  • Therefore, you need to know what questions your
    department and your community want answered
    before you begin the process of collecting data

63
Analyzing and Interpreting Vehicle Stop Data
  • Four Main Questions that Traffic Stop Data Are
    Commonly Used to Answer
  • 1. Is the department engaged in racial profiling?
  • 2. Are more non-white drivers stopped than would
    be expected if no bias existed?
  • 3. Are non-white drivers disproportionately
    searched?
  • 4. Are non-white drivers more likely to be found
    in possession of contraband (e.g., drugs, guns,
    alcohol)?

64
Analyzing and Interpreting Vehicle Stop Data
  • Types of Stops
  • All Vehicle Stops
  • Traffic Stops
  • Pedestrian Stops
  • These parameters must be very clear

65
Analyzing and Interpreting Vehicle Stop Data
  • 1. Time, Date, and Location Information
  • Record location information as specifically as
    possible. This variable becomes critical when
    considering how to construct the benchmark
    against which to compare stops.

66
Analyzing and Interpreting Vehicle Stop Data
  • 2. Demographic Information about Driver or
    Occupants
  • a. race b. gender
  • c. age d. number of occupants
  • e. in-state/out-of-state
    resident/non-resident

67
Analyzing and Interpreting Vehicle Stop Data
  • 3. Information about officers involved
  • a. officer identification b. age/race
  • c. experience of officer d. unit

68
Analyzing and Interpreting Vehicle Stop Data
  • 4. Reason for the Stop
  • Legal basis officer used to make the stop
  • a. speeding (amount) b. other moving violations
  • c. equipment violations d. registration
    violations
  • e. APB/call for service f. city code violations
  • g. warrant

69
Analyzing and Interpreting Vehicle Stop Data
  • 5. Outcome of the Stop
  • a. citation () b. written warning
  • c. verbal warning d. arrest
  • e. tow of vehicle

70
Analyzing and Interpreting Vehicle Stop Data
  • 6. Searches
  • a. Was a search conducted?
  • a. yes b. no
  • b. What type of search was conducted?
  • a. vehicle b. driver
  • c. passenger or passengers

71
Analyzing and Interpreting Vehicle Stop Data
  • c. What was the basis for the search?
  • a. visible contraband b. odor of contraband
  • c. canine alert d. incident to arrest
  • e. inventory search prior to impoundment
  • f. consent search
  • Could also use PC/RAS as separate categories

72
Analyzing and Interpreting Vehicle Stop Data
  • d. Was contraband found?
  • a. yes b. no
  • e. Was the property seized?
  • a. yes b. no
  • f. Nature and quantity of the contraband
  • seized or found.
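
A minimal sketch, in Python, of a record structure covering the data
elements listed on the preceding slides; the field names and groupings are
illustrative, not a prescribed standard.

    # Illustrative stop record covering elements 1-6 above.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class VehicleStopRecord:
        # 1. Time, date, and location
        date: str
        time: str
        location: str
        # 2. Driver demographics
        driver_race: str
        driver_gender: str
        driver_age: Optional[int] = None
        num_occupants: Optional[int] = None
        in_state: Optional[bool] = None
        resident: Optional[bool] = None
        # 3. Officer information
        officer_id: Optional[str] = None
        officer_unit: Optional[str] = None
        # 4. Reason for the stop (e.g., "speeding", "equipment violation")
        reason: str = ""
        # 5. Outcome (e.g., "citation", "written warning", "arrest")
        outcome: str = ""
        # 6. Search information
        search_conducted: bool = False
        search_type: Optional[str] = None    # "vehicle", "driver", "passenger"
        search_basis: Optional[str] = None   # "consent", "canine alert", ...
        contraband_found: bool = False
        property_seized: bool = False
        contraband_description: Optional[str] = None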

73
Analyzing and Interpreting Vehicle Stop Data
  • Data Collection Alone May Not Solve the Racial
    Profiling Controversy
  • The Problem of Frozen Accounts and the Need for
    Community Task Forces

74
Analyzing and Interpreting Vehicle Stop Data
  • Task Forces
  • Accounts of Profiling: The racial profiling
    controversy is based on police and community
    accounts of situations used to excuse or justify
    their behavior.
  • Accounts of Profiling Have Become Frozen
  • The same scripts are repeated with new anecdotes
    or situations
  • Unless the accounts are challenged, data
    collection is either dismissed or results are
    incorporated into standard accounts.

75
Analyzing and Interpreting Vehicle Stop Data
  • Task Forces
  • Utilize a task force with police and
    community representatives at the outset of data
    collection or during the planning phase.
  • Input into the types of variables collected, feedback
    during analysis, discussion of dissemination and
    policy recommendations.
  • Come to agreement about limitations of the data, a
    good faith effort of the department to overcome
    limitations, and concrete answers upon which they
    can agree.

76
Analyzing and Interpreting Vehicle Stop Data
  • Partnering with an Academic
  • May lend objectivity to the analysis process.
    Be careful to select an academic with
    experience in policing who understands the nature
    of police practices and is familiar with issues
    around racial profiling data analysis.
  • Can be a member of the task force process. The
    academic should be willing to explain the analysis
    process to the group, present preliminary findings
    where possible, and discuss complex analysis issues
    in user-friendly ways.

77
Analyzing and Interpreting Vehicle Stop Data
  • Selecting a Benchmark
  • By themselves, the demographics of traffic stops
    are difficult to interpret. If, after collecting
    data, a particular city discovers that 65% of its
    traffic stops are of Black drivers, that number
    by itself does not reveal very much. Agencies
    would want to know how the proportion of traffic
    stops compares to an appropriate benchmark or
    base rate of those eligible to be stopped in that
    community.

78
Analyzing and Interpreting Vehicle Stop Data
  • Issues to Consider
  • 1. Level of Measurement Precision Desired
  • Types of Questions You Want to Answer
  • 2. Required Agency Resources
  • 3. Available Data

79
Analyzing and Interpreting Vehicle Stop Data
  • 1. Measures of Who is Driving
  • a. census data
  • b. modified census data (driving pop estimate)
  • c. road survey data
  • d. traffic accident data

80
Analyzing and Interpreting Vehicle Stop Data
  • 2. Measures of Who is Violating Traffic Laws
  • a. speeding indicators/speed cameras
  • b. red light violation data

81
Risk Management
  • John Lamberth
  • Lamberth Consulting

82
Areas of risk relative to racial profiling
  • Negative media exposure
  • Threat or acts of litigation

83
Negative media exposure
  • Media has compared citations to census data
  • Media has compared stop data to census
  • It is prudent for the agency to educate and
    articulate proper analysis to media sources

84
Threat of litigation
  • Proactive engagement can result in a concomitant
    decrease in the probability of being sued.
  • If sued, then having taken the steps that have
    been and will be recommended here today will also
    decrease culpability.

85
Litigation mistakes
  • In one agency that was successfully litigated
    against over this issue, the following occurred
  • Stated "we do not target minorities"
  • No data or analysis (internal or benchmarking) to
    support contention
  • No training
  • Ignored media coverage
  • No community involvement

86
Case Study: Reno, NV, Hot August Nights
  • Focus of complaints by civil rights groups
  • Approach - temporal benchmarking of the
    pedestrian population
  • Compare pedestrian benchmarks to stops and arrests

87
Data Analysis Guidelines for All Benchmarking
Methods
  • Robin S. Engel, Ph.D.
  • Associate Professor
  • University of Cincinnati

88
Proper Stop Data Analyses Must Address Several
Issues
  1. Maintaining data quality
  2. Limiting officer disengagement
  3. Length of data collection prior to analyses
  4. Utilizing data subsets for meaningful analyses

89
Proper Stop Data Analyses Must Address Several
Issues
  • Maintaining data quality
  • Limiting officer disengagement
  • Length of data collection prior to analyses
  • Utilizing data subsets for meaningful analyses

90
Maintaining data quality ensures reliable, valid
results
  1. What is data quality and why is it important?
  2. How might the data be inaccurate?
  3. How can data inaccuracies be checked?

91
What is data quality and why is it important?
  • The two main facets of data quality are
    reliability and validity
  • GIGO: Garbage In, Garbage Out

92
How might the data be inaccurate?
  1. Information is incorrectly recorded
  2. Not all stops are recorded
  3. Missing data, random and non-random errors
  4. Intentional missing data
  5. Misstatement of facts

93
How can data inaccuracies be checked?
  • Auditing data for quality control
  • Different auditing techniques address different
    data problems
  • Timing is everything
  • Give feedback
  • Let them know you mean it!

94
How can data inaccuracies be checked?
  • Problem 1: Inaccurate data coding
  • Method 1: Pilot test data forms
  • Conduct the pilot test for at least one month
  • Check data for systematic errors
  • Make corrections to the form if needed
  • Method 2: Train officers in data collection

95
How can data inaccuracies be checked?
  • Problem 2: Are all stops recorded?
  • Method 1: Cross-check with other data sources
  • Citation, dispatch, arrest data
  • Video data
  • Levels of correspondence
  • Rule of thumb recommended by PERF: about 90%
    (a correspondence check is sketched below)
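
A minimal sketch, in Python, of the cross-check described above, comparing
stop forms against citation records and reporting the level of
correspondence; the record IDs are hypothetical.

    # Cross-check stop forms against another data source (citation records)
    # and report the level of correspondence. All IDs are hypothetical.
    stop_form_ids = {"C-1001", "C-1002", "C-1003", "C-1005", "C-1006"}
    citation_ids = {"C-1001", "C-1002", "C-1003", "C-1004", "C-1005", "C-1006"}

    matched = citation_ids & stop_form_ids
    correspondence = len(matched) / len(citation_ids)

    # Compare the result to the ~90% rule of thumb noted above.
    print(f"Correspondence: {correspondence:.0%}")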

96
How can data inaccuracies be checked?
  • Problem 3: Missing data, random and non-random
    errors
  • Method 1: Computer systems with error traps
  • Method 2: Scantron form rejection
  • Method 3: Analyze data for inconsistencies
  • Method 4: Immediate and routine feedback to
    officers

97
How can data inaccuracies be checked?
  • How much missing data is acceptable?
  • Identify and eliminate non-random errors
  • Correct or clean the data
  • Random errors: rule of thumb recommended by PERF
    is no more than 10% missing data (see the sketch
    below)
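
A minimal sketch, in Python, of checking missing-data rates field by field
against the 10% rule of thumb; the records are hypothetical, with None
marking a missing value.

    # Compute the share of missing values for each field on the stop form.
    records = [
        {"race": "White", "reason": "speeding",  "outcome": "citation"},
        {"race": None,    "reason": "equipment", "outcome": "warning"},
        {"race": "Black", "reason": None,        "outcome": "citation"},
        {"race": "White", "reason": "speeding",  "outcome": "citation"},
    ]

    for field in ("race", "reason", "outcome"):
        missing = sum(1 for r in records if r.get(field) is None)
        rate = missing / len(records)
        flag = "  <-- exceeds the 10% rule of thumb" if rate > 0.10 else ""
        print(f"{field}: {rate:.0%} missing{flag}")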

98
How can data inaccuracies be checked?
  • Problem 4: Intentional missing data
  • Method 1: Immediate and routine feedback to
    officers
  • Feedback must be provided quickly
  • Hold supervisors accountable
  • Method 2: Consider alternatives for officer
    identifiers
  • Change badge / employee number
  • Use outside analysts for confidentiality
  • Do not use for disciplinary purposes

99
How can data inaccuracies be checked?
  • Problem 5: Intentional misstatement of facts
  • Method 1: Cross-check with other data sources
  • DMV records / photos
  • Motorist self-report data (tear-off cards)
  • Method 2: Internal data comparisons
  • Compare officers to one another
  • Look for outliers with similar shifts and
    assignments

100
How can data inaccuracies be checked?
  • Method 3: Administrators and supervisors
  • Work to minimize officers' fears
  • Sell the positives of data collection
  • Get the union involved up front
  • Provide continual oversight and accountability
  • Make it a known priority

101
Proper Stop Data Analyses Must Address Several
Issues
  1. Maintaining data quality
  2. Limiting officer disengagement
  3. Length of data collection prior to analyses
  4. Utilizing data subsets for meaningful analyses

102
Officer disengagement should be anticipated and
corrected
  1. What are the types of officer disengagement?
  2. How can disengagement be measured?
  3. How long might disengagement last?
  4. How can disengagement be limited?

103
What are the types of officer disengagement?
  • General disengagement
  • Specific disengagement

104
How can disengagement be measured?
  • Comparisons of output data (e.g., citations,
    arrests)
  • Compare current output to combined yearly
    averages
  • Monthly comparisons
  • Internal comparisons
  • Compare officers, shifts, districts
  • Look for outliers / problem spots
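
A minimal sketch, in Python, of the output comparison described above,
checking current monthly citation counts against prior-year averages; the
counts and the 15% alert threshold are hypothetical choices, not a PERF
recommendation.

    # Compare current monthly citation output to prior-year monthly averages
    # and flag large drops that may signal disengagement.
    prior_year_monthly_avg = {"Jan": 410, "Feb": 395, "Mar": 430}
    current_year = {"Jan": 402, "Feb": 310, "Mar": 425}

    for month, current in current_year.items():
        baseline = prior_year_monthly_avg[month]
        change = (current - baseline) / baseline
        flag = "  <-- investigate possible disengagement" if change < -0.15 else ""
        print(f"{month}: {current} vs. baseline {baseline} ({change:+.0%}){flag}")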

105
How long might disengagement last?
  • Depends on specific circumstances surrounding
    data collection initiative
  • Rule of thumb: about 4 months under normal
    conditions

106
How can disengagement be limited?
  • Same techniques as those used to limit
    intentional data errors
  • Administrators and supervisors
  • Work to minimize officers' fears
  • Sell the positives of data collection
  • Get the union involved up front
  • Provide continual oversight and accountability
  • Make it a known priority

107
Proper Stop Data Analyses Must Address Several
Issues
  1. Maintaining data quality
  2. Limiting officer disengagement
  3. Length of data collection prior to analyses
  4. Utilizing data subsets for meaningful analyses

108
What is the optimal length of time needed to
collect data?
  • Recommended reference period for analysis: a
    minimum of 1 year
  • Allows for seasonal variation
  • Lessens the impact of special events /
    circumstances

109
What is the optimal length of time needed to
collect data?
  • The first month or two of data collection
    should not be included in the 12-month analyses
  • Allows officers to become accustomed to data
    collection process
  • Allows time to identify and correct problems
  • Increases validity of data

110
Proper Stop Data Analyses Must Address Several
Issues
  1. Maintaining data quality
  2. Limiting officer disengagement
  3. Length of data collection prior to analyses
  4. Utilizing data subsets for meaningful analyses

111
Data subset analyses yield important information
  • Several different types of subsets that are
    frequently used
  • Reasons for the stop
  • Who is stopped
  • Who made the stop
  • Geographic location of the stop

112
Data subset analyses yield important information
  • Reasons for the stop
  • Proactive vs. reactive stops
  • Traffic violations vs. investigatory stops
  • Speeding traffic stops vs. other traffic
    violations

113
Data subset analyses yield important information
  • Who is stopped
  • Citizens' characteristics
  • Multiple stops of the same citizen
  • Who made the stop
  • Officers' characteristics
  • Internal structure

114
Data subset analyses yield important information
  • Geographic location of the stop
  • High-activity vs. low-activity areas
  • Areas that differ in racial / ethnic residency
  • Different geographic areas (e.g., neighborhoods,
    beats, census tracts, municipalities, counties,
    etc.)
  • Different roadway types

115
Proper Stop Data Analyses Must Address Several
Issues
  1. Maintaining data quality
  2. Limiting officer disengagement
  3. Length of data collection prior to analyses
  4. Utilizing data subsets for meaningful analyses

116
Census Benchmarking
  • Geoff Alpert
  • University of South Carolina

117
Census Benchmarking
  • Most racial profiling studies to date have used
    census data as the benchmark against which police
    traffic stop data have been compared.

118
Census Benchmarking
  • Evidence is mounting very quickly that the census
    population of an area under study does not
    accurately represent the driving population
    available to be stopped.

119
Census Benchmarking
  • In England, research confirmed that the
    population of persons who frequented an area was
    substantially different from the census-based
    residential population.

120
Census Benchmarking
  • In most cases, the pedestrian and vehicular
    populations of the areas under study were
    comprised of a greater percentage of minorities
    than the census indicated.

121
Census Benchmarking
  • Researchers in Sacramento found significant
    differences in the race of drivers observed at
    key intersections when compared to the census
    population of the areas in which the
    intersections were located

122
Census Benchmarking
  • At some intersections, minority drivers observed
    as a percentage of total observations far
    exceeded their proportions in the corresponding
    census population. At other intersections, the
    reverse was true and minorities were
    significantly underrepresented relative to the
    census.

123
Census Benchmarking
  • In Denver, less than half of motorists stopped by
    police from June 2001 through May 2002 were
    residents of the City of Denver.

124
Census Benchmarking
  • Census data provide a static population
  • A count of people who live in the area

125
Census Benchmarking
  • People who drive in a specific area represent a
    fluid population

126
Census Benchmarking
  • We found significant differences between the
    color of observed drivers at intersections in
    Miami-Dade County and census figures taken at
    individual census blocks and census tracts
    surrounding the intersections where the
    observations took place

127
Census Benchmarking
  • Miami-Dade County worker population living in
    Broward County: 115,044
  • Miami-Dade County worker population living in
    Palm Beach County: 5,560
  • Miami-Dade County worker population living in
    Monroe County: 1,186

128
Census Benchmarking
  • Approximately 14% of the working population lives
    outside Miami-Dade County

129
Census Benchmarking
  • Miami-Dade County residents working in Broward
    County: 60,096
  • Miami-Dade County residents working in Palm Beach
    County: 3,843
  • Miami-Dade County residents working in Monroe
    County: 2,821

130
Census Benchmarking
  • www.census.gov/population/www/cen2000/commuting.html

131
Census Benchmarking
  • These findings show that, at the local level,
    unadjusted census data do not provide a reliable
    benchmark against which the racial composition of
    motorists stopped by police should be compared

132
Census Benchmarking
  • Conclusion
  • It is improper to draw conclusions regarding
    racial bias using unadjusted census benchmarks at
    the local level

133
Census Benchmarking
  • If you are going to use census benchmarks, make
    sure they are adjusted!

134
Adjusted (Modified) Census Benchmarking
  • Amy Farrell,
  • Northeastern University

135
Analyzing and Interpreting Vehicle Stop Data
  • Modified Census or Driving Population Estimate
  • An estimate of the demographics of drivers can be
    obtained based on the residential population and
    the population of drivers from surrounding cities

136
Analyzing and Interpreting Vehicle Stop Data
  • Must determine factors that both push drivers out
    of surrounding communities and draw drivers into
    target city from surrounding communities.

137
Analyzing and Interpreting Vehicle Stop Data
  • Assumptions of the Model
  • People are more likely to travel to cities which
    are close in distance (Gravity Model Carroll,
    1954).
  • The economic draw of a city mediates the effect
    of spatial separation
  • The characteristics of contributing cities
    influence the proportion of individuals leaving a
    city
  • Resident drivers make up a proportion of the
    driving population as a function of the relative
    draw of the city.

138
Analyzing and Interpreting Vehicle Stop Data
  • Determining Push of Contributing Cities
  • People who live within 30 miles of the target
    city will be considered potential contributors
  • Includes both in-state and out-of-state towns

139
Analyzing and Interpreting Vehicle Stop Data
  • Three main factors of push
  • 1) The percentage of people within the community
    who own cars (making them eligible to drive out
    of the city).
  • 2) The percentage of people who drive more than
    10 miles to commute to work based on the 2000
    journey to work data provided by the US Census.
  • 3) The travel time (in minutes) between the
    contributing city and the target city.

140
Analyzing and Interpreting Vehicle Stop Data
  • Determining Draw of Target City
  • The next step is to determine how strongly the
    residential population of the target city
    influences both its own driving population and
    the population of those drivers drawn in from
    surrounding communities.

141
Analyzing and Interpreting Vehicle Stop Data
  • To determine the potential draw of the target
    cities we constructed a series of indicators
    based on economic and road travel data for each
    jurisdiction.
  • Four Economic and Travel Indicators Used in Rhode
    Island
  • 1. % of State Employment 2. % of State Retail
    Trade
  • 3. % of State Food and Accommodation Sales
  • 4. % of State Road Volume

142
Analyzing and Interpreting Vehicle Stop Data
  • Determining Draw Levels
  • Although there are numerous indicators of draw to
    a city, these four variables were chosen for both
    logical and statistical purposes
  • Based on the distribution of average indicators
    we developed four draw categories: High, Moderate
    High, Moderate Low, and Low.

143
Analyzing and Interpreting Vehicle Stop Data
  • Each draw level represented a different
    proportion of drivers who populated the roadways
    of target cities, coming either from residents or
    from surrounding cities (see the sketch below).
  • High: 40% of drivers from surrounding communities,
    60% of drivers residents
  • Moderate High: 30% of drivers from surrounding
    communities, 70% of drivers residents
  • Moderate Low: 20% of drivers from surrounding
    communities, 80% of drivers residents
  • Low: 10% of drivers from surrounding communities,
    90% of drivers residents
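
A minimal sketch, in Python, of how the draw-level weights above can be
combined with residential and surrounding-community demographics to form a
driving population estimate. It simplifies the method by treating the
surrounding-community driver mix as a single aggregate (ignoring the
city-by-city push weighting described earlier), and all demographic shares
are hypothetical.

    # Blend the target city's residential demographics with the demographics
    # of drivers drawn in from surrounding communities, weighted by draw level.
    resident_share = {"White": 0.70, "Black": 0.15, "Hispanic": 0.15}
    surrounding_share = {"White": 0.60, "Black": 0.25, "Hispanic": 0.15}

    # Draw levels from the slide above: High = 0.40, Moderate High = 0.30,
    # Moderate Low = 0.20, Low = 0.10 (share of drivers from surrounding
    # communities).
    draw_weight = 0.30  # Moderate High

    dpe = {
        group: (1 - draw_weight) * resident_share[group]
               + draw_weight * surrounding_share[group]
        for group in resident_share
    }
    for group, share in dpe.items():
        print(f"{group}: {share:.1%} of the estimated driving population")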

144
Analyzing and Interpreting Vehicle Stop Data
  • Testing the Driving Population Estimate in RI
  • To determine the success of the DPE, we compared
    various estimates of the driving population for
    two jurisdictions in Rhode Island where traffic
    survey data were available.
  • The DPE was closer to the road survey estimates
    than either census data or census data modified by
    distance alone (within 1% for most groups).

145
Benchmarking with Crash Data
  • Driving Population Estimation Measure (DPEM)
  • Geoff Alpert

146
Weaknesses in Current Racial Profiling Benchmarks
  • Census
  • Static measure of residential population
  • Does not account for differential driving
    patterns among racial groups
  • Observation studies increasingly show large
    disparities between the census and who is
    actually driving on the roadways

147
Weaknesses in Current Racial Profiling Benchmarks
  • Licensed drivers
  • Data are unavailable at the local level in many
    states
  • Again, does not account for differential driving
    patterns among racial groups
  • Reported crime suspects or arrestees
  • Most traffic stops are made for traffic
    infractions and not for suspected criminal
    involvement
  • Comparing arrestees or reported suspects to
    traffic violators is analytically unsound

148
Weaknesses in Current Racial Profiling Benchmarks
  • Field observation of drivers and traffic
    violators
  • Best method currently available for estimating
    the driving population
  • Labor intensive; expensive; limited to selected
    areas or intersections
  • Nighttime observations are difficult or
    impossible
  • Ethnic characteristics (e.g., Hispanic, Native
    American) cannot be identified with accuracy;
    measurement validity is seriously compromised

149
The Theory of Quasi-Induced Exposure
  • Traffic safety engineers use not-at-fault drivers
    in two vehicle crashes as a proxy for the driving
    population
  • According to the theory, subpopulations of
    drivers (gender, age, etc.) are struck
    proportionately to their composition in the
    driving population as a whole

150
Research on Quasi-Induced Exposure
  • DeYoung, Peck, Helander (1997)
  • Compared six years of crash data in California
    among licensed, unlicensed, and
    suspended-licensed drivers
  • Each category of drivers should be represented
    equally as victims within each category of
    at-fault drivers
  • No statistically significant differences in the
    proportions within each category of not-at-fault
    drivers struck by at-fault drivers.

151
Research on Quasi-Induced Exposure
  • Lyles and Stamatiadis (1991)
  • Examined all traffic crash data from Michigan in
    1988
  • Found that in two vehicle crashes, male and
    female at-fault drivers struck male and female
    crash victims proportionately.
  • 66.8 percent of victims struck by male at-fault
    drivers were males, while 33.2 percent of victims
    were females. For female at-fault drivers, 65.3
    percent of their victims were males, and 34.7
    percent were females.
  • Conclusion: Not-at-fault drivers in two-vehicle
    crashes represent a random sample of all those
    on the road under the specified conditions (males
    vs. females).

152
Testing Quasi-Induced Exposure
  • An ongoing study of racial profiling in Miami-Dade,
    Florida involved, among other things, extensive
    driver/violator observations at 11 high-volume
    intersections
  • 65,000 vehicles were observed (8/01-2/02); driver
    race was captured as a dichotomy:
    Black/non-Black
  • Observation data were compared to not-at-fault
    crash data taken from police accident reports at
    the same 11 intersections (2002); 403 crashes
    total

153
Aggregate Comparison of Observations to Crashes
at 11 Intersections
              Observations     Crashes       Difference (Obs. - Crashes)
  Black       16,937 (26%)     87 (22%)      4 percentage points
  Non-Black   48,088 (74%)     316 (78%)     4 percentage points
  TOTAL       65,025 (100%)    403 (100%)    ---
154
Difference Between Percent Black Drivers and
Percent Black Crash Victims By Area Type
  Areas Sampled             Black Drivers Observed   Black Crash Victims   Percentage Point Difference
  Predominately non-Black   7.43% (1,231/16,558)     5.8% (6/103)          1.6
  Substantial Black Pop.    54.6% (12,366/22,636)    55.3% (52/94)         0.69
  Racially Mixed            12.93% (3,340/25,831)    14.1% (29/206)        1.15
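
One way to check such a comparison statistically is a two-proportion z-test
on the aggregate counts from the 11-intersection table above; the Python
sketch below is illustrative and is not the test reported in the study.

    # Two-proportion z-test comparing the Black share of observed drivers to
    # the Black share of not-at-fault crash victims (aggregate counts from
    # the table above). Illustrative check only.
    from math import sqrt

    obs_black, obs_total = 16_937, 65_025      # observed drivers
    crash_black, crash_total = 87, 403         # not-at-fault crash victims

    p1 = obs_black / obs_total
    p2 = crash_black / crash_total
    pooled = (obs_black + crash_black) / (obs_total + crash_total)
    se = sqrt(pooled * (1 - pooled) * (1 / obs_total + 1 / crash_total))
    z = (p1 - p2) / se

    # Compare |z| to a critical value (e.g., 1.96 for the .05 level).
    print(f"Observed drivers: {p1:.1%} Black; crash victims: {p2:.1%} Black; "
          f"z = {z:.2f}")
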
155
Conclusions
  • Not-at-fault drivers in two vehicle crashes
    represented a reasonably accurate estimate of the
    racial composition of drivers on the road at a
    sample of high traffic intersections in
    unincorporated Miami-Dade County.

156
Questions for Future Research
  • Findings need to be replicated
  • More research is needed
  • Do at-fault drivers represent a random sample of
    traffic violators?
  • Do roadway conditions vary in ways that may
    impact accident rates disproportionately among
    some racial groups?
  • What are the optimal ways to disaggregate traffic
    crash data?

157
References
  • DeYoung, D.J., Peck, R.C., & Helander, C.J.
    (1997). Estimating the exposure and fatal crash
    rates of suspended/revoked and unlicensed drivers
    in California. Accident Analysis and Prevention,
    29(1), 17-23.
  • Lyles, R.W., Stamatiadis, P., & Lighthizer, D.R.
    (1991). Quasi-induced exposure revisited. Accident
    Analysis and Prevention, 23, 275-285.

158
Observation Benchmarks
  • John Lamberth,
  • Lamberth Consulting
  • Robin Engel,
  • University of Cincinnati

159
History of observations
  • Observational benchmarks were born in a
    litigation context (NJ, 1993)
  • They have matured in a cooperative context
  • Used by agencies to proactively address racial
    profiling

160
Where have they been used?
  • New Jersey: 1993-2000 (6 occasions), always in a
    litigation context
  • Maryland: 1996, litigation context
  • Arizona: 2000-2003 (2 occasions, litigation)
  • Washtenaw County, Michigan: 1999, first proactive
    use of this benchmark

161
Other studies using these benchmarks
  • 2001
  • Zingraff et al., North Carolina
  • Overland Park, Kansas
  • Emporia, Kansas
  • Hutchinson, Kansas
  • Kansas City, Kansas
  • Kansas Highway Patrol
  • Marysville, Kansas
  • Olathe, Kansas
  • Wichita, Kansas
  • 2002
  • Ann Arbor, Michigan
  • Redlands, California
  • Rhode Island - Farrell et al.
  • Miami-Dade County - Alpert et al.
  • Pennsylvania State Police - Engel

162
Other studies (continued)
  • 2003
  • Capitola, California
  • Santa Cruz, California
  • Santa Cruz County Sheriff's Department,
    California
  • Scotts Valley, California
  • Watsonville, California
  • San Antonio, Texas
  • Grand Rapids, Michigan
  • Montgomery County, Maryland
  • Reno, Nevada

163
How are they done?
  • Stationary surveyors visually identify and
    manually record race/ethnicity at street corners
  • Rolling surveyors ride in cars and record the
    race/ethnicity of the driver of cars that pass
    the observation vehicle

164
How are they designed?
  • Select locations by police activity and surveying
    conditions, done with the PD
  • Randomly select days and times for observations,
    including weekends, weekdays, and both day and
    night
  • Survey long enough to achieve a stable sample size

165
Why have observations been so successful?
  • Observations have been validated repeatedly in
    courts of law
  • Observations are a DIRECT measure of the driving
    population NOT an estimate
  • Observational benchmarks measure the traffic from
    which officers select motorists to stop
  • Stops made IN SPECIFIC LOCATIONS are compared to
    the racial/ethnic composition of traffic IN THOSE
    LOCATIONS

166
What are the challenges?
  • Observations are simple in concept, but complex
    in implementation. They require
  • Skilled training of observers
  • Diligent management, oversight, quality assurance
  • Ingenuity to overcome obstacles
  • Experience to get it right

167
Common misconceptions
  • Drivers in cars are moving too fast to see their
    race
  • Positioning is vital!
  • Not all surveyors can do this - training and
    testing are critical
  • Break up the roads to survey specific portions of
    roads, one lane at a time
  • Use ongoing tests (inter-rater reliability) for
    surveyor quality

168
Examples of problems
  • New Jersey. Study conducted using cameras.
  • BJS Study. Observers positioned 9 feet above car
    and 20 feet away.
  • Arizona. IRRs conducted on basis of car
    description. Found out observers were looking at
    different cars!

169
Common misconceptions
  • You can't see motorists at night
  • Find or create ambient lighting
  • Use well-lit locations
  • Use alley lights, construction lighting
  • Use checkpoint setups, or toll booths
  • Surveyor training: position away from
    headlights; use side windows as a critical part
    of driver identification

170
Common misconceptions
  • You can't identify Hispanics
  • Training and testing! IRRs before surveying
    begins. QA reviews during surveying
  • Lamberth, 2002 (81-85)
  • Farrell
  • Solop

171
Analyzing the data
  • Select perimeters for each benchmark location
    with the PD
  • Compute odds ratios for each location
  • This means that there are as many discrete
    analyses as there are benchmark locations.
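
A minimal sketch, in Python, of one common way to compute a location-level
odds ratio: the odds that a stopped driver is a minority divided by the odds
that an observed (benchmark) driver is a minority. The counts are
hypothetical, and the exact specification used in any given study may
differ.

    # Odds ratio for one benchmark location; 1.0 means the stops mirror the
    # observed traffic at that location. All counts are hypothetical.
    stops_minority, stops_other = 120, 380      # stops at this location
    obs_minority, obs_other = 2_400, 9_600      # benchmark observations here

    odds_stops = stops_minority / stops_other
    odds_benchmark = obs_minority / obs_other
    odds_ratio = odds_stops / odds_benchmark

    print(f"Odds ratio for this location: {odds_ratio:.2f}")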

172
Reporting
  • If there are discrepancies, see if there is a
    reason for them other than race.
  • Example: in one location, young people were 4.4
    times as likely to be stopped as middle-aged and
    older drivers.
  • There was drag racing at that location, and, at
    the request of the community, police adopted a
    zero-tolerance policy for MV infractions by young
    people.

173
Emphasize a prospective view
  • If discrepancies exist, what does the department
    do to reduce them?
  • If discrepancies do not exist, why does the
    community perceive that they do?
  • The data collection/analysis plan should be part
    of an overall plan to enhance police community
    cooperation

174
View on other benchmarks?
  • It will make it easier for law enforcement to
    analyze this issue if estimate-based benchmarks
    work
  • But they must be rigorously tested and validated;
    otherwise, agencies will be faced with poor
    analysis and greater problems

175
Observations of Roadway Usage in Pennsylvania
  • Robin S. Engel, Ph.D.
  • Associate Professor
  • University of Cincinnati

176
Data Analyses in PA Involved Multiple
Considerations
  • Residency of drivers stopped
  • Who was traveling on the roadways
  • Who was speeding on the roadways

177
Important to Record Drivers' Residency
  • Census benchmarks are based on the assumption
    that people who drive in an area also live in
    that area
  • This assumption is likely faulty
  • Using drivers' residency, the numerator can be
    changed to match the Census-based denominator

178
Important to Measure who is Using the Roadways
  • Census benchmarks are based on the assumption
    that people who drive in an area also live in
    that area
  • Assumption is likely faulty
  • Using roadway observations, the denominator can
    be changed to better match the numerator

179
Consideration of Violating Behaviors
  • Observations can measure the driving population
    but cannot assess drivers' risk of being stopped
    by police
  • Some evidence to suggest that driving patterns
    may differ by gender, age and racial/ethnic
    groups
  • Speeding savvy may differ
  • Use of RADAR detectors may differ

180
Use of Violating Observations
  • Observation benchmarks are based on the
    assumption that different groups have similar
    driving patterns
  • Some have questioned this assumption
  • Using violating observations, both the numerator
    and denominator can be changed to provide a
    different type of analysis

181
Studies Examining Racial Differences in Driving
Behavior
  • Speeding
  • Engel & Calnon (2003), Pennsylvania
  • Alpert et al. (2003), Miami-Dade
  • Zingraff et al. (2002), North Carolina
  • Lange et al. (2001), New Jersey
  • Lamberth (1994, 1996), Maryland & New Jersey
  • Minor Violations
  • Alpert et al. (2003), red light violations and
    illegal turns, Miami-Dade

182
Observations of Roadway Usage and Speeding Behavior
in Pennsylvania
  • Drivers' residency collected by officers
  • Stationary roadway observation
  • Stationary observations with RADAR

183
Where, When, and How Much Observation?
  • The need to sample!
  • Large geographic area
  • Large differences in minority populations
  • Three-tiered sampling method (see the sketch
    below)
  • County selection
  • Municipality selection
  • Roadway selection
  • Times of day / days of the week / seasons
  • How many observations? Restricted by budget
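
A minimal sketch, in Python, of a multi-stage sample of observation sessions
along the lines of the three-tiered method above (counties, then
municipalities, then roadway sites, plus day/time slots); all sampling
frames and sample sizes are hypothetical placeholders, not the PA plan.

    # Multi-stage sample of observation sessions.
    import random

    random.seed(1)  # reproducible example

    counties = ["County A", "County B", "County C", "County D"]
    municipalities = {c: [f"{c} - Town {i}" for i in range(1, 6)] for c in counties}
    roadway_types = ["interstate", "state route", "local road"]
    time_slots = ["weekday day", "weekday night", "weekend day", "weekend night"]

    sessions = []
    for county in random.sample(counties, 2):                   # tier 1
        for town in random.sample(municipalities[county], 2):   # tier 2
            roadway = random.choice(roadway_types)              # tier 3
            slot = random.choice(time_slots)
            sessions.append((county, town, roadway, slot))

    for session in sessions:
        print(session)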

184
Observations of Roadway Usage and Speeding Behavior
in Pennsylvania
  • Findings
  • Who uses the roadways and who lives there are
    VERY different
  • Census data are not an accurate denominator
  • Racial differences in speeding behavior
  • Using different benchmarks yields different
    results

185
The Influence of Changes in Numerators and
Denominators
  • Using one PA county as an example, it is easy to
    see how changing the numerator and/or the
    denominator provides dramatically different
    results
  • DI = Disproportionality Index
  • DI = % of stops / % of expected stops based on
    some benchmark

186
Model 1: Census Data Benchmark
  • Numerator: % of traffic stops in County A of
    Black drivers
  • Denominator: census data, Black population 16
    years or older in County A
  • PSP traffic stops: 12.26% Black
  • Population 16+: 0.30% Black
  • DI 1 = 12.26 / 0.30 = 40.9

187
Model 2: Numerator Based on County Residency
  • % of drivers stopped in County A who live in
    County A: 10.4%
  • Change the numerator to match the denominator
  • Numerator
  • Drivers who reside in the county: 0.89% Black
  • Denominator
  • Population 16+: 0.30% Black
  • DI 2 = 0.89 / 0.30 = 3.0

188
Model 3: Change the Denominator to Observations
  • Numerator
  • PSP traffic stops: 12.26% Black
  • Denominator
  • % observed on roadways: 10.51% Black
  • DI 3 = 12.26 / 10.51 = 1.2

189
Model 4: Numerator and Denominator Based on
Speeding Observations
  • Numerator
  • % stopped for speeding 10 mph over the speed
    limit: 12.92% Black
  • Denominator
  • % observed speeding 10 mph over the speed limit:
    12.17% Black
  • DI 4 = 12.92 / 12.17 = 1.06
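
A minimal sketch, in Python, of the DI calculation, reproducing the four
County A models above; the di() helper name is illustrative.

    # Disproportionality Index: DI = % of stops / % expected under a benchmark.
    def di(stop_pct, benchmark_pct):
        return stop_pct / benchmark_pct

    models = {
        "Model 1: census benchmark":               di(12.26, 0.30),
        "Model 2: resident-driver numerator":      di(0.89, 0.30),
        "Model 3: roadway-observation benchmark":  di(12.26, 10.51),
        "Model 4: speeding stops vs. speeders":    di(12.92, 12.17),
    }
    for name, value in models.items():
        print(f"{name}: DI = {value:.2f}")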

190
Comparisons of Disproportionality Indices
191
Consider the Most Accurate Numerators and
Denominators
  • Important to consider in PA
  • Residency of drivers stopped
  • Who was traveling on the roadways
  • Who was speeding on the roadways
  • Most accurate numerators and denominators may
    likely differ by jurisdiction

192
Internal Benchmarking
  • David Harris
  • University of Toledo

193
Internal Benchmarking Involves
  • Making comparisons WITHIN the agency.

194
With this method, the researcher
  • Compares officers to other officers, or
  • Compares units of officers to other units.

195
Example of comparing officers to each other
  • Compare officers in terms of the demographic
    profile of the people they stop.

196
Example Showing Proportion of Stops that are of
Minorities for 4 Officers
197
Must compare officers who are similarly situated

198
Thus, you might compare
  • Demographic profiles of people stopped across the
    patrol officers that work the same shift in the
    same area.

199
These similarly situated officers are all exposed
to the same folks
  • who are at risk of being stopped by police.

200
Looking for outliers
  • Use this method to identify officers, units, or
    beats
  • that appear to intervene with racial/ethnic
    minorities
  • at higher rates than do their matched
    counterparts (see the sketch below).
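
A minimal sketch, in Python, of the outlier check described above: each
officer's minority-stop share is compared to the other officers working the
same shift and area. The stop counts and the 15-percentage-point flag
threshold are hypothetical.

    # Flag officers whose minority-stop share is far above that of their
    # similarly situated peers (same shift and area). Counts are hypothetical.
    same_shift_officers = {
        "Officer 1": (45, 150),   # (minority stops, total stops)
        "Officer 2": (48, 160),
        "Officer 3": (44, 140),
        "Officer 4": (85, 155),
    }

    shares = {name: m / t for name, (m, t) in same_shift_officers.items()}
    for officer, share in shares.items():
        peers = [s for name, s in shares.items() if name != officer]
        peer_avg = sum(peers) / len(peers)
        flag = "  <-- outlier: warrants further inquiry" if share > peer_avg + 0.15 else ""
        print(f"{officer}: {share:.0%} minority stops (peers: {peer_avg:.0%}){flag}")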

201
Detecting Outliers Among Officers in Area A
Across Three Shifts: Stops of Minorities
202
  • This identification can lead to further inquiry
    into circumstances/context to see if bias plays a
    role.

203
Strengths: Accounts for our key variables
  • Same advantage as blind vs. not-blind
    enforcement (to come)
  • By matching the officers/units across key
    variables (shift, location),
  • The matched officers or units are dealing with
    the same people who are at risk of being stopped
    by police

204
Weaknesses
  • Cannot assess overall department performance
  • Because this method compares the department to
    itself.
  • That is, if everyone in the agency is acting in a
    biased fashion,
  • we won't know it
  • We will only detect the worst of the lot.

205
Miscellaneous Other Benchmarks and Tools
  • Lorie Fridell
  • PERF

206
Misc.
  • Blind versus not-blind enforcement
  • Crime data
  • Survey methods
  • Using GIS data

207
Comparing Blind v. Not-blind Enforcement

208
Comparing Blind vs. Not-blind Enforcement
  • Some enforcement is "blind" as to the race/ethnicity
    of the driver (e.g., radar stops, speed detected by
    aircraft, photo red light cameras)
  • The demographic profile of people stopped u