1
Measurement in Marketing Research
  • Dr. John Drea
  • Professor of Marketing
  • MKTG 329 - Western Illinois University

2
Basic Measurement Concepts
  • Always remember that what you collect is a
    measurement score - it is not the true level of a
    construct.
  • Actual score (measure) = True score (construct)
    + Error (see the sketch below)
  • Error refers to any type of variation in the
    actual score that is not attributable to
    variation in the true score.
  • The best measurement technique is the one which
    satisfies the research question while closely
    approximating the true score.
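
A minimal sketch of this measurement model, using invented numbers (a hypothetical true attitude score of 5.0 on a 1-7 scale and normally distributed error), shows how the mean of repeated actual scores approaches the true score as the error averages out:

```python
import random

# Sketch with assumed values: Actual score = True score + Error
random.seed(1)

true_score = 5.0                      # hypothetical true attitude level on a 1-7 scale
actual_scores = [true_score + random.gauss(0, 0.5) for _ in range(200)]

# Error is not attributable to the true score, so it averages toward zero
# and the mean of repeated measures approximates the true score.
print(round(sum(actual_scores) / len(actual_scores), 2))
```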

3
Basic Measurement Concepts
  • Outside sources of variation are commonly
    referred to as bias. To reduce bias, remember:
  • Clearly define what it is you want to measure
  • (e.g., there's a difference between attitude
    toward a brand and satisfaction with using the brand)
  • Research subjects must have the ability to answer
    the stated question.
  • Research subjects must perceive the data
    collection instrument as intended.
  • Research subjects must be motivated to provide
    an actual score in a way that parallels their
    true score.
  • The response categories provided must closely
    approximate the true score of the individual.

4
Should You Use Single or Multi-Item Measures?
  • It depends on the complexity of what is being
    measured.

[Diagram: a single-item measure captures the TRUE score with one actual score, while a multi-item measure captures it with several actual scores (Actual-1, Actual-2, Actual-3).]
Think of trying to represent an object by
photographing it from one angle vs. several
different angles.
5
Scale Properties
  • The first and most fundamental issue to resolve
    is the type of scale data to be obtained. This
    determines the type of analysis which can be
    used.
  • Nominal-level data (identity)
  • found in dichotomous categories (e.g., male = 1,
    female = 2) and multichotomous categories
    (1 = marketing, 2 = finance, 3 = economics)
  • Measure of average is the mode.

6
Scale Properties
  • Ordinal-level data (order)
  • Categories are in some type of order, but the
    distance between points is not known.
  • Ex: Very Good = 5, Good = 4, Neutral = 3,
    Poor = 2, Very Poor = 1
  • Measures of the average are the median and the
    mode.

7
Scale Properties
  • Interval-level data (comparison of intervals)
  • The categories are not only in order, but the
    distance between each category is known and has a
    numerical meaning.
  • Ex: a 1-7 scale, anchored only by bi-polar
    adjectives (Good/Bad, Agree/Disagree)
  • Anchoring the intermediate points converts the
    scale into an ordinal scale.
  • There is no absolute zero to an interval scale.
  • Measure of the average is the mean.

8
Scale Properties
  • Ratio-level data (absolute magnitudes)
  • Categories are in order, the distances between
    categories are known, and an absolute zero exists.
  • Ex: Sales (an absolute zero exists)
  • Tip: If a number doubles, does it mean there's
    twice as much of it? If yes, it's ratio data.
    If no, it's interval (or less).
  • Measure of the average is the mean (see the
    sketch below for all four levels).
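
To recap slides 5 through 8, the short sketch below applies the recommended measure of average to each scale level. All of the response values are invented for illustration.

```python
from statistics import mode, median, mean

nominal  = [1, 2, 1, 1, 2]            # identity only, e.g., 1 = marketing, 2 = finance
ordinal  = [5, 4, 4, 3, 1]            # ordered, e.g., Very Good = 5 ... Very Poor = 1
interval = [6, 5, 7, 4, 6]            # known intervals, e.g., a 1-7 bipolar scale
ratio    = [120.0, 0.0, 45.5, 300.0]  # absolute zero, e.g., sales in dollars

print("nominal  -> mode:  ", mode(nominal))
print("ordinal  -> median:", median(ordinal))
print("interval -> mean:  ", round(mean(interval), 2))
print("ratio    -> mean:  ", round(mean(ratio), 2))
```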

9
Reliability
  • Refers to the similarity of results provided by
    independent but comparable measures of the same
    object, trait, or construct.
  • Reliability is based on the use of maximally
    similar methods.
  • If we measure X using scale Y and we do this
    three times, will we get effectively the same
    result every time?
  • The most common means of assessing reliability is
    Cronbach's alpha.

10
Reliability (continued)
  • Cronbach's (coefficient) alpha
  • A measure of the internal homogeneity of a set of
    scale items.
  • Assesses the degree to which a group of scale
    items correlate with the total (summated) scale
    score.
  • Acceptable levels are 0.70 for exploratory
    research and 0.80 for applied research (Nunnally).
  • Look at item-to-total correlations to improve
    reliability (both computations are sketched below).
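
A sketch of how Cronbach's alpha and the item-to-total correlations might be computed. The formula is the standard coefficient alpha; the 3-item scale and the five respondents' scores are hypothetical.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) matrix of item scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical data: 5 respondents answering a 3-item, 1-5 agree/disagree scale.
scores = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 5, 5],
    [3, 4, 3],
    [1, 2, 2],
])
print("alpha =", round(cronbach_alpha(scores), 3))   # compare to the 0.70 / 0.80 benchmarks

# Item-to-total correlations: each item against the sum of the remaining items.
totals = scores.sum(axis=1)
for i in range(scores.shape[1]):
    rest = totals - scores[:, i]
    r = np.corrcoef(scores[:, i], rest)[0, 1]
    print(f"item {i + 1}: item-to-total r = {r:.2f}")
```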

11
Validity
  • Refers to the extent to which differences in
    scores on (a measure) reflect true differences
    among individuals on the characteristic we seek
    to measure, rather than constant or random
    errors. (Selltiz, Wrightsman, and Cook 1976)
  • Pragmatic Validity: How well does the measure
    predict?
  • Construct Validity: What is the instrument
    actually measuring?
  • Convergent Validity: two different measures of
    the same construct are highly correlated with one
    another
  • Discriminant Validity: two measures of different
    constructs do not correlate highly (both checks
    are sketched below)
  • Nomological Validity: the measure behaves in a
    way consistent with theory/prior experience
  • The assessment of validity should involve the use
    of maximally different methods.
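
The convergent and discriminant checks come down to comparing correlations. The sketch below uses entirely invented scores and hypothetical variable names to show the logic.

```python
import numpy as np

attitude_measure_a = np.array([6, 5, 7, 3, 4, 6, 2])   # hypothetical 1-7 attitude measure
attitude_measure_b = np.array([7, 5, 6, 3, 5, 6, 2])   # a different measure of the same construct
other_construct    = np.array([3, 6, 2, 5, 2, 6, 4])   # a measure of an unrelated construct

# Convergent validity: two measures of the same construct should correlate highly.
convergent_r = np.corrcoef(attitude_measure_a, attitude_measure_b)[0, 1]
# Discriminant validity: measures of different constructs should not.
discriminant_r = np.corrcoef(attitude_measure_a, other_construct)[0, 1]

print(f"convergent r   = {convergent_r:.2f}  (should be high)")
print(f"discriminant r = {discriminant_r:.2f}  (should be low)")
```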

12
Data Collection Choices
  • Communication: structured or unstructured?
    disguised or undisguised? method of
    administration (phone, mail, interview)
  • Observation: structured or unstructured?
    disguised or undisguised? setting? method of
    administration (human, mechanical)
  • Source: Gilbert A. Churchill (1995), Marketing
    Research, 6th Ed., p. 348
13
Structured Questions
  • Response categories are standardized
  • Easy to analyze
  • Can miss the richness of responses
  • Greater reliability
  • Lower validity
  • Tip: Use established measures which have been
    found to have evidence of reliability and
    validity.

14
Unstructured Questions
  • Attempt to elicit the responses that most
    closely approximate true scores.
  • Allows the subject to define the most appropriate
    way to respond.
  • Can be guided to make responses more useable.
  • Examples
  • Word Association
  • Sentence Completion

15
Disguised Questions
  • Occurs when the purpose of the study is not clear
    to the respondent.
  • When does it make sense to disguise the purpose
    of the study?
  • When knowing the purpose might bias the results.
    (response bias, non-response bias)
  • e.g., Would you tell a bank (not your own) about
    your financial patterns?

16
Open-Ended or Closed-Ended Questions?
  • Open-ended
  • Difficult to code and can introduce some bias
    problems in interpretation.
  • Provides subjects with an opportunity for a
    variety of responses - good for pre-tests and
    creating multichotomous response categories.
  • Ex: Why would you not purchase Product X in the
    Macomb area?

17
Closed-Ended Questions
  • Multichotomous: Provides three or more structured
    categories for responses.
  • Dichotomous: Provides two structured categories
    for responses.
  • Use dichotomous only when it's an either/or
    type of question - no middle position is possible.

18
Examples, Dichotomous Questions
  • Appropriate
  • Has your household purchased an automobile in the
    past twelve months? ___ yes ___ no ___ don't know
  • What is your gender? ___female ___male
  • Inappropriate
  • Will your household purchase an automobile in the
    next twelve months? ___ yes ___ no ___ don't know
  • Are you satisfied with your current automobile?
    ___ yes ___no
  • Do you consider yourself to be ____tall ___short

19
Alternative ways to ask the same question
  • Will your household purchase an automobile in the
    next twelve months?
  • definitely will purchase
  • probably will purchase
  • probably will not purchase
  • definitely will not purchase
  • no opinion/don't know

20
Common Types of Scales
  • Likert: a summated ratings scale (express the
    degree to which you agree/disagree with a
    statement; see the summing sketch below)

What type of data does this measure produce?
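
Because a Likert scale is a summated ratings scale, the total score is simply the sum of the item ratings, with negatively worded items reverse-scored first. A minimal sketch, assuming three hypothetical 1-5 statements:

```python
def likert_total(responses: dict, scale_max: int = 5) -> int:
    """Sum 1-to-scale_max Likert items, flipping reverse-keyed items."""
    total = 0
    for item, value in responses.items():
        if item.endswith("_reversed"):
            value = scale_max + 1 - value   # reverse-score negatively worded items
        total += value
    return total

answers = {"statement_1": 4, "statement_2": 5, "statement_3_reversed": 2}
print(likert_total(answers))   # 4 + 5 + (5 + 1 - 2) = 13
```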
21
Common Types of Scales
Semantic Differential (Osgood, Suci, and Tannenbaum 1957)
  • Involves the use of bipolar adjectives
  • Frequently used for assessing affect.
Good Quality ___ ___ ___ ___ ___ Poor Quality
Fun          ___ ___ ___ ___ ___ Not Fun
Friendly     ___ ___ ___ ___ ___ Unfriendly
Issue: Adjectives must be bi-polar opposites.
Question: Is the opposite of "fun": "not fun," "unpleasant," or "unfunny"? Each would likely produce different results.
22
Common Types of Scales
Magnitude Scaling (graphic rating): drawing a line/making a mark to indicate the extent to which you concur with the scale anchors.
What is the educational quality at WIU?
Very Poor |_________________________| Very Good
Issues: Allows for fine distinctions (reducing the attenuation of correlations), but may result in under-reporting or over-reporting. Also, the length of the line may report finer distinctions than intended.
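
Scoring a graphic rating response usually means measuring how far along the line the mark falls and rescaling it. A minimal sketch, assuming a 100 mm line anchored from Very Poor to Very Good:

```python
def graphic_rating_score(mark_mm: float, line_length_mm: float = 100.0) -> float:
    """Convert a mark's distance from the left anchor into a 0-100 score."""
    return round(100 * mark_mm / line_length_mm, 1)

print(graphic_rating_score(73.5))   # a mark 73.5 mm along a 100 mm line -> 73.5
```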
23
Common Types of Scales
Itemized Rating Scales: individuals make independent judgments. The rater chooses from fixed, limited categories.
The quality of food in today's meal is:
___ excellent ___ very good ___ good ___ fair ___ satisfactory ___ poor ___ very poor
or
The quality of food in today's meal is:
very good ___ ___ ___ ___ ___ ___ ___ very poor
Issues: Are the categories sufficient? Are the distinctions meaningful? Are you using the appropriate level of analysis?
24
Common Types of Scales
Comparative: Making assessments by comparing alternatives.
Ex 1: Rank order the following attributes in order of importance in selecting a university, assigning a 1 to the most important and a 3 to the least important.
___ Overall quality of education.
___ Always something fun to do.
___ Friendliness of the people.
Constant Sum: A type of comparative scale - shows magnitude.
Ex 2: Please divide 100 points among the following attributes in terms of their importance to you in selecting a university.
___ pts. Overall quality of education.
___ pts. Always something fun to do.
___ pts. Friendliness of the people.
100 pts. total
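
A constant-sum response is only meaningful when the points actually add up to 100, so responses are typically screened before analysis. A small sketch with hypothetical attribute allocations:

```python
answers = [
    {"quality": 60, "fun": 25, "friendliness": 15},   # sums to 100 -> keep
    {"quality": 50, "fun": 30, "friendliness": 30},   # sums to 110 -> discard
]

valid = [a for a in answers if sum(a.values()) == 100]
print(f"{len(valid)} of {len(answers)} allocations usable")
```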