1
RATING SCALES
  • Win May
  • USC Keck School of Medicine

2
CRITIQUING RATING SCALES
  • Does the rating scale have a title describing the
    performance/trait to be measured?
  • Does the performance/trait to be measured reflect
    the objectives?
  • Is the performance/trait to be measured clearly
    defined in operational terms?
  • Are the reference points on the scale clearly
    defined?

3
CRITIQUING RATING SCALES (cont.)
  • How many categories/rating positions are
    utilized? (Optimum 5-9)
  • Is each category/rating position described in
    specific observable behavioral terms?
  • If there is an option not to assess the behavior,
    is a concrete reason given for using that option?

4
CRITIQUING RATING SCALES (cont.)
  • Are the statements in the form of single
    sentences?
  • Are the statements phrased in a positive manner?
  • How long is the rating scale? (Optimum 10-25
    behaviors)
  • Are there directions for use of the scale?
  • Is there a key with the directions?
  • Is there provision for training of raters?

5
CRITIQUING RATING SCALES (cont.)
  • Test the rating scale in a focus group by
    having it read aloud. This will enable you to
    make sure that every person understands each item
    in the same way.
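
The numeric criteria from the preceding checklist (5-9 rating positions, 10-25 behaviors, directions and a key supplied) lend themselves to a quick automated check. The following is a minimal sketch in Python; the RatingScale fields and the critique function are hypothetical names introduced only for illustration and are not part of the original checklist.

from dataclasses import dataclass
from typing import List


@dataclass
class RatingScale:
    title: str                # names the performance/trait to be measured
    behaviors: List[str]      # one single-sentence, positively phrased statement each
    num_positions: int        # number of categories/rating positions per item
    directions: str = ""      # directions (and key) for use of the scale


def critique(scale: RatingScale) -> List[str]:
    """Return warnings against the checklist criteria above."""
    warnings = []
    if not scale.title:
        warnings.append("Scale has no title describing the trait to be measured.")
    if not (5 <= scale.num_positions <= 9):
        warnings.append("Number of rating positions is outside the optimum 5-9.")
    if not (10 <= len(scale.behaviors) <= 25):
        warnings.append("Scale length is outside the optimum 10-25 behaviors.")
    if not scale.directions:
        warnings.append("No directions (or key) provided for use of the scale.")
    return warnings


if __name__ == "__main__":
    scale = RatingScale(
        title="Patient communication",
        behaviors=["Greets the patient by name.", "Explains the procedure clearly."],
        num_positions=4,
    )
    for w in critique(scale):
        print("WARNING:", w)

Criteria that need human judgment (operational definitions, positive phrasing, the focus-group read-aloud) are deliberately left out of the sketch.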

6
WHY TRAIN RATERS?
  • RATERS DO BETTER IF TRAINED

7
COMMON RATING ERRORS
  • Error of central tendency - the hesitancy to give
    extreme judgments
  • Error of leniency - the tendency of a rater to
    rate too high or too low regardless of what trait
    is being tested
  • The halo effect - the contamination of the rating
    of a specific trait by the overall impression of
    the person.

8
COMMON RATING ERRORS (cont.)
  • The logical error - the tendency to apply the
    same rating to two traits or skills that the
    rater feels are logically related.
  • The contrast error - the contamination of the
    rating of a specific trait by the rater's
    possession of that trait.
  • The proximity error - the tendency to be
    influenced by traits or persons rated just
    before the present rating.

9
COMMON RATING ERRORS (cont.)
  • (Habitual Performance)
  • Generalization error - making ratings at the end
    of a designated period of time, based on samples
    that are too small or not representative
  • Rater bias - anything which causes the rater to
    make a less than totally objective assessment of
    a learner's performance.
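
The first three errors above can, in principle, be flagged from the rating data itself. The following is a minimal sketch in Python; the ratings layout, the 1-5 scale, and the cut-off values are assumptions made only for illustration, and the indicators are rough heuristics rather than validated tests.

from statistics import mean, pstdev

# Hypothetical data: ratings[rater][ratee] is a list of per-trait scores (1-5 scale).
ratings = {
    "rater_a": {"learner_1": [3, 3, 3, 3], "learner_2": [3, 3, 3, 3]},
    "rater_b": {"learner_1": [5, 5, 4, 3], "learner_2": [5, 4, 5, 2]},
}

overall_mean = mean(
    score
    for per_ratee in ratings.values()
    for scores in per_ratee.values()
    for score in scores
)

for rater, per_ratee in ratings.items():
    scores = [s for traits in per_ratee.values() for s in traits]
    spread = pstdev(scores)
    offset = mean(scores) - overall_mean
    # Central tendency: the rater hardly uses the ends of the scale.
    if spread < 0.5:
        print(f"{rater}: possible central-tendency error (spread={spread:.2f})")
    # Leniency: the rater sits well above (or below) the pooled mean regardless of trait.
    if abs(offset) > 0.5:
        print(f"{rater}: possible leniency error (offset={offset:+.2f})")
    # Halo effect: all traits for one ratee pulled to nearly the same value.
    for ratee, traits in per_ratee.items():
        if len(traits) > 1 and pstdev(traits) < 0.3:
            print(f"{rater} -> {ratee}: possible halo effect (traits {traits})")

The thresholds (0.5 and 0.3) are arbitrary placeholders; in practice they would be set from the scale's range and the observed distribution of scores.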

10
SELECTION OF RATERS
  • Important to determine the ability of a rater to
    rate a specific trait
  • Raters are more accurate (do better):
  • when they are interested in the ratings they make
  • if they have educational and professional
    backgrounds similar to those of the ratees

11
RATER TRAINING
  • Train with respect to:
  • Distribution of abilities
  • Nature of the scale
  • Halo effect
  • Logical errors, etc.

12
OTHER INFLUENCING FACTORS
  • Raters tend to have greater leniency error when
    they must confront the ratee with the ratings
  • Raters produce more accurate ratings when they
    are aware that they will be checked
  • Raters must have sufficient time to complete the
    ratings

13
OTHER INFLUENCING FACTORS (cont.)
  • Accuracy is influenced by the number of
    categories or the fineness of discrimination
    called for by the rating task.
  • The more judgment is required, the lower the
    rater accuracy.
  • Raters rate their colleagues, fellow teachers,
    or fellow students higher than they rate others.
  • Length of acquaintance is highly correlated with
    errors of leniency, but this can be reduced by
    training.

14
OTHER INFLUENCING FACTORS (cont.)
  • Two ratings by the same rater on the same content
    are no more valid than one.
  • Raters disagree more when they observe
    individuals in different situations than when
    they observe the same situation.
  • Different raters use different criteria in
    judging the same trait.