Reading Assessments for Elementary Schools


Transcript and Presenter's Notes



1
Reading Assessments for Elementary Schools
  • Tracey E. Hall
  • Center for Applied Special Technology
  • Marley W. Watkins
  • Pennsylvania State University
  • Frank C. Worrell
  • University of California, Berkeley

2
REVIEW Major Concepts
  • Nomothetic and Idiographic
  • Samples
  • Norms
  • Standardized Administration
  • Reliability
  • Validity

3
Nomothetic
  • Relating to the abstract, the universal, the
    general.
  • Nomothetic assessment focuses on the group as a
    unit.
  • Refers to finding principles that are applicable
    on a broad level.
  • For example, boys report higher math
    self-concepts than girls; girls report more
    depressive symptoms than boys.

4
Idiographic
  • Relating to the concrete, the individual, the
    unique
  • Idiographic assessment focuses on the individual
    student
  • What type of phonemic awareness skills does Joe
    possess?

5
Populations and Samples I
  • A population consists of all the representatives
    of a particular domain that you are interested in
  • The domain could be people, behavior, curriculum
    (e.g., reading, math, spelling, ...)

6
Populations and Samples II
  • A sample is a subgroup that you actually draw
    from the population of interest
  • Ideally, you want your sample to represent your
    population
  • people polled or examined, test content,
    manifestations of behavior

7
Samples
  • A random sample is one in which each member of
    the population had an equal and independent
    chance of being selected.
  • Random samples are important because the idea is
    to have a sample that represents the population
    fairly, i.e., an unbiased sample.
  • A sample can be used to represent the population.
  • A probability sample is one in which elements are
    drawn according to some known probability
    structure.
  • Probability samples are typically used in
    conjunction with subgroups (e.g., ethnicity,
    socioeconomic status, gender).
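
The sampling ideas above can be sketched in a few lines of Python; the population of student IDs and the two strata below are invented purely for illustration:

```python
import random

# Hypothetical population of 1,000 student IDs (invented for illustration).
population = list(range(1000))

random.seed(42)  # reproducible example

# Simple random sample: every member has an equal chance of selection.
sample = random.sample(population, k=100)

# Stratified probability sample: draw within known subgroups so each is
# represented; here, two made-up strata split 600/400, sampled at 10%.
strata = {"group_a": population[:600], "group_b": population[600:]}
stratified = [sid for members in strata.values()
              for sid in random.sample(members, k=len(members) // 10)]

print(len(sample), len(stratified))  # 100 100
```

`random.sample` draws without replacement, so no student can be selected twice within one sample.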

8
Norms I
  • Norms describe how the "average" individual
    performs.
  • Many of the tests and rating scales that are used
    to compare children in the US are
    norm-referenced.
  • An individual child's performance is compared to
    the norms established using a representative
    sample.

9
Norms II
  • For the score on a normed instrument to be valid,
    the person being assessed must belong to the
    population for which the test was normed
  • If we wish to apply the test to another group of
    people, we need to establish norms for the new
    group

10
Norms III
  • To create new norms, we need to do a number of
    things:
  • Get a representative sample of new population
  • Administer the instrument to the sample in a
    standardized fashion.
  • Examine the reliability and validity of the
    instrument with that new sample
  • Determine how we are going to report on scores
    and create the appropriate tables

11
Standardized Administration
  • All measurement has error.
  • Standardized administration is one way to reduce
    error due to examiner/clinician effects.
  • For example, consider these questions with
    different facial expressions and tone
  • Please define a noun for me. :-)
  • DEFINE a noun, if you can! :-(

12
Normal Curve
  • Many distributions of human traits form a normal
    curve
  • Most cases cluster near the middle, with fewer
    individuals at the extremes; the curve is
    symmetrical.
  • We know how the population is distributed based
    on the normal curve

13
Ways of Reporting Scores
  • Mean, standard deviation
  • Distribution of scores
  • 68.26% of scores fall within ±1 SD of the mean,
    95.44% within ±2 SD, and 99.72% within ±3 SD
  • Stanines (1, 2, 3, 4, 5, 6, 7, 8, 9)
  • Standard scores - linear transformations of
    scores, but easier to interpret
  • Percentile ranks
  • Box and Whisker Plots
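
As a sketch of these reporting conventions in Python — the raw scores below are invented, and the stanine rule shown (mean 5, SD 2, truncated to 1-9) is one common formulation:

```python
import statistics

# Invented raw scores from a hypothetical classroom reading test.
scores = [12, 15, 15, 18, 20, 21, 23, 25, 27, 30]

mean = statistics.mean(scores)   # 20.6
sd = statistics.stdev(scores)    # sample standard deviation

# Standard (z) scores: a linear transformation of the raw scores.
z_scores = [(x - mean) / sd for x in scores]

# T-scores, another common standard-score scale (mean 50, SD 10).
t_scores = [50 + 10 * z for z in z_scores]

# Stanines run 1-9 with mean 5 and SD 2, truncated at the ends.
stanines = [max(1, min(9, round(5 + 2 * z))) for z in z_scores]

print(round(mean, 1), round(sd, 2))  # 20.6 5.76
```

Because z-scores and T-scores are linear transformations, the shape of the distribution is unchanged; only the scale of reporting differs.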

14
Percentiles
  • A way of reporting where a person falls on a
    distribution.
  • The percentile rank of a score tells you what
    percentage of people obtained a score equal to or
    lower than that score.
  • Box and whisker plots are visual displays or
    graphic representations of the shape of a
    distribution using percentiles.
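
A minimal percentile-rank function using the "equal to or lower" convention from the slide (the helper name and score list are hypothetical):

```python
def percentile_rank(score, scores):
    """Percentage of scores equal to or lower than `score`."""
    at_or_below = sum(1 for s in scores if s <= score)
    return 100.0 * at_or_below / len(scores)

# Invented class scores, for illustration only.
scores = [12, 15, 15, 18, 20, 21, 23, 25, 27, 30]
print(percentile_rank(21, scores))  # 60.0 (six of ten scores are <= 21)
```

Several percentile-rank conventions exist (e.g., counting only scores strictly below, or half of the ties); a test manual should state which one it uses.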

15
(No Transcript)
16
Correlation
  • We need to understand the correlation coefficient
    to understand the manual
  • The correlation coefficient, r, quantifies the
    relationship between two sets of scores
  • A correlation coefficient can have a range from
    -1 to 1
  • Zero means the two sets of scores are not
    linearly related.
  • One means the two sets of scores are perfectly
    linearly related (a perfect correlation).

17
Correlation 2
  • Correlations can be positive or negative.
  • A positive correlation tells us that as one set
    of scores increases, the second set of scores
    also increases.
  • A negative correlation tells us that as one set
    of scores increases, the other set decreases.
    Think of some examples of variables with negative
    rs.
  • The absolute value of a correlation indicates the
    strength of the relationship. Thus .55 is equal
    in strength to -.55.
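
These properties can be checked with a from-scratch Pearson r (Python 3.10+ also offers `statistics.correlation`); the score lists below are invented:

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two score lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

reading = [10, 12, 14, 16, 18]
vocab = [20, 24, 28, 32, 36]   # exactly 2x reading scores
absences = [9, 8, 6, 4, 2]     # falls as reading rises

print(round(pearson_r(reading, vocab), 2))    # 1.0 (perfect positive)
print(round(pearson_r(reading, absences), 2)) # -0.99 (strong negative)
```

Note that r = 1.0 here because vocab is an exact linear function of reading; real score pairs fall somewhere between the extremes.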

18
How would you describe the correlations shown by
these charts?
19
Reliability
  • Reliability addresses the stability, consistency,
    or reproducibility of scores.
  • Internal consistency
  • Split half, Cronbach's alpha
  • Test-retest
  • Parallel/Alternate forms
  • Inter-rater
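
As an illustration of internal consistency, here is a sketch of Cronbach's alpha over an invented students × items matrix of right/wrong (1/0) responses:

```python
import statistics

# Invented 0/1 responses: 6 students x 4 items.
items = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 1],
    [1, 1, 1, 0],
]

def cronbach_alpha(rows):
    """alpha = k/(k-1) * (1 - sum(item variances) / total-score variance)."""
    k = len(rows[0])
    item_vars = [statistics.variance([r[i] for r in rows]) for i in range(k)]
    total_var = statistics.variance([sum(r) for r in rows])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

print(round(cronbach_alpha(items), 3))  # 0.571
```

Split-half reliability works similarly: correlate scores on two halves of the test, then adjust the estimate upward with the Spearman-Brown formula.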

20
Validity
  • Validity addresses the accuracy or truthfulness
    of scores. Are they measuring what we want them
    to?
  • Content
  • Criterion - Concurrent
  • Criterion - Predictive
  • Construct
  • Face
  • (Cash)

21
Content Validity
  • Is the assessment tool representative of the
    domain (behavior, curriculum) being measured?
  • An assessment tool is scrutinized for its (a)
    completeness or representativeness, (b)
    appropriateness, (c) format, and (d) bias
  • E.g., the MS-PAS

22
Criterion-related Validity
  • What is the correlation between our instrument,
    scale, or test and another variable that measures
    the same thing, or measures something that is
    very close to ours?
  • In concurrent validity, we compare scores on the
    instrument we are validating to scores on another
    variable that are obtained at the same time.
  • In predictive validity, we compare scores on the
    instrument we are validating to scores on another
    variable that are obtained at some future time.

23
Construct Validity
  • Overarching construct: Is the instrument
    measuring what it is supposed to?
  • Dependent on reliability, content and
    criterion-related validity.
  • We also sometimes look at other types of
    validity
  • Convergent validity: r with a similar construct
  • Discriminant validity: r with an unrelated
    construct
  • Structural validity: What is the structure of
    the scores on this instrument?

24
Elementary Normative Sample
  • Stratified by educational region
  • Males and females represented equally.
  • School, class, and individuals chosen at random.
  • Final sample consists of 700 students (50%
    female).

25
p. 2
26
p. 2
27
p. 2
28
p. 3
29
p. 3
30
p. 3
31
Measures
  • First and Second Year/Infants 1 and 2
  • Mountain Shadows Phonemic Awareness Scale
    (MS-PAS) - group administered.
  • Individual Phonemic Analysis
  • Second Year/Infant 2 to Standard 5
  • Oral Reading Fluency
  • Standards 1 and 2
  • The Cloze Procedure

32
Assessment Instruction Cycle
  • Initial Evaluation
    - Archival Assessment
    - Diagnostic Assessments
    - Formal Standardized Measures
  • Assessment
    - Determine starting point
    - Analyze Errors
    - Monitor Progress
    - Modify Instruction
  • Instructional Design
    - Determine Content
    - Select Language of Instruction
    - Select examples
    - Schedule scope and sequence
    - Provide for cumulative review
  • Instructional Delivery
    - Secure student attention
    - Pace instruction appropriately
    - Monitor student performance
    - Provide feedback

Madigan, Hall, & Glang (1997)