1
Chapter 3 How Standardized Tests Are Used with
Infants & Young Children
  • Lecture by Chris Ross

2
How Standardized Tests Are Used with Infants &
Young Children
  • Types of Standardized Tests
  • Ability > current level of knowledge or skill in
    a particular area.
  • Psychological tests such as intelligence,
    achievement, and aptitude tests are used to
    measure ability as well.
  • Achievement > related to the extent to which a
    person has acquired certain information or
    mastered identified skills.
  • Peabody Individual Achievement Test-Revised
    (measures achievement in math, reading
    recognition and comprehension, spelling, and
    general information).

3
How Standardized Tests Are Used with Infants &
Young Children
  • Types of Standardized Tests
  • Aptitude > the potential to learn or develop
    proficiency in some area, provided that certain
    conditions exist or training is available.
  • The Stanford-Binet Intelligence Scale
  • Personality tests > measure a person's tendency
    to behave in a particular way.

4
How Standardized Tests Are Used with Infants &
Young Children
  • Types of Standardized Tests
  • Interest inventories > used to determine a
    person's interest in a certain area or vocation
    and are not used with very young children.
  • Attitude measures > determine how a person is
    predisposed to think about or behave toward an
    object, event, institution, type of behavior, or
    person (group of people).

5
How Standardized Tests Are Used with Infants &
Young Children
  • Tests for Infants
  • Apgar Scale > administered one and five minutes
    after birth to assess the health of the newborn
    (see the scoring sketch below).
  • Brazelton Neonatal Behavioral Assessment Scale >
    measures temperamental differences, nervous
    system functions, and the capacity of the neonate
    to interact.
  • The Gesell Developmental Schedules > the first
    scales to measure infant development.
  • Several measures are discussed on pages 54-55.
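  • A minimal scoring sketch in Python (the function
    name and example ratings are illustrative, not
    from the text): the Apgar total is the sum of
    five signs, Appearance, Pulse, Grimace, Activity,
    and Respiration, each rated 0-2.

    # Hedged sketch: five Apgar signs, each rated 0-2, summed to 0-10.
    def apgar_score(appearance, pulse, grimace, activity, respiration):
        signs = (appearance, pulse, grimace, activity, respiration)
        if any(s not in (0, 1, 2) for s in signs):
            raise ValueError("each sign is rated 0, 1, or 2")
        return sum(signs)

    print(apgar_score(1, 2, 1, 2, 2))  # rating at one minute -> 8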

6
How Standardized Tests Are Used with Infants &
Young Children
  • Tests for Preschool Children
  • Screening Tests
  • Denver II
  • Ages and Stages Questionnaire
  • Brigance Screens
  • First Step Screening Test for Evaluating
    Preschoolers
  • Devereux Early Childhood Assessment
  • Many more tests are discussed on pages 56-58 & 61

7
How Standardized Tests Are Used with Infants &
Young Children
  • Diagnostic Tests (pgs 58-59 & 61)
  • Vineland Adaptive Behavior Scale
  • Stanford-Binet Intelligence Scale
  • Battelle Developmental Inventory-II
  • Language Tests (59-60 & 61)
  • Preschool Language Scale
  • Pre-LAS
  • Achievement Tests (60-62)
  • National Reporting System

8
How Standardized Tests Are Used with Infants &
Young Children
  • Tests for School-Age Children (pgs 61-66)
  • Bilingual Syntax Measure II
  • Test of Visual-Motor Integration
  • Child Observation Record

9
Steps in Standardized Test Design
10
Specifying the Purpose of the Test
  • Purpose should be clearly defined
  • APA guidelines for including the test's purpose
    in the test manual. The standards are:
  • The test manual should state explicitly the
    purpose and applications for which the test is
    recommended.
  • The test manual should describe clearly the
    psychological, educational, and other reasoning
    underlying the test and the nature of the
    characteristic it is intended to measure.

11
Determining Test Format
  • Remember that not all young children can write,
    so tests must be verbal or must give the child
    another fair way to complete the assessment.
  • Older children may take written tests (if able).
  • Some tests are designed to be administered
    individually, others in a group setting.

12
Developing Experimental Forms
  • Process often involves writing, editing, trying
    out, and rewriting/revising the test items.
  • A preliminary test is assembled and given to a
    sample of students. Experimental test forms
    resemble the final form.

13
Assembling The Test
  • After the item analysis (sketched below), the
    final form of the test is created.
  • Test questions (or required behaviors) to measure
    each objective are selected.
  • Test directions are made final with instructions
    for the takers and administrators.
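  • As a hedged illustration (not from the text) of
    what an item analysis computes: item difficulty
    is the proportion of students answering an item
    correctly, and a rough discrimination index
    compares the upper- and lower-scoring thirds.

    # Illustrative item analysis on tryout data: rows are students,
    # columns are 0/1 item scores. Data and group sizes are assumptions.
    def item_stats(responses):
        n_items = len(responses[0])
        totals = [sum(r) for r in responses]
        order = sorted(range(len(responses)), key=lambda i: totals[i])
        k = max(1, len(responses) // 3)        # lower / upper thirds
        low, high = order[:k], order[-k:]
        stats = []
        for j in range(n_items):
            difficulty = sum(r[j] for r in responses) / len(responses)
            discrimination = (sum(responses[i][j] for i in high)
                              - sum(responses[i][j] for i in low)) / k
            stats.append((round(difficulty, 2), round(discrimination, 2)))
        return stats

    tryout = [[1, 1, 0], [1, 0, 0], [1, 1, 1], [0, 0, 0], [1, 1, 1]]
    print(item_stats(tryout))  # -> [(0.8, 1.0), (0.6, 1.0), (0.4, 1.0)]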

14
Standardizing the Test
  • The final version of the test is administered to
    a larger population to acquire normative data.
  • Norms > provide the tool whereby children's test
    performance can be compared with the performance
    of a reference group.
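  • A minimal sketch of the comparison (Python; the
    norming-group mean and SD are invented for the
    example): convert a child's raw score to a
    z-score, then to a percentile rank.

    # Illustrative norm-referenced comparison with invented values.
    from statistics import NormalDist

    norm_mean, norm_sd = 50.0, 10.0   # assumed norming-group statistics
    raw = 63
    z = (raw - norm_mean) / norm_sd
    percentile = NormalDist().cdf(z) * 100
    print(f"z = {z:.2f}, percentile = {percentile:.0f}")  # z = 1.30, percentile = 90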

15
Developing the Test Manual
  • The final step in test design
  • Test developers must explain the standardization
    information: describe the method used to select
    the norming group; report the number of
    individuals included in standardization, along
    with their geographic areas, communities,
    socioeconomic groups, and ethnic groups; and
    document the validity and reliability of the
    test.

16
Validity & Reliability
  • Validity > the degree to which the test serves
    the purpose for which it will be used.
  • Reliability > the extent to which a test is
    stable or consistent.
  • Content validity > the extent to which the
    content of a test, such as an achievement test,
    represents the objectives of the instructional
    program it is designed to measure.

17
Validity & Reliability
  • Criterion-related validity > to establish the
    validity of a test, scores are correlated with an
    external criterion, such as another established
    test of the same type (see the correlation sketch
    below).
  • Concurrent validity > the extent to which test
    scores on two forms of a test measure are
    correlated when they are given at the same time.
  • Construct validity > the extent to which a test
    measures a psychological trait or construct.
    Tests of personality, verbal ability, and
    critical thinking are examples of tests with
    construct validity.
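  • As a hedged sketch (scores invented, not from the
    text): a criterion-related validity coefficient
    is simply the Pearson correlation between scores
    on the new measure and an established criterion.

    # Illustrative criterion-related validity check.
    from statistics import correlation   # Pearson r (Python 3.10+)

    new_test  = [12, 15, 9, 20, 17, 11, 14]
    criterion = [31, 33, 27, 42, 36, 25, 35]
    print(f"validity coefficient r = {correlation(new_test, criterion):.2f}")
    # -> validity coefficient r = 0.94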

18
Validity & Reliability
  • Alternative-form reliability > the correlation
    between results on alternative forms of a test.
    Reliability is the extent to which the two forms
    are consistent in measuring the same attributes.
  • Split-half reliability > a measure of
    reliability whereby scores on equivalent sections
    of a single test are correlated for internal
    consistency.
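  • A minimal split-half sketch (data invented):
    correlate odd-item and even-item half scores,
    then apply the Spearman-Brown correction to
    estimate full-length reliability.

    # Illustrative split-half reliability with Spearman-Brown correction.
    from statistics import correlation

    scores = [                # rows are students, columns 0/1 item scores
        [1, 1, 0, 1, 1, 0, 1, 1],
        [0, 1, 0, 0, 1, 0, 1, 0],
        [1, 1, 1, 1, 1, 1, 1, 1],
        [0, 0, 0, 1, 0, 0, 0, 1],
        [1, 0, 1, 1, 0, 1, 1, 0],
    ]
    odd  = [sum(row[0::2]) for row in scores]   # odd-numbered items
    even = [sum(row[1::2]) for row in scores]   # even-numbered items
    r_half = correlation(odd, even)
    r_full = 2 * r_half / (1 + r_half)          # Spearman-Brown
    print(f"half-test r = {r_half:.2f}, corrected = {r_full:.2f}")
    # -> half-test r = 0.61, corrected = 0.76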

19
Validity & Reliability
  • Internal consistency > the degree of
    relationship among items on a test. A type of
    reliability that indicates whether items on the
    test are positively correlated and measure the
    same trait or characteristic (see the alpha
    sketch below).
  • Test-retest reliability > a type of reliability
    obtained by administering the same test a second
    time after a short interval and then correlating
    the two sets of scores.
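  • One common index of internal consistency is
    Cronbach's alpha; a hedged sketch with invented
    ratings follows.

    # Illustrative Cronbach's alpha: rows are students, columns are
    # item scores on a 1-5 scale. Data are invented.
    from statistics import pvariance

    scores = [
        [2, 3, 3, 2],
        [4, 4, 5, 4],
        [1, 2, 2, 1],
        [3, 3, 4, 3],
        [5, 4, 5, 5],
    ]
    items = list(zip(*scores))                  # columns = items
    k = len(items)
    item_var = sum(pvariance(col) for col in items)
    total_var = pvariance([sum(row) for row in scores])
    alpha = (k / (k - 1)) * (1 - item_var / total_var)
    print(f"alpha = {alpha:.2f}")               # -> alpha = 0.97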

20
Factors That Affect Validity & Reliability
  • Some common factors are:
  • Reading ability
  • Testing room conditions
  • Memory
  • Physical condition of test taker
  • Lack of adherence to time limits
  • Lack of consistency

21
Standard Error of Measurement
  • Standard error of measurement > an estimate of
    the possible magnitude of error present in test
    scores (see the worked example below).
  • True score > a hypothetical score on a test that
    is free of error. Because no standardized test
    is free of measurement error, a true score can
    never be obtained.
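  • A worked example under the usual formula
    SEM = SD x sqrt(1 - r), where SD is the score
    standard deviation and r the reliability
    coefficient (values invented):

    # Illustrative standard error of measurement.
    import math

    sd, reliability = 15.0, 0.91
    sem = sd * math.sqrt(1 - reliability)       # 15 * 0.3 = 4.5
    observed = 100
    print(f"SEM = {sem:.1f}; ~68% band for the true score: "
          f"{observed - sem:.1f} to {observed + sem:.1f}")
    # -> SEM = 4.5; ~68% band for the true score: 95.5 to 104.5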

22
Standard Error of Measurement
  • What are some factors that can impact a test's
    reliability?
  • Population sample > the larger the sample, the
    more reliable the test will generally be.
  • Length of test > longer tests are usually more
    reliable than shorter ones. More items measuring
    each objective can improve the estimate of the
    true score and enhance reliability.
  • Range of test scores from the norming group >
    the wider the spread of scores, the more reliably
    the test can distinguish among them. The spread
    of test scores can be related to the number of
    students taking the test.

23
Considerations in Choosing & Evaluating Tests
24
Considerations.
  • Brown (1983) lists factors that test users must
    consider:
  • Purpose of test
  • Characteristics to be measured
  • How test results are to be used
  • Qualifications of people who interpret scores and
    use results
  • Practical constraints

25
Considerations.
  • Think of the quality of a test/measure/assessment.
    A good manual should include the following
    information:
  • Purpose of the test
  • Test design
  • Establishment of validity and reliability
  • Test administration and scoring