Title: Critical Issues in Early Childhood Assessment and Accountability
1. Critical Issues in Early Childhood Assessment and Accountability
- Kathy Hebbeler
- ECO at SRI International
- Early Childhood Outcomes Meeting, Baltimore, Maryland, August 2007
2. Terminology
- Assessment as a single tool
- Assessment as a process
3. What Is Assessment?
- "Assessment is a generic term that refers to the process of gathering information for decision-making." (McLean, Wolery, and Bailey, 2004)
4. What Is Assessment?
- "Early childhood assessment is a flexible, collaborative decision-making process in which teams of parents and professionals repeatedly revise their judgments and reach consensus...." (Bagnato and Neisworth, 1991)
- Quoted in DEC Recommended Practices (2005)
5. Possible uses of assessment in EI/ECSE
- Eligibility determination: norm-referenced tests (delay)
- Individual program planning: curriculum-based tools, e.g., LAP, Carolina
- Ongoing individualized progress monitoring: curriculum-based tools?
- Accountability assessment/program improvement: e.g., Head Start reporting system
6. Differences in these uses
- Individualized vs. aggregate
- Who uses the data
- Who derives benefit
- Who suffers consequences of assessment done poorly
7. Issue 1: Variation in assessment use by practitioners
- We do not know how many EI/ECSE programs routinely use assessment tools for anything other than eligibility
- Presumed eligibility?
- In some programs, the only formal assessment is for eligibility
- Appears to be much state-to-state variation
8. Issue 2: Does each purpose require a different tool?
- Can an assessment (process) being conducted for eligibility determination also provide information for program planning? For accountability?
- Can the same assessment (process) be used to plan a program and to monitor progress?
9. Accountability in particular
- Can assessments already being used by programs for other purposes (whatever they are) be used for accountability purposes?
- Does this apply to all tools or only some categories of tools?
- Screening tools for accountability?
10. Issue 3: Changing perspective on assessment in the general early childhood community
- Major changes in the last 15 years in how assessment of young children is viewed
- Old position: Do not test little kids
- New position: Ongoing assessment is part of a high-quality early childhood program
11. What changed
- New and different tools became available for general EC
- Curriculum-based assessments were developed, e.g., Creative Curriculum, Work Sampling, etc.
- Tools for ages 3-5 came first; tools for 0-3 are coming now
- Interesting sidebar: curriculum-based assessments for programs serving children 0-5 with disabilities have been around for years
12. What changed
- The purpose of assessment was redefined
- Not about sorting, labeling, or using results to deny access
- Now about getting a rich picture of what children can and can't do and using that information to help them acquire new skills (progress monitoring)
13. What changed
- Assessment had always been seen as a process with multiple purposes
- Distinctions have been made between good and bad uses of assessment with young children
- Good uses are now promoted
- For more information: NAEYC web site (position statement on Curriculum, Assessment, and Evaluation)
14. Position Statement of the National Association for the Education of Young Children and the National Association of Early Childhood Specialists in State Departments of Education (2003)
- Policymakers, early childhood professionals, and others have a shared responsibility to "make ethical, appropriate, valid, and reliable assessment a central part of all early childhood programs."
15. Interesting Irony
- Even though the disability community had developed many curriculum-based assessment tools, currently many? some? programs do not practice ongoing assessment
- The push for ongoing assessment to monitor how a child is doing and to plan for instruction/intervention is coming from the general education community
16. Issue 4: Limitations of existing assessment tools
- "Assessment of young children poses greater challenges than people generally realize.... [A]ssessment results, in particular standardized tests that reflect a given point in time, can easily misrepresent children's learning.... There is widespread dissatisfaction with traditional norm-referenced standardized tests, which are based on early 20th century psychological theory." (National Research Council, 2001)
17. Problem: Nature of the young child
- Not well suited to a standardized testing situation
- Performance varies from day to day, place to place, and person to person
- Don't perform well for strangers or on demand
- Growth is sporadic and uneven
18. Problem: Response capabilities of children with disabilities
- Same issue as with school-age children: assessment assumes a child who can see, hear, and understand spoken language, point, etc.
- Few assessments include accommodations, and children with disabilities were not included in the norming sample
- Very little data on the validity of accommodations with young children
19. Problem: Impact of disability/delay on development
- Typically developing children tend to develop in multiple areas simultaneously
- Language, cognition, and motor skills march forward more or less together
- Even though development has been divided into domains for assessment and research, much of development is intertwined
- These interconnections present challenges for obtaining a pure domain score
20. Problem: Impact of disability/delay on development
- More difficult to accurately portray the development of children developing atypically with available assessments, especially children with language delays
- Do they understand the directions?
- Is the assessment tapping cognition or language?
- Are other behavioral/attentional factors influencing performance?
21. Problem: Psychometric properties of existing instruments
- Some of the most common instruments are being used with limited or no reliability and validity data
- None have reported validity or reliability data for use in outcomes measurement and accountability
22. Response: New forms of assessment
- Growing recognition that the only way to get a valid picture of what a child can do/does is to look at performance across a variety of settings and people, including what the child does spontaneously with familiar adults and in familiar situations
- Can't base conclusions about a child's capabilities on elicited responses alone
- Authentic assessment
23. Position Statement: NAEYC and NAECS/SDE
- "To assess young children's strengths, progress, and needs, use assessment methods that are developmentally appropriate, culturally and linguistically responsive, tied to children's daily activities, supported by professional development, inclusive of families, and connected to specific, beneficial purposes"
24. Response: Use multiple sources of information (best practice)
- "A single test, person, or occasion is not a sufficient source of information. This means that we must gather information from several sources, instruments, settings, and occasions to produce the most valid description of the child's status or progress." (DEC Recommended Practices)
25. Issue 5: Strategies for synthesizing multiple sources of information
- And just how is that information supposed to be put together?
- Especially for aggregated data (accountability/program improvement)
26. Issue 6: Validity and Reliability
- Validity and reliability are not characteristics of an assessment per se
- Validity is context dependent: it depends on the use of the results
- Individual vs. group decisions
- "Validity, the degree that an assessment measures what it purports to measure, relates to the use of the test, rather than the test itself." (Score Reliability, p. 113)
27. Issue 6: Validity and Reliability
- Reliability is a characteristic of a set of scores, not of a test
- "Reliability refers to the degree of consistency of the information obtained from an information gathering process.... [R]eliability of the scores provided by an instrument or procedure may fluctuate depending on how, when, and to whom the instrument or procedure is administered." (Joint Committee on Standards for Educational Evaluation, 1994; quoted in Score Reliability, p. 95)
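As a concrete illustration of the point that reliability belongs to a set of scores rather than to the test itself (this example is not from the original slides), consider Cronbach's alpha, one widely used internal-consistency coefficient; every quantity in it is estimated from one particular administration's scores:

    % Cronbach's alpha for an instrument with k items,
    % computed from one specific sample's scores:
    %   s_i^2 = variance of that sample's scores on item i
    %   s_X^2 = variance of that sample's total scores
    \alpha = \frac{k}{k-1} \left( 1 - \frac{\sum_{i=1}^{k} s_i^2}{s_X^2} \right)

Because the variances come from a specific set of children, assessors, and conditions, the same instrument can yield a different alpha each time it is used, which is exactly why reliability has to be re-examined for each use.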
28. Implications for State Outcome Data Collections
- Cannot assume that your state's scores are valid and reliable because your state is using a tool/process that has demonstrated validity or reliability
- Validity and reliability need to be established for each use/context
29. Validity in an accountability system
- Validity question: Do the assessment results lead to the right decisions?
- Framework from the Council of Chief State School Officers
- How does one assess validity in an accountability system?
- How should a state determine the validity of its child outcome data? The data being submitted to OSEP?
30. Issue 7: Validity vs. credibility dilemma for accountability
- Strangers can't elicit valid data on young children's performance capabilities in a testing situation
- BUT
- Can data produced by those who know the child, and whose programs are being evaluated, be credible in an accountability system?
31. States have spoken
- For child outcomes, states are collecting data through those familiar with the child
- Implications:
- Data are subject to credibility challenges
- Need to put safeguards in place so you can defend the credibility of your data
32. Issue 8: How to use current assessment tools to look at functional outcomes
- OSEP outcomes are functional and cut across domains
- Existing assessments provide scores for domains, not for the 3 outcomes
- Existing assessments vary in the extent to which they assess functioning vs. isolated skills
33. Outcomes Are Functional
- Functional outcomes:
- Refer to things that are meaningful to the child in the context of everyday living
- Refer to an integrated series of behaviors or skills that allow the child to achieve important everyday goals
34. Question to ask
- Is the information provided by the assessment really functional?
35. Issue 9: Variation in provider knowledge of assessment
- (Based on ECO work with states)
- Some practitioners are skilled in administering and interpreting multiple assessment tools, some in one, and some rarely use any
- Many children are served in programs for typically developing children, where knowledge and use of assessment is limited
- How will practitioners be trained and supervised?
36. Issue 10: Variation in provider knowledge of the child across settings
- Some practitioners see children only in clinic settings or for a very short period of time
- How can practitioners obtain more comprehensive information about children's behavior and daily routines?
37. Issue 11: Role of families in the assessment process
- Families provide a unique perspective on the child's functioning
- Not all assessment tools have good procedures for incorporating the family's perspective
- Need good tools/procedures for learning about the child from the family
38. Role of families in the assessment process
- Programs vary in how much and how they share assessment data with families, especially with regard to communicating developmental ages or the extent of a child's delay
- Some providers are soft-pedaling the assessment results
- Providers may need training in:
- Eliciting information about the child's day-to-day functioning, and
- Sharing results with families
39. Issue 12: Multiple assessment systems
- Children 0-5 participating in IDEA programs also will be participating in the required OSEP reporting (approx. 1 million children)
- Some of these children also may be participating in other assessment systems
40. Participation in Multiple Accountability Systems??
- [Diagram: overlapping accountability systems a child may participate in: Child Care, Head Start, OSEP Reporting, State Preschool]
41. Bottom Line
- What needs to happen to make sure assessments make a meaningful contribution to improved outcomes and program improvement?
- What can be done to ensure assessment data used for outcomes measurement are:
- Meaningful
- Valid
- Reliable
- Credible?
42. Responsibilities as State Leaders
- Ensure that:
- Practitioners understand recommended practices with regard to assessment of young children
- Practitioners have the skills necessary to engage in recommended practices
- Practitioners actively and appropriately involve families in the assessment process
43. More Responsibilities
- Ensure that:
- Practitioners have the skills to sensitively and accurately explain assessment results to families
- Practitioners use ongoing assessment to monitor children's progress AND to make adjustments to the child's program based on the results
44. And More
- Put mechanisms in place to promote quality assessment for all purposes, including accountability
- Supervisors, coaches
- Data checks and verification
- Create a culture promoting data use, assessment data in particular
- Involve the entire state in using data for program improvement
45. And More
- Collaborate with:
- Higher education, to ensure new practitioners are entering the field with the necessary knowledge and skills related to assessment
- Other programs serving the same children, to learn their message and possible requirements related to assessment (Goal: families hear one message)
46. Hats off to you for leading the charge!
- "I Love Good Outcome Data"