Evidence in Teacher Preparation - PowerPoint PPT Presentation

Transcript and Presenter's Notes



1
Evidence in Teacher Preparation: Establishing a Framework for Accountability
July 21, 2006
Mona Wineburg, Director of Teacher Education
American Association of State Colleges and Universities (AASCU)
2
Description of Project
Purpose: To examine the current state of knowledge and practice about how universities provide credible and persuasive evidence of the effectiveness of their programs to schools, parents, policy makers, and the public.
Focus: Evidence-based teacher education, the connection between teacher preparation programs and subsequent P-12 pupil learning.
Received funding from the Carnegie Corporation of New York.
3
Description of Project
Structure: Answer three questions with respect to evidence of effectiveness in teacher education programs:
(1) What's Happening?
(2) What's Promising?
(3) What's Believable?
4
(1) What's Happening? AASCU campuses were surveyed in January 2005 to find out what campuses are doing to demonstrate the effectiveness of their teacher education programs.
  • Received responses from 240 institutions (65% of all AASCU institutions that prepare teachers)
  • FTE enrollments approaching 2,000,000 students
  • Total teacher education degrees conferred: 65,000

5
(2) What's Promising? We convened a national meeting at the Carnegie Foundation for the Advancement of Teaching in Palo Alto, CA, in late April 2005.
a) Discussed critical issues in the collection and interpretation of evidence
b) Identified some promising strategies
c) Reported on efforts underway throughout the states to develop comprehensive state evidence systems for the effectiveness of teacher education programs
6
(3) What's Believable? We met with policy makers, state officials, legislators, and others in Washington, DC, and at the Education Commission of the States (ECS) Annual Policy Forum in Denver, CO:
  • to discuss survey results
  • to seek their perspectives on the issue of
    evidence for teacher education effectiveness

7
Analysis of Survey Results
  • Institutions collect data to assess teacher candidates
  • Institutions compile voluminous data about teacher preparation programs
  • Institutions have difficulty accessing P-12 pupil achievement data
  • Institutions are collecting information that is of dubious utility

8
Analysis of Survey Results
  • Data collected is all over the map
  • Need to focus more on validity of data and reliability of evidence
  • Energy and resources are going into the data collection process
  • High level of frustration about data and evidence processes

9
Overview of Survey
  • Focused on two areas:
  • Results and Outcomes
  • Issues in Measurement

10
  • Results and Outcomes
  • Do you assess the content knowledge of your program graduates?
  • When?
  • Types of evidence
  • Tests? Other measures?
  • How do you use the results?
  • Do you assess the classroom performance of your program graduates?
  • What types of evidence?
  • Multiple sources?
  • Solicit outside judgments? From whom?

11
Results and Outcomes
  • Measures of P-12 learning to assess program
    graduates
  • Types of evidence
  • Multiple sources?
  • What measures? Who develops? Who administers?
  • Intended for district, school, or individual
    assessment?
  • Retention and participation of graduates
  • Do you track both?
  • How do you count participation/retention?
  • What methods?
  • Sources of evidence?

12
  • Issues in Measurement
  • Mandates and expectations
  • Do you have specific legislative or accreditation
    requirements, mandates or expectations that you
    will measure the performance of program
    graduates?
  • What is to be measured?
  • For whom?

13
Issues in Measurement
  • Data collection and analysis
  • Issues?
  • P-12 partner schools involved?
  • Use state or district P-12 standards?
  • Unit of Measurement
  • Individual graduates? Aggregate measures?
  • Access to data
  • Use of data

14
Primary Methods for Gathering Evidence for
Content, Classroom Performance, and P-12 Student
Learning
Observation systems (more or less supported by rubrics and standards)
Surveys
Work samples/portfolios
Praxis III/State tests
15
  • I. Content Knowledge Assessment

Virtually all institutions report they assess
content knowledge. Those that do not assess
content knowledge now are working on it. Is the
issue of content knowledge therefore settled?
16
When content knowledge is assessed:
Prior to entrance to program
During content courses
During general education program
Prior to student teaching
At program exit
At end of first year (receive test results from state)
Survey of program completers

Tests: Praxis I and II; state-developed tests
17
How content knowledge test results are used:
Program entrance
Benchmark/eligible for next level in program
Eligible to student teach
Requirement for graduation
Recommendation for certification
Counsel students
Program review and improvement
18
AERA Panel on Research and Teacher Education (Cochran-Smith & Zeichner, Eds., 2005)
Recommendations:
  • Content and concurrent validity of tests need to be assessed
  • Research is needed regarding predictive and consequential validity

19
  • II. Classroom Performance Assessment
  • Measures
  • Teacher Work Samples
  • Portfolio reviews
  • Surveys
  • Anecdotal
  • Self-assessment

20
  • Observations by experts (used by all institutions)
  • Videos
  • Micro-teaching
  • Student teaching/practicum evaluation
  • Action research projects
  • State developed evaluation system
  • Framework for Teaching
  • National/State/System-wide testing
  • Praxis III/Pathwise
  • Induction programs

21
Types of evidence:
Classroom observation/Observation protocols
Teacher Work Sample
Field experience assignments
Portfolios/E-portfolios
Survey/Narratives/Testimonials/Anecdotal
Interview data
Checklist ratings
Pre-post testing
Evaluations from residency year
Praxis III data
Pathwise data
Videotape analysis
State aggregated data
Principals' aggregated data
Self-report
22
How institutions addressed the issue of how they ensured validity and reliability:
Many did not address this question
Some did not test for validity or reliability
Those that did reported using a variety of methods
Levels of sophistication varied
23
Solicit judgments about the adequacy of teacher education? From whom?
State licensing board/education agency
National/Regional accrediting agency
Program graduates
P-12 schools
Cooperating teachers
School administrators
Mentor teachers/induction programs
District superintendents
Local advisory boards
Title II results/Praxis pass rates
College of Arts and Sciences
NSSE
24
Types of data:
Survey/Narrative/Anecdotal/Interview (50%)
Licensure/hiring reports
Portfolios/K-12 student work samples
State assessment of teaching scores
Accreditation reports
Reports from districts/states

Almost all data is aggregated. Data can be traced to programs.
25
Validity of judgment data:
25% did not respond
Face validity, content validity, and triangulation of data were the methods used most often
State-developed clinical performance assessments relied on states to validate the instruments
26
III. Measures of P-12 Learning to Assess Graduates
(50% either did not respond or did not use)
Measures:
Performance assessment/portfolio
Teacher Work Sample/pre-post test measures (20%)
Observations with rubrics
Pathwise
Frameworks for Teaching
State-designed performance assessment
Teacher evaluations
State tests (10%)
Standard skills K-12 end-of-grade
CSU study (one time only)
Norm-referenced tests
Criterion-referenced tests
SPA reports on impact on P-12 learning
27
Types of evidence:
Disaggregated data by district, school, and program area
Teacher Work Sample/Pre-post testing
Aggregated learning and proficiency data
Classroom observation/Observation protocols
Survey/Narratives/Testimonials/Anecdotal
Field experience assignments
Portfolios/E-portfolios
Interview data
School test scores
State list of Highly Qualified teachers
28
Validity and Reliability of measures of P-12 student learning
  • 50% did not respond
  • 25% did not assess for validity and reliability of their in-house developed instruments
  • State measures determined by state
  • Others used a range of methods
  • inter-rater reliability
  • correlation analysis

29
Some states have systems in place
Quality Counts 2005 (Education Week): 14 states hold teacher education programs accountable for graduates' performance in the classroom setting (Louisiana, South Carolina, Connecticut, Kentucky, Indiana, Oklahoma, North Carolina, Ohio, Missouri, Alabama, Florida, Nevada, Mississippi, Washington)
Mandated induction programs in many states
30
Issues in Measurement
Who was asking for evidence? What kinds of
evidence were they requesting? What did it look
like from the institutional perspective?
31
Specific concerns about data collection and
analysis
1) The amount of time to collect and analyze is considerable, especially for a small program.
2) Different data requirements and differing definitions to be used in collecting and summarizing data for all of the formal reporting requirements of state, federal, and national accreditation make analysis extremely time consuming.
3) It is a continuing challenge to ensure the reliability and validity of measures of classroom performance by students and graduates.

32
Specific concerns about data collection and
analysis
Major issues include availability of data and
reluctance to release data, even aggregated,
because of union guidelines and confidentiality
issues.
Time, technology, and personnel are needed to input, aggregate, analyze, and interpret the volume of data generated by a large program, and to communicate the information in a timely manner to all who need it to make better decisions.
33
Specific concerns about data collection and
analysis
Database compatibility within and across organizations
Program components assessed at multiple times, by different stakeholders, so results are not comparable
Requests, format, and questions worded differently for similar information
Participant return rate on surveys
Lack of state database support and access
34
Consensus of Palo Alto Meeting Discussions
Need to focus on: What should be assessed? At what levels? For what purposes?
35
  • What would a minimum collection of evidence be?
  • What is the minimum number of issues that need to be assessed?
  • Content knowledge
  • Pedagogy
  • Student Achievement
  • Retention of teachers
  • Ethnicity of teachers
  • Supply of teachers

36
Vision of teacher competency is too narrow
  • It doesn't measure democratic skill development
  • It doesn't measure relationships with others
  • It doesn't measure academic subjects not tested with standardized tests
  • It doesn't measure self-esteem or confidence as a learner
  • Etc.

Cost issues in demonstrating effectiveness
  • Time
  • Money
  • Etc.

37
Good state policy environments are necessary for institutional success. Notable examples of states/systems that are moving forward in this area:
  • Louisiana
  • California State University System
  • Texas A&M System
  • Ohio
  • City University of New York System
  • Georgia
  • South Carolina
  • University of Wisconsin System

38
Christa McAuliffe Excellence in Teacher Education Award recipients
  • 2002
  • Bowling Green State University
  • East Carolina University
  • Indiana State University
  • University of Maryland, Baltimore County
  • 2003
  • Central Michigan University
  • San Diego State University
  • University of Nebraska at Omaha
  • University of Toledo
  • 2004
  • Longwood University
  • University of Central Florida
  • Valdosta State University
  • 2005
  • Ball State University
  • Old Dominion University

39
Goals
National Framework for Credible Evidence:
Developed collaboratively by all stakeholders
Operationalized state-by-state
Broadly agreed upon
Cost-effective
Reliable and valid
Measuring the effectiveness of individual programs
40
  • Resulting in:
  • Increased student achievement and more effective schools
  • Increased numbers of highly effective teachers
  • Increased professionalization of teacher education programs
  • Increased public confidence

Wineburg, M. S. (2006). Evidence in teacher preparation: Establishing a framework for accountability. Journal of Teacher Education, 57(1), 51-64.
41
(No Transcript)