Standard 9 - Assessment of Candidate Competence

Transcript and Presenter's Notes
1
Standard 9 - Assessment of Candidate Competence
  • Candidates preparing to serve as professional
    school personnel know and demonstrate the
    professional knowledge and skills necessary to
    educate and support effectively all students in
    meeting the state-adopted academic standards.
    Assessments indicate that candidates meet the
    Commission-adopted competency requirements, as
    specified in the program standards.

2
Types of Data Used in Biennial Reports, 2008-09
3
Background
  • Staff reviewed 155 separate program reports
    representing more than 35 institutions
  • Biennial reports submitted in the fall of
    2008-09 represented every kind of credential
    program approved by the CTC
  • The most frequently reported credential programs
    were the single subject and multiple subject
    programs
  • This report describes the types of data reported
    for multiple subject/single subject (MS/SS)
    programs and for education administration
    programs.

4
Reporting the Data
  • Programs have the option of deciding how to
    organize, report, and analyze data.
  • Some programs reported MS data separately from
    SS, whereas others reported data from the two
    programs together.
  • Similarly, some programs reported preliminary ed
    admin program data separately, whereas others
    reported the preliminary and professional ed
    admin data together.

5
Understanding the Count
  • Staff made notes in the feedback form for every
    program in every institution that submitted
    reports. Some of those notes were cryptic, so the
    tables represent our best attempt at categorizing
    the data in meaningful ways.
  • Data were organized to identify when, in the
    program, the assessment was performed (e.g.,
    pre-student teaching, at the end of student
    teaching).
  • Every type of data from every report was counted
    as a single example.

6
Understanding the Count
  • Multiple sets of one type of data were counted as
    one example (e.g., grades from four courses in
    the same program).
  • Data from one course, pre-student teaching
    observations, and student-teaching observations
    from the same program were counted as three
    examples of data.
  • Grades from one course reported in one program at
    an institution and grades from one course
    reported in a second program at that institution
    were counted as two examples (see the sketch
    below).
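  • A minimal sketch of these counting rules, in
    Python, assuming hypothetical record fields (the
    actual feedback-form notes are not reproduced
    here): one example is one distinct combination of
    institution, program, and data type.

    # Counting-rule sketch; field names and values are hypothetical.
    records = [
        # Four sets of grades in one program count once.
        {"institution": "A", "program": "MS", "data_type": "grades"},
        {"institution": "A", "program": "MS", "data_type": "grades"},
        {"institution": "A", "program": "MS", "data_type": "grades"},
        {"institution": "A", "program": "MS", "data_type": "grades"},
        # Three data types in one program count as three examples.
        {"institution": "A", "program": "SS", "data_type": "coursework"},
        {"institution": "A", "program": "SS", "data_type": "pre-ST observation"},
        {"institution": "A", "program": "SS", "data_type": "ST observation"},
        # Grades in two programs at one institution count twice.
        {"institution": "B", "program": "MS", "data_type": "grades"},
        {"institution": "B", "program": "SS", "data_type": "grades"},
    ]

    # One example = one distinct (institution, program, data type) tuple.
    examples = {(r["institution"], r["program"], r["data_type"]) for r in records}
    print(len(examples))  # 6 examples: 1 + 3 + 2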

7
How to Use This Information
  • As we discuss the tables:
  • Remember that types of data transcend program
    type. A type of data used in an MS program (e.g.,
    pre-student teaching observation) can model a
    type of data appropriate for a school nursing
    credential program.
  • Identify whether you currently use instruments
    similar to those on the tables.
  • Notice who does the assessment: faculty member,
    candidate, clinic supervisor, etc.

8
How to Use This Information
  • If one of the instruments suggests something you
    might want to do, make note of it to share with
    your colleagues.
  • Similarly, if we identify problems with a
    particular type of data that's similar to
    something you plan to report, please say
    something about that during the webcast. Be
    assured that you won't be the only person with
    that kind of question.

9
MS/SS Data
  • Eight major categories of data
  • RICA and Candidate Knowledge represent tests or
    assignments designed to measure candidate content
    and pedagogical knowledge.
  • Grades were used as an indicator of candidate
    quality.
  • In virtually every example of these types of
    data, faculty were doing the assessments of
    candidate competence.
  • What problems do you see with using grades or
    candidates' GPA to measure candidate competence
    or program quality? What problem could occur if
    an institution used only assessments from
    faculty?

10
MS/SS Data
  • Candidate dispositions data were reported by only
    six programs.
  • What does this type of data tell you about a
    program's effectiveness?
  • Pre-student-teaching observations
  • These data were distinct from the other
    pre-student-teaching data because they used a
    standards-based rubric and were completed by
    faculty or a district supervisor.
  • What makes these data more useful than previous
    types of data?

11
MS/SS Data
  • The second most frequent type of data was
    student-teaching evaluations.
  • The majority of these evaluations were
    standards-based (TPE, CSTP, institution-developed).
  • The evaluations were completed by faculty or a
    district supervisor.
  • In some cases, these assessments were used to
    provide both formative and summative feedback to
    the candidate.
  • What qualities of these data make them
    particularly informative? How can they be used?

12
MS/SS Data
  • Programs were required to report TPA data and
    nearly all did so.
  • Some programs reported the results of multiple
    test-taking attempts, which could be analyzed to
    demonstrate remediation efforts.
  • Some programs utilized TPA standards (the TPEs)
    for assessing coursework and student teaching.
    What did this enable them to do? They were able
    to monitor candidate progress throughout the
    entire educator preparation program. They were
    able to measure program impact on candidate
    development.

13
MS/SS Data
  • Program evaluation surveys were the most common
    type of instrument used.
  • Of these, the most common were the CSU exit
    survey and one-year-out survey. Some non-CSU
    institutions have adopted these surveys or
    something similar.
  • The majority of individuals who provided these
    data were candidates (course evaluations) or
    program completers.
  • District-employed supervisors and employers
    provided some of the information.

14
MS/SS Data Summary
  • Overall, candidates and program completers
    (former candidates) provided the majority of the
    information. Faculty provided the next greatest
    amount of information by evaluating coursework
    and completing half of the student teaching
    evaluations. District-employed supervisors had
    two means for providing feedback: student
    teaching evaluations and program evaluations.

15
MS/SS Data Summary
  • The most frequently used measures for MS/SS
    programs were evaluation surveys.
  • The second most frequently used were student
    teaching evaluations.
  • Assessments of candidate knowledge were the third
    most frequently used sources of data.
  • Questions or comments?

16
Education Admin Data
  • The most common source of data was coursework and
    the most frequent rater was faculty.
  • Fieldwork was also a source of data but, unlike
    for teacher prep programs, there was little
    uniformity regarding the standards used. CPSELS
    was used, as was MINDSCAPES.

17
Education Admin Data
  • The second most frequent type of data was
    evaluation surveys. The surveys provided
    feedback on courses, programs, and
    practicum/fieldwork experiences. Unlike for
    teacher preparation programs, there are no
    institution-wide completer or employer survey
    efforts.

18
Education Admin Data Summary
  • Ed Admin Programs have less institutional
    guidance for assessing candidate competencies or
    program effectiveness.
  • Types of data are more likely to reflect
    program-specific emphases and the needs of
    district partners or of individual candidates.
    In addition, the data are muddled because some
    programs integrated preliminary and professional
    ed admin program data.
  • Questions or comments?

19
Using Individual-Level Data for Program Evaluation
  • Every type of data in the tables is
    individual-level data. Even evaluation survey
    data reflect the perspective of one individual.
  • We have discussed two main types of data:
    candidate competence data and evaluation data.
  • Evaluation data, generally collected through
    surveys, are intended to be reported in the
    aggregate.
  • Also, evaluations generally reflect on something
    that's in the past: they look back on an
    experience or on the competencies of an
    individual trained by a program.
  • How might a program use candidate competency data
    for program evaluation?

20
Using Individual-Level Data for Program Evaluation
  • If candidate assessment data measures
    competencies described in standards and can be
    summarized (quantitatively or qualitatively), it
    can be aggregated to the program level (see the
    sketch below).
  • However, how do you know whether the competency
    level of the candidates is due to the program?
  • What if the candidates come into the program with
    those skills? What if your admission
    requirements screen for those skills?
  • One way to significantly reduce uncertainty about
    what results of candidate assessments mean is to
    measure candidate competencies multiple times.
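  • A minimal sketch of aggregating candidate-level
    scores to the program level, in Python, with
    hypothetical standard names and rubric scores
    (the scale and values are illustrative only):

    from statistics import mean

    # Hypothetical standards-based rubric scores (1-4 scale) for one
    # program; each dict holds one candidate's scores per standard.
    candidate_scores = [
        {"TPE 1": 3, "TPE 2": 2},
        {"TPE 1": 4, "TPE 2": 3},
        {"TPE 1": 2, "TPE 2": 3},
    ]

    # Aggregate individual-level data to the program level: a mean
    # score per standard across all candidates.
    program_summary = {
        standard: mean(c[standard] for c in candidate_scores)
        for standard in candidate_scores[0]
    }
    print(program_summary)  # TPE 1 mean: 3.0, TPE 2 mean: ~2.67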

21
Using Individual-Level Data for Program Evaluation
  • How can you do that?
  • For example, assess candidates with a
    standards-based measure early in the program, to
    provide a baseline.
  • Prior to student teaching, measure those
    attributes again, using the same instrument or
    another instrument but against the same
    standards.
  • At the end of student teaching, assess the
    candidates again.
  • Change between the scores gives an indication of
    program impact. This may not be true for any
    individual candidate, but across a group of
    candidates, it can be indicative of program
    quality (see the sketch below).
  • In a minute, Cheryl will provide examples.
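  • A minimal sketch of that change-over-time
    calculation, in Python, with hypothetical scores
    (the instrument, scale, and values are
    illustrative only):

    from statistics import mean

    # Hypothetical scores on one standards-based instrument, given at
    # three points: program start (baseline), pre-student teaching,
    # and the end of student teaching.
    cohort = {
        "candidate 1": {"baseline": 2, "pre_st": 3, "end_st": 4},
        "candidate 2": {"baseline": 3, "pre_st": 3, "end_st": 3},
        "candidate 3": {"baseline": 1, "pre_st": 2, "end_st": 3},
    }

    # Individual gains vary, but the mean gain across the group gives
    # an indication of program impact.
    gains = [c["end_st"] - c["baseline"] for c in cohort.values()]
    print(mean(gains))  # (2 + 0 + 2) / 3, about 1.33 points of growth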

22
Using Individual-Level Data for Program Evaluation
  • Gathering data from multiple stakeholders also
    ensures that your data realistically reflects
    your program.
  • As you plan your system, build in opportunities
    for multiple, informed stakeholders to provide
    feedback on candidate competencies and on program
    quality (evaluation).
  • If all data point in the same direction, you can
    make program modifications with confidence. If
    data point in different directions, you may need
    to reassess your instruments, or wait another
    year before modifying your assessment and
    evaluation system.
  • Comments or questions?