How am I doing… Virtually?

Transcript and Presenter's Notes
1
How am I doing… Virtually?
  • A TLT Group webcast series
  • Co-sponsored by the Professional and Organizational Development
    (POD) Network in Higher Education

2
Part I: On-line Evaluation
Basic Good Practice
  • Michael Theall
  • Youngstown State University

3
A BASIC PREMISE
  • All faculty evaluation is local.
  • In other words, though years of research and practice have shown
    how to develop and sustain effective systems for faculty
    evaluation, it is the local implementation, support, maintenance,
    assessment, and ESPECIALLY the local context, that make or break
    evaluation systems.

4
ANOTHER PREMISE
  • EVALUATION WITHOUT DEVELOPMENT IS PUNITIVE!
  • The perception that evaluation is a process used primarily to
    sort those who are favored from those who are not will
    effectively kill any chance for a successful system.

5
AND A COROLLARY
  • DEVELOPMENT WITHOUT EVALUATION IS GUESSWORK!
  • Resources for teaching and learning are always
    useful, but they achieve maximum benefit only
    when assessment and evaluation provide
    information about what works and what needs
    improvement.

6
A FINAL PREMISE
  • EVALUATION AND DEVELOPMENT SYSTEMS WILL NOT BE COMPLETE UNTIL
    THEY ARE BASED ON AN UNDERSTANDING OF THE WORK THAT FACULTY ARE
    EXPECTED TO DO, AND THE SKILLS THAT ARE REQUIRED TO DO THAT WORK!

7
Summary matrix
8
Purposes of Evaluation and Kinds of Data
  • FORMATIVE (for information, revision, improvement)
  • SUMMATIVE (for decisions about merit or worth)
  • INSTRUMENTAL (process and activities)
  • CONSEQUENTIAL (outcomes and effects)

9
Concerns in Student Ratings of Teaching
  • RELIABILITY (consistent, useful)
  • VALIDITY (logical, robust, meaningful)
  • GENERALIZABILITY (applicable, accurately comparable)
  • SKULDUGGERY (no systemic or systematic manipulation)

10
MYTHS ABOUT STUDENT RATINGS
  • 1. Students are not qualified to make judgments
    about teaching competence.
  • 2. Student ratings are popularity contests.
  • 3. Students are not able to make accurate
    judgments until they have been away from the
    course for several years.
  • 4. Student ratings are unreliable.

11
RESEARCH EVIDENCE
  • 1. Students are qualified to rate certain dimensions of
    teaching.
  • 2. Students do discriminate among dimensions of teaching and do
    not judge solely on the personal popularity of instructors.
  • 3. Ratings by current students are highly correlated with those
    of former students (alumni).
  • 4. Student ratings are reliable in terms of both agreement
    (similarity among students rating a course and the instructor)
    and stability (the extent to which the same student rates the
    course and the instructor similarly at two different times); a
    minimal computational sketch of both follows this list.
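
To make point 4 concrete, here is a minimal Python sketch using
hypothetical data (the ratings matrices and the helper name
cronbach_alpha are illustrative assumptions, not from the
presentation). Agreement is estimated with Cronbach's alpha across
students rating the same course; stability is the correlation between
the same students' ratings at two different times.

    import numpy as np

    def cronbach_alpha(ratings):
        # Agreement: internal consistency of a ratings matrix
        # (rows = students, columns = rating-form items).
        ratings = np.asarray(ratings, dtype=float)
        k = ratings.shape[1]
        item_var = ratings.var(axis=0, ddof=1).sum()
        total_var = ratings.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_var / total_var)

    # Hypothetical data: six students answer four items, twice.
    time1 = np.array([[4, 5, 4, 5], [3, 4, 3, 4], [5, 5, 4, 5],
                      [2, 3, 3, 2], [4, 4, 5, 4], [3, 3, 4, 3]])
    time2 = np.array([[4, 4, 4, 5], [3, 3, 3, 4], [5, 5, 5, 5],
                      [2, 2, 3, 3], [4, 5, 4, 4], [3, 4, 3, 3]])

    print("agreement (alpha):", round(cronbach_alpha(time1), 2))
    # Stability: each student's mean rating at two points in time.
    r = np.corrcoef(time1.mean(axis=1), time2.mean(axis=1))[0, 1]
    print("stability (r):", round(r, 2))

High values of both statistics correspond to the reliability findings
summarized in point 4.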

12
MYTHS ABOUT STUDENT RATINGS
  • 5. Student ratings are invalid.
  • 6. Students rate instructors on the basis of the
    grades they receive.
  • 7. Extraneous variables and conditions affect
    student ratings.

13
RESEARCH EVIDENCE
  • 5. Student ratings are valid, as measured against a number of
    criteria, particularly students' learning (see the multisection
    sketch after this list).
  • 6. Student ratings are not unduly influenced by the grades
    students receive or expect to receive.
  • 7. Student ratings are not unduly affected by such factors as
    student characteristics, course characteristics, and teacher
    characteristics.
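
The learning criterion in point 5 is usually tested with the
multisection design of Cohen (1981), cited in the references: many
sections of one course share a common exam, and each section's mean
rating is correlated with its mean achievement. Below is a minimal
sketch of that computation; the section counts and scores are
invented for illustration.

    import numpy as np

    # Hypothetical multisection data: eight sections of one course,
    # each with a mean instructor rating (1-5 scale) and a mean score
    # on a common final exam.
    mean_rating = np.array([3.2, 4.1, 3.8, 4.5, 2.9, 3.6, 4.3, 3.4])
    mean_exam = np.array([71.0, 82.0, 78.0, 88.0, 65.0, 74.0, 85.0, 70.0])

    # Section-level validity coefficient: do sections that rate the
    # teaching higher also show higher achievement?
    r = np.corrcoef(mean_rating, mean_exam)[0, 1]
    print(f"validity coefficient r = {r:.2f}")

A consistently positive section-level correlation is what the
meta-analytic studies in the reference list report.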

14
QUALITY OF PERSONNEL DECISIONS DEPENDS ON
  • Fair personnel practices
    - clear policies and practices
    - consensus on expectations and criteria
    - protections for all
  • Interpretive skills of decision-makers
    - knowledge of evaluation methods
    - quantitative skills
    - knowledge of post-secondary teaching practice and technique
  • Quality of the information decision-makers receive
    - validity: measures relevant aspects of teaching skill or
      instructional quality
    - reliability: precision
    - comprehensibility: message design and contents are appropriate
      for the skills of users, including any needed decision support

15
Good practice is…
  • Comprehensive: takes into account the full range of
    responsibilities and activities
  • Systematic: is purposive, organized, and standardized
  • Public: has known criteria and procedures, and is documented
  • Flexible: can accommodate change and take advantage of
    individuals' talents and capabilities as well as serving the
    needs of the academic unit
  • Evaluated: its processes and products are monitored for
    efficiency and effectiveness
  • Local: sensitive and responsive to local history, needs,
    realities, and especially, local stakeholders

16
Guidelines 1 (do your homework)
  • Establish the purpose of the evaluation and the
    uses and users of ratings beforehand.
  • Include all stakeholders in decisions about
    evaluation process and policy.
  • Keep a balance between individual and
    institutional needs in mind.
  • Build a real "system" for evaluation, not a
    haphazard and unsystematic process.

17
Guidelines 2 (establish protection for all)
  • Publicly present clear information about the evaluation
    criteria, process, and procedures.
  • Establish a legally defensible process and a system for
    grievances.
  • Establish clear lines of responsibility/
    reporting for those who administer the system.
  • Produce reports that can be easily and accurately
    understood.

18
Guidelines 3 (make it positive, not punitive)
  • Absolutely include resources for improvement and
    support of teaching and teachers.
  • Educate the users of ratings results to avoid
    misuse and misinterpretation.
  • Keep formative evaluation confidential and
    separate from summative decision making.
  • In summative decisions, compare teachers on the
    basis of data from similar situations.
  • Consider the appropriate use of evaluation data
    for assessment and other purposes.

19
Guidelines 4 (verify and maintain the system)
  • Use, adapt, or develop instrumentation suited to
    institutional/individual needs.
  • Use multiple sources of information from several
    situations.
  • Keep ratings data and validate the instruments
    used.
  • Invest in the evaluation system and evaluate it
    regularly.
  • Seek expert, outside assistance when necessary or
    appropriate.

20
WWW RESOURCES
  • http://www.aera.net/Default.aspx?id=901
    The link to the AERA Special Interest Group in Faculty
    Teaching, Evaluation and Development
  • http://podnetwork.org/index.htm
    The link to the POD Network website
  • http://www.cedanet.com/meta/
    The link to the Center for Educational Development and
    Assessment meta-profession website
  • http://www.idea.ksu.edu/
    The link to the IDEA Center at Kansas State
  • http://ntlf.com/pod/index.html
    The link to the National Teaching and Learning Forum resource
    on evaluation

21
WWW RESOURCES
  • http://ecourseevaluation.com/products/ETS.learn
    The ETS site for their SIR II instrument
  • http://www.byu.edu/fc/pages/tchlrfr.html
    The BYU evaluation website at the BYU Faculty Center
  • http://aer.arizona.edu/AER/teaching/questionnares/ques_main.htm
    The U of AZ site for their evaluation system
  • http://www.washington.edu/oea/services/course_eval/index.html
    The U of Washington Office of Assessment evaluation system
  • http://www.oir.uiuc.edu/index.htm
    The U of Illinois Urbana-Champaign Center for Teaching
    Excellence evaluation portal

22
Notes about references
  • Items in standard Arial font (like this) are
    established references that concur with the vast
    array of solid research on ratings.
  • Items in Arial italic (like this) are solid
    studies that raise interesting issues and have
    been misinterpreted as evidence of ratings
    invalidity.
  • Items in Book Antiqua font (like this) are
    studies that are weak and have been harshly
    criticized but nonetheless used as evidence of
    ratings invalidity.

23
References
  • Ambady, N., & Rosenthal, R. (1993). Half a minute: Predicting
    teacher evaluations from thin slices of nonverbal behavior and
    physical attractiveness. Journal of Personality and Social
    Psychology, 64, 431-441 (March).
  • Arreola, R. A. (2000). Developing a comprehensive faculty
    evaluation system (2nd ed.). Bolton, MA: Anker Publishing
    Company.
  • Arreola, R. A., Theall, M., & Aleamoni, L. M. (2003). Beyond
    scholarship: Recognizing the multiple roles of the
    professoriate. Paper presented at the annual meeting of the
    American Educational Research Association, Chicago, April 22.
    Available at http://www.cedanet.com/meta.

24
References
  • Chickering, A. W., & Gamson, Z. F. (1991). Applying the seven
    principles for good practice in undergraduate education. New
    Directions for Teaching and Learning, 47. San Francisco:
    Jossey-Bass.
  • Cohen, P. A. (1981). Student ratings of instruction and student
    achievement: A meta-analysis of multisection validity studies.
    Review of Educational Research, 51, 281-309.
  • Feldman, K. A. (1989). The association between student ratings
    of specific instructional dimensions and student achievement:
    Refining and extending the synthesis of data from multisection
    validity studies. Research in Higher Education, 30, 583-645.

25
References
  • Feldman, K. A. (1998). Reflections on the effective study of
    college teaching and student ratings: One continuing quest and
    two unresolved issues. In J. C. Smart (Ed.), Higher education:
    Handbook of theory and research. New York: Agathon Press.
  • Franklin, J. L., & Theall, M. (1994). Student ratings of
    instruction and sex differences revisited. Paper presented at
    the 75th annual meeting of the American Educational Research
    Association, New Orleans, April 7.
  • Hammermesh, D. S., & Parker, A. M. (2003). Beauty in the
    classroom: Professors' pulchritude and putative pedagogical
    productivity. NBER Working Paper No. W9853. Abstract available
    at http://ssrn.com/abstract=425589.

26
References
  • Haskell, R. E. (1997). Academic freedom, tenure, and student
    evaluation of faculty: Galloping polls in the 21st century.
    Education Policy Analysis Archives, 5(6). Available at
    http://olam.ed.asu.edu/epaa/v5n6.html.
  • Johnson, V. (2003). Grade inflation: A crisis in higher
    education. New York: Springer Verlag.
  • Marsh, H. W. (1987). Students' evaluations of university
    teaching: Research findings, methodological issues, and
    directions for future research. International Journal of
    Educational Research, 11, 253-388.

27
References
  • Pascarella, E. T., & Terenzini, P. T. (1991). How college
    affects students. San Francisco: Jossey-Bass.
  • Pascarella, E. T., & Terenzini, P. T. (2005). How college
    affects students, Volume 2. San Francisco: Jossey-Bass.
  • Sorenson, L., & Johnson, T. (2004). Online student ratings of
    instruction. New Directions for Teaching and Learning, 96. San
    Francisco: Jossey-Bass.

28
References
  • Theall, M. (2002). Leadership in faculty evaluation and
    development: Some thoughts on why and how the meta-profession
    can control its own destiny. Invited address at the annual
    meeting of the American Educational Research Association, New
    Orleans, April 3. Available at http://www.cedanet.com/meta/.
  • Theall, M., & Franklin, J. L. (1990). Student ratings in the
    context of complex evaluation systems. In M. Theall & J.
    Franklin (Eds.), Student ratings of instruction: Issues for
    improving practice. New Directions for Teaching and Learning,
    43. San Francisco: Jossey-Bass.
  • Williams, W. M., & Ceci, S. J. (1997). "How'm I doing?"
    Problems with student ratings of instructors and courses.
    Change, 29(5), 13-23.