Enhancing student learning through low stakes summative assessment with formative feedback
Damian Parry, Carl Larsen and Cathy Walsh
Health and Applied Social Sciences, Liverpool Hope University, Hope Park, Liverpool, UK
Abstract
Students must be able to construct meaning from the high-quality feedback given by tutors and be reflective in deriving feed-forward from it. Formative assessment is often seen as the most productive way to accelerate learning; in this study, however, we investigated an assessment strategy that couples low stakes summative assessment with formative feedback. A specifically selected cohort of second-year bioscience students submitted three laboratory practical reports based on scientific inquiry. The pieces contributed 5%, 10% and 20% respectively of the module mark; each submission was closely followed by an arranged feedback appointment in taught session time, based around written and oral dialogue. Students' achievement was markedly improved using this strategy: the mean grade achieved for submitted work increased from a grade D to a grade C. Student learning was enhanced in fundamental, or first-order, skills and knowledge; however, students found it more difficult to transfer feedback that required them to engage in second- or higher-order thinking. Nevertheless, the intervention was deemed worthwhile, enhancing student achievement, improving the relationship between staff and students and encouraging students to view assessment as part of their learning.

Keywords: low stakes summative intervention strategy, feedback, feed-forward, enhancing student achievement, practically based bioscience module

Corresponding author: Damian Parry, parryd1@hope.ac.uk
Introduction
One of the guiding principles in curriculum design is the need to achieve constructive alignment between intended learning outcomes, learning activities and assessment (Biggs 1999). Indeed, studies carried out in the 1970s (Snyder 1971; Miller and Parlett 1974, cited in Gibbs and Simpson 2004) reported on the 'hidden curriculum': it was not in fact teaching but assessment that played the largest part in guiding students' learning. Students dedicate more hours to their learning in weeks when assessment is due (Brown et al. 2003); this is further emphasised by Boud (1986), who noted that assessment methods and requirements probably have a greater influence on how and what students learn than any other single factor.

In a recent FAST survey, 400 student respondents felt that whilst they did engage with feedback, they often did not use it to inform their future learning (Glover and Brown 2006). The authors comment further that students perceive feedback as being assignment specific rather than as feed-forward for other pieces of assessment. This point highlights just one of a number of barriers to assessment as an integrated part of learning: feedback must be timely (Juwah et al. 2004), of high quality (Brown and Knight 1994), personally delivered (Merry and Orsmond 2007), expressed in appropriate language (Merry and Orsmond 2007) and owned equally by those receiving and those providing it. The Higher Education Academy espouses seven tenets of good practice relating to feedback which can be used to overcome the barriers highlighted above (Juwah et al. 2004). In particular, teacher and peer dialogue around learning, clarification of standards and expectations, and goal setting to close the gap between current and desired performance are pivotal. In the research literature there is indeed evidence that students do not understand the feedback that they are given and are therefore not able to close the gap between feedback and feed-forward (Chanock 2000; Hyland 2000). It is important therefore that lecturers give appropriate feedback to form a cornerstone on which to enhance students as self-critical and reflective learners; teacher feedback also serves as 'an authoritative external reference point against which students can evaluate, and self-correct their progress and their own internal goals' (Nicol and Macfarlane-Dick 2004, pg 1). Nicol and Macfarlane-Dick (2004) argue that the key to constructive alignment is therefore ensuring that lecturers and students form a partnership in which dialogue through feedback is fundamental to learning and teaching activities.

The challenge for academics is formed from the individual needs of students and the institutional drive towards quality and achievement. Stowell (2001, cited in Race 2005) provides a solution to this problem in recommending that lecturers ensure that their feedback is tailored to suit the learning needs of the student receiving it. An extension of this recommendation might be that lecturers ensure that the principles of communication inform an open dialogue, so that the student constructs appropriate meaning from the feedback. By definition, communication may take verbal or non-verbal forms; applying this to assessment practice would seem to recommend that lecturers consider developing feedback through communication channels that are both linguistic and non-linguistic. Orsmond, Merry and Reiling (2002) affirm that there is a need for strategies which complement written materials with simple verbal explanation.

Many tutors see the best opportunity for open communication to be around formative assessment (Race 2005), and there is a considerable weight of opinion in the academic community encouraging increased emphasis on formative feedback to enhance learning (Dearing 1997; Park and Crook 2007). Formative assessment is traditionally early in a module and directed toward acceleration of learning; in contrast, summative assessment is often later in the course and allows students to benchmark their achievements. However, many students are most focused on summative assessment: the need to balance study with financial and personal commitments is seen by some as a burden.
Additionally, there are issues around marking loads for staff when formative assessment is introduced in large volume across all levels of the curriculum. This apparent tension is framed within the knowledge that assessment can be used successfully to drive student learning. This study, carried out in a bioscience module in both 2006-7 and 2007-8, refocused tutor and student reflective practice around open dialogue reinforced through feedback on assessment. Three laboratory practical write-ups were used to assess students' mastery and critical application of core scientific principles; the practicals, carried out consecutively in a cascade of rolling feedback, were of sequential weighting (5%, 10% and 20% of the module mark) but were all summative.
  • Method
  • Detail is limited to that relating directly to the feedback intervention.
  • Context: There is a general issue with the level of preparedness of students for laboratory work (Thin 2006). Students frequently do not contribute effectively to generic feedback sessions; similarly, many students report a preference for individualised feedback. A Level I laboratory-based module was specifically chosen for its manageable student cohort: in 2006-7, 13 students were registered; in 2007-8, 29 students are registered. Students were informed that the module team was actively involved in pedagogical research; no ethical issues were identified by the researchers or the University research ethics procedure. The module guide gave detailed information on a cascade of assessment for practical reports, highlighting that assessment strategies had been designed to create a cascade of linked feedback which, whilst summative, was initially low stakes, building to a final piece of high stakes, heavily weighted work. Students were informed in writing that practical 1 carried 5% of the overall module grade, practical 2 carried 10% and practical 3 20%.
  • Assessment guidance: Students were given general guidance on constructing a practical report before submission of the first laboratory report. In brief, the format of the report followed a standard structure: an introduction leading into appropriate aims/hypothesis; methodology was not required other than changes to standard procedure; data presentation with associated commentary was followed by integration of data with published literature in the form of a discussion section. The word count on each assignment was limited to 800 words to ensure a concise approach. The assessment criteria were chosen to ensure constructive alignment but were limited in number to prevent the process becoming directed by these criteria (Sadler 1998) (Table 1).
  • Table 1: Each assessment criterion was explained to students in terms of tutor expectations of excellent, good and poor performance. Students then constructed a report. It is standard practice for work to be submitted with the cover sheets and identifiers at the end of the work to ensure anonymity of marking. In order to provide some distance between the teaching and research elements of this experiment, the team did not conduct any analysis of student achievement until the module had been completed. In order to maintain this standpoint, data from the 2007-8 cohort have not yet been analysed, and the data presented on this poster relate to the 2006-7 cohort of 13 students.
  • Timing and feedback: One week after submission, written feedback was supplied; each student was then given a 15-minute individual appointment with a module tutor during a scheduled teaching session, approximately two weeks after submission of the written work. These meetings were constructed to enable student and tutor to discuss in depth information (a) specific to the practical and (b) feed-forward for the next assignment.
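The 5%/10%/20% weighting cascade described above can be sketched as a short calculation. This is our own illustration, not part of the study; the function name is hypothetical.

```python
def cascade_contribution(marks, weights=(0.05, 0.10, 0.20)):
    """Combine three practical marks (each out of 100) into their total
    contribution to the module mark under the 5%/10%/20% cascade."""
    return sum(m * w for m, w in zip(marks, weights))

# A student scoring 48, 48 and 55 on the three practicals would bank
# 48*0.05 + 48*0.10 + 55*0.20, i.e. about 18.2 module-mark points out
# of the 35 available from the practicals.
contribution = cascade_contribution((48, 48, 55))
```

The deliberately small weight on practical 1 is what makes the first submission "low stakes" while keeping it summative.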

Results
Figure 1. Grades achieved by students submitting work for each practical, expressed as a percentage, for (a) practical 1, (b) practical 2 and (c) practical 3 (n = 13).

For practical 1 (Figure 1a) the lowest mark was a fail and the highest a high B grade; 24% of students achieved a grade C or above, with no student achieving a grade A, and 46% of students achieved a grade D, with 30% scoring grade E or below. For practical 2 (Figure 1b), an increased proportion of students (50%) achieved a grade C or above; there were no grade A pieces of work. A reduced number of students achieving a grade D was noted in practical 2 (21%) when compared to practical 1. However, the number of students achieving a grade E or F was unaltered (28%). For practical 3, 82% of students achieved a grade C or above, with 18% recording grade A (Figure 1c); both values are higher than for practicals 1 and 2. The number of students achieving a grade D was reduced (9%) when compared to practicals 1 and 2; similarly, the number of students recording E and F grades was reduced for practical 3 (9%).
Figure 2. Comparison of the mean results for all students: practical 1, practical 2 and practical 3 (n = 13). Error bars represent the standard error of the mean.

The mean mark achieved by all students submitting practical 1 was 48%, equivalent to a mean of grade D. Before beginning work on practical 2, 75% of these students took up the opportunity to gain verbal feedback on their reports; all students received individual written guidance. The mean mark achieved for practical 2 was also 48%. After practical 2, and before practical 3, 86% of students obtained verbal as well as written feedback; the mean mark awarded for practical 3 was 55%. The increase in the mean grade achieved in practical 3 compared to practicals 1 and 2 did not reach statistical significance (ANOVA, p > 0.05); this lack of significance can largely be attributed to the natural variability resulting from the range of marks awarded within grade boundaries and also between students in a cohort. A repeated measures ANOVA was performed on the data. Results showed that differences between practicals were unlikely to have occurred by sampling error (F = 6.48, df = 2,18, p = 0.016). Paired t-tests showed that there was a significant difference in the correct direction between the mean scores of students in practicals 1 and 3 (t = 4.98, df = 1, p = 0.001) and practicals 2 and 3 (t = 2.77, df = 1, p = 0.022), but not between practicals 1 and 2 (t = 0.30, df = 1, p = 0.774). Students showed significant improvement in performance between practicals 1 and 3, and 2 and 3, but not between 1 and 2.
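The paired comparison used here can be sketched as follows. The marks below are invented for illustration only and are not the study's data; this is a minimal implementation of the paired t statistic, not the authors' actual analysis.

```python
import math
from statistics import mean, stdev

def paired_t(before, after):
    """Paired t statistic and degrees of freedom for two matched samples,
    e.g. each student's marks on two practicals."""
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / math.sqrt(n))
    return t, n - 1

# Hypothetical marks for seven students on practicals 1 and 3
p1 = [40, 45, 50, 38, 52, 47, 44]
p3 = [50, 55, 58, 48, 60, 54, 52]
t, df = paired_t(p1, p3)
# With df = 6, a |t| above the two-tailed 5% critical value (about 2.45)
# indicates a significant difference between the paired means.
```

Pairing within students is what gives the test its power here: it removes the between-student variability that the one-way ANOVA attributed the non-significance to.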
Figure 3. The results for each practical broken down to show the change in mean mark for each assessed component, for practicals 1, 2 and 3 (n = 13).

Finally, the mark for each practical was deconstructed to enable achievement in each of the learning outcomes to be investigated. The mean mark awarded for each component is recorded in Figure 3. Successive improvements were recorded in learning outcomes 1, 2, 3, 6 and 7. A two-way ANOVA with repeated measures was performed on these data. Pair-wise analysis using the Sidak correction showed that there was a significant difference in the correct direction between the mean scores of students in practicals 1 and 3 (p = 0.002), but not between practicals 1 and 2 (p = 0.99) or practicals 2 and 3 (p = 0.06). Students showed significant improvement in performance between practicals 1 and 3, but not between 1 and 2; there was a tendency for an effect of practical on student performance between practicals 2 and 3. There was no significant interaction between practical number and assessment criteria (F = 2.4, df = 2,14, p = 0.07).

Discussion
Recognising that students on the module may differ in their 'aims, learning histories and skills in learning' (Hounsell and McCune 2002, pg. 8), a decision was taken to disseminate feedback in both written and oral format to assist students in the process of decoding and internalising meaning (Nicol and Milligan 2006).
Given that students had become enculturated into the laboratory atmosphere, and that they were being encouraged to develop a positive relationship with assessment as a learning tool, it was decided that individual appointments should be carried out in the usual teaching laboratory during class time. In this way staff-student contact time was not significantly increased, and students were encouraged in the view that 'everything that is learnt in the classroom is not taught' (open discussion at the Bioscience event on assessment as learning, January 2008). It was interesting to note that when offered the opportunity to engage in personal dialogue about their work, students largely responded positively. This somewhat contradicts perceptions in academia that students are at times driven by grades rather than reflective practice (Dweck and Elliot 1988), but supports the findings of the three-year study reported by Higgins et al. (2002, pg. 53) in which students valued formative feedback as a means of engaging with their subject in a deep way. Furthermore, the methodology reported here addresses reported barriers to student learning created when students move onto the next assignment before they have been able to reflect on the guidance from previous submissions (Nicol and Macfarlane-Dick 2004), and gives prominence to the process of positive engagement with reflective practice as emphasised by Boud (2000).
In practice, many students used the individual appointments to ask fundamental questions about referencing, formatting tables, scientific writing style and interrogation of data. The majority specifically asked how they could use their feedback to enhance their performance; staff noted that students were confident in asking quite basic questions that would not have been raised in a more open forum. Many students commented that they were glad the first piece of assessment was worth almost nothing, and that their poor performance in the first submission could be overcome in more heavily weighted pieces. Students find writing up laboratory practical reports particularly challenging because they are required to engage in second-order thinking (Gipps and McGilchrist 1999). Sears and Wood (2005) reviewed the 'ways of thinking and practising in biology' model developed by Hounsell and McCune (2002); in this model it is recognised that students develop first-order skills that are akin to mastery of the basics of the bioscience language and its understanding. The higher-order skills of application, interconnectivity, critical evaluation and reflection develop from these basic principles. In this study, components which generated particularly disappointing results in practical 1 (structure, aims and referencing) improved in quality in response to feedback and guidance (Figure 3). These assessment criteria might be viewed as relevant to more generic skills: the fundamentals.
Students were able to reflect on feedback given in these areas and were sufficiently skilled to respond meaningfully. Those criteria for which improvement was less consistent, including data handling, application of knowledge and discussion of data, were more practical-specific; this suggests that students struggled to apply specific feedback as feed-forward in these areas. This perhaps explains the apparent disparity in the data between practicals 1 and 2: improvement was noted between practicals 1 and 3, and 2 and 3, in key prescriptive or fundamental areas.
Although students improved in response to feedback in the areas of structure and aims, and remained constant in referencing and originality, their achievement was lower in the areas of data interrogation, discussion, academic writing and application. This reaffirms our assertion that, when making meaning of feedback, students require more confidence and practice in second-order skills.

Conclusion
As a 'learning-oriented assessment' (Carless, 2003), the strategy was deemed by the teaching team to be successful in its aims to promote good performance, to enhance reflective learning through assessment and to encourage student dialogue. However, students found it difficult to transfer the guidance where feedback could not be directly converted into feed-forward. Students were less able to action the feedback in these areas despite the apparent similarity between the practical tasks. It would seem likely that students were more able to evaluate their own work in areas where guidance was very specific and directed; for example, 'thoroughly reference each new idea', 'paraphrasing here is not appropriate' and 'use a wider range of source material' are prescriptive interventions. It is more difficult, however, to transfer feedback into feed-forward in a different topic area; this requires more independence, confidence and willingness on the student's part (Sadler, 1989). On reflection, the study has highlighted the value of formative assessment when integrated with the summative; the integration of feedback on assessment within the learning and teaching strategy of the module was valued by the team. In addition, it was not considered that this intervention had generated an excessive burden on staff time or resources; in fact, the team enjoyed the personal interaction with students and felt that this fostered a more supportive learning environment. The data from a second cohort are awaiting analysis but, for ethical reasons, will not be considered until all assessment tasks are complete.
References
Biggs, J. (1999) Teaching for Quality Learning at University: What the Student Does. Buckingham: Society for Research into Higher Education and Open University Press.
Boud, D. (1986) Implementing Student Self-Assessment. Sydney: Higher Education Research and Development Society of Australasia.
Boud, D. (2000) Sustainable assessment: rethinking assessment for the learning society. Studies in Continuing Education, 22 (2), 151-167.
Brown, E., Gibbs, G. and Glover, C. (2003) Evaluation tools for investigating the impact of assessment regimes on student learning. Bioscience Education E-journal, Vol 2. http://bio.ltsn.ac.uk/journal/vol2/beej-2-5.htm Accessed 23rd January 2008.
Carless, D. (2003) Learning-oriented assessment. Paper presented at the Evaluation and Assessment Conference, University of South Australia, Adelaide, November 25, 2003.
Chanock, K. (2000) Comments on essays: do students understand what tutors write? Teaching in Higher Education, 5 (1), 95-105.
Dearing, R. (1997) Higher Education in the Learning Society: Report of the National Committee of Inquiry into Higher Education. London: HMSO.
Dweck, C. and Elliot, E. (1988) Goals: an approach to motivation and achievement. Journal of Personality and Social Psychology, 54, 5-12.
Gibbs, G. and Simpson, C. (2004) Conditions under which assessment supports students' learning. Learning and Teaching in Higher Education, Issue 1, 3-31.
Gipps, C. and McGilchrist, B. (1999) Primary school learners. In P. Mortimer (ed.), Understanding Pedagogy and its Impact on Learning (pp. 46-67).
Glover, C. and Brown, E. (2006) Written feedback for students: too much, too detailed or too incomprehensible to be effective? Bioscience Education E-journal, Vol 7. http://www.bioscience.heacademy.ac.uk/journal/vol7/beej-7-3.htm Accessed 23rd January 2008.
Higgins, R., Skelton, A. and Hartley, P. (2002) The conscientious consumer: reconsidering the role of assessment feedback in student learning. Studies in Higher Education, 27 (1), 53-64.
Hounsell, D. and McCune, V. (2002) Teaching-Learning Environments in Undergraduate Biology: Initial Perspectives and Findings. ETL Project Occasional Report 2. http://www.ed.ac.uk/etl/publications.html Accessed 23rd January 2008.
Hyland, P. (2000) Learning from feedback on assessment. In A. Booth and P. Hyland (eds.), The Practice of University History Teaching. Manchester: Manchester University Press.
Juwah, C., Macfarlane-Dick, D., Matthew, R., Nicol, D., Ross, D. and Smith, B. (2004) Enhancing Student Learning Through Effective Formative Feedback. Higher Education Academy Generic Centre.
Lumsden, P. (2007) Students' perceptions of feedback. http://www.bioscience.heacademy.ac.uk/ftp/events/newcastle07/foster.pdf Accessed 23rd January 2008.
Maclellan, E. (2001) Assessment for learning: the different perceptions of tutors and students. Assessment and Evaluation in Higher Education, 26 (4), 307-318.
Merry, S. and Orsmond, P. (2007) Students' responses to academic feedback provided via MP3 audio files. http://www.bioscience.heacademy.ac.uk/events/newcastle07/merry.pdf
Nicol, D. J. and Milligan, C. (2006) Rethinking technology-supported assessment in terms of the seven principles of good feedback practice. In C. Bryan and K. Clegg (eds.), Innovative Assessment in Higher Education, pp. 1-14. London: Taylor and Francis.
Nicol, D. J. and Macfarlane-Dick, D. (2004) Rethinking formative assessment in HE: a theoretical model and seven principles of good feedback practice. In C. Juwah, D. Macfarlane-Dick, R. Matthew, D. Nicol, D. Ross and B. Smith, Enhancing Student Learning Through Effective Formative Feedback. York: The Higher Education Academy.
Orsmond, P., Merry, S. and Reiling, K. (2002) The use of formative feedback when using student derived marking criteria in peer and self-assessment. Assessment and Evaluation in Higher Education, 27 (4), 309-323.
Park, J. and Crook, A. (2007) Understanding the individual student assessment experience. http://www.bioscience.heacademy.ac.uk/ftp/events/newcastle07/park.pdf Accessed 23rd January 2008.
Race, P. (2005) Learning through Feedback, Chapter 5. London: Sage.
Ryder, J. and Leach, J. (1996) A Summary of Findings and Recommendations Arising from the Research Project Study. Working Paper 8. Leeds: CSSME, Undergraduate Learning in Science Project.
Sadler, D. R. (1989) Formative assessment and the design of instructional systems. Instructional Science, 18, 119-144.
Sadler, D. R. (1998) Formative assessment: revisiting the territory. Assessment in Education, 5 (1), 77-84.
Sears, H. J. and Wood, E. J. (2005) Linking teaching and research in the biosciences. Bioscience Education E-journal, Vol 5. http://www.bioscience.heacademy.ac.uk/journal/vol5/beej-5-4.htm Accessed 23rd January 2008.
Snyder, B. R. (1971) The Hidden Curriculum. New York: Alfred A. Knopf.
Thin, A. (2006) Using on-line microassessments to drive student learning. Bioscience Education E-journal, Vol 7. http://www.bioscience.heacademy.ac.uk/journal/vol7/beej-7-7.htm Accessed 14th March 2008.