Tuesday 29 June 2021

What Is The Impact Of SLA Marking On Student Attainment In Science?

As part of the Extending Influence and Impacting Others module of the PGDE via Teachfirst, second year trainees have to carry out an action-research-style project on an area of their choosing. This is Jenny's summary of the evaluation she has carried out in Science over the past two terms.

By Jennifer Scott

In my final module for my PGDE, I recently investigated the research question: what is the impact of detailed SLA marking on students’ attainment in science? Here I will outline the investigation, in the hope of sparking some discussions around marking across the school!

Motivation: My main motivation for this investigation was that day-to-day discussions with other teachers suggested that our written feedback in Science was not having the impact that it ‘should’. I therefore wanted to investigate whether the approach to SLA marking we were using in Science could be improved.

Literature summary (in brief): The EEF has published a wide-ranging Marking Review (EEF, 2016) containing several suggestions. Some that we have already embedded into the marking policy itself are: to provide specific, actionable feedback; and to allow students time in lessons to respond. There is also a word of caution: if students produce “superficial responses”, the impact of marking is likely to be smaller. A lack of motivation appears to be a significant factor in whether students respond to their actions in sufficient depth. One potential cause of this could be their mindset: as described by Dweck (2017) (and on the poster in every classroom!), students with a ‘fixed mindset’ are less likely to believe that they can improve their ability, whereas students with a ‘growth mindset’ believe that they can improve with effort and practice, and are thus more likely to make the most of SLA feedback. Henderson and Harper (2009) also noted that whilst teachers generally viewed assessments as formative, students viewed them as summative, and were therefore less motivated to improve their knowledge of a given topic after the assessment. A suggestion from the EEF which we have not yet embedded is to distinguish between “mistakes” (caused by carelessness) and “errors” (caused by a lack of understanding).


Methodology and Results:

Part 1: Surveys to collect teacher and student views. All Science teachers completed a survey on their views of science SLA marking, as did 83 Year 9 students. The results surfaced five key messages about our current SLA marking:

  • A high proportion of students do not complete their SLA responses fully;
  • More than 80% of students feel they understand the purpose of SLA feedback (despite only 50% of teachers thinking they did!), but they tend to frame it negatively, e.g. “to see what we got wrong” (as opposed to “to see how to improve”). This provides some evidence of a lack of student ‘growth mindset’;
  • Marking methods are consistent between teachers, but teachers are not sure whether these methods have a positive impact on students;
  • Teacher workload: if the feedback is not having a positive impact on students, time spent marking feels wasted;
  • Accessibility of actions: teachers felt that content often needed to be retaught before students could respond to actions, while students similarly felt that if they did not know the answer in the test, they still would not know the answer for the SLA. Some teachers have already been working to improve this by providing students with specific resources to use when completing their actions.
All of these points served to reinforce my motivation and informed some trial changes made in Part 2.

Part 2: Comparing a ‘normal’ SLA, no SLA, and a new SLA. I chose to carry out this investigation with my two Year 9 classes in parallel. After a regular end-of-topic test, each class received feedback before being given a ‘retest’. This took place in two stages (at the end of two consecutive topics).

9Z4 were the ‘control’ group: in both stages they went through a few ‘frequently missed’ questions as a class, completed the usual style of SLA, and then completed the retest the following day. The usual SLA consists of a Strength I have identified, a specific Literacy target, and 2-3 specific Actions, which are usually questions or tasks asking students to consolidate or expand on areas of misconception I picked up while marking. 9X4, on the other hand, did not complete an SLA in stage 1: they only went through frequently missed questions as a class, before immediately completing the retest. In the second stage, 9X4 completed an SLA with some trial changes, before completing the retest the following day. Neither class was given much warning of the retest, to increase the likelihood that any improvement was due to the feedback rather than extra study!

The main change in the updated version of the SLA was that students completed a ‘review table’ of all questions, noting why they lost marks and giving an explanation/correction (example seen below). This is strongly weighted towards student metacognition rather than personalised teacher feedback. I tried to incorporate a teacher-directed action as well, but it felt like too many different tasks for students to complete within the SLA lesson.

The surprising results of the two stages can be seen below: there was no statistically significant difference in the percentage of students improving their score, regardless of whether they had completed the SLA or just gone through some questions (stage 1). It should also be noted that the average percentage score improvement was roughly the same in both groups, at ~13%. Similarly, in stage 2 there was no significant difference in the percentage of students improving between the current and updated SLA methods. This could be for various reasons, not detailed here for brevity.
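
As an aside for anyone curious about the statistics: the write-up above does not specify which significance test was used, but one common way to check a comparison like this is a chi-squared test on the counts of students who did and did not improve under each condition. The sketch below shows that approach; all the counts in it are hypothetical placeholders, not the actual class data.

```python
# Minimal sketch: does the proportion of students improving differ between
# two feedback conditions? All counts below are hypothetical placeholders.
from scipy.stats import chi2_contingency

# Rows: condition (e.g. usual SLA vs. questions-only).
# Columns: [improved, did not improve].
observed = [
    [20, 8],   # hypothetical counts for condition A
    [19, 9],   # hypothetical counts for condition B
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-squared = {chi2:.2f}, p = {p_value:.3f}")

# A p-value above the chosen threshold (commonly 0.05) would be consistent
# with "no statistically significant difference" between the conditions.
```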

Conclusions and looking forward: Overall, it can be said that the current SLA marking in the science department ‘works’ (for most students; in the short term), in that it does allow students to improve their knowledge on the given topic. However, it does not give a significantly different improvement in students’ attainment than simply going through the most frequently missed test questions with the class. This raises the following questions:

  • How can we ensure that students engage fully with marking?
  • How can we frame end-of-topic tests more formatively, so that students are more likely to see the importance of working to improve on them?
  • What is the best approach to marking end-of-topic tests to ensure they have maximum impact on student learning?
I would be very interested to hear others’ views on the questions posed here, how they apply within their own subjects and any suggestions in response (I have a few of my own)!

References:

EEF (2016) A marked improvement? Education Endowment Foundation. Available at: https://educationendowmentfoundation.org.uk/public/files/Presentations/Publications/EEF_Marking_Review_April_2016.pdf.

Dweck, C. (2017) Mindset - Updated Edition: Changing The Way You Think To Fulfil Your Potential. Hachette UK. Available at: https://play.google.com/store/books/details?id=ckoKDQAAQBAJ.

Henderson, C. and Harper, K. A. (2009) ‘Quiz Corrections: Improving Learning by Encouraging Students to Reflect on Their Mistakes’, Physics Teacher, 47(9), pp. 581–586. doi: 10.1119/1.3264589.


2 comments:

  1. Thanks so much for sharing, Jennifer, this is *really* interesting. It's been a common frustration of teachers at all my schools that "students don't act on their feedback". Your action research provides some interesting insights into why that might be. Of course, as a scientist you might (like me!) find it frustrating to try and carry out research with so many variables simultaneously. One obvious challenge when considering how well students utilise feedback is to control for the *quality* of the feedback. What happens, for example, if the same class of students is given feedback from a science teacher and their humanities teacher? Science feedback is often quantitative/knowledge-based and my observation has been that students often act on it much more effectively (in terms of improving assessment outcomes) than they act on humanities (and English) feedback, which is often qualitative in nature and relies (I would contend) on exemplars to make its point. [As an aside, I'm very much in the "Dweck sceptics" camp. See for example https://www.tes.com/news/growth-mindset-where-did-it-go-wrong. But that's a whole other discussion...] Thanks again, Paul

    1. Thanks for the feedback, Paul! You make a really interesting point about the different subjects. And I'd be interested to know more about the 'Dweck scepticism' - I'll have a read!
      Jenny