As part of the Extending Influence and Impacting Others module of the PGDE via Teach First, second-year trainees carry out an action-research-style project on an area of their choosing. This is Jenny's summary of the evaluation she has carried out in Science over the past two terms.
By Jennifer Scott
In my final module for my PGDE, I recently investigated the research question: What is the impact of detailed SLA marking on students’ attainment in science? Here I will outline the investigation, in the hope of sparking some discussions around marking across the school!
Motivation:
My main motivation for this investigation was that day-to-day discussions with other teachers suggested that our written feedback in Science was not having the impact that it ‘should’. Therefore, I wanted to investigate whether the approach to SLA marking we were using in Science could be improved.
Literature summary (in brief):
The EEF (Education Endowment Foundation) has published a wide-ranging review of marking that includes several suggestions. Some which we have already embedded into the marking policy itself are: to provide specific, actionable feedback; and to allow students time in lessons to respond. There is also a word of caution: if students are producing “superficial responses” then the impact of marking is likely to be smaller.
A lack of motivation appears to be a significant factor in whether students respond in sufficient depth to their actions. One potential cause of this could be their mindset: as described by Dweck (2006) (as well as on the poster in every classroom!), students with a ‘fixed mindset’ are less likely to believe that they can improve their ability, whereas students with a ‘growth mindset’ believe that they can improve with effort and practice, and thus are more likely to make the most of SLA feedback. Similarly, Henderson and Harper (2009) noted that whilst teachers generally viewed assessments as formative, students viewed them as summative, and were therefore less motivated to improve their knowledge of a given subject after the assessment.
A suggestion from the EEF which we have not yet embedded is to distinguish between “mistakes” (caused by carelessness) and “errors” (caused by lack of understanding).
Methodology and Results:
Part 1: Surveys to collect teacher and student views
All Science teachers completed a survey on their views of science SLA marking, as did 83 Year 9 students. The results revealed five key messages about our current SLA marking:
- A high proportion of students do not complete their SLA responses fully;
- More than 80% of students feel they understand the purpose of SLA feedback (despite only 50% of teachers thinking they do!), but they tend to frame it negatively, e.g. “to see what we got wrong” (as opposed to “to see how to improve” etc.). This provides some evidence of a lack of student ‘growth mindset’;
- Marking methods are consistent between teachers, but teachers are not sure whether these methods have a positive impact on students;
- Teacher workload: if the feedback is not having a positive impact on students, time spent marking feels wasted;
- Accessibility of actions: teachers felt that content often needed to be retaught in order for students to respond to actions, while students likewise felt that if they did not know the answer in the test, they still would not know the answer for the SLA. Some teachers have already been working to improve this by providing students with specific resources to use when completing their actions.
Conclusions and looking forward:
Overall, the current SLA marking in the science department ‘works’ (for most students, and in the short term), in that it does allow students to improve their knowledge of the given topic. However, it does not improve students’ attainment significantly more than simply going through the most frequently missed test questions with the class. This raises the following questions:
- How can we ensure that students engage fully with marking?
- How can we frame end-of-topic tests more formatively, so that students are more likely to see the importance of working to improve on them?
- What is the best approach to marking end-of-topic tests to ensure they have maximum impact on student learning?
Thanks so much for sharing Jennifer, this is *really* interesting. It's been a common frustration of teachers at all my schools that "students don't act on their feedback". Your action research provides some interesting insights into why that might be.

Of course, as a scientist you might (like me!) find it frustrating to try to carry out research with so many variables at play simultaneously. One obvious challenge when considering how well students utilise feedback is to control for the *quality* of the feedback. What happens, for example, if the same class of students is given feedback from their science teacher and their humanities teacher? Science feedback is often quantitative/knowledge-based, and my observation has been that students often act on it much more effectively (in terms of improving assessment outcomes) than they act on humanities (and English) feedback, which is often qualitative in nature and relies (I would contend) on exemplars to make its point.

[As an aside, I'm very much in the "Dweck sceptics" camp. See for example https://www.tes.com/news/growth-mindset-where-did-it-go-wrong. But that's a whole other discussion...]

Thanks again, Paul
Thanks for the feedback Paul! You make a really interesting point about the different subjects. And I'd be interested to know more about the 'Dweck scepticism' - I'll have a read!
Jenny