How did it go?

With the first Latin GCSE done and dusted, “how did it go?” is probably a question that every candidate has been asked and answered multiple times. This week, I have found myself wondering to what extent their self-evaluations are accurate.

Curious to discover an answer, I turned to the internet without much hope of finding one, yet came across a psychology study reported by The Learning Scientists, a group of cognitive scientists who focus on research in education. What’s particularly interesting about the study is that it attempts to evaluate students’ success at making what they call “predictions”, which the psychologists define as a student’s projection of their likely performance prior to a test, as well as their “postdictions”, by which they mean a student’s evaluation of their performance afterwards. The study attempted to intervene in that process: in other words, the researchers tried to improve students’ ability to make both “predictions” and “postdictions” about their own performance. The results are interesting.

The study was performed with a group of undergraduates, and the psychologists made several interventions in an attempt to improve the students’ ability to self-evaluate. They taught them specific techniques for making the most of feedback and ensured that they took a practice test one week before each of the three exams that they sat, inviting students to self-score the practice test and reflect on any errors. The undergraduates were then encouraged to examine reasons why their “predictions” and their “postdictions” may have been inaccurate on the first two exams, and to make adjustments accordingly.

The study found that while the undergraduates’ “postdictions” (i.e. their report on their own performance after the test) remained slightly more accurate than their “predictions” (their projection of their likely performance), the interventions produced no improvement in the accuracy of those “postdictions” over time. While the accuracy of some students’ “predictions” did improve somewhat, none of the undergraduates showed any significant improvement in their ability to make “postdictions”. The students’ ability to evaluate their own performance after each test remained as varied as it had been prior to the interventions.

As the authors conclude, “this study demonstrates … that improving the accuracy of students’ self-evaluations is very difficult.” This is genuinely interesting and certainly fits with my own anecdotal experience, both of assessing how I have performed after an examination and of the huge number of students that I have worked with over the years. A student’s feelings after a test may be affected by a myriad of confounding factors, and if I had £1 for every student who felt that an examination had gone dismally and then turned out a perfectly respectable grade, I’d be a wealthy woman. In my experience, some students may overestimate their “predictions” but most students underestimate their “postdictions”. It is interesting that those “postdictions” appear to be elusive when it comes to intervention and that the cognitive scientists have not – as yet – found a method of helping students to assess their own performance more accurately. I suspect that is because the process is too emotive.

It is not obvious from the study how high-stakes the tests were – the psychologists do not make clear, for example, whether the test results contributed significantly (or indeed at all) to the assessment of the undergraduates’ own degree. This, to me, is something of an oversight, as an obvious confounding factor in any student’s ability to assess their own performance has to be their emotional response to it. Low-stakes testing as part of an experiment is a very different ball game to the high-stakes testing of an examination that counts towards a GCSE, an A level or a degree class.

My conclusion for now, especially for my highest-achieving students, is to remain unconvinced that they know how well they have done. I could name countless students who have been deeply distressed after an examination, only to discover that they achieved a mark well above 90%. Even in the most seemingly disastrous of circumstances this can be the case. I know of students who missed out a whole question or indeed even a whole page of questions and still achieved an excellent grade overall, so solid was their performance on the rest of the paper and the other papers which counted towards their grade.

Much as it remains important to connect emotionally with every student about how they feel their exam went, their feelings are not a good barometer for what will be on the slip of paper when they open their envelope in August.


Author: Emma Williams

Latin tutor with 21 years' experience in the classroom. Outstanding track record with student attainment and progress.
