“You have to consider the human element of life and the way that circumstances and chance upset everything – even the most accurate and clearly reported data.”
– Sir Alex Ferguson
When working with assessment data, as I often do, the above quote is frequently at the forefront of my mind. I have regularly consulted with leaders on their use of standardised tests to collate and digest internal data, so that they’re able to understand cohort performance and pupil gaps in learning. However, this process tends to be accompanied by one crucial omission – test invigilation. Can ‘standardised tests’ really be all that insightful – or even standardised at all – without a consistent and reliable invigilation process in place?
My own view is that without robust invigilation, our true understanding of pupil gaps in learning will be significantly reduced, given the heightened possibility of producing results that don’t necessarily reflect pupils’ actual abilities.
Ensuring consistency
Good invigilation ought to provide schools with an opportunity to ensure consistency in their testing, whilst also alleviating high stakes pressure on students by making the process part of their everyday school assessment culture.
How do you ensure that students and teachers aren’t exposed to test papers ahead of time? How can you ensure that students aren’t over-aided by test administrators, or by adults working with them one-to-one? Are breaks in testing, be they planned or unexpected, managed strategically and consistently across the school? These are just a few factors among many, but they’re vital to consider.
In my experience, schools will often reserve formal invigilation for their statutory testing year groups, which in primary schools effectively means only Year 6 during the summer term. The rationale schools give for this usually relates to the challenges that come with the process, such as the need to deploy an appropriate number of adults who understand the invigilation policy during a whole-school assessment week, or the building space required to ensure that relevant access arrangements can be implemented correctly.
Rigour over data
Reducing the problems that come with ‘high stakes’ testing calls for a strategy that can address the above challenges. Following on from Ofsted’s recent announcement that it will no longer expect to see schools’ internal tracking data, I feel that now is the perfect time to replace the vast amount of time-consuming objective data collection currently happening within schools with a more rigorous process of properly invigilated testing over two or three streamlined assessment points. This could be carried out by designated individuals who aren’t typically part of the normal classroom setting, but who are readily available when the tests are administered and have received appropriate training beforehand – perhaps externally appointed candidates, or members of the school’s governing body.
These assessments should be purposeful, involve minimal workload burden and prepare students for routine test conditions in a way that enables them to perform at their best. With this in place, rich and detailed insight into their progress should naturally follow, allowing teachers to have a significant impact on pupil learning.
Admittedly, one-off standardised assessments carried out with threshold measures aren’t the only way to understand gaps in learning, and the invigilation challenges schools face will vary depending on the setting. However, ensuring consistency across three key, overlapping factors – purposeful assessments, access arrangements and standards of test invigilation – should provide you with a structure for considering whether your assessments are sufficiently standardised, and thus able to meaningfully inform your teaching and learning.
Tyrone Samuel is an education data, systems and insight professional.