Ruth Ashbee

Student Voice, Assessment Theory and an Epistemology for School Leadership



We’re all familiar now with the fact that Ofsted use talking to students, or “student voice”, as one of their means of finding out about a school, alongside lesson visits, speaking to staff, looking at external outcomes, and looking at key documents and student work. Schools are increasingly conducting their own internal student voice in response.


Using student voice in this way is ultimately a form of assessment. Assessment theorists ask questions like these:


1. What are the constructs the assessment is designed to assess? (What precisely are we trying to find out about?)

2. What are the available methods of assessment?

3. How reliable is this method of assessment? (Will it give us the same results each time we use it to find out about the same thing?)

4. How valid is this method of assessment? (Does it really tell us about the construct? Or does it actually tell us about something else instead?)


I think it’s important that we ask these questions about student voice.



1. What are the constructs we are trying to assess?

Ofsted have a big long list of constructs, detailed in the Education Inspection Framework (EIF). They want to know things like how well teachers use assessment, how well the curriculum is constructed, how effective teaching is, how safe children feel at the school, how effective behaviour policies are, and so on.


Schools also want to know about all of these things, so that we can prioritise and plan, and so that we can evaluate the impact of our actions and change our approaches if necessary. So let’s take the EIF as a decent list of constructs – individual schools may have additional details they want to find out about, such as an emphasis on faith, or the implementation of routines and systems designed to bring about the constructs in the EIF.


2. What are the available methods of assessment?

Ofsted have rightly recognised that no single method of investigation can be used in isolation to evaluate the constructs of the EIF and they therefore undertake triangulation using the range of methods described above.


As schools, we have access to all of these methods, and more besides. Crucially, we can look at internal assessments.


There are very good reasons for Ofsted removing internal assessment data from what they look at during inspection. In the past there was a completely ludicrous conflict in the purposes of assessment data – do we want data we can show to Ofsted so we can get a good inspection, or do we want data that tells us what students can and can’t do? Because you couldn’t have both. Although I actually knew of a school which undertook double data-tracking – one set that was real and used by the school for intervention and so on, and one that was… shown to Ofsted. Most schools just had to either forgo any intelligent use of assessment or sacrifice themselves on the altar of Requires Improvement or worse. And of course we had the crushing, soul-destroying workload of data-drops every five minutes because somehow this would show progress and help to pass inspection. It’s good, to say the least, that these things have gone.


Now we are free from this nightmare, we should be using internal assessment to help us build a picture of the quality of education in our school. Often “data” alone won’t tell us much, if we’ve been sensible and expunged flightpaths from our practice and moved to percentages or similar. What’s often more informative is to look at the scripts alongside the quantitative data. What are our students currently producing in French/English/Maths/etc.? How does this compare with benchmarks/expectations for this year? If we’d expect all of Year 7 to be able to describe and explain the process of digestion, and they can’t according to this test, then what might the reasons be and what might we do in response, both at the specific level of that particular area of knowledge, and at the more general level of curriculum planning/teacher practice/habits of homework/behaviour in lessons/etc.? Student assessments are really just a type of student voice on paper – one that Ofsted don’t look at, but that we can, and should.
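As a tiny illustration of that kind of benchmark check – and to be clear, the topics, marks and 80% expectation below are all invented, not real data – something like this is trivial to run on any internal assessment results:

```python
# Invented example: percentage marks for one Year 7 class, by topic.
scores = {
    "digestion": [72, 45, 88, 60, 51],
    "cells": [90, 85, 78, 92, 88],
}
EXPECTED = 80  # hypothetical benchmark: the mean % we'd expect by this point

for topic, marks in scores.items():
    mean = sum(marks) / len(marks)
    status = "on track" if mean >= EXPECTED else "below expectation - look at the scripts"
    print(f"{topic}: mean {mean:.0f}% ({status})")
```

The flag is only the start, of course – it’s the scripts that tell us why.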



3. How reliable is this method of assessment?

It’s hard to determine the reliability of student voice without large-scale study, but I think we should start from an assumption that it is pretty low-reliability. There are so many factors that contribute to what children will want to say or be able to articulate. Using student voice as just one source among several, to help build a picture and open up lines of enquiry, is the only rational response to this. We wouldn’t want to stop speaking to students just because doing so is a low-reliability form of assessment – equally, nothing should hang on what students say alone. There should always be additional ways of looking at the same construct, to help compensate for the assumed low reliability.
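To see why small panels of students give unstable answers, here is a toy simulation – my own illustration, with entirely invented numbers, not any inspection methodology. Each school has a fixed “true” score for a construct, each student’s answer is that truth plus personal noise, and we interview a small panel twice:

```python
import random
import statistics  # statistics.correlation requires Python 3.10+

random.seed(42)

# Toy model: each school has a fixed "true" score (0-10 scale) for some
# construct, but an individual student's answer is truth plus personal noise.
def interview_panel(true_score, n_students=6, noise_sd=2.5):
    """Mean answer from one small panel of students."""
    return statistics.mean(random.gauss(true_score, noise_sd) for _ in range(n_students))

true_scores = [random.gauss(5, 1) for _ in range(200)]   # 200 simulated schools
round_one = [interview_panel(s) for s in true_scores]
round_two = [interview_panel(s) for s in true_scores]    # same schools, fresh panels

# Test-retest reliability: how well do two independent rounds agree?
print(f"correlation between rounds: {statistics.correlation(round_one, round_two):.2f}")
```

Under these made-up assumptions the two rounds agree only loosely – exactly the behaviour we should plan for by triangulating with other methods.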


4. How valid is this method of assessment?

How accurately can student voice tell us about the constructs we want to find out about? The answer to this question varies wildly, and depends on two factors: the construct and the questions we ask. If we want to find out about behaviour in the school, we can ask things like “What would happen if a student shouted out when a teacher was talking to the class?” or “Have you ever seen a student be unkind to another student?”. Students are likely to be able to understand these questions and to answer from their own experience. There is a high chance that what they say will match reality and will tell us about what we are trying to find out about. In other words, student voice in this context has high validity.


Suppose we want to know about the quality of curriculum in the school. We might ask “What are you learning about at the moment, and how does what you have learned about previously help you to learn this?” But this is problematic. Year 8 may be learning about digestion, and their knowledge of cells from Year 7 is key to their understanding of this topic. Students could have learned about cells really well in Year 7, and be learning about digestion really well now in Year 8. If you ask them “What is the purpose of digestion?”, they’ll be able to answer it, at length, drawing on their understanding of cells to do so. It does not follow that they’ll be able to say “we learnt about cells in Year 7 and that helps me learn about digestion in Year 8”. They just don’t tend to think in those terms – and why should they? They’re learning science, not curriculum theory.



If our student voice shows that the students can’t answer questions like this, the only way to make them able to answer those questions is to teach them to answer them. Cue slides for every lesson: Today we’re learning X and it relates to Y that you did in Year Z. This is the new “memorise your target grade and what you need to do to improve ready for the inspectors”. What a depressing diet for a child. And what a devastating opportunity cost: sorry kids, instead of using this time to learn more science, you’ve got to learn about how your teachers have designed your curriculum and why. Curriculum design should not be the curriculum intent. No good curriculum sets out to teach students its own design principles. But if we are using students’ time in this way then curriculum theory becomes the de facto curriculum, and this robs our students of their entitlement.


Thus the question “How has what you have learned previously helped you to learn now?” has low validity for the construct of quality of education. A student may be able to answer really well because they have learned their “learning journey” instead of lots of wonderful knowledge about digestion – or they could be unable to answer because they’ve actually been learning that wonderful knowledge about digestion. In fact we could even be experiencing negative validity, where answers we perceive to indicate good quality of education are actually caused by the opposite.
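To make negative validity concrete, here is another toy simulation with entirely invented numbers. Suppose “coaching” is the share of lesson time a school spends rehearsing curriculum-design answers: it makes the student-voice answer sound better while eating into actual subject learning, the construct we really care about:

```python
import random
import statistics  # statistics.correlation requires Python 3.10+

random.seed(0)

# Invented model: coaching improves the student-voice answer (+0.7 per unit)
# but costs actual subject learning (-0.5 per unit), plus a little noise.
n_schools = 200
coaching = [random.random() for _ in range(n_schools)]  # 0 = none, 1 = heavy
actual_learning = [1.0 - 0.5 * c + random.gauss(0, 0.1) for c in coaching]
answer_quality = [0.2 + 0.7 * c + random.gauss(0, 0.1) for c in coaching]

r = statistics.correlation(answer_quality, actual_learning)
print(f"answer quality vs actual learning: r = {r:.2f}")
# Under these assumptions r comes out negative: the more impressive the
# answer, the worse the underlying quality of education.
```

The numbers are made up, but the mechanism is the one described above: the better the rehearsed answer, the more curriculum time was spent producing it.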




Conclusions

From the above come five key points we should remember as leaders:


1. Student voice can be a powerful way to gain qualitative insights into student experiences.

2. Many of the questions we would like to ask will give low or even negative validity: being able to answer them may indicate coaching at the opportunity cost of actually learning great curriculum.

3. In choosing our questions we must ask ourselves: what would students who have been learning lots of great maths/art/drama/etc. be able to answer? Often, this will be subject based: “Can you tell me about surds/impressionism/the fourth wall/etc.?”.

4. Internal assessments are a powerful way to gain insight as well – we should not neglect them just because Ofsted don’t use them.

5. As a method of assessment for quality of education, we should assume student voice has low reliability and therefore not draw firm conclusions based on student voice alone. Rather, we should treat it almost as a set of case studies, individual stories that may or may not reflect important truths or widespread patterns in the school.


As with most things in education, if we uncritically pursue surface features or methods we risk creating false idols, spiralling workload and a poorer education for our students. If we seek to build our understanding through engagement in research, critical evaluation and discussion as a profession, we are better placed to learn effectively about the complex systems that are our schools, and more likely to make better-informed decisions as leaders. Knowledge-building about our own schools requires an appropriate model for building that knowledge – it needs a proper epistemology. Assessment theory should be key as we seek to build an epistemology for school leadership.




