For a number of years, in many schools, assessment practice has been driven by one key question: ‘What does Ofsted expect us to do?’ And more often than not, this has led to the word ‘assessment’ being seen as synonymous with ‘data’.

The new Ofsted Education Inspection Framework represents some real shifts (for the better) in the inspectorate’s approach to assessment and opens the door for schools to take some fundamental steps forward in developing assessment practice.

There are a number of key paragraphs in the new School Inspection Handbook that relate to assessment. I shall explore a few that I think are particularly significant.

Firstly, regarding the implementation of the curriculum: “The most important factors in how, and how effectively, the curriculum is taught and assessed are that: teachers check pupils’ understanding effectively, and identify and correct misunderstandings… teachers use assessment to check pupils’ understanding in order to inform teaching.”

In relation to the school’s use of assessment, the handbook states: “When used effectively, assessment helps pupils to embed knowledge and use it fluently, and assists teachers in producing clear next steps for pupils.”

Looking at these points, we can see a clear emphasis on what is going on in the classroom. So we need to ask ourselves: how well are teachers using formative assessment techniques to:

  • find out what children already know so that they can build on this
  • unpick children’s misconceptions
  • check learning within (as well as at the end of) lessons
  • provide effective feedback to move learning forwards

In addition to these points about classroom assessment, the handbook says that, to gain evidence about curriculum implementation, inspectors will use “discussions with curriculum and subject leaders and teachers about… their view of how those pupils are progressing through the curriculum”.

So we also need to ask this additional question: how do teachers (and subject leaders) evaluate whether children are where they should be in their learning journey through the school curriculum?

In answering that last question, though, we need to take note of the key message that data collection and analysis should not create excessive or unnecessary workload. It’s not that practices such as tracking pupil attainment over time necessarily need to disappear altogether, but they must be proportionate and purposeful. More on this later, but for now let’s keep the focus on the formative.

Formative assessment

The first four of the above questions are all about formative assessment, while the fifth focuses on the summative. For me, that ratio of 4:1, formative to summative, is a good guide to how we should be investing our assessment energy: overwhelmingly focusing on the formative, as that is where the greatest benefits to learning will lie.

There is plenty of evidence that formative assessment, or ‘assessment for learning’ (AfL) if you prefer, can have a powerful impact on learning when its core principles are at the heart of teachers’ practice.

But AfL can sometimes be misinterpreted when it comes to classroom practice or school policy, and can end up being distilled into a set of strategies or rules that might be somewhat divorced from the core principles. As an example, let me focus on one area: feedback.

Effective feedback, whether it be written or verbal, should move the learning forwards. It needs to identify what has been done well, what needs improving and how that improvement can be achieved.

In some schools, marking/feedback policies might dictate particular practices, such as using particular colours of highlighter pens, or giving ‘three stars and a wish’ comments. But I would argue that all of those things are peripheral. The most important thing to consider is the impact of the feedback: has it caused thinking to take place?

The content of the feedback is crucial, as is the timing – it should be focused on the intended purpose of the learning, and the more immediate the better. And it’s important to note that feedback might need to look different according to the learner.

So let’s not worry unduly about consistency in terms of what things look like. School policy can, and should, dictate consistency in terms of the principles to which all staff are expected to adhere, but should allow the flexibility for teachers to use their professional knowledge of the child and give the feedback that is going to be of greatest benefit to that individual at that point in time.

Feedback to a novice learner may very well look quite different to that which is given to a learner with a more advanced understanding of a particular concept. For some learners, a coaching approach might be hugely beneficial, although a learner already needs a pretty good grasp of where they are in their learning and where they want to get to for a coaching approach to be effective.

Prior knowledge

To return to my earlier questions, we also need effective techniques to gauge children’s understanding of subject content before teaching a unit of work, during the teaching and at the end.

Knowledge organisers, concept cartoons, mind maps, true/false quizzes and so on can all be useful starting points for class discussion, illuminating areas of the subject where knowledge is already secure, where misconceptions lie and where knowledge is lacking. Such techniques are particularly useful when there is a spiral curriculum in place.

For example, children will have learnt about the topic of ‘animals including humans’ during KS1, and will likely revisit this topic at various points in KS2. It will be important for teachers to ascertain what has been remembered from the previous teaching when planning the next unit.

You can’t rely on a simple record of what was taught, or even what appeared to have been understood, in a previous year as a guarantee that those concepts made it into the long-term memories of all the children. Good classroom assessment techniques play a vital role here.

Hinge point

During teaching, the idea of the ‘hinge-point question’, explained by Dylan Wiliam in brief video clips, is a hugely powerful strategy to enable teachers to make evidence-based decisions about the direction of the lesson.

Time spent devising really good hinge-point questions to use in lessons is particularly worthwhile. As Dylan Wiliam explains, the trick to devising a good multiple choice hinge question is to consider the possible misconceptions that different learners may have. The wrong answers should be those typically given when common misconceptions are held, and the question should be designed so that it is extremely unlikely that someone could arrive at the right answer but for the wrong reason.

In terms of end-of-unit assessment – the means by which teachers seek to determine how well the students have learned the material – thought needs to be given to both:

  • the particular areas of knowledge, skills and concepts that we wish to assess; and
  • the range of approaches that we might use as vehicles for the children to demonstrate their learning.

On the first of those points, curriculum mapping is essential. Across your school, is there a clear map in place showing in which year groups you expect children to learn key concepts?

Are there particular milestones, in terms of skill progression or areas of knowledge?

For example, which key skills do you expect children in Y4 to develop in art (and how does this build upon their Y3 learning)? What are the expectations of a Y1 child in geography? These will vary from school to school.

Tim Oates encourages us to teach fewer things in greater depth, and each school will make different choices about what to prioritise in its curriculum.

The curriculum choices you make will determine the key milestones upon which your summative assessment should focus. They will probably also determine the choices you make about what data to collect.
In terms of the range of approaches, the only limit is your imagination. Whether you ask your children to produce a dramatic presentation, a cartoon, a poem, a website, a poster, a podcast, an assembly, a piece of writing or even to take a good old-fashioned test, one thing we do need to ensure is that we are maintaining the integrity of the subject.

Cross-curricular work can be wonderful, but we need to be clear about the focus of our assessment. If we ask our children to write, for example, a diary entry or letter from the perspective of a particular historical character, are we assessing it from a literacy perspective, or looking for accurate historical knowledge, or both?

Supporting role

There is a real opportunity here for us to refocus our thinking on the assessment that goes on in our schools. Its place is to support and inform our teaching of the curriculum, not to drive it. Our curriculum should not be determined by what’s going to come up on a test. The curriculum must come first and should be the master. Assessment should always be the servant.

Summative assessment

So what of summative assessment and the extent to which subject leaders in schools need data to be able to demonstrate the progress pupils are making across the school?

The School Inspection Handbook indicates that inspectors will consider whether data collections are “proportionate, represent an efficient use of school resources, and are sustainable for staff”.

Crucially, any data collection should serve a purpose – it should inform clear actions, for example indicating areas of the curriculum that require greater teaching focus, or a professional development need, or groups of pupils that need further support, and so on.

I would encourage school leaders to think about their current data collection practice and carry out a quick cost/benefit analysis. Consider how much teacher time and energy goes into each data collection, and consider what benefits they bring. Do the benefits justify the costs?

The handbook mentions that inspectors will evaluate how assessment is used to support teaching, but that they will also be looking to see that assessment practice is not substantially increasing teacher workload. The Making Data Work report recommends a maximum of three ‘data drops’ per year.

It is particularly important to note the following, from the handbook:

  • Inspectors will not look at non-statutory internal progress and attainment data.
  • Inspectors will be interested in the conclusions drawn and actions taken from any internal assessment information, but they will not examine or verify that information first hand.

This is significant. In the past, data has been used for multiple (sometimes conflicting) purposes. But if the internal data is exactly that – internal – we can encourage teachers to be brutally honest in their assessments without worrying how it might look to others.

Indeed, we need this brutal honesty because the fundamental point of the assessment is to provide an accurate basis for decisions.

If boys’ standards in maths in Y4 are slipping a bit, or girls’ progress in reading across the school is not as strong as you would hope, then leaders need to know about this so they can consider what actions may be required – resourcing, training, targeted support, and so on.

Of course, understanding how well a subject is taught across the school goes far deeper than just analysing numerical data. Observing the teaching, talking to the pupils about their learning and looking at the work they produce are all essential.

Ben Fuller is lead assessment adviser at Herts for Learning.