
Reception baseline assessment – History, controversy and analysis


We delve into the inception of the Reception baseline assessment, its implications for teachers and children and the government’s 2016 U-turn…

by Teachwire

In a landscape where educational policies often raise more questions than answers, we dissect the Reception baseline assessment and its practical significance for educational outcomes…

What is Reception baseline assessment?

First introduced in September 2015, baseline assessment is a school-level progress measure from Reception to Year 6. Teachers use it to assess four- and five-year-olds within the first few weeks of them entering Reception.

The Reception baseline assessment is not a test and there is no pass mark. It’s a one-to-one, practitioner-led assessment of a child’s maths, language, communication and literacy skills when they start school.

Children don’t need to prepare for it in any way and it takes about 20 minutes. You need to administer the assessment within the first six weeks of Reception.

Teachers receive a series of short, narrative statements that tell them how their pupils performed in the assessment. You can use these to inform your teaching within the first term.

“The Reception baseline assessment is not a test and there is no pass mark”

According to the DfE, information about progress allows a fairer measure of the quality of education provided by primary schools, taking into account pupils’ prior attainment.

Before this, the government took the baseline for progress measures from KS1 assessments. This didn’t take into account the work that schools did with pupils between Reception and Year 2.

Initial introduction

Initially, schools could choose between short observation-based and computer-based assessments from one of three providers (Early Excellence, CEM and NFER). Baseline assessment was not mandatory at this stage.

Opposition to baseline assessment

The move to introduce baseline assessments as a formal accountability measure encountered considerable opposition.

In February 2016 the NUT and ATL unions published the results of a joint research project. This characterised baseline assessments as ‘problematic at best, and potentially damaging at worst’.

The research drew on five anonymised school case studies and an online survey completed by 1,131 teachers, leaders and support staff.

The report criticised baseline assessment on a number of fronts, including:

  • its accuracy
  • its suitability for all students
  • the time it took to complete
  • the timing of the assessment
  • training costs

2016 government U-turn

The government’s original plan for baseline assessments would have seen observation- or computer-based assessment used to assign each child entering Reception a single score.

This would then be measured against a subsequent score at the end of Y6 to gauge the progress each child had made throughout primary school, and would serve as the only officially recognised means by which schools could demonstrate ‘value added’.

A Reception baseline comparability study by the Standards and Testing Agency examined whether the three baseline assessments offered by a trio of approved providers (Early Excellence, CEM and NFER) were comparable enough to serve as a starting point for measuring children’s subsequent progress.

Based on a sample of 4,690 pupils spread across 122 schools, the study concluded that, ‘There is insufficient comparability between the 3 reception baseline assessments to enable them to be used in the accountability system.’

The study’s findings prompted the DfE to confirm that, ‘As a result, the results cannot be used as the baseline for progress measures, as it would be inappropriate and unfair to schools.’

Reception baseline assessment 2021 developments

The Standards and Testing Agency (STA) contracted the National Foundation for Educational Research (NFER) to develop and deliver a new Reception baseline assessment.

The Reception baseline assessment became statutory in England in September 2021. Schools were able to familiarise themselves with it from autumn 2019 onwards.


2025 onwards

From September 2025, pupils will need to respond to some of the baseline assessment questions on a touchscreen device.

The government will publish the first school-level progress measures calculated from the Reception baseline assessment in the summer of 2028. This is when pupils who entered Reception in the academic year 2021/2022 reach the end of KS2.


Does a seven-year progress measure tell us anything?

Richard Selfridge vanishes down the rabbit hole of school progress measures

To understand why a seven-year progress measure is largely meaningless, consider the way in which these measures will be calculated. Children will be given a single numerical score based on assessments made in the first few weeks of Reception.

These scores will vary wildly, based as they are on very small children, often at very different levels of development, with a wide spread of exposure to schooling prior to arriving in the classroom.

Seven years later, using endpoint scores from high-stakes tests that suffer from all of the well-known issues which distort any accountability measure, children will be compared to their national peers who had similar starting points. Or at least, the children who are still in the state school system will be compared in this way.

And they will only be compared to children who have both starting and ending scores, not those who have entered or left the country, moved into or out of state schooling or simply whose data has been lost or corrupted.
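
The exact formula is not set out here, but value-added progress measures of this kind typically compare each pupil’s KS2 outcome with the average outcome of pupils nationally who started from a similar baseline, then average those differences across the school. The short Python sketch below illustrates only that general shape; the scores, the national lookup table and the pupil names are entirely hypothetical.

```python
# Illustrative sketch only: a simplified "value added" calculation, assuming a
# measure that compares each pupil's KS2 score with the national average score
# of pupils who had a similar Reception baseline. All figures are hypothetical.

# (pupil, baseline_score, ks2_score) for pupils with BOTH data points recorded
pupils = [
    ("A", 18, 104),
    ("B", 18, 99),
    ("C", 25, 110),
    ("D", 25, 101),
]

# Hypothetical national average KS2 outcome for each baseline score
national_average_ks2 = {18: 101, 25: 106}

progress_scores = []
for name, baseline, ks2 in pupils:
    expected = national_average_ks2[baseline]
    value_added = ks2 - expected  # positive = ahead of similar starting points
    progress_scores.append(value_added)
    print(f"Pupil {name}: baseline {baseline}, KS2 {ks2}, progress {value_added:+}")

# A school-level measure would then be an average of these pupil-level figures,
# so noise at either end of the seven-year gap feeds straight into the result.
school_progress = sum(progress_scores) / len(progress_scores)
print(f"School progress measure: {school_progress:+.1f}")
```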

Complicated factors

In addition, no one seems to know how to cater for the difference between schools with and without nurseries, or those whose children predominantly attend on-site but independent pre-schools.

As for junior schools? Once again, education turns out to be somewhat more complicated than those trying to manage it might have guessed.

Given the noise at both ends of this measurement, progress measures will be largely composed of error and little else.

“Education turns out to be somewhat more complicated than those trying to manage it might have guessed”

They won’t tell schools much, if anything, as to how to help the children in their charge, and they certainly won’t help parents to decide which school to choose for their Reception-age children – the two most common justifications for collecting and analysing data in this way.

Time lag

At best, by the time schools and prospective new parents get the information, a seven-year progress measure might possibly tell you something about the cohort which started eight years previously.

This time lag is worth noting, because it is longer than the average tenure of a primary headteacher. Where progress scores don’t show a school in a favourable light, any school leader worth their salt will be able to quietly distance themselves from previous regimes, or to rubbish badly-implemented assessments at either end of the accountability process.

All of this, on balance, might be a good thing. Having fuzzy data measured at two points with a huge time gap in between seems to suit both the government and schools.

Ministers can claim that they are using objective measures to hold schools to account, and schools can get on with helping young children to learn, knowing that the numbers being generated say very little about what they actually do.

Richard Selfridge is a primary teacher, Driver Youth Trust programme facilitator and writer on the use of data in education. Find him at icingonthecakeblog.weebly.com and follow him on Twitter at @databusting.


Why there’s no shortcut to establishing children’s baselines


Policy-makers stubbornly refuse to acknowledge the reality of young children, and how this relates to the process and content of assessment, argues Jan Dubiel…

The idea of a ‘simple task-based assessment’ we can use to measure progress is a seductive solution. However, it belies our knowledge of the reality of children of Reception age, their perception and understanding of the world and how this manifests itself.

Testing – and by that I mean an assessment that relies on a response to a preset question to which there is a right answer – may or may not be an effective methodology for older children. It is quite clearly not for children in the early childhood age bracket.

“The idea of a ‘simple task-based assessment’ we can use to measure progress is a seductive solution”

When we subject older children to a test scenario, they are aware of how the process works. Most importantly, they know there is a ‘right answer’ and that the purpose of the test is to get as many right answers as possible.

We learn the rituals of testing: what the ‘tester’ or ‘examiner’ wants us to do in order to get that right answer, how to ‘second guess’ what they are looking for and provide it for them.

Faulty data

Young children do not have the knowledge of the rituals of the test process, generally aren’t aware that there is a ‘right’ answer, and often respond to the questions with their own unconventional or creatively uninformed perspective. This creates data that doesn’t necessarily demonstrate what they really know.

As long as policy-makers deflect from this reality because the nature of young children is untidy, unpredictable and takes time, skill, patience and reflection to ascertain, these tensions and conflicts will continue.

As long as they place statistical convenience above accurate, albeit more challenging, processes of assessment, there will always be a nagging antagonism between the necessary creation of accountability data and EYFS specialists and practitioners.

Although not convenient, effective assessment for accountability must include all the aspects that contribute to developmental trajectories and likely outcomes.

Although by no means a perfect process, teacher-led observational assessment, properly supported and moderated, does provide accurate and reliable information that can be used to effectively establish starting points from which accountability can be judged.

Jan Dubiel is National Director at Early Excellence.
