My sense is that people across the education system have yet to fully appreciate the implications of the slow-creep bell-curve hold that’s been applied to school outcomes. In recent conversations, folk from Ofqual have reiterated the view that, even in systems regarded as successful, year-on-year improvement might only be on the scale of 1% in terms of outcomes across a national cohort. The improvement in Maths GCSE outcomes of 0.7% A*-C last year is regarded as generous; certainly not overly harsh. To a great extent, this equates to a zero-sum scenario (more poetic than a 0.7%-sum scenario). If school A increases maths outcomes by 10% in a year, some school B must have seen maths outcomes go down by 9.3%, OR 9 schools must have gone down by 1% each (assuming same-sized cohorts). We can’t all be winners.
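The near-zero-sum arithmetic can be sketched in a few lines. The ten-school system and every figure in it are invented purely for illustration, not real data:

```python
# Toy model of the near-zero-sum constraint on national outcomes.
# All figures are invented for illustration, not real DfE data.

national_cohort = 10 * 200       # ten equal schools of 200 pupils each
national_gain = 0.007            # ~0.7% national improvement in A*-C

# Extra passes the whole system is 'allowed' this year.
extra_passes_available = national_cohort * national_gain   # 14 pupils

# If school A (200 pupils) improves by 10 percentage points,
# it claims 20 of those extra passes...
school_a_gain = 200 * 0.10

# ...so the other nine schools must collectively go backwards.
remaining_change = extra_passes_available - school_a_gain
print(remaining_change)          # -6.0 pupils, i.e. roughly -0.33% across 1,800 pupils
```

However the losses are distributed among the other nine schools, the total headroom is fixed: one big winner forces net decline elsewhere.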

The philosophy in the zero-sum, post-gaming era is that any rise in outcomes needs to reflect a genuine improvement in standards. It’s hard to argue with that. We all know that this wasn’t the case for the 20 years up to 2013/4. Even if schools were getting better, with students and teachers working hard and so on, the scale of improvements in that time was inflated; it wasn’t a true reflection of the actual scale of improvement in educational standards. I can think of lots of schools that experienced rapid improvement up to 2013/4 and have since fallen back down dramatically. They are not worse schools – they might even be better – it’s just that the outcomes are now more authentic; a closer representation of what students actually know and can do in academic terms, not the result of various misplaced equivalences, re-takes and coursework ‘interventions’.

A year ago I raged about this in my Nicky Morgan vs The Bell Curve blog. At the highest level, ministers and DfE folk have not appreciated the implications of what is happening. They still appear to possess the demented mindset where ‘below average = failing’, and they have yet to produce any model in which all schools can be above the threshold for coasting without returning to grade inflation. They are still talking in disparaging terms about Grades 1-4 even though, *by definition*, a high proportion of each cohort must get those grades, however hard they try. Despite the new, heavily stabilised regime – Speaking and Listening gone, coursework gone, equivalences gone – there remains a widespread notion that anything other than year-on-year improvement is a failure; it’s still believed that ‘rapid improvement’ should be attainable, i.e. that large rises in outcomes can be gained within a single year. The fact is that, in the zero-sum era, this is highly unlikely.

In biology, the concepts of limiting factors and saturation are well understood. In photosynthesis, plants can only absorb so much light or carbon dioxide; there is a limit to how much more photosynthesis you can generate by increasing light or CO2 levels. Below certain thresholds these variables are limiting factors but, beyond a point, they exist in excess. The same is true of vitamins in our diet. We all need a bit of vitamin C, but you can’t improve the functions it supports beyond a certain limit. You only need *enough* vitamin C to avoid deficiency; having more than you need doesn’t make you any healthier. The same must be true in the multi-variable world of education – if only we knew the thresholds where saturation kicks in! You can only give so much feedback; you can only run so many catch-up clinics and interventions; you can only reach a certain level of optimal syllabus/revision-time planning or give so many motivational assemblies; you can only improve teacher confidence in various areas of their practice to a certain degree within a given timeframe. In this context, we’re more likely to gain success by identifying and addressing the limiting factors – the things below their optimal threshold, the things holding us back – rather than banging away at doing ever more of the things we think ‘work’.
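The limiting-factor idea can be expressed as a toy model. The factors, thresholds and numbers below are all invented for illustration, not a real school-improvement model:

```python
# Toy model: the outcome is capped by whichever input is furthest below
# its saturation threshold - a crude analogue of limiting factors in
# photosynthesis. All factors and numbers are invented for illustration.

def saturating(level, threshold):
    """Benefit rises with the input level but flatlines at the threshold."""
    return min(level, threshold) / threshold

def school_outcome(feedback, revision_time, teacher_confidence):
    # The weakest (most limiting) factor sets the ceiling on the outcome.
    factors = [
        saturating(feedback, threshold=10),           # hours of feedback per term
        saturating(revision_time, threshold=20),      # planned revision hours
        saturating(teacher_confidence, threshold=5),  # CPD sessions attended
    ]
    return min(factors)

# Feedback is already saturated; piling on more of it doesn't help...
print(school_outcome(feedback=50, revision_time=8, teacher_confidence=5))   # 0.4

# ...whereas raising the limiting factor (revision time) does.
print(school_outcome(feedback=10, revision_time=20, teacher_confidence=5))  # 1.0
```

The design point is the `min()`: doing ever more of an already-saturated activity moves nothing, while lifting the one factor below its threshold moves everything.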

In the new era, schools that have already addressed the deficiencies, where all the low-hanging fruit has been picked (to quote a HT colleague), that have reached saturation levels in most of their endeavours, are going to find gains much harder to come by. Stable results for good schools (in raw and value-added terms) will actually look like a sawtooth, with year-on-year fluctuations up and down. Let’s be ready for that and understand the success that this really is. And let’s understand that, in this model, rapid improvement is only likely to happen where a) the school was performing significantly poorly in the first place, b) the cohort has changed significantly, or c) something dodgy is going on.

More positively, one outcome of this will be that chasing data outcomes will cease to be the be-all and end-all of defining success. As good schools produce their year-on-year fluctuations, they will start looking to all the other things we value but can’t measure in order to celebrate and define their success. That can only be a good thing.

I really hope your final paragraph comes to pass. I can unfortunately envisage a different narrative emerging based on a lack of understanding of what you are pointing out in this blog post.

Sadly, you could be right! We’ve got our work cut out.

I really like this concept of looking at the limiting factors for school improvement. In my experience, schools have always tried to focus on the next ‘big thing’ in education. It is usually something progressive with some logic behind it, but it never really makes a huge amount of difference to learning outcomes. To me this seems like a very limited and narrow focus. Being able to analyse where a school is doing something wrong across the whole of the school, and to set realistic targets to improve, seems like a much more productive step forward.

Dear Tom

I’m not sure the analysis in the first paragraph stands up to scrutiny.

I’ve used the August 2015 Joint Council published results as the basis of my calculations, to show how an individual school’s GCSE maths results can improve by 10% yet every other school’s maths results could still improve broadly in line with the national improvement rate. So here goes:

In 2015, 761,230 pupils sat GCSE Maths, with 63.3% of pupils obtaining grade A*-C, compared to 62.4% in 2014.

This represents an absolute increase of 6,851 pupils gaining grade A*-C.

So if school A has a cohort of 200 Y11 pupils taking GCSE Maths, then a 10% increase in the number of pupils gaining A*-C leads to an additional 20 pupils gaining grade A*-C.

As a result, there still remain 6,831 pupils to be spread across all the other schools – which is equivalent to a 0.89% improvement.

Indeed, using my analysis, approximately 340 schools with a Y11 cohort of 200 pupils could increase their GCSE Maths success by 10% without this having any impact on any other school’s pupils gaining A*-C.
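The arithmetic above can be reproduced in a short script, using only the 2015/2014 Joint Council percentages quoted in the comment:

```python
# Reproducing the worked example, using the August 2015 JCQ figures
# quoted in the comment above.

cohort_2015 = 761_230
rate_2015, rate_2014 = 0.633, 0.624

# Extra pupils gaining A*-C, assuming a same-sized cohort both years.
extra_passes = round(cohort_2015 * (rate_2015 - rate_2014))
print(extra_passes)                            # 6851

# School A: 200 pupils, a 10-percentage-point rise in A*-C.
school_a_extra = round(200 * 0.10)             # 20 pupils

# Headroom left for everyone else, as a share of the remaining cohort.
remaining = extra_passes - school_a_extra
print(remaining)                               # 6831
print(round(100 * remaining / (cohort_2015 - 200), 3))  # 0.898 (the ~0.89%)

# How many same-sized schools could each take a 20-pupil gain
# before the national headroom is used up:
print(extra_passes // school_a_extra)          # 342 (the "approximately 340")
```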

I’d be more than happy for my analysis to be subject to further scrutiny.

Gary Jones

My example is meant to be an illustration of the average effect, not literally the case for one school. Your example actually reinforces the point. Only ~7,000 extra C+ grades is about 2 or 3 per school. Your example requires ~3,500 lots of ~200 students not to improve. That’s not going to happen. The point you’ve underlined is that the scope to improve is limited, so it has to be shared. There are infinite permutations for how that could happen, but the near-zero-sum effect is real enough. Within that, a few massive swings would be possible – but on average, winners here (i.e. those above the small average % gains) require losers elsewhere.
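The per-school averages in this reply can be checked against the national figures from the earlier comment, assuming (as before) that every school has a Y11 cohort of 200:

```python
# Spreading the national headroom evenly, using the figures from the
# exchange above and the simplifying assumption of 200-pupil cohorts.

extra_passes = 6_851          # extra A*-C grades nationally (2015 vs 2014)
cohort = 761_230
schools = cohort // 200       # if every school had a Y11 of 200 pupils

print(schools)                           # 3806 equal-sized schools
print(round(extra_passes / schools, 1))  # 1.8 extra passes per school ("2 or 3")

# For ~340 schools to each bank a 20-pupil gain, the rest must stay flat -
# the permutations vary, but the total headroom doesn't.
print(schools - extra_passes // 20)      # 3464 schools left with no gain
```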

Dear Tom

I think we can agree that, where there is a zero-sum game, an increase in the ‘share of the cake’ for one party will leave the remaining share smaller. In that situation, a version of your worked example could be used.

However, the situation you describe is not a zero-sum game, and as such the example you use to illustrate your point is mathematically incorrect, as shown by my worked example. I get the point you are trying to make, though in trying to keep the example simple to follow, it has lost its accuracy. In the context of your quote, it is entirely possible for everyone to improve – and this is where I think we can agree – though not to the same extent. So in this context, we may have a number of ‘relative losers’ rather than ‘absolute losers’.

I hope this makes sense

Gary

I think you’ve missed the gist of what I’ve said. Of course it is mathematically possible for everyone to improve – e.g. with an average of 0.89% each. The point is that your ‘entirely possible’ scenario is actually extremely unlikely. Not all schools can even improve by 1%, never mind 2% or more, and these are modest gains. If we modelled all schools as the same size, half of schools could only improve by 2% or more at the expense of the other half. In practice there is obviously a greater range of fluctuations. Your scenario is almost certain not to happen.

So theoretically we could all flood the system with entrants who will fail and lower the grade boundaries all round! Would you like me to enter our nursery class for GCSE Maths?
