Supporting student success in Higher Education

This blog piece was originally published on the Transforming Transitions site (link) (June 2018)


When university students underperform or drop out, a typical response is to question whether the individuals or groups who are struggling have had enough support with their studies. Are there systems in place to help the students who need it? And are these systems available and accessible to all?

One of the key findings to emerge from the interviews we undertook with HE students was how aware they were of the support on offer at university. They knew that they had a personal tutor who they could talk to and that lecturers offered office hours and drop-in sessions. They also knew that there were services to help with a range of academic practice skills as well as social and welfare issues.

But frequently those who had reported challenges during their first year told us that they did not access this support. Some expressed regret at this, believing that if they had used the help on offer, their end-of-year performance might have been better. This finding is important as it prompts us to ask why students choose not to utilise this support.

The students we talked to often struggled to articulate clearly what had prevented them from accessing support, whether that be meetings with lecturers or tutors, attending sessions run to improve academic literacy or numeracy, or participating in mentoring schemes. What did emerge though was a sense of a stigma attached to active engagement with support opportunities at university, a sense of embarrassment at having to ask for help. Some also indicated that a lack of confidence prevented them from seeking help when they needed it, echoing the views of some HE lecturers too.

Crucially, if we believe that the support on offer to students is of value, these findings encourage us to think about how universities can develop and improve their existing systems to ensure that those who most need it actually access it. Might there even be a need for universities to compel engagement with support for groups we know struggle with this?

These findings have been influential in informing the interventions designed as part of the second phase of this project. One example includes the implementation of a more rigorous personal tutoring system, including additional meetings for students, increased guidance for tutors and new methods for monitoring engagement. Another intervention involves the creation of an online module targeted at BTEC students, accessible from the pre-enrolment stage, and designed to offer support with a range of academic practice skills.

A key focus in developing these has been to consider how we can encourage increased student participation and engagement. Does making an aspect of support mandatory ensure that it happens? Do information or incentive strategies help to encourage support take-up? And what does the targeting of support mean for inclusion and equality of opportunity?

These are just some of the challenges and tensions that are being negotiated through the current implementation and evaluation stages of this project. We’ll be following up soon with further blogs on how the interventions have worked in practice.


School progress measures are a missed opportunity for a fairer and more informative approach

This blog piece was originally published on The University of Birmingham’s Social Sciences Blog (link) (May, 2018)


The Progress 8 measures of school performance compare pupils’ GCSE results across 8 subjects to those of other pupils with the same primary school SATs results. There are many reasons behind the differences we see in the scores, and many of them have nothing to do with school quality. The scores tell us surprisingly little about school performance.

It is easy to find fault with performance measures, and there is certainly plenty of ammunition for doing so in the case of the Progress 8 measures. However, in my research, I work towards improving education policy and practice, rather than merely criticising the status quo. So here are five ways to reform the school Progress measures:

1. Fix the ability bias:

Progress measures help us see pupil progress relative to similar-ability peers in other schools. But, as my recent research has shown, error in the measures used to estimate ability means that a sizable ‘ability bias’ remains, giving a particularly large advantage to selective grammar schools. Ability bias can be rectified by looking at average school starting points as well as the starting points for individual pupils (or by more sophisticated methods which estimate and correct for measurement unreliability).
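
As a rough sketch of the simpler approach, the hypothetical example below adds the school’s average starting point to a pupil-level model alongside each pupil’s own prior attainment. The file and column names (pupils.csv, ks2, ks4, school_id) are placeholders rather than anything from the National Pupil Database, and the model is deliberately simplified:

```python
# Minimal sketch (illustrative only): include school-average prior attainment
# alongside individual prior attainment when estimating value-added.
# File and column names are assumed placeholders.
import pandas as pd
import statsmodels.formula.api as smf

pupils = pd.read_csv("pupils.csv")  # hypothetical pupil-level data: ks2, ks4, school_id

# The school's "average starting point": mean KS2 score of its intake
pupils["school_mean_ks2"] = pupils.groupby("school_id")["ks2"].transform("mean")

# Pupil-only correction (analogous in spirit to Progress 8)
pupil_only = smf.ols("ks4 ~ ks2", data=pupils).fit()

# Correction that also adjusts for the school's average starting point
with_school_mean = smf.ols("ks4 ~ ks2 + school_mean_ks2", data=pupils).fit()

# School-level value-added under each model: mean pupil residual per school
va_pupil_only = pupil_only.resid.groupby(pupils["school_id"]).mean()
va_adjusted = with_school_mean.resid.groupby(pupils["school_id"]).mean()
```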

2. Replace or supplement Progress 8 with a measure that takes context into account:

My paper, published when the first Progress measures were introduced in 2016, found that around a third of the variation in Progress 8 can be accounted for by a small number of school intake characteristics such as the proportion of pupils on free school meals and/or with English as an additional language (EAL). Using more sophisticated measures would reveal further predictable differences unrelated to school quality (e.g. considering EAL groups, the number of years pupils are on Free School Meals, or factors such as parental education). We must guard against measures excusing lower standards for disadvantaged groups. But not providing contextualised estimates leaves us with little more than impressions of how interacting contextual factors influence outcomes, and leaves no one in a position to make data-informed judgements.
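
To make this concrete, here is a minimal sketch of a contextualised comparison at school level. The file and column names (schools.csv, progress8, pct_fsm, pct_eal) are assumed placeholders, not the DfE’s actual variables:

```python
# Illustrative sketch: how much of the variation in school scores is
# predicted by intake context, and a simple contextualised score.
import pandas as pd
import statsmodels.formula.api as smf

schools = pd.read_csv("schools.csv")  # hypothetical school-level data

model = smf.ols("progress8 ~ pct_fsm + pct_eal", data=schools).fit()
print(f"Variation accounted for by these intake characteristics: R^2 = {model.rsquared:.2f}")

# A simple contextualised score: the part of Progress 8 not predicted by intake context
schools["contextualised_p8"] = model.resid
```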

3. Create measures which smooth volatility over several years:

Progress 8 is highly affected by unreliable assessments and statistical noise. Only a tiny portion of Progress 8 variation is likely to be due to differences in school performance. A large body of research has revealed very low rates of stability for progress (value-added) measures over time. Judging a school using a single year’s data is like judging my golfing ability from a single hole. A school performance measure based on a 2- or 3-year rolling average would smooth out volatility in the measure and discourage short-termism.
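
A rolling average of this kind is simple to calculate. The sketch below uses invented scores for two hypothetical schools purely to illustrate the mechanics:

```python
# Sketch: a 3-year rolling average of an annual school progress score.
import pandas as pd

scores = pd.DataFrame({
    "school_id": ["A", "A", "A", "B", "B", "B"],
    "year":      [2016, 2017, 2018, 2016, 2017, 2018],
    "progress8": [0.35, -0.10, 0.15, -0.40, -0.05, -0.30],  # invented values
})

scores = scores.sort_values(["school_id", "year"])
scores["p8_rolling_3yr"] = (
    scores.groupby("school_id")["progress8"]
          .transform(lambda s: s.rolling(window=3, min_periods=2).mean())
)
print(scores)
```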

4. Show the spread, not the average:

There is no need to present a single score rather than, for example, the proportion of children in each of five progress bands, from high to low. Using a single score means that, against the intentions of the measure, the scores of individual students can be masked by the overall average and downplayed. Similarly, the measure attempts to summarise school performance across all subjects using a single number. Schools have strengths and weaknesses, and correlations between performance across subjects are moderate at best.
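
As a simple illustration of reporting the spread, the sketch below sorts invented pupil progress scores into five bands; the cut-points are arbitrary placeholders rather than proposed thresholds:

```python
# Sketch: report the proportion of pupils in each of five progress bands
# instead of a single school average.
import pandas as pd

pupil_progress = pd.Series([-1.2, -0.6, -0.1, 0.0, 0.3, 0.7, 1.1, -0.3, 0.5, 0.9])

bands = pd.cut(
    pupil_progress,
    bins=[-float("inf"), -1.0, -0.5, 0.5, 1.0, float("inf")],  # arbitrary cut-points
    labels=["well below", "below", "average", "above", "well above"],
)
print(bands.value_counts(normalize=True).sort_index())
```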

5. Give health warnings and support for interpretation:

Publishing the Progress measures in ‘school performance tables’ and inviting parents to ‘compare school performance’ does little to encourage and support parents to consider the numerous reasons why the scores are not reflective of school performance. Experts have called for measures to be accompanied by prominent ‘health warnings’ if published. Confidence intervals are not enough. The DfE should issue and prominently display guidance to discourage too much weight being placed on the measures.

Researchers in this field have worked hard to make the limitations of the Progress measures known. The above recommendations chime with the many studies and professional groups calling for change.

Trustworthy measures of school performance are currently not a realistic prospect. The only way I can see them being informative is through use in a professional context, alongside many other sources of evidence – and even then I have my doubts.

Time for an honest debate about grammar schools

This blog piece was co-authored with Rebecca Morris (@BeckyM1983). It was originally published on The Conversation (link) (July, 2016)


With Theresa May as the new prime minister at the helm of the Conservatives, speculation is already mounting about whether her support for a new academically selective grammar school in her own constituency will translate into national educational policy. This will be a big question for her newly appointed secretary of state for education, Justine Greening.

The debate between those who support the reintroduction of grammar schools and those who would like them abolished is a longstanding one with no foreseeable end in sight. In 1998 the Labour prime minister Tony Blair attempted to draw a line under the issue by preventing the creation of any new selective schools while allowing for the maintenance of existing grammar schools in England. Before becoming prime minister, David Cameron dismissed Tory MPs angry at his party’s withdrawal of support for grammar schools by calling the debate “entirely pointless”.

This is an issue that continues to resurface and recently became even more pressing with the government’s decision in October 2015 to allow a school in Tonbridge, Kent, to open up an annexe in Sevenoaks, ten miles away. This decision has led to the very real possibility of existing grammar schools applying for similar expansions. Whether or not more will be given permission to do so in the years ahead, many existing grammar schools are currently expanding their intakes.

This latest resurgence of the debate is playing out in an educational landscape which has been radically reformed since 2010. Old arguments about what type of school the government should favour have little traction or meaning in an education system deliberately set up around the principles of autonomy, diversity and choice. Meanwhile, much of the debate continues to ignore and distort the bodies of evidence on crucial issues such as the effectiveness of selection, fair access and social mobility.

It is against this backdrop that we completed a recent review looking at how reforms to the education system affect the grammar school debate and examined the evidence underpinning arguments on both sides.

Old debates, new system

Grammar schools have been re-positioning themselves within the newly-reformed landscape. Notably, 85% of grammar schools have now become academies – giving them more autonomy. Turning a grammar school into an academy is now literally a tick-box exercise. Adopting a legal status that ostensibly keeps the state at arm’s length while granting autonomy over curriculum and admissions policies has had a strong appeal for grammar schools.

New potential roles for grammar schools have also opened up. We are seeing the emergence of new structures and forms of collaboration, such as multi-academy trusts, federations and other partnerships. Supporters argue that grammar schools can play a positive role within these new structures, offering leadership within the system. A notable example is the King Edward VI foundation in Birmingham, which in 2010 took over the poorly-performing Sheldon Heath Community Arts College. There are also current proposals for the foundation to become a multi-academy trust.

[Image: Construction of the new annexe for a grammar school in Sevenoaks. Gareth Fuller/PA Archive]

The Cameron government’s quasi-market approach to making education policy – and that of New Labour before it – has favoured looser and often overlapping structures that allow for a diversity of provision and responsiveness to demand. The central focus is on standards rather than a state-approved blueprint for all schools. This involves intervention where standards are low and expansion where standards are high and there is demand for places.

This policy approach neither supports nor opposes grammar schools – it tries to sidestep the question entirely, leaving many concerns about fair access and the impact of academic selection unanswered.

Fair access

A disproportionately small number of disadvantaged pupils attend grammar schools. Contrary to the claims of grammar school proponents, the evidence shows that these disparities in intake are not entirely accounted for by the fact that grammar schools are located in more affluent areas nor by their high-attaining intake.

Yet grammar schools (and academies) are in a position to use their control over admissions policies and application procedures to seek more balanced intakes and fairer access should they wish to do so. Whether or not they have the ability or inclination remains to be seen.

The issue of school admissions is a good example of where the public debate has failed to keep pace with the realities of the system. Previous research by the Sutton Trust found that some of the most socially selective schools in the country are comprehensive schools, at least in name.

Pitting comprehensive schools against grammar schools therefore only loosely grasps the issue of social selectivity. The emphasis on school types distracts from the larger issue: the content of the school admissions code and how to ensure compliance with it. If more balanced school intakes are desired, the focus should be on rules around admissions and permissible over-subscription criteria for all schools.

What are grammar schools the answer to?

Much of the current debate is predicated on the superior effectiveness of grammar schools. But the evidence we have reviewed suggests that the academic benefit of attending a grammar school is relatively small. Even these estimates are likely to be inflated by differences in intake that are not taken into account in the statistics.

Evidence on selection, both as part of the education system itself and within schools through setting or streaming, suggests there is little overall benefit to children’s academic achievement. The overall effect is, at best, zero-sum and most likely negative, with higher-attaining pupils benefiting at the expense of lower-attaining pupils, leading to an increase in inequality. The question remains whether that is a price worth paying.

Why new school performance tables tell us very little about school performance

This blog piece was originally published on The Conversation (link) (January, 2017)


The latest performance tables for secondary and primary schools in England have been released – with parents and educators alike looking to the tables to understand and compare schools in their area.

Schools will also be keen to see if they have met a new set of national standards set by the government. These new standards now include “progress” measures, which are a type of “value-added measure”. These compare pupils’ results with those of other pupils who got the same exam scores at the end of primary school.

Previously, secondary schools were rated mainly by raw GCSE results. This was based on the number of pupils getting five A to C GCSEs. But because GCSE results are strongly linked to how well pupils perform in primary school, these previous performance tables tended to tell us more about school intakes than about actual performance. So under the new measures, schools are judged by how much progress students make compared to other pupils of a similar ability.
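
To illustrate the basic value-added idea, here is a minimal sketch in which each pupil’s result is compared with the average for pupils with the same prior attainment, and the school is scored on the mean difference. The column names are assumed placeholders and the calculation is deliberately simplified relative to the published Progress 8 methodology:

```python
# Sketch of a basic value-added ("progress") calculation.
import pandas as pd

pupils = pd.read_csv("pupils.csv")  # hypothetical columns: school_id, ks2_group, ks4_score

# Expected KS4 score: the national average for pupils in the same prior-attainment group
pupils["expected_ks4"] = pupils.groupby("ks2_group")["ks4_score"].transform("mean")

# Pupil-level progress: actual result minus expected result
pupils["progress"] = pupils["ks4_score"] - pupils["expected_ks4"]

# School-level progress: the average over the school's pupils
school_progress = pupils.groupby("school_id")["progress"].mean()
print(school_progress.sort_values().head())
```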

This means that it is now easier to identify schools that achieve good results despite low starting points, as well as schools whose very able students make relatively little progress compared with able pupils at other schools.

But even with these fairer headline measures, the tables still tell us relatively little about school performance. This is because there are serious problems with the use of these types of “value-added measures” to judge school performance – as my new research shows. I have outlined the main issues below:

Intake biases

Taking pupils’ starting points into account when judging school performance is a step in the right direction, because this means that schools are held accountable for the progress pupils make while at the school. It also focuses schools’ efforts on all pupils making progress rather than just those on the C/D grade borderline which was so crucial for success in the previous measure.

But school intakes differ by more than their prior exam results. My study finds that over a third of the variation in the new secondary school scores can be accounted for by a small number of factors such as the number of disadvantaged pupils at a school, or pupils at the school for whom English is not their first language. This means the new measure is still some way off “levelling the playing field” when comparing school performance.


In my research, I examined how much school scores would change if these differences in context were taken into account. While schools with a “typical” intake of pupils may be largely unaffected, schools working in the most or least challenging areas could see their scores shifting dramatically. I found this could be by as much as an average of five GCSE grades per pupil across their best eight subjects. And these are just the “biases” we know about and have measures for.

Unstable over time

My research also replicated previous research which found that secondary school performance is only moderately “stable” over time when looking at relative progress. This can be seen in the fact that less than a quarter of the variation in school scores can be accounted for by school performance three years earlier. I also extended this to primary school level where I found stability to be lower still.

The recent “value-added” progress measures are slightly more stable than the former “contextualised” measure – which took many pupil characteristics as well as previous exam results into account. But given “biases” relating to intakes, such as strong links with pupil disadvantage, higher stability is probably not a good thing and most likely reflects differences in school intakes. The real test is whether the measure is stable when these “predictable biases” are removed.

Poorly reflect range of pupils

League tables by their very nature give the scores for a single group in a single year. This means the performance of the year group that left the school last year (as given in the performance tables) reveals very little about the performance of other year groups – and my research supports this. I looked at pupils in years three to nine – ages seven to 14 – to examine the performance of different year groups in the same school at a given point in time.


I found that even the performance of consecutive year groups – so years six and five – was only moderately similar. For cohorts separated by two or more years, levels of similarity were also found to be low. This inconsistency can also be seen within a single year – where even very high or low performing schools tend to have a huge range of pupil scores.

This all goes to show that school performance tables are not a true or fair reflection of a school’s performance. While there is certainly room to improve this situation, my research suggests that relative progress measures will never be a fair and accurate guide to school performance on their own.

Progress 8, Ability bias and the ‘phantom’ grammar school effect

As discussed in an excellent Education Datalab post (here), the government is judging schools using Progress metrics which are strongly related to schools’ average intake attainment and have a large grammar school ‘effect’. As the article’s author, Dr (now Professor) Rebecca Allen notes, ‘The problem is that we don’t know why.’

My most recent research provides an answer. Namely, that this effect is almost entirely down to a measurement bias (rather than genuine differences in school effectiveness).

I wrote an ‘Expert Piece’ in Schools Week about this to try and get the message out. Understandably, my word limit was tight and diagrams or any semi-technical terms were not permitted. So this blog fills in the gap between the full research paper (also available here) and my Schools Week opinion piece, providing an accessible summary of what the problem is and why you should care.

The Ability Bias and Grammar School Effect

Take a look at the relationship between Progress 8 and average KS2 APS in the 2017 (final) secondary school scores:

[Graph: ability bias in the 2017 Progress 8 scores – Progress 8 plotted against average KS2 APS]

One hardly needs the trend line to see an upward-pointing arrow in the school data points. As one moves to the right (average KS2 APS is higher), Progress 8 tends to increase. As Education Datalab’s Dave Thomson points out, likely explanations include (I quote):

  1. Their pupils tend to differ from other pupils with similar prior attainment in ways that have an effect on Key Stage 4 outcomes. They may tend to receive more support at home, for example;
  2. Their pupils have an effect on each other. Competition between pupils may be driving up their performance. There may be more time for teaching and learning due to pupils creating a more ordered environment for teaching and learning through better behaviour; or
  3. They may actually be more effective. They may be able to recruit better teachers, for example, because they tend to be the type of school the best teachers want to work.

There is one explanation missing from this list: namely, measurement error prevents the Progress scores fully correcting for intake ability and biases remain in the school-level scores.

In my paper I conclude that this is the most likely explanation and that measurement error alone produces bias remarkably similar to that seen in the graph above. To find out why, read on:

What causes this bias?

Technical Explanation:

A technical answer is that the observed peer ability effect is caused by a so-called ‘Phantom’ compositional effect produced by regression attenuation bias (NB. For stats nerds, I realise the Progress scores are not produced using regressions and discuss in the paper how ‘attenuation bias is not a peculiarity of certain regression equations or value-added models and will also affect the English ‘Progress’ scores’. Also see non-technical explanation below).

Non-technical Explanation (based on my Schools Week piece):

Put very simply, if we have imperfect measures of prior attainment, we will get an incomplete correction for ability. We will end up with some middle ground between the original KS4 scores – which are strongly correlated with intake prior attainment – and a perfect school value-added score for which intake ability doesn’t matter.

The problem is caused by two issues:

Issue 1: Shrinking expectations (technically, regression attenuation bias)

Progress 8 expectations rely on the relationship between KS2 and KS4 scores. As error increases, this relationship breaks down as pupils of different ability levels get mixed up. Imagine that the KS2 scores were so riddled with error that they didn’t predict KS4 scores at all. In this scenario, our best expectation would be the national average. At the other extreme, imagine a perfect KS2 measure. Our expectations would be perfectly tailored to all pupils’ actual ability levels. With normal levels of error, we end up at an interim position between these, where the relationship moderately breaks down and the expectations shrink a little towards the national average (and consider what happens to the value-added as they do).

If you are interested in exactly what causes this – I have written a short explanation here (with a handy diagram).
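
For readers who like to see this directly, here is a small simulation sketch (all numbers invented) showing how the fitted KS2-KS4 relationship, and therefore the tailoring of expectations, flattens as baseline measurement error grows:

```python
# Simulation sketch of "shrinking expectations": as error in the baseline
# measure grows, the fitted slope shrinks and expected scores move towards
# the national average. All numbers are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

true_ability = rng.normal(0, 1, n)
ks4 = true_ability + rng.normal(0, 0.5, n)  # outcome driven by true ability

for error_sd in [0.0, 0.3, 0.6, 1.0]:
    ks2_observed = true_ability + rng.normal(0, error_sd, n)  # error-prone baseline
    slope = np.polyfit(ks2_observed, ks4, 1)[0]
    print(f"KS2 error SD = {error_sd:.1f} -> fitted KS2-KS4 slope = {slope:.2f}")
# As the error grows, the slope shrinks towards zero and the expectations
# shrink towards the national average.
```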

Issue 2: ‘Phantom’ effects

Up until now, the conventional wisdom was that there would be some level of unreliability in the test scores, but that much of this would cancel out in the school averages (some students get lucky, others unlucky, but it evens out as cohorts get larger), and that tests were designed to avoid systematic biases (although of course this is contested). So there was no reason to think that random error could produce systematic bias.

It turns out this is wrong. Here’s why:

The second issue here is that of ‘Phantom’ effects, where measurement error creates relationships between school average scores (e.g. between KS2 and KS4 school scores) despite the relationship being corrected using the pupil scores. Researchers have known about this for some time and have been wrestling with how to measure the effect of school composition (e.g. average ability) without falling for phantoms.

A big conceptual barrier for thinking clearly about this is that relationships and numbers can behave very differently depending on whether you are using averages or individual scores. The English Progress measures use individual pupil Key Stage scores. School averages are created afterwards. The data points on the Education Datalab graph mentioned above (here) are all school averages.

The designers of the Progress measures did a great job of eliminating all observable bias in the pupil scores. Their hands were tied, however, when it came to correcting anything else (see p.6 here). When we work out the school averages, the relationship between KS2 and KS4 pops up again – a ‘phantom’ effect!

Why does this happen? If a gremlin added lots of random errors to your pupils’ KS2 scores overnight, this would play havoc with any pupil targets/expectations based on KS2 scores, but it might have little effect on your school average KS2 score. The handy thing about averages is that a lot of the pupil errors cancel out.

This applies to relationships as well. It is not just the school average scores that hold remarkably firm as we introduce errors into pupil scores; it is also the relationships between them. Because the pupil errors largely cancel out, each school’s average is left largely unaffected, leaving the relationship between school average KS2 and KS4 intact.

In other words, as measurement error is increased, a relationship will break down more in the pupil scores than for school averages. This means that, to some extent, the school-level KS2-KS4 relationship will be what is left over from an incomplete correction of the pupil scores, and an apparent (phantom) ‘compositional’ effect pops up. In the words of Harker and Tymms (2004), the school averages ‘mop up’ the relationship obscured by error at the pupil level.

The Progress 8 measures only correct for pupil scores. They do not take school averages into account. If there is KS2 measurement error – which there will be (especially judging by recent years’ events!) – the correction at pupil level will inevitably be incomplete. The school averages will therefore mop this up, resulting in an ability bias.
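
The whole mechanism can be demonstrated with a small simulation. In the sketch below, all numbers and the school structure are invented and there are no true school effects; the correction is made only at pupil level, as in Progress 8, yet school-average progress still tracks school-average KS2, which is the phantom ability bias:

```python
# Simulation sketch of the 'phantom' compositional effect.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n_schools, pupils_per_school = 500, 150

# Schools differ in average intake ability; there are no true school effects
school_mean_ability = rng.normal(0, 0.5, n_schools)
ability = (np.repeat(school_mean_ability, pupils_per_school)
           + rng.normal(0, 1, n_schools * pupils_per_school))
school_id = np.repeat(np.arange(n_schools), pupils_per_school)

ks2 = ability + rng.normal(0, 0.6, ability.size)  # error-prone baseline measure
ks4 = ability + rng.normal(0, 0.5, ability.size)  # outcome

# Correct at pupil level only, as the Progress measures do
slope, intercept = np.polyfit(ks2, ks4, 1)
progress = ks4 - (intercept + slope * ks2)

school_level = (pd.DataFrame({"school_id": school_id, "ks2": ks2, "progress": progress})
                  .groupby("school_id").mean())

# Despite zero true school effects, school-average progress tracks school-average KS2
print(school_level["ks2"].corr(school_level["progress"]))
```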

Okay, so that’s the theory. Does this matter in practice?

This effect is inevitable to some extent. But it might not be serious. The big question I set out to answer in my research paper was how big the bias will be in the English Progress measures for typical rates of KS2 measurement error.

I used reliability estimates based on Ofqual research and ran simulations using the National Pupil Database. I used several levels of measurement error which I labelled small, medium and large (where the medium was the best estimate based on Ofqual data).

I found that KS2 measurement error produces a serious ability bias, complete with a ‘phantom grammar school effect’. For the ‘medium’ error, the ability bias and the (completely spurious) grammar school effect were remarkably similar to the one seen in the actual data (as shown in the graph above).

I also looked at the grammar school effect in DfE data from 2004-2016, finding that it lurched about from year to year and with changes in the value-added measure (to CVA and back) and underlying assessments.

What should we do about this?

There is a quick and easy fix for this: correct for the school-level relationship, as shown visually in the aforementioned Education Datalab post. (There are also some more technical fixes involving estimating baseline measurement reliability and then correcting for it in the statistical models.) Using the quick-fix method, I estimate in my paper that about 90% of the bias can be removed by adjusting for school average prior attainment.
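
As a rough sketch of the quick-fix approach, assuming a hypothetical school-level file with placeholder column names, the adjusted score is simply the residual after accounting for school average prior attainment:

```python
# Sketch of the 'quick fix': adjust school Progress 8 scores for school
# average KS2 prior attainment and keep the residual as the corrected score.
import numpy as np
import pandas as pd

schools = pd.read_csv("school_performance.csv")  # hypothetical: p8_score, avg_ks2_aps

slope, intercept = np.polyfit(schools["avg_ks2_aps"], schools["p8_score"], 1)
schools["p8_adjusted"] = schools["p8_score"] - (intercept + slope * schools["avg_ks2_aps"])
```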

Here’s what that would look like (using final 2017 data):

[Graph: 2017 Progress 8 scores after adjusting for school average prior attainment]

While this is technically easy to do, there are enormous political and practical ramifications of making a correction which would substantially shift all Progress scores – primary and secondary – across the board and would eliminate the grammar school effect entirely. Schools with particularly low or high ability intakes, grammar schools especially, would find themselves with markedly different Progress scores. This might prove controversial…

But it is in keeping with the clear principle behind the Progress measures: schools should be judged by the progress their pupils make rather than the starting points of their pupils. We just need to add that schools should not be advantaged or disadvantaged by the average prior attainment of their intake any more than that of individual pupils.

There is a whole other argument about other differences in intakes (so-called contextual factors). Other researchers and I have examined the (substantial) impact on the school scores of ignoring contextual factors (e.g. here) and there is strong general agreement amongst researchers that contextual factors matter and have predictable and significant effects on pupil performance.

Here the issue is more fundamental: by only taking pupil-level prior attainment scores into account, the current Progress measures do not even level the playing field in terms of prior attainment (!)

Links and References

  • See my Schools Week article here
  • The article will be in print shortly and is currently available online for those with university library logons; for those without university library access, the accepted manuscript is here
  • I have also produced a 1-page summary here

I have not provided citations/references within this blog post but – as detailed in my paper – my study builds on and is informed by many other researchers to whom I am very grateful. Full references can be found on my journal paper (see link above).