
Politics
11 August 2021

Our fear of algorithms has undermined this year’s A-level grades and failed students

Using AI became politically impossible after last year’s exams fiasco – but young people will suffer from rampant grade inflation.

By Sam Freedman

As expected, A-level grades hit record highs this year. Almost 45 per cent of young people got an A* or A, up from 25 per cent two years ago. At private schools more than 70 per cent achieved these grades.

This was the inevitable result of the government imposing a system that forced schools to decide pupils’ grades, but without setting any proper parameters for doing so. As schools could use any kind of work, carried out under any kind of conditions, as evidence, they had no real choice but to err on the side of generosity. Had they not, they would have put their pupils’ life chances at risk.

The short-term political fallout will be far less serious than last year, when a botched algorithm led to a national outcry, and the departure of both the Department for Education’s permanent secretary and Ofqual’s chief regulator. This is because the system applied this year will allow most young people to attend a university of their choice, and because the unfairness is hidden behind an opaque system, rather than being blindingly obvious.

But the unfairness is still there. Some schools will have marked tougher than others. Pupils who missed the top grades will never know if they would have done so at a different school. And there are long-term consequences that will play out over the coming years. Highly selective universities have had to accept far more students than usual, which will affect the experience of those students and raise further questions about the value for money of tuition fees. Less selective universities will end up with fewer students, and some may be pushed into bankruptcy.

Moreover, restoring confidence in the exams system after the pandemic will not be easy. In-person exams will hopefully return next year, but how will they be graded? How will universities and employers be expected to compare year groups who got their grades under completely different systems?


Looking back at how we got into this situation also raises several important points that have wider public policy significance – the first being the government’s view on contingency planning. In the summer and autumn of 2020, the Department for Education could have put plans in place for another exam cancellation this year. Instead, officials chose to assume there would be no second wave of Covid-19 and that exams would continue as planned. In a recent report a senior civil servant explained that “having a contingency plan if things go wrong is seen by some ministers as a negative thought. If you plan for the worst, you are probably going to get it.”


This is a nonsensical approach to risk management, but we have seen it time and time again throughout the pandemic. The DfE failed to prepare properly for school closures. The Department for Transport has been caught out repeatedly on foreign travel rules. And of course, most importantly, the Prime Minister refused to take the necessary steps to prevent the second wave happening in the first place. If we ever get round to the Covid inquiry, the entire government approach to contingency planning and risk needs to be front and centre.

But the other reason we’re in this exams situation has even more profound implications. Last year’s algorithm failure was avoidable. Had Ofqual adjusted the most obvious anomalies before results were released, they could have made it work (although it would still have been less fair than a normal exam series). In Ireland, an algorithmic approach did work by allowing a little inflation, dealing with obvious errors and not tying results too tightly to schools’ previous levels of attainment.


The failure of the algorithm in England led to a huge overreaction. Heart-breaking stories of high-achieving pupils who had been assigned fail grades due to the performance of previous cohorts sparked a furious reaction that has made any use of algorithms politically impossible. This year Ofqual wasn’t allowed to use any serious statistical modelling at all to adjust grades, which is one reason they are so high. But it is hardly surprising politicians are so scared of algorithmic approaches when the public are so sceptical of them. People seem much more comfortable with significant human error than with an impersonal computer making a smaller mistake (even though computers make mistakes because of the way they’ve been programmed).

This has long-term implications that go far beyond the pandemic, because AI marking is becoming increasingly accurate and commonplace in the education world. In fact, it has the potential to significantly improve the reliability of exams and reduce their £300m-plus annual cost to schools. But Ofqual has indefinitely paused a study into AI, due to the noise around last year’s results.

There are similar challenges in other policy areas. For instance, a study last year found an AI was more accurate at diagnosing breast cancer from mammograms than a human doctor. But if we’re not prepared to have exams marked by a computer program, how will we get comfortable with AI medicine?

Rather than run away from the challenges of public perception around algorithms and AI, we need the government to start having a serious conversation about how they can be used to improve our lives while being regulated and managed to prevent illiberal or discriminatory use. The experience of school assessment in the pandemic suggests this could be some way off.

