23 August 2012

Why falling exam results are not entirely a bad thing

By Tom Davies

Today marked the first fall in the proportion of GCSE entries awarded a “C” grade or above since the qualification was introduced: 69.4 per cent of papers achieved at least a “C”, down from 69.8 per cent last year.

Given that OFQUAL (the independent exams regulator) has recently stated that exam boards must put an end to “grade inflation”, it is unsurprising that the customary cheery August headlines are absent. Whilst the new system of “comparable outcomes” was expected to see results stagnate for the foreseeable future, few anticipated a fall of this magnitude.

The reforms come in response to claims that continued improvements in exam performance are the result of the “dumbing down of exams”, and that employers and universities are consequently losing faith in the credibility of GCSE and A-level qualifications. Last week, recruitment group Adecco published figures stating that 65 per cent of employers considered A-levels inadequate preparation for employment.

Some sceptics point to the fact that schools are free to choose which exam board their students sit. Boards such as OCR and Edexcel charge a fee per pupil, so it is in each board’s commercial interest, in a damaging “race to the bottom”, to offer an attractive curriculum with the easiest possible exams.

Previously, OFQUAL operated a system of “comparable performance”, or “criterion referencing”, in which papers were marked according to “the knowledge, skills and understanding that students must show in the exam”. On this approach, if exactly the same cohort sat two different exams and one was harder than the other, those sitting the harder paper would be unfairly disadvantaged, so grade boundaries could be adjusted to compensate, for instance when a new syllabus meant teachers were less experienced at teaching the exam format. As years passed and teachers became more accustomed to the new syllabus, national exam performance improved again and again.

Critics suggest that the successive rises in pass rates witnessed over the last 27 years are largely the result of this “grade inflation”, and that the improvement in exam scores therefore does not represent a “real” improvement in performance. In a recent study by Cambridge Assessment, 87 per cent of lecturers declared that “too much teaching to the test” was a significant factor in undergraduates arriving underprepared for university study.

In an attempt to stem this perceived grade inflation, OFQUAL has introduced a “comparable outcomes” system under which the percentage of students obtaining each grade will remain largely the same. Exam boards are now required to justify any increase in national exam performance with evidence that the cohort in question is more “able” than those of previous years. When verifying the A-level grade distribution in a particular year, that cohort’s GCSE grade distribution is used as the reference point; for GCSEs, the benchmark is Key Stage 2 (KS2) performance.
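
To make the mechanism concrete, the sketch below shows one simplified way a board could hold grade shares fixed against a reference distribution, by cutting this year’s raw marks at the corresponding percentiles. It is an illustration only, not OFQUAL’s actual statistical model: the function and variable names are invented for this example, and the real process also involves prior-attainment matching and examiner judgement.

# Minimal sketch of a "comparable outcomes"-style boundary-setting step.
# Illustrative only: the real process uses prior-attainment matching
# (KS2 for GCSE, GCSE for A-level) and examiner judgement, not a
# straight percentile cut on raw marks.

def set_boundaries(raw_marks, reference_distribution):
    """Choose grade boundaries so the share of candidates at each grade
    matches a reference distribution from a previous cohort.

    raw_marks: list of this year's raw scores.
    reference_distribution: (grade, proportion) pairs, ordered from the
    top grade down, with proportions summing to at most 1.
    """
    ranked = sorted(raw_marks, reverse=True)
    n = len(ranked)
    boundaries = {}
    cumulative = 0.0
    for grade, proportion in reference_distribution:
        cumulative += proportion
        # Index of the last candidate who still falls inside this grade.
        cut = max(1, min(int(round(cumulative * n)), n)) - 1
        boundaries[grade] = ranked[cut]
    return boundaries

# Example: hold last year's shares fixed (30% A, 40% B, 20% C) even if
# this year's cohort scores uniformly higher raw marks.
marks_2011 = [40, 45, 50, 55, 60, 65, 70, 75, 80, 85]
marks_2012 = [m + 10 for m in marks_2011]  # every candidate improves by 10 marks
reference = [("A", 0.30), ("B", 0.40), ("C", 0.20)]

print(set_boundaries(marks_2011, reference))  # {'A': 75, 'B': 55, 'C': 45}
print(set_boundaries(marks_2012, reference))  # {'A': 85, 'B': 65, 'C': 55}: boundaries rise, grade shares stay fixed

Under this kind of scheme, a uniform improvement in raw marks simply pushes the boundaries up; the proportion of candidates at each grade does not move.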

The justification is that exam results should remain broadly constant over time, since there is no definitive evidence that the underlying “ability” of students, which is what employers and universities are really interested in, changes year on year:

“You would expect outcomes to remain consistent year to year unless there are changes in terms of the cohort or the syllabuses, or in terms of other extraneous factors,” said Simon Lebus of Cambridge Assessment, parent company of the OCR exam board.

When OFQUAL first prioritized “comparable outcomes” over “comparable performance” in 2010, the usual increase in the percentage of students obtaining “A” grades was not seen.

This is an interesting statement of intent from OFQUAL, as it raises questions over the very nature of exam grades. Under “norm-referencing”, pass rates are, by definition, always the same. According to these reforms, if every teacher in the country put in double the work and every student knew the syllabus material twice as well, exam results would not change. The government can no longer use results to measure standards in education: by preventing grade inflation and restoring the credibility of academic qualifications, Gove is removing a potential weapon from his political armoury.

The reforms have also met with opposition from students and teachers, because some GCSE papers, particularly in English, that would have received a “C” under the old regime have now been awarded a “D”. Some exam boards have unexpectedly raised this grade boundary by more than 10 marks since last year.

If the ideals of comparable outcomes are to be upheld, the system should work in both directions. Accordingly, the falls in the percentage of A*-C grades in English (1.5 per cent), English Literature (2.1 per cent) and Science (2.2 per cent) are somewhat perplexing. The Joint Council for Qualifications points to the “more demanding standard” of exams recently requested by Whitehall.

A reversal of previous grade inflation is a necessary evil, but Gove has raised suspicions over the manner in which the corrective action has been carried out. Earlier this academic year the “floor standard”, the minimum percentage of A*-Cs required for a school to avoid being judged “underperforming”, was raised from 35 per cent to 40 per cent. Schools failing to meet this criterion come under increased pressure to convert to academies, and consistent underperformance enables the government to make conversion mandatory. Today’s decline will see many schools fall short of this metric, and the government intends to raise the figure to 50 per cent in future. Yet, as noted above, comparable outcomes invalidates the use of exam results as a benchmark.

Some may say that OFQUAL are “fiddling” results and not awarding grades solely on the basis of merit. Indeed, the unexpected manner in which the new standards have been imposed will leave thousands of students, teachers and parents disappointed. In spite of this, adjusting grade distributions is entirely necessary to preserve the integrity of standardized testing.

Exam results are intended to signal a candidate’s relative ability, not their absolute ability. An “A” grade in maths is meaningless by itself: it only carries information for employers because other candidates obtain “B”s, “C”s and so on. Similarly, if so many students gain “A” grades that employers and university admissions staff lose faith in the entire system, an “A” becomes just as meaningless. Irrespective of any system-wide improvement in standards, the credibility of the grading system is undermined if everyone achieves top marks.

“Comparable outcomes” does not prevent students and teachers from being recognized and rewarded for hard work. It does, however, strip out the across-the-board rise in exam performance that comes from teachers growing more familiar with syllabus material, an irrelevance when students are judged relative to one another.

There has been a lack of transparency in today’s grading of papers, but falling exam results are not unambiguously a bad thing. What the reforms do signify is that relative school performance is now a zero-sum game: it is a statistical necessity that, for one school to outperform, another must underperform, so Gove should think twice before threatening the latter.
