Sunday, May 31, 2020

The worst way to give 2020 GCSE grades … apart from all the others


I have read several comments over the past week or so suggesting that the grades this summer will be rather accurate. They won’t. And I wish people would stop pretending they will.

My prediction of the way grades would be awarded (set out in a previous post) was largely accurate, and I cannot see any other sensible way to generate such grades. In that post I raised some potential problems with this system, and these appear to have been forgotten or ignored.

What is the problem?

This year’s grading system is (rightly) not going to be used to grade schools.
But the system will be closer to being accurate on a school level than it will on an individual pupil level.
If it’s not right on a pupil level then it’s unfair.
But we need something and this is the least-worst method, so what can we do about it?

In this post I am going to set out some of the reasons why this system is likely to be inaccurate and unfair. I will also suggest something that I think will make it less bad (although not everyone will agree!).

Various issues

The grades a school gets overall will reflect the grades the school would have got from those pupils had they done as well as their counterparts at the same school over the previous couple of years. 

Problems with this:

1)  This might not be accurate as far as the school’s overall grades are concerned.  If a school is on an upward trajectory, or if a department has made changes (either recently, or to KS3 a few years ago) then this year’s students might be expected to do better.  Ofqual have said they will not take this into account because: “any statistical model is likely to be unacceptably unreliable in predicting trends in performance in 2020”.

2)  Subjects with small numbers of entries in some schools are likely to have a larger variance in their grades each year. This year that will lead to a larger risk of error in the grades that are assigned.

3)  Subjects being offered for the first time in a school will be difficult to manage fairly.

4)  New schools won’t have prior data to use.

Teachers are being asked to provide a Centre Assessment Grade (the grade they think each student would have got had they taken the exam) and a rank order of the students. The exam boards will then use the grade distribution they think the school ‘should’ get and will move the grade boundaries as necessary for each school, maintaining the order of students submitted by the school.

It is worth noting that the grades which are provided are essentially irrelevant: the rank order is the only thing that is important here.  (It might be helpful to start with grades in order to help create the rank order, but the fact remains that the grades will not be used by the exam boards.)

It is also worth noting that this process could have been set up to happen the other way around. It would be possible for the exam boards to tell each school how many of each grade are available in each subject and for the schools then to distribute them amongst the students.

The key thing is getting the students in the _right order_.
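To make the mechanics concrete, here is a minimal sketch (in Python) of the allocation step described above: the exam board’s model fixes how many of each grade a centre gets in a subject, and those grades are then handed out strictly in the submitted rank order. The function name, student names and grade counts are all made up for illustration; the real process is more involved than this.

```python
# Minimal sketch of the allocation step described above.
# The grade counts and student names are invented for illustration.

def allocate_grades(rank_order, grade_counts):
    """Hand out grades strictly in the centre's submitted rank order.

    rank_order   -- students, best first, as submitted by the school
    grade_counts -- how many of each grade the statistical model says
                    this centre 'should' get, highest grade first
    """
    results = {}
    position = 0
    for grade, count in grade_counts:
        for student in rank_order[position:position + count]:
            results[student] = grade
        position += count
    return results

# Hypothetical example: 11 students, distribution fixed by the model.
ranked = ["A. Ayers", "B. Brown", "C. Cole", "D. Dean", "E. Egan",
          "F. Fox", "G. Gill", "H. Hart", "I. Ince", "J. Jones", "K. Khan"]
counts = [("9", 1), ("8", 2), ("7", 3), ("6", 3), ("5", 2)]

print(allocate_grades(ranked, counts))
# Note that the Centre Assessment Grades never appear in this step:
# only the rank order and the centre-level distribution matter.
```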

A 2015 report by Cambridge Assessment compared predicted grades for the 2014 OCR GCSE exams with the actual grades achieved. The report found that overall 44% of the grades were ‘accurate’, while 42% were ‘optimistic’ and 14% were ‘pessimistic’.  These exams in 2014 were based on the old specifications that, in many cases, had been used in schools for many years, unlike the fairly new exams we have now.
The report found that overall 13% of the results were more than one grade different from the predicted grade. 

5)  One in eight exam results were two or more grades away from the predicted grade. Given that it was impossible to be more than one grade too optimistic when considering a grade of A or A*, or more than one grade too pessimistic when considering a grade of G, this suggests that grades in the middle are less easy to assign accurately.
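For anyone wanting to run this sort of comparison on their own data, here is a rough sketch of how the ‘accurate / optimistic / pessimistic / more than one grade out’ tallies above could be computed. The (predicted, actual) pairs below are invented and are not the OCR figures.

```python
# Sketch of a predicted-vs-actual comparison like the one described above.
# Grades are mapped to numbers so that a higher number is a better grade.

GRADE_VALUE = {g: i for i, g in enumerate(
    ["U", "G", "F", "E", "D", "C", "B", "A", "A*"])}

def summarise(pairs):
    accurate = optimistic = pessimistic = big_misses = 0
    for predicted, actual in pairs:
        diff = GRADE_VALUE[predicted] - GRADE_VALUE[actual]
        if diff == 0:
            accurate += 1          # prediction matched the result
        elif diff > 0:
            optimistic += 1        # prediction was higher than the result
        else:
            pessimistic += 1       # prediction was lower than the result
        if abs(diff) > 1:
            big_misses += 1        # more than one grade out
    n = len(pairs)
    return {k: round(100 * v / n) for k, v in
            [("accurate", accurate), ("optimistic", optimistic),
             ("pessimistic", pessimistic), ("over one grade out", big_misses)]}

# Hypothetical data, not the OCR figures.
sample = [("B", "B"), ("A", "B"), ("C", "D"), ("A", "C"),
          ("B", "C"), ("C", "C"), ("D", "B"), ("A*", "A*")]
print(summarise(sample))
```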

I have long argued that it is irrelevant to talk of predicted grades as being ‘right’ or ‘wrong’, because a lot can change between the predicted grades being produced in March and the exam being taken in May/June. This year, schools were told not to collect any additional data after lockdown began, so Centre Assessment Grades, while being submitted in early June, are based on data from prior to 20 March, just like the previous predicted grades were.

6)  This is a problem because it is extremely difficult to tell how much a pupil will change between their predicted grade and the actual exam. Should we try to factor that in when making our Centre Assessment Grades?  Or should we only use the mock exams and other work that we have records for?  This seems to risk disadvantaging those students who would have worked hard after 20 March.

This then is the major problem for me:

7)  While the grades awarded to the school may largely be ‘right’ (with the caveats above), we are stupendously bad as a profession at giving them to the correct students.  This leads to the ludicrous situation whereby: 
a. The school results are ‘right’ overall – but the school won’t be judged on these.
b. The individual results are incredibly iffy – and the students _will_ be judged on these.

That is plain unfair for the students.

Surely the opportunity to take the exams in the autumn means this is OK?

No it isn’t.  A further post on this will appear over the next couple of days.

A footnote to the results

Football fans will know that Liverpool were on the cusp of winning their first league title in several decades. They have dominated the league this year and have won 27 of the 29 league games played so far. They are so far ahead of the rest that they need only two wins from their final 9 matches to guarantee they win the league, and if their nearest rivals lose matches even that won’t be required.

In the early days of lockdown a frequent topic of conversation on the radio was that it would be unfortunate were the season not to be completed and for Liverpool to have an asterisk or a footnote next to their league title for this year.

Similarly, I have seen it argued that it would be unfair on the class of 2020 were their grades to be treated differently from those of other years.  In fact, I think they _should_ be treated differently, for the benefit of those students.

We just don’t know that the grades will be accurate, and it would be wrong to pretend they will be.  A student getting a grade 5 this summer in maths might have been someone who would have got a grade 5 in the exam, or might have got a grade 4 or a grade 6, or might have got a grade 3 or a grade 7 (or further away from grade 5). We cannot tell, yet by giving _a_ grade we will be saying we think we do know.

In an ‘ordinary’ year the grade a student gets might not be an accurate reflection of their ability in that subject.  Perhaps they misunderstand the way a question is phrased and lose marks when they do understand the subject content. Perhaps the vagaries of a mark scheme affect their results (see the bizarre marking of the KS2 ‘semi-colon’ question), or perhaps they happen to revise a topic over lunchtime that then appears as question 1 in the afternoon paper, or perhaps the marking was done incorrectly (this certainly happens, because post-exam appeals are sometimes successful).

We know this, though.  We know that an exam result is just that: a result on an exam. While we might use a GCSE grade in a subject as a proxy for how ‘good’ a student is at that subject, all we really know is the grade they got in that exam.  This year everything is different.  Different schools will interpret the guidance in different ways, the number of high grades available to schools might not be ‘fair’ (see earlier in this post) and the students might not get the grade they ‘should’.

I would love for us to be able to put confidence intervals on grades – in normal years as well as this year.  This would probably be too complicated for most users of grades to interpret, however, and the middle value would inevitably be used as ‘the grade’ the student got.
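Purely to illustrate what that might look like, here is one hypothetical way a grade-with-uncertainty could be represented; nothing like this is actually proposed by Ofqual or the exam boards.

```python
# Illustration only: a hypothetical grade-with-interval representation.

from dataclasses import dataclass

@dataclass
class GradeEstimate:
    best_guess: int   # the grade we would otherwise report on its own
    lower: int        # plausible lower bound
    upper: int        # plausible upper bound

    def __str__(self):
        return f"{self.best_guess} (likely range {self.lower}-{self.upper})"

maths = GradeEstimate(best_guess=5, lower=3, upper=7)
print(maths)   # 5 (likely range 3-7)
# In practice, as noted above, the middle value would almost
# certainly end up being treated as 'the grade'.
```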

My suggestion, then, is to put a symbol next to the grades to show they have been created in a different way and are not exam results.  Maybe ~B could refer to an A-level B grade that was arrived at by this year’s system, and ~4 could be a GCSE grade from this year.  This wouldn’t be to devalue the grades, but rather to point out that they are just less likely to be accurate.  If a student currently in Yr 11 needs a grade 7 to be allowed to take A-level maths, for example, the sixth form could look carefully at those with a grade ~6, rather than that automatically debarring the student.  If a Yr 13 student needs three A grades to meet their university offer, the university could consider carefully those who get A, A, ~B.
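As a sketch of how that borderline check might work in practice – the ~ prefix, the grade 7 requirement and the ‘one grade below’ review rule are all just the examples from the paragraph above, not anyone’s actual policy:

```python
# Hypothetical sketch of the sixth-form entry check described above.
# A '~' prefix marks a 2020-style calculated grade.

def admission_decision(grade: str, required: int = 7) -> str:
    calculated = grade.startswith("~")     # 2020 calculated grade?
    value = int(grade.lstrip("~"))
    if value >= required:
        return "admit"
    if calculated and value == required - 1:
        return "review"   # e.g. a ~6: look carefully rather than reject
    return "reject"

print(admission_decision("7"))    # admit
print(admission_decision("~6"))   # review
print(admission_decision("6"))    # reject
print(admission_decision("~5"))   # reject
```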

To sum up:

This year’s system is not good for the very people it is supposed to be there for: the students.  There isn’t a realistic alternative, however, so we need to find a way to make this work to serve the needs of those students and those who would ordinarily make use of exam grades.

