
Those of you with an interest in teaching may have seen a very interesting article which popped up recently in the Guardian, about teachers feeling under enough pressure from their Local Education Authority (LEA) to 'inflate' the grades given to pupils.


Public Concern at Work (Britain's biggest whistleblowing charity) reported a staggering 80% increase in the number of complaints to its anonymous helpline from the education sector over the last year.  The rise has been attributed, in part, to larger numbers of people involved in academies ringing in to report issues.  Indeed, there was a great row over Runshaw College in Lancashire when it was discovered that teachers had been helping A Level biology students too much in preparing for their exams – the AQA exam board deemed the students to have gained an unfair advantage from their teachers.

According to the article, many teachers said they were asked to grade coursework and internal exams highly, even when the work didn't merit such high grades.  Unions say intra-school competition is so extreme that teachers are finding themselves pressured into boosting grades.

PCW have previously reported claims of a growing culture of headteachers ruling with an iron fist, of "unethical" sickness policies for staff, and of unfair dismissals.

It got me thinking, though… with 'high-stakes' testing and a culture where everything is driven by exam results, do teachers cheat?  If so, why?

It just so happens I have a book which looks at this very issue.  Allow me to introduce Freakonomics, written by Steven D. Levitt and Stephen J. Dubner.  The book has developed an incredible following across the world and has become a multi-million dollar property – it has even spawned columns from the writers in the New York Times.

The book’s opening chapter looks at incentives and cheating and provides us with a handy investigation into – usefully enough – teachers and school grades in the United States.

In 1996, Chicago Public Schools (CPS) introduced high-stakes testing across the district (which serves some 400,000 pupils).  Thanks to the No Child Left Behind (NCLB) law signed by President Bush in 2002, federal law later made such testing compulsory across the whole country.

Under the previous rules, kids only failed a year if they were particularly difficult, lazy or inept.  Under the new regime, children aged 8, 11 and 13 sit a standard multiple-choice test at the end of the year in order to advance to the next 'grade'.  In the case of 13-year-olds, they needed to pass to get into what the USA calls the freshman year of high school.

Levitt and Dubner asked the Chicago Public School system for its records of the annual exams given to students.  The CPS obliged and handed over every test result for every student in every grade between 1993 and 2000.  That equates to roughly 30,000 students per grade per year, some 700,000 individual tests and nearly 100 million individual answers.

The next stage was to develop an algorithm that could flag answer patterns suggesting cheating was going on.  There were a few tell-tale signs to look out for when trying to spot a cheating teacher's classroom (a rough sketch of these checks follows the list):

  • Firstly, there might be blocks of identical answers across many students in the class, especially on the harder questions.  Very bright students scoring well on the first few questions (typically the easiest ones) isn't an issue – the good students are going to get the easy ones right.  However, students who didn't perform well overall suddenly getting the difficult ones right might be worth looking into.

  • Any strange pattern within a single student's answers could be suspicious – for example, a student getting the tricky questions right while getting the easier ones wrong.

  • The algorithm looked over time too.  A dramatic one-year improvement, far above expectations, could be credited to a good teacher.  However, a drop the following year suggests that something unusual was going on in the successful year.
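To make the first of those checks concrete, here's a very rough sketch in Python of the kind of screening being described.  Everything here – the answer key, the 'hard section' cut-off, the thresholds and the function names – is my own invention for illustration; Levitt and Dubner don't publish their actual algorithm in the book.

    from collections import Counter

    ANSWER_KEY = "dbcaadbcbdacbdabdcab"   # hypothetical 20-question answer key
    HARD_START = 12                       # assume the last 8 questions are the hard ones
    RUN_LEN = 6                           # a run of 6 correct answers in a row looks odd here

    def has_correct_run(answers, key=ANSWER_KEY, start=HARD_START, run_len=RUN_LEN):
        """True if a student strings together run_len consecutive correct answers
        in the hard section of the test ('-' can be used to mark a blank answer)."""
        streak = 0
        for a, k in zip(answers[start:], key[start:]):
            streak = streak + 1 if a == k else 0
            if streak >= run_len:
                return True
        return False

    def most_shared_hard_string(class_answers, start=HARD_START):
        """The largest number of students whose hard-section answers are literally
        identical to each other, wrong answers included."""
        counts = Counter(a[start:] for a in class_answers)
        return counts.most_common(1)[0][1] if counts else 0

    def flag_classroom(class_answers, share=0.5):
        """Flag the class if most students post the same long correct run on the hard
        questions, or if most students' hard-section answers are identical."""
        n = len(class_answers)
        correct_runs = sum(has_correct_run(a) for a in class_answers)
        return correct_runs >= share * n or most_shared_hard_string(class_answers) >= share * n

Fifteen of 22 students reeling off the same correct run on the hard questions – as in the classroom described below – would comfortably clear that 50% threshold.  The year-on-year check from the last bullet is sketched a little further down.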

Levitt and Dubner found examples which certainly opened up my eyes:

  • In one class of 22 students, none were particularly exceptional based on the early part of the test – very few had managed six consecutive correct answers anywhere on the paper.  Magically, 15 of the 22 students posted six consecutive correct answers towards the end of the test, where the tougher questions lay.  Add in the fact that the students' results had previously been uncorrelated, and you begin to see that something is amiss.
    Oh, and three of the students in the class had left more than one question blank before reeling off the correct string, and then ended the test with a succession of blank answers…  Doesn't that seem a little suspicious?
    It gets weirder, believe me.  Of the 15 students with the suspicious answer pattern, nine had an identical string preceding it, including (get this) three out of four incorrect answers!

What on earth was going on there?  Was the teacher being strategic, or did they simply not know the answers themselves?

The latter highlights one reason why some teachers might feel the need to cheat for their students: they aren't very good teachers themselves…

The investigation also found some other interesting conclusions:

  • The class's overall performance can also be heavily scrutinised.  In each grade, children are expected to reach a certain level (grade 5 pupils must reach 5.8, grade 7s must reach 7.8, and so on).
    There was one example that really stood out.
    One class of 6th graders averaged 5.8 on that year's test, meaning they were, on average, a whole grade level behind.  A year earlier, they had fared even worse, averaging a 4.1 when they should have been at 5.8.  This isn't exactly a class of top students, then.
    That 6th grade result?  A whopping 1.7-grade improvement in a single year.
    Impressive, you might think.  That is, until you look at their 7th grade scores: an average of 5.5!  That's more than two years behind where they should have been, and lower than they had managed the year before.

Somehow these pupils went from struggling, to apparently excelling, and then back to really struggling – all in the space of two years.  Bit odd, don't you think?
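That trajectory is exactly what the year-on-year check in the algorithm is looking for.  Here's a minimal sketch of the idea – the 1.5-grade 'surge' threshold is my own invention, purely for illustration:

    def suspicious_trajectory(average_scores, surge=1.5):
        """Flag years where a class's average grade-equivalent score jumps far beyond
        the expected gain of roughly one grade level, then falls back the next year.
        average_scores is a list of class averages, e.g. [4.1, 5.8, 5.5]."""
        flagged = []
        for i in range(1, len(average_scores) - 1):
            gain = average_scores[i] - average_scores[i - 1]
            next_gain = average_scores[i + 1] - average_scores[i]
            if gain >= surge and next_gain <= 0:   # big surge, then a decline
                flagged.append(i)
        return flagged

    print(suspicious_trajectory([4.1, 5.8, 5.5]))   # -> [1]: the 6th-grade surge is flagged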

Overall, the data showed evidence of cheating in over 200 Chicago classrooms – nearly 5% of those studied.  That's probably a conservative estimate, given that Levitt and Dubner concede the algorithm could only spot the most blatant cheating.

The writers also cited a North Carolina survey in which 35% of teachers said they had witnessed classroom cheating by other teachers.  The methods included:

  • Giving students extra time in exams.

  • Suggesting answers to the students.

  • At worst, even physically changing the answers.

To really get to the root of the problem, the CPS readministered the same test to around 120 classrooms, barring teachers from any kind of contact with the exam papers or questions.  They selected a group of classrooms suspected of cheating and a group that wasn't under suspicion, and no one was told why the retest was taking place.  The results showed the algorithm had a point – the teachers believed to be clean saw their classes broadly maintain their grades from the first sitting.  These teachers were genuinely good.

In contrast, the teachers suspected of cheating?  Their classes' scores plummeted by a full grade level, sometimes more.
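As a toy illustration of what that comparison looks like in practice – the numbers below are entirely made up, purely to mirror the pattern CPS found:

    def retest_drop(first_scores, retest_scores):
        """Average drop in class scores between the original sitting and the
        supervised retest; a large drop is what gave the cheating classrooms away."""
        drops = [first - second for first, second in zip(first_scores, retest_scores)]
        return sum(drops) / len(drops)

    # Hypothetical numbers: suspect classrooms fall back by about a grade level,
    # while classrooms with genuinely good teachers hold steady.
    print(retest_drop([6.5, 6.8, 7.1], [5.3, 5.9, 6.0]))   # ~1.07 for suspected cheaters
    print(retest_drop([6.5, 6.8, 7.1], [6.4, 6.9, 7.0]))   # ~0.03 for the control group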

In the end, the evidence was strong enough that CPS fired a dozen teachers and many more were warned.  

The next year?  Cheating, as flagged by the computer model, fell by 30%.

The evidence is clear that cheating and malpractice by teachers does take place – the testimonials to PCW alone strongly suggest that some teachers are involved in collusion or other bad behaviour.

But why?  Why risk it all?

  • Teachers get promotions for good results.  There is nothing better for a school than a good set of results, so why not reward the teachers turning out great results by putting them in positions of more power?

  • It is suggested that teachers' pay should be linked more closely to performance – suddenly a good set of results for your class might earn you a cheeky pay rise without your having to take on any more responsibility.  Levitt and Dubner's investigation points to exactly these kinds of incentives.

  • If the results aren't good, teachers can get overlooked for pay rises and promotion.  If the results are especially bad for long periods, teachers could find themselves put on special measures or even fired.

  • Obviously there is pressure from schools too – good results improve a school's reputation and bring access to more funding, while a bad set of results calls current and future funding into question.

Whilst standards in education need to be kept high, teachers cheating cannot be allowed to continue.  High-stakes testing may keep results high, but as we've seen, it comes with an added risk.

Have you seen instances of teachers cheating in schools?  Maybe you've been tempted yourself?  If so, drop us an email and we will post your stories anonymously.
