Does evaluation distort teacher effort and decisions? Quasi-experimental evidence from a policy of retesting students
Performance evaluation may change employee effort and decisions in unintended ways, for example in multitask jobs where the evaluation measure captures only a subset of (or differentially weights) the job tasks. We show evidence of this multitask distortion in schools, with teachers allocating effort across students (tasks). Teachers are evaluated based on student test scores; students who fail the test are retested 2–3 weeks later; and only the higher of the two scores is used in the teachers’ evaluations. This retesting feature creates a sharp difference in the returns to teacher effort directed at failing versus passing students, even though barely failing and barely passing students have arguably equal educational claim on (and returns to) teacher effort. Using regression discontinuity (RD) methods, we show that students who barely fail the end-of-school-year (𝑡) math test, and are then retested, score higher one year later (𝑡+1) than those who barely pass. This difference in scores appears during the four years of the retest policy, but not in the years before or after. We find no evidence that the results arise from retesting per se, or from changes in students’ own behavior alone. The results suggest teachers devote more effort to some students (tasks) simply because of the evaluation system’s incentives.
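The RD comparison behind the headline result can be illustrated with a minimal local-linear sketch: fit separate regressions on each side of the failing cutoff and take the difference in fitted outcomes at the threshold. Everything here is an assumption for illustration, not the paper's specification: the `rd_estimate` function, the bandwidth, and the simulated data with a built-in jump are all hypothetical.

```python
import numpy as np

def rd_estimate(score, outcome, cutoff=0.0, bandwidth=5.0):
    """Sharp RD sketch: fit a separate local linear regression on each
    side of the cutoff and take the difference in fitted values at it."""
    x = score - cutoff
    in_bw = np.abs(x) <= bandwidth
    fail = in_bw & (x < 0)    # barely failing side (retested under the policy)
    pass_ = in_bw & (x >= 0)  # barely passing side
    # np.polyfit returns [slope, intercept]; the intercept is the
    # fitted outcome exactly at the cutoff on each side
    fit_fail = np.polyfit(x[fail], outcome[fail], 1)
    fit_pass = np.polyfit(x[pass_], outcome[pass_], 1)
    return fit_fail[1] - fit_pass[1]

# Simulated data (purely illustrative, not the paper's data): year-t scores
# centred at the pass cutoff, with a built-in +2.0 jump in year-t+1 outcomes
# for students just below the cutoff.
rng = np.random.default_rng(0)
score_t = rng.uniform(-10.0, 10.0, 2000)
outcome_t1 = 0.1 * score_t + 2.0 * (score_t < 0) + rng.normal(0.0, 1.0, 2000)
est = rd_estimate(score_t, outcome_t1)  # should recover roughly +2.0
```

In the paper's setting, `score` would be the year-𝑡 test score, the cutoff the pass mark, and `outcome` the year-𝑡+1 score; the estimated jump is the effect of barely failing (and hence being retested) relative to barely passing.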
1 April 2019 Paper Number CEPDP1612
This CEP discussion paper is published under the Centre's Education and skills programme.