Enhancing the Efficacy of Teacher Incentives
Loss-aversion incentives increased math test scores between 0.2 and 0.4 standard deviations
In recent years, a number of U.S. states and school districts have implemented teacher financial-incentive plans, also known as merit pay, with the goal of increasing student achievement. Some past studies have shown that such reform attempts, which pay teachers bonuses after their students hit certain goals, have had limited effects on student achievement.
In Enhancing the Efficacy of Teacher Incentives Through Loss Aversion: A Field Experiment (NBER Working Paper No. 18237), authors Roland Fryer, Jr., Steven Levitt, John List, and Sally Sadoff find that using an alternative "loss aversion" incentive -- with teachers being paid bonuses in advance and asked to give money back if students don't achieve specific results -- significantly improves student achievement. Their results also suggest that loss-aversion incentives might be used in the corporate world in the pursuit of profits.
Although earlier studies have confirmed a correlation between teacher quality and student achievement, the challenge to date has been identifying quality teachers and designing incentives that lead all teachers to produce improved and lasting student achievement. At least ten states and numerous school districts in the United States have adopted programs that reward teachers with extra pay after students achieve certain goals on tests or report cards, but these "traditional" incentive programs generally have not had large effects on long-term student performance.
Fryer and his co-authors conduct a field experiment on teacher incentives using the concept of loss aversion -- that is, framing incentives as losses rather than gains. They worked with schools in Chicago Heights, Illinois, which is located thirty miles south of Chicago and has nine K-8 schools with a total of about 3,200 students, during the 2010-11 school year. Chicago Heights' schools serve primarily low-income minority students who struggle with low achievement rates.
In cooperation with school administrators and the teachers' union, the authors randomly selected 150 volunteer teachers and divided them into two main categories. The "gain" group was subject to traditional merit-pay incentives distributed after student achievement levels were determined and met; the "loss" group was subject to loss-aversion incentives that gave bonuses in advance, with the stipulation that money would be returned by teachers if students didn't hit stipulated test goals at the end of the school year. With a pool of $632,960 to distribute in incentive payments, the authors further subdivided the "gain" and "loss" groups in order to measure individual-based and team-based teacher incentives.
Using benchmarks from prior student test scores and final end-of-school-year ThinkLink Predictive Assessment test results, the authors find that loss-aversion incentives increased math test scores between 0.2 and 0.4 standard deviations, or the equivalent of increasing teacher quality by more than one standard deviation. The traditional "gain" incentives yielded "smaller and statistically insignificant results." Similar patterns were found in reading test scores -- and in both individual-based and team-based teacher incentive approaches. The authors did not identify any other factors, such as student absenteeism or outright cheating in test scores, which could explain the differences in achievement results.
--Jay Fitzgerald
Thursday, December 5, 2013
Math scores & loss aversion
4 comments:
Oh my god, how demeaning. This may get math scores up the first time it is tried, but it is going to kill employee morale.
Guess and check.
It always makes me think of my old programming students. Instead of examining each line of code, they just change something to see if they can get the program to work better. (I reduced errors by .25 standard deviations!) One needs to find some specific number that's wrong and then track it backwards to its source. The method described in this article just assumes that bad math scores are a teacher problem, not systemic curriculum and philosophy problems.
What, exactly, is the problem? What number is bad? Where does that number come from? Do the test questions give you information that you can trace back to the source? OK, so the bad number relates to "problem solving". What do you do? Spend more time on "problem solving"? Heaven forbid that one just tests for simple skills like manipulating fractions, something that can give you proper feedback for a problem you can fix. No. Math is some sort of complex thinking process that allows schools to not take responsibility for results. Then, as in this case, they turn around and put the onus for statistical success on individual teachers. They should use loss aversion on the schools. At least that would tackle systemic problems. As a parent, I see it as a school problem, but behind the school veil, teachers and administrations are trying to pin the blame on each other.
Problems can also come in the form of anecdotes for specific students. Everyone trashes anecdotes, but they are something you can carefully analyze to find the problem, and, it's very likely that many other students have the same problem. (This is how you debug programs.) Everyone is looking for some single solution using guess and check techniques. It's the tyranny of statistics. Everyone is looking at relative standard deviations rather than individual and absolute problems and solutions. Education is about individuals, not statistics. There will be many solutions for the many different types of students. You have to solve the problem of education from the bottom up, not the top down. It's just that so many people can't handle details.
Steve is soooo right about anecdotes.
This may get math scores up the first time it is tried, but it is going to kill employee morale.
I have no idea what to think…
I was slightly agog when I read this abstract, and still am, pretty much.