Check out this blog by David Patten, an award-winning former history teacher. It's filled with interesting information about value-added measures and rank-ordering teachers:
Research on VAM and its efficacy in teacher evaluation is plentiful, and it does not bode well for anyone. Dr. Linda Darling-Hammond, initially a supporter of the value-added concept, and Dr. Edward Haertel headed a 2012 study on VAM at Stanford University. They were stunned by the flaws they uncovered: wild swings in year-to-year teacher ratings, dramatic shifts in ratings based solely on which students were assigned to a given teacher, and no control for the forces outside the classroom that affect a student's life and hence his or her test scores.
Their conclusion: “value added scores should not be used in high stakes evaluations of teachers.” That statement mirrored the findings of the National Research Council, which in turn dovetail with the research of the American Statistical Association. In a 2014 report, the ASA found that “teachers account for about 1% to 14% of the variability in test scores” and that “the majority of variation in test scores is attributable to factors outside of the teacher’s control such as student and family background, poverty, curriculum and unmeasured influences.”
Dr. Stephen J. Caldas, a statistician with extensive experience with the value-added model, gets directly to the point: he deems VAM “psychometrically indefensible.” He goes on to say, “A grave injustice is being foisted from the top down on educators who are caught up in the most recent crush of reform initiatives.”