Dr. Luca de Alfaro and Michael Shavlovsky conducted a study on the factors that influence errors in peer grading. They analyzed 288 assignments comprising 25,633 submissions and 113,169 reviews conducted with CrowdGrader. They found that large grading errors are generally more closely correlated with hard-to-grade submissions than with imprecise students. They also found clear evidence of tit-for-tat behavior when students give feedback on the reviews they received.
Dr. Kidd and her teaching assistant, Julia Morris, conducted an informal survey at ODU to investigate what students think about anonymity in peer review. They found that students viewed peer review as more beneficial when it was not anonymous.