PI Forum

The lead PI is the founder of the TUES-supported PI Forum on Peer Assessment, a group of nearly 40 researchers who meet online about once a month to discuss new research in peer-assessment software and pedagogy. The forum's ultimate goals are to improve techniques for, and measurably increase the use of, formal and informal peer assessment in STEM courses, for both formative and summative purposes; to improve the engagement of STEM students through social dialogue; and to bring recognition to peer assessment as an important contributor to STEM learning. The collaboration focuses on several complementary aspects of peer assessment. It examines how best to connect peer-assessment techniques to specific STEM outcomes, e.g., how peer review may be used to learn to think like a scientist. Members study learning in high-school biology and in college courses in biology, chemistry, physics, and psychology research. Some members train pre-service STEM teachers to provide more effective formative feedback and summative evaluations through peer reviewing. Others focus on integrating the knowledge and wisdom of the Writing-Across-the-Curriculum community on peer assessment as a tool for teaching critical-thinking skills, particularly in large classes in STEM disciplines.


CSPRED workshop

As part of our effort to disseminate the project's results, we are organizing a CSPRED (Computer-Supported Peer Review in Education) workshop in conjunction with the 9th International Conference on Educational Data Mining (EDM 2016). The previous CSPRED workshop was held in conjunction with the Tenth International Conference on Intelligent Tutoring Systems (ITS 2010). That workshop drew about 30 participants, with submissions in the form of full papers, short papers, and posters. Since then, the community of peer-review researchers has grown, as evidenced by the fact that Google Scholar has indexed over 300 papers on "peer assessment" in the past year.

The workshop seeks research in peer assessment that can help answer fundamental pedagogical questions such as: Can instructors share more assessment responsibility with students, and if so, how? How should students engaged in peer review be assessed, both as they submit artifacts for review and as they give and receive feedback? Is peer review an important practice in the student's chosen profession? If so, should the peer-review process employed in the classroom be adapted to the profession's norms, and how does this affect peer-review software? Does peer review yield new information on learning processes, and if so, how can it be discovered?

In addition, the workshop seeks answers to practical questions such as: What are the best approaches to improving inter-rater reliability and the quality of feedback? What computer-mediated processes can be used effectively to train reviewers and improve their motivation? How can these approaches be combined and applied in different environments? Last but not least, we seek answers to technological questions such as: How can we apply intelligent technologies such as data mining, natural-language processing, and machine learning to improve feedback quality and learning gains? How can peer-review data visualizations be presented to instructors and students? To what extent can peer-review systems be classified into a common ontology?

More information can be found on the workshop's website, www.cspred.org.