Three papers to be submitted to the Frontiers in Education conference


The NC State research team is submitting three new research papers:

1. Improving Formation of Student Teams: a Clustering Approach
Shoaib Akbar, Yang Song, Zhewei Hu, Ed Gehringer
Today’s courses in engineering and other fields frequently involve projects done by teams of students. How these teams should be formed is an important research question. In our approach, students submit an ordered list of their topic preferences. Then we run an intelligent assignment algorithm based on clustering. Our implementation is based on k-means clustering and a weighting formula that favors increasing overall student satisfaction and adding members until the maximum allowable team size is reached. The algorithm iterates k-means until all clusters are at or below the maximum team size. Topics are then assigned on the client side, using a matching algorithm. After topics are assigned, students are still permitted to switch topics or teams. We implemented our algorithm in Expertiza, an online peer-assessment system, and deployed it in an object-oriented design and development class of about 130 students.
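The size-capped clustering loop described above could be sketched roughly as follows. This is an illustrative sketch only: the simple k-means routine, the team-size cap of 4, and the chunking fallback for degenerate splits are our own assumptions, not the paper's actual implementation (which lives in Expertiza's codebase), and the weighting formula and topic matching are omitted.

```python
import random

MAX_TEAM_SIZE = 4  # hypothetical cap; the real cap is course-specific

def kmeans(points, k, iters=20):
    """Plain k-means over preference-rank vectors (tuples of numbers)."""
    centroids = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each student to the nearest centroid (squared distance)
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        for i, cl in enumerate(clusters):
            if cl:  # recompute centroid as the cluster mean
                centroids[i] = tuple(sum(xs) / len(cl) for xs in zip(*cl))
    return [c for c in clusters if c]

def form_teams(points, max_size=MAX_TEAM_SIZE):
    """Iterate k-means, re-splitting any oversized cluster, until every
    cluster is at or below the maximum team size."""
    teams, pending = [], [list(points)]
    while pending:
        group = pending.pop()
        if len(group) <= max_size:
            teams.append(group)
            continue
        k = -(-len(group) // max_size)  # ceiling division
        clusters = kmeans(group, k)
        if len(clusters) == 1:
            # degenerate split (e.g., identical preferences): chunk arbitrarily
            clusters = [group[i:i + max_size]
                        for i in range(0, len(group), max_size)]
        pending.extend(clusters)
    return teams
```

Each student here is a tuple of preference ranks; the real system would derive these vectors from the submitted topic-preference lists and weight them for overall satisfaction.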

2. Owning an Open-Source Software as Software Engineering Educators: the Good, Bad and Lessons Learned
Yang Song, Zhewei Hu, Ed Gehringer
To provide better guidance for our graduate object-oriented design and development course, we repurposed an open-source project, the peer-assessment system Expertiza, as a source of class projects. Since 2008, Expertiza has been the main source of software engineering projects for our course, providing more than two hundred project ideas for student teams. Compared with other sources of projects in our course (e.g., Sahana Foundation, OpenMRS, Apache, Mozilla), the core Expertiza team is on campus and can therefore provide more guidance to students who work on testing, improving, or adding new features. In this paper, we discuss our experience and lessons learned from teaching students software engineering using Expertiza. We played two roles in helping hundreds of students on software engineering projects related to Expertiza: we are both the teaching staff and the core team. This gives us two perspectives. From the teaching-staff perspective, we have seen how students learn to analyze, develop, test, and maintain software while addressing practical issues such as dealing with a large codebase. Yet we also learned that having students work on different projects may present them with challenges of uneven difficulty. From the OSS core team's perspective, we can direct students to the projects and features we need, which may help us with part of the implementation.

3. Collusion in Educational Peer Assessment: How Much do We Need to Worry about It?
Yang Song, Zhewei Hu, Ed Gehringer
In the peer-assessment process, the key to success is to ensure fair and accurate assessment. Some researchers have attempted to train students to be reliable and helpful reviewers, and have found that a common understanding of rating standards is better established after calibration. However, collusion among students is another threat to the validity and fairness of peer assessment, and one that cannot be mitigated by training or calibration. Unfortunately, this issue has not drawn enough attention from researchers in this area. This paper identifies two types of collusion that we have observed: small-circle collusion and pervasive collusion. Small-circle collusion refers to the behavior of students who form small circles and give higher peer grades to each other. Pervasive collusion refers to students who assign top grades to all the submissions they review. This behavior may not begin as a conspiracy among students, but later in the semester, other initially honest students may join. In the worst case, if enough students join the pervasive colluders to save time on reviewing and earn (free) high review scores as well, the remaining honest reviewers become outliers because their assessments disagree with the majority. We also present our algorithms for detecting both types of colluders. By removing these colluders’ peer assessments, we are able to estimate “how much inflation is brought by colluders in educational peer assessment.”
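To make the two patterns concrete, here is a minimal sketch of what such detection might look like. The grading scale, the thresholds, and the function names are all our own illustrative assumptions; the paper's actual detection algorithms are not reproduced here.

```python
from itertools import combinations

TOP_GRADE = 100            # hypothetical top of the grading scale
TOP_SHARE_THRESHOLD = 0.9  # assumed: fraction of top grades that flags a reviewer
MUTUAL_THRESHOLD = 95      # assumed: reciprocal grade level suggesting a small circle

def pervasive_colluders(reviews):
    """reviews: {reviewer: {reviewee: grade}}.
    Flag reviewers who give the top grade to nearly every submission
    they review (the 'pervasive collusion' pattern)."""
    flagged = set()
    for reviewer, given in reviews.items():
        if not given:
            continue
        top = sum(1 for g in given.values() if g == TOP_GRADE)
        if top / len(given) >= TOP_SHARE_THRESHOLD:
            flagged.add(reviewer)
    return flagged

def small_circles(reviews):
    """Flag pairs of students who reciprocally grade each other above a
    threshold (the 'small-circle collusion' pattern, for circles of two)."""
    pairs = set()
    for a, b in combinations(reviews, 2):
        if (reviews[a].get(b, 0) >= MUTUAL_THRESHOLD and
                reviews[b].get(a, 0) >= MUTUAL_THRESHOLD):
            pairs.add((a, b))
    return pairs
```

Once flagged, the colluders' assessments can be removed and the remaining grades re-aggregated, which is what allows the grade inflation attributable to collusion to be estimated.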