Three papers to be submitted to the Frontiers in Education conference


The NC State research team is submitting three new research papers:

1. Improving Formation of Student Teams: A Clustering Approach
Shoaib Akbar, Yang Song, Zhewei Hu, Ed Gehringer
Today’s courses in engineering and other fields frequently involve projects done by teams of students. How these teams should be formed is an important research question. In our approach, students submit an ordered list of their topic preferences. Then we run an intelligent assignment algorithm based on clustering. Our implementation is based on k-means clustering and a weighting formula that favors increasing overall student satisfaction and adding members until the maximum allowable team size is reached. The algorithm iterates k-means until all clusters are at or below the maximum team size. Topics are then assigned on the client side, using a matching algorithm. After topics are assigned, students are still permitted to switch topics or teams. We implemented our algorithm in Expertiza, an online peer-assessment system, and deployed it in an object-oriented design and development class of about 130 students.
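As a rough illustration of the clustering step, here is a minimal Python sketch (the actual Expertiza implementation is in Ruby and is not reproduced here): ordered preference lists become score vectors, and k-means is re-run with more clusters until every cluster fits under the team-size cap. The preference encoding, the re-run scheme, and all names below are assumptions for illustration, not the paper’s exact weighting formula.

    # Sketch of preference-based team clustering (illustrative only).
    import numpy as np
    from sklearn.cluster import KMeans

    def preference_vectors(rankings, num_topics):
        """Turn ordered preference lists into score vectors.
        rankings[i] lists student i's topics, most preferred first."""
        vecs = np.zeros((len(rankings), num_topics))
        for i, prefs in enumerate(rankings):
            for rank, topic in enumerate(prefs):
                vecs[i, topic] = len(prefs) - rank  # earlier choice -> higher score
        return vecs

    def cluster_into_teams(vecs, max_team_size):
        """Re-run k-means with more clusters until every cluster fits the cap."""
        k = max(1, int(np.ceil(len(vecs) / max_team_size)))
        while True:
            labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(vecs)
            if np.bincount(labels, minlength=k).max() <= max_team_size:
                return labels
            k += 1  # an oversized cluster remains: try a finer partition

    # Example: 7 students ranking 4 topics, teams of at most 3.
    rankings = [[0, 1], [0, 2], [1, 0], [2, 3], [2, 1], [3, 2], [3, 0]]
    print(cluster_into_teams(preference_vectors(rankings, 4), max_team_size=3))

Growing k until the size cap holds is just one plausible reading of “iterating k-means”; the paper may instead re-cluster only the oversized groups, and its weighting formula would further bias assignments toward overall student satisfaction.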

2. Owning an Open-Source Software as Software Engineering Educators: the Good, Bad and Lessons Learned
Yang Song, Zhewei Hu, Ed Gehringer
To provide better guidance for our graduate object-oriented design and development course, we repurposed an open-source project, the peer-assessment system Expertiza, as a source of class projects. Since 2008, Expertiza has been the main source of software-engineering projects for our course, providing more than two hundred project ideas for student teams. Compared with other sources of projects in our course (e.g., Sahana Foundation, OpenMRS, Apache, Mozilla), the core Expertiza team is on campus and consequently can provide more guidance to students who work on testing, improving, or adding new features. In this paper, we discuss our experience and lessons learned from teaching students software engineering using Expertiza. We have played two roles in helping hundreds of students with software-engineering projects related to Expertiza: we are both the teaching staff and the core team, which gives us two perspectives. From the teaching-staff perspective, we have seen how students learn to analyze, develop, test, and maintain software while addressing practical issues such as dealing with a large codebase. Yet we have also learned that having students work on different projects may present them with challenges of uneven difficulty. From the OSS core-team perspective, we can direct students to the projects and features we need, which may help us with part of the implementation.

3. Collusion in Educational Peer Assessment: How Much Do We Need to Worry About It?
Yang Song, Zhewei Hu, Ed Gehringer
In the peer-assessment process, the key to success is ensuring fair and accurate assessment. Some researchers have attempted to train students to be reliable and helpful reviewers, and have found that a common understanding of rating standards is better established after calibration. However, collusion among students is another threat to the validity and fairness of peer assessment, and it cannot be mitigated by training or calibration. Unfortunately, this issue has not drawn enough attention from researchers in this area. This paper identifies two types of collusion that we have observed: small-circle collusion and pervasive collusion. Small-circle collusion refers to the behavior of students who form small circles and give higher peer grades to each other. Pervasive collusion refers to students who assign top grades to all the submissions they review. This kind of behavior may not arise from a conspiracy between students at first, but later in the semester, other initially honest students may join. In the worst case, if enough students join the pervasive colluders to save time on reviewing and receive (free) high review scores as well, the remaining honest reviewers will become outliers, since they do not agree with the majority. We also present our algorithms for detecting the two types of colluders. By removing these colluders’ peer assessments, we are able to estimate how much inflation colluders bring to educational peer assessment.
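To make the two patterns concrete, here is a small Python sketch of the detection intuition; it is not the paper’s algorithm. It flags “pervasive” reviewers whose given grades are uniformly near the top of the scale, and reciprocal pairs whose mutual grades far exceed what outside reviewers give them. The review tuples, thresholds, and function names are all invented for illustration.

    # Sketch of detecting the two collusion patterns (invented data/thresholds).
    from collections import defaultdict
    from statistics import mean, pstdev

    # (reviewer, reviewee, grade) on a 0-100 scale.
    reviews = [
        ("a", "b", 100), ("a", "c", 100), ("a", "d", 100),  # "a" tops everything
        ("b", "c", 95), ("c", "b", 96),                     # b and c favor each other
        ("d", "b", 70), ("d", "c", 68), ("b", "d", 72),
    ]

    def pervasive_colluders(reviews, high=95, spread=5, min_reviews=3):
        """Flag reviewers whose given grades are uniformly near the top."""
        given = defaultdict(list)
        for reviewer, _, grade in reviews:
            given[reviewer].append(grade)
        return {r for r, gs in given.items()
                if len(gs) >= min_reviews and mean(gs) >= high and pstdev(gs) <= spread}

    def small_circle_pairs(reviews, margin=10):
        """Flag reciprocal pairs whose mutual grades beat outside grades by `margin`."""
        grade = {(r, e): g for r, e, g in reviews}
        received = defaultdict(list)
        for r, e, g in reviews:
            received[e].append((r, g))
        pairs = set()
        for (r, e), g in grade.items():
            back = grade.get((e, r))
            if back is None or (e, r) in pairs:
                continue
            outside_r = [g2 for src, g2 in received[r] if src != e]
            outside_e = [g2 for src, g2 in received[e] if src != r]
            if (outside_r and outside_e and
                    back - mean(outside_r) >= margin and g - mean(outside_e) >= margin):
                pairs.add((r, e))
        return pairs

    print(pervasive_colluders(reviews))  # {'a'}
    print(small_circle_pairs(reviews))   # {('b', 'c')}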

Three papers to be presented at Frontiers in Education 2016


The team has had three papers accepted at the 46th Annual Frontiers in Education (FIE) Conference:

    • “Five Years of Extra Credit in a Studio-Based Course: An Effort to Incentivize Socially Useful Behavior”
    • “An Experiment with Separate Formative and Summative Rubrics in Educational Peer Assessment”
    • “A Peer-Review Markup Language: One Step Toward Building a Data Warehouse for Educational Peer-Assessment Research”

The FIE conference is a major international conference focusing on educational innovations and research in engineering and computing education. This year, it expects submissions related to educational issues in electrical and computer engineering, energy engineering, software engineering, computing and informatics, engineering design, and other engineering disciplines.

The conference will be held at the Bayfront Convention Center in Erie, PA, on October 12-15, 2016.


CSPRED Workshop


The project members from NC State (Ed Gehringer, Ferry Pramudianto, and Yang Song) are organizing the 2016 workshop on Computer-Supported Peer Review in Education (CSPRED), which will be held on Wednesday, June 29, 2016, in conjunction with the 9th International Conference on Educational Data Mining at the Sheraton in downtown Raleigh.

We have received substantial contributions: four posters, five full papers, and three short papers, with interesting results that will help advance peer review in education.

If you are interested in attending the workshop, you can register here:

http://www.regonline.com/edm2016


TLT Friday Live


Dr. Gehringer, the lead PI of the project, gave a talk on TLT Friday Live on May 13, 2016. He presented insights on how computer-supported grading and reviewing of students’ work can provide more and better feedback to students and deepen learning, while decreasing, or at least not increasing, faculty workload.

He introduced and summarized emerging computer-supported options for larger-enrollment courses, including:

  • Automated Grading
  • Constructed Response Analysis
  • Automated Essay Scoring

He also discussed the importance of the following approaches:

  • Self-Review and Peer Review / Grading
  • Calibration and Reputation Systems for Peer Reviewing / Grading

Errors in peer grading


Dr. Luca de Alfaro and Michael Shavlovsky conducted a study on the factors that influence errors in peer grading. They analyzed 288 assignments with 25,633 submissions and 113,169 reviews conducted with CrowdGrader. They found that large grading errors are generally more closely correlated with hard-to-grade submissions than with imprecise students. They also found clear evidence of tit-for-tat behavior when students give feedback on the reviews they received.
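A toy simulation (our illustration, not the authors’ methodology) of what it means for errors to track submissions rather than graders: build an error matrix from a submission-difficulty effect plus a smaller grader-bias effect, then compare how much the per-submission and per-grader mean errors vary. All effect sizes below are made up.

    # Toy check: do grading errors vary more by submission or by grader?
    import numpy as np

    rng = np.random.default_rng(0)
    n_subs, n_graders = 40, 15
    sub_difficulty = rng.normal(0, 8, n_subs)    # assumed: hard-to-grade submissions
    grader_bias = rng.normal(0, 2, n_graders)    # assumed: a smaller grader effect
    noise = rng.normal(0, 3, (n_subs, n_graders))
    errors = sub_difficulty[:, None] + grader_bias[None, :] + noise

    print("variance across submissions:", errors.mean(axis=1).var())
    print("variance across graders:   ", errors.mean(axis=0).var())
    # With these assumed effect sizes, submission-level variance dominates,
    # mirroring the finding that errors track submissions more than graders.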

Perceptions of anonymous review


Dr. Kidd and her teaching assistant, Julia Morris, conducted an informal survey at ODU to investigate what students think about anonymity in peer review. They found that students viewed peer review as more beneficial when it was not anonymous.

PeerLogic organizes a workshop


On Feb. 26, 2016, we submitted a proposal to organize CSPRED 2016 in conjunction with the 9th International Conference on Educational Data Mining. After more than a month, we finally received the good news that the proposal had been accepted. We are now working hard to ensure the best workshop experience for the participants, as well as to attract high-quality papers [more…].