Our SIGDOC 2010 proceedings paper, "Measuring Helpfulness in Online Peer Review," was published this week. Though sadly none of the authors made it to Brazil for the conference, we are happy that the work is now available.
We are especially proud to announce that our undergraduate research assistant Chris Klerkx and R&W Master's student Michael Wojcik are both authors on this paper, along with WIDE UX Lead Mike McLeod and WIDE Co-Director Bill Hart-Davidson. Bravo, Chris, for building that C.V. early in your academic career! (He was a first-year student at MSU when we wrote the paper and has just begun his sophomore year.)
Here is more about the method described in the paper:
Our review method takes input from a structured but flexible review workflow and stores the artifacts generated in the review process (e.g., comments, suggestions for revision) along with other user-supplied descriptive and evaluative data that correspond with review metrics (such as whether or not a comment addressed a specific review criterion). The core method is designed as a web service that can receive this input from a variety of sources and production environments, perform the analytics needed to calculate reviewer helpfulness, and generate visualizations meant to offer both formative and summative feedback on reviewers' performance. Our method understands a review to be a group activity consisting of reviewers, review targets (documents), and criteria, all under the direction of a review coordinator.
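To make the data model concrete, here is a minimal sketch of the kind of structures the web service might receive. All of the names below are illustrative assumptions, not the actual schema or API from the paper:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: review artifacts plus the user-supplied
# evaluative data (criterion coverage, helpfulness ratings) that
# feed the analytics. Names are ours, not from the paper.

@dataclass
class Comment:
    reviewer: str
    target: str                                   # document being reviewed
    text: str
    criteria_addressed: list = field(default_factory=list)
    rated_helpful: bool = False                   # evaluative data from the recipient

@dataclass
class Review:
    coordinator: str                              # review coordinator directing the activity
    criteria: list = field(default_factory=list)  # review criteria for this round
    comments: list = field(default_factory=list)  # artifacts generated during review
```

A production environment would serialize structures like these and post them to the service, which runs the analytics and returns visualizations.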
There are a number of specific applications for this method, but we will focus on one, called "Eli," that has been developed for online peer review in classroom settings. Eli gives feedback to both teachers and students about reviewers' helpfulness in a single review (a helpfulness score) and over time (a helpfulness index), so that students' ability to do reviews can be meaningfully evaluated.
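One plausible way to realize the score/index pair is shown below. This is a hedged sketch, not the formula from the paper: it assumes the score is the share of a reviewer's comments in one review that recipients rated helpful, and the index is the average of those scores across reviews:

```python
def helpfulness_score(ratings):
    """One review: fraction of a reviewer's comments rated helpful.

    `ratings` is a list of booleans, one per comment, as supplied by
    the recipients. Assumed definition, not the paper's exact metric.
    """
    return sum(ratings) / len(ratings) if ratings else 0.0

def helpfulness_index(scores):
    """Over time: mean of a reviewer's per-review helpfulness scores."""
    return sum(scores) / len(scores) if scores else 0.0
```

For example, a reviewer whose three comments drew two "helpful" ratings would score about 0.67 for that review, and a reviewer with scores of 0.5 and 1.0 across two reviews would carry an index of 0.75.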