I thought I should at least note having completed one of those seminal law school activities: grading the writing competition.
Last year I'd kvetched about having to do the writing competition. At BU it's ostensibly the Law Review write-on competition, but unlike some other law schools we use it as the write-on process for all the journals. The torture I put myself through last year wasn't for naught, however, because I made it onto the journal I wanted.
(Yeah, maybe it would have been better to have gotten onto Law Review. But I like the subject matter of my journal, and I had a much saner journal experience last year than my friends on Law Review. Only two issues a year means many fewer tech checks...)
This year, as an article editor, I had to help out with the grading of the entries. I'm not quite sure how the entries were divvied up among the journals for grading, but I personally had a wad of 19 to grade. I didn't get the whole entries, though - I didn't have to grade the memos. I "just" had the citation correction parts and the one-page editing portions to review.
For the citation correction part, candidates had 25 cites that needed to be put into correct ALWD form. (That's fairly useless, since the journals use the Blue Book rather than the ALWD, but because students have been taught the ALWD during first-year writing, we can only test them on what they know. This will probably change shortly, however: the school just switched the curriculum to the Blue Book to help sort out the journal bottleneck of needing to edit articles using a standard we had no clue about.) A perfect cite was worth 4 points, and we could deduct from there. Some cites were easy, and most people got them all right (leading me to wonder what was going through the heads of the people who didn't), while some were just nit-baiters and often cost a point or two for small stuff. What was kind of interesting was that some entries were really good - getting all 4 points on most of the cites - but when they got off, they really got off, scoring 0's on some of them. Other entries tended to lose 1-3 points on lots of the cites but never washed out completely on any of them.
Perhaps the most difficult thing to grade was the editing assignment. People were judged partly on technical corrections, which most people made with about the same level of competence. Harder to grade were the more substantive corrections. The original essay had been horribly written - no transitions, and sometimes it was hard to figure out what the overall point was. I tended to grade higher the entries that took the essay, divined what the gist was, and then rewrote it to make sure that point got across. The bolder the corrections, the better, at least as far as I was concerned. (Assuming, of course, the corrections were actually an improvement.) The problem is, it's a little unclear how everyone else graded theirs. There weren't a lot of instructions. We were told, however, to curve the grades - meaning I had to have at least one entry with a certain score at the high end and another at the low end, and the remaining scores needed to average in the middle. That leads me to wonder whether candidates will ultimately be chosen not so much by the absolute point values from their competitions as by relative rankings. But I don't really know, because I'm not part of that process.
In any case, it was the first time I'd ever really graded anything. It gave me some insight into the incredible annoyance grading exams must be for the professors. (Although at least they can wield more dominion over the scoring than I could.) I can't say that I particularly enjoyed my "power." Grading the entries was only slightly more fun than actually writing them, so that's obviously not saying much. (Also, perhaps "fun" isn't the word. "Less pressured" might be closer, since at least my fate wasn't hanging in the balance this time. Unless I failed to finish my grading, in which case I would have gotten myself and my journal into trouble.)
Note: like exams, competitions were graded anonymously. I could only see the numbers assigned to each entrant; I have no idea who any of the people who wrote them were. And I'm very glad for that. It was hard enough to make tough evaluations of my peers even in the abstract. Being able to associate personalities with entries would have been disastrous, although as it was I could still come to "know" an entrant from how he/she did on both the cite check and the editing parts. People who did one with care tended to do the other with care as well. Or at least I kind of hoped they did...