KuroshinRatingSuggestions


From MojoAnalysis.


List the users who have rated a comment. (Now implemented)

One simple mechanism to try would be to make the rating system more transparent: allow readers to see not only a rating, but also who rated at what level, much like how the voting system allows readers to see who cast which vote before a story gets killed or goes to the page. Interesting idea. It might prevent folks from hiding behind secrecy in order to push their ideology through ratings. It's an idea. -- MaynardGelinas
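
To make the suggestion concrete, here is a minimal sketch -- Python for illustration only (Scoop itself is Perl), with invented names rather than real Scoop code -- of a rating store that exposes who rated what:

 from collections import defaultdict

 ratings = defaultdict(dict)   # comment_id -> {username: rating}

 def rate(comment_id, username, value):
     """Record (or overwrite) one user's rating of a comment."""
     ratings[comment_id][username] = value

 def who_rated(comment_id):
     """The full, public list of (username, rating) pairs."""
     return sorted(ratings[comment_id].items())

 rate(42, "alice", 4)
 rate(42, "bob", 1)
 print(who_rated(42))   # [('alice', 4), ('bob', 1)]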

I think it's a good idea; honesty is the best policy (i.e. transparency and openness). -- JosefDaviesCoates

PeerReview is necessary. However, this isn't the way. There is no way to retract a vote, or to explain a vote. Peer review is a dialog, not a number. Use English. Furthermore, this totally violates the principle of Wiki:SecretBallot. Retaliation is likely, and given the poor information channel, it's difficult to AssumeGoodFaith in every case. -- SunirShah

Other suggestions? It's a numeric metric--I can't think of any good way to attach text explanations. I know you dislike the idea of numeric ratings, but assuming they exist, is there a better way for the system to be transparent? -- RustyFoster

I should be clearer re: secret ballots. From now forward, things will be OK because people have an expectation of openness. But the past ratings were done with the expectation of secrecy. I agree that maliciousness occurred because of that, but there's no reason to punish people now. Let bygones be bygones. Then again, maybe there will be little retaliation. I'm just a worry wart.

As for a better way, dunno. Will have to think about that. WebLogs require a separate sort of thinking. After all, they are shallow adaptations of broadcast media (newspapers); heck, they're still published in columns. The feedback channels on broadcast media are naturally poor. Here, I'm pondering the notion of the [k5 coward]. I guess the rallying cry is, "Reply, don't rate!" -- SunirShah

Why not do both? And then rate (and reply to) replies ad infinitum. I also quite like the Amazon question "Was this review helpful?" as a way of further refinement. -- JosefDaviesCoates

Later... Alright, how about this: editorial annotations/feedback. If I want, I can attach a little note to a comment (perhaps visible only to the author of the comment) to give feedback. Really, e-mail is the same thing, but this adds context and serves as GuidePosts. On the one hand, replying would serve the purpose of disagreeing with the comment--hence the k5 coward bit--but editorial annotations could serve to discuss the quality of the writing, which is off-topic to the discussion.

Implementing this would be a matter of letting people reply to topical comments with editorial comments instead of just more topical comments. I'm not sure restricting the readership of these is necessary or good. No sense repeating what someone else said. Instead, you could add a flag in display preferences so users can choose whether they want to read editorial comments attached to topical comments not written by them.

This idea has severe user interface problems, I admit. -- SunirShah
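
As a rough illustration of the display-preference flag described above (Python for illustration; the comment fields here are invented, not Scoop's actual schema):

 def visible_comments(comments, viewer, show_editorial=False):
     """Always show topical comments; show an editorial comment only
     if the viewer opted in or wrote the comment it is attached to."""
     shown = []
     for c in comments:
         if c["kind"] == "topical":
             shown.append(c)
         elif show_editorial or c["parent_author"] == viewer:
             shown.append(c)
     return shown

 thread = [
     {"kind": "topical",   "author": "alice", "parent_author": None},
     {"kind": "editorial", "author": "bob",   "parent_author": "alice"},
 ]
 print(len(visible_comments(thread, viewer="carol")))   # 1: topical only
 print(len(visible_comments(thread, viewer="alice")))   # 2: her feedback too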


Even later... Actually, there would be no useful point in giving constructive criticism on writing ability without the ability to correct the offending material. Writing is a two-phase cycle: production <-> editing. So, given that, maybe there's no way to give useful feedback on a WebLog. After all, you couldn't allow someone to edit their comments to change their meaning, a la ContentSwizzling, without the commensurate ability to correct the resultant discussion. That would be evil. In fact, this kind of thing happened on WikiWiki in April, 1999. At least there it was eventually possible to correct the damage, even if it took a few weeks to sort out the mess. -- SunirShah

This is absolutely essential for the submission queue, agreed. -- KarstenSelf 8 Apr 2001


Use English.

See also RatingAsContent.

A better way, I think, would be to use English--for example the editorial comments--to help the original author improve the article copy by making suggestions. As for individual comments, that's tougher. But is there more value in 200 redundant comments or 5 well written pages? Whatever. Collaborate, don't moderate. -- SunirShah

Just a quick cut-in, Sunir: this is a great idea. I've often thought that Meta Moderation on Slashdot should require that those who mark a moderation as "Unfair" explain their rationale, which would then be forwarded back to the original moderator. I don't know how that relates to K5, since there's no secondary check on ratings, but it flashed in my head as I read your point. -- MaynardGelinas

There's a distinction between what a site like K5 or Slashdot is trying to accomplish and what a wiki or collaborative editing tool does. WebLogs are conversations: the focus is on threads, discussion, and preserving a full record of the conversation, while at the same time illuminating the highlights and winnowing away the chaff. Collaborative editing tools are about creating a (usually) interlinked and interconnected set of documents. Very different goals, very different mechanisms. Both extremely interesting, and both very much on my mind.

I don't really see how they have very different goals--both are trying to produce lots of quality content that's easy to find. Both succeed to varying degrees. I'm thinking maybe the best content would first go round a focussed wiki a few times and then be filtered through a k5-like system. -- JosefDaviesCoates

Though, as applied to the submission queue and the editorial process, your post has a great deal more applicability. I've been frustrated by the submission queue for much of the past six months, and it's not getting better. Complex measures are necessary, and a simple numeric score is not enough. However, that's not the core focus of the conversation here, and you're confounding the issue. -- KarstenSelf

PeerReview is a dialog, not a number. This is why ratings are broken. The k5 editorial comments are a far, far, far more effective means to peer review each other. In fact, the submission queue + editorial comments are really what make the site, I think. I hope I'm not confounding the broader issue of how to best run a WebLog in general. -- SunirShah

PeerReview is both a dialog and a number. You're focussing almost exclusively on the ReviewPart, while omitting any consideration of the PeerPart. Both are necessary.

In academic peer review, one of the crucial functions of the journal is in organizing and vetting the reviewers, as well as assessing their reviews. In order to do this, some sort of quantitative metric (however informal) must be maintained. In larger journals I'm pretty sure you'd find the methods are actually rather formalized, with minimum requirements and various score metrics used. There's also the issue of evaluating the review comments themselves, for quality, accuracy, and actionability.

Pure text-based reviews are useful to a point, but they don't scale. They're sufficient for a site such as here, where a half-dozen or so people are discussing a topic. But in a discussion of several score participants, this rapidly degrades to a chorus of "me too"s. Might as well be AOL. Moderation allows me to mark a comment indicating agreement, disagreement, or assessment of quality. Where a thread of discussion exists, I can similarly assess responses and counterresponses. Moderation is a convenient shorthand -- it's not a substitute for all text, but it can be used to amplify or de-emphasize existing text. -- KarstenSelf 8 Apr 2001


Additional statistics.

The statistical additions to the rating system are much needed. They will help ferret out individual bad ratings, while showing which comments are poorly rated because of controversy. I agree that even well-written iconoclastic views should expect some high ratings given a large enough sample; and conversely, those comments which get rated down by everyone are usually of no value. So I agree that, over a large enough sample, a comment tends to get rated far more fairly than from just a few ratings -- especially ratings from one's opponents. -- MaynardGelinas

There's a bug in the Mojo tallying system, in that a post moderated by one person counts as much as a post moderated by a bunch of people. Leafnode posts, which get few readers but draw mainly those with strong involvement in a discussion, tend to provoke extreme moderations; so do editorial comments (particularly critical ones, I've noticed). Moderation by small numbers of people should have less influence than moderation by large numbers of people, all else being equal. Rusty and I have discussed this; I hope it's fixed eventually. -- KarstenSelf

Note: this is now implemented.
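
One plausible shape for such a fix -- a sketch only, assuming a 1-5 scale, with tuning constants invented here rather than taken from Scoop -- is to shrink each comment's average toward a global prior, so a lone extreme moderation moves the tally only a little:

 PRIOR_MEAN = 3.0    # assumed "average" comment on a 1-5 scale
 PRIOR_WEIGHT = 5    # the prior counts as five phantom ratings

 def damped_rating(ratings):
     """Few raters -> result stays near the prior; many raters -> the
     raw mean dominates, so small-sample extremes lose influence."""
     n = len(ratings)
     if n == 0:
         return None
     return (PRIOR_MEAN * PRIOR_WEIGHT + sum(ratings)) / (PRIOR_WEIGHT + n)

 print(damped_rating([1]))        # ~2.67: one harsh vote barely moves it
 print(damped_rating([1] * 20))   # 1.4: twenty harsh votes dominate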

This is one of two changes I'd like to see made to ScoopEngine. Specifically, I'd like to see additional statistics for each comment's rating tally.

Adding these statistics, plus the ability to filter according to complex rules (show me only highly rated or highly controversial stories with more than 5 ratings, and all stories with fewer than 5 moderations), will produce a system which can both support a high SignalToNoiseRatio and be relatively free from abuse of various sorts.

Another means to the same end is to allow inclusion or exclusion of specified users' opinions in your rating scheme -- essentially coming up with a "buddy list" of editors. This could be a manual or automated process, or a bit of both. (Both the filtering and the buddy-list ideas are sketched below.)
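
A rough sketch of both ideas -- the complex display rules and the buddy list -- in illustrative Python; every name and threshold here is invented, not Scoop's:

 def passes_filter(story, min_ratings=5, high=3.5, controversy=1.5):
     """Show highly rated or highly controversial stories with enough
     ratings, plus every story with too few ratings to judge yet."""
     if story["n_ratings"] < min_ratings:
         return True
     return story["mean"] >= high or story["stdev"] >= controversy

 def buddy_rating(comment_ratings, buddies):
     """Average only the ratings cast by my chosen editors."""
     picked = [v for u, v in comment_ratings.items() if u in buddies]
     return sum(picked) / len(picked) if picked else None

 stories = [
     {"title": "A", "n_ratings": 12, "mean": 4.2, "stdev": 0.4},
     {"title": "B", "n_ratings": 9,  "mean": 2.1, "stdev": 0.3},
     {"title": "C", "n_ratings": 2,  "mean": 1.0, "stdev": 0.0},
 ]
 print([s["title"] for s in stories if passes_filter(s)])   # ['A', 'C']
 print(buddy_rating({"alice": 5, "bob": 1, "carol": 4},
                    buddies={"alice", "carol"}))            # 4.5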

I agree with both of these. The only questions I have are technical ones, and therefore probably not interesting in this context. "When the system is ready, the features will come." --RustyFoster

I also agree, and look forward to the system being ready. What's the estimated time of arrival? -- JosefDaviesCoates


Group-based rating (Move to KuroshinRatingGroups??)

See also RatingGroups.

Different groups of people have different views of what Kuro5hin should be. Not all of these views will have compatible ratings. Trying to create a single "best" rating from these divergent views may lead to almost nobody being satisfied.

A common example of the problem is the controversy about "disagreement"-based rating. Some people think it is entirely OK to give a low rating to "untrue" or "unpopular" comments, while others prefer ratings to be relatively "objective" and not greatly affected by personal (or even widely held) opinions. For instance, suppose that there is a comment that is well-written, supported with several topical links, and explores its idea in depth. If it presented a popular idea, most people would agree on a 4.00/5.00 rating. Now suppose that the idea was "young-earth creationism". Some people would rate it a 1.00 because they strongly disagree, while others would rather keep the high rating for well-written comments, and save the 1.00 ratings for poorly written FlameBait (on either side of the issue).

Another division is between the "quality-centered" and "approval-centered" views of ratings. Some people see ratings as a means of separating high-quality from low-quality contributions, while others see high/low ratings more as a personal matter of approval or disapproval. A quality-centered rater might want to give out very few 5 ratings (saving them for truly exceptional comments), and might be more free with 2 ratings (for flawed, but not worthless, comments). An approval-centered rater, by contrast, might freely hand out 5 ratings to comments they agree with and low ratings to comments they dislike, largely regardless of the quality of the writing.

Suppose Alice and Bob both think a comment should be a 4.00, and Charles and Darlene both think it should be a 1.00. Rating the comment at 2.50 doesn't serve either group well. As I see it, ratings should be structured for the benefit of the comment readers, not (mainly) for the benefit of comment authors (approval), or the administrators (as a measure for giving out "trusted user" status).

My suggestion is to allow people to form groups with specified rating "charters", which detail their rating goals. For instance, an "objective" group might say that raters in the group should not consider their personal opinions, but should concentrate on how well the comments present their ideas. (They might consider the comments to be like a formal debate, where one doesn't choose the topics or which side one will argue.) An "open rating" group might say that anything is allowed as the basis of one's ratings. (If one subscribes to this ratings group, one shouldn't complain about arbitrary-seeming ratings.) A "best of K5" group might give only a few 5 ratings each day, and rate most comments as 2 or 3. A "best trolls of K5" group might give the trolls 5 ratings and the serious posts low ratings. Etc...

Individual readers could subscribe to groups, and the ratings they see would be calculated from that group's ratings. For instance, if the "best of K5" group collectively rated a comment at 4.33, that would be the rating used for subscribers to that group. It wouldn't matter that the "troll central" group rated the same comment as a 1.10, or that the "open-rating" group settled on 2.73.

I have some ideas about implementation, but I wanted to see if anyone else likes the idea. One advantage of the group-rating idea is that Kuro5hin could give the dissenters room to experiment with their strange ideas, while not greatly interfering with the rest of the community. On the other hand, some people may believe it is better for the "community" if everyone has to work together under a single rating system. What do you think? --CliffordAdams
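
For concreteness, here is a sketch of how per-group tallies might be computed (illustrative Python; the group names and memberships are invented):

 groups = {
     "best-of-k5":    {"alice", "bob"},
     "troll-central": {"mallory"},
 }

 def group_rating(comment_ratings, group):
     """Average a comment's ratings over one group's members only."""
     members = groups[group]
     picked = [v for u, v in comment_ratings.items() if u in members]
     return sum(picked) / len(picked) if picked else None

 comment = {"alice": 4, "bob": 5, "mallory": 1}
 print(group_rating(comment, "best-of-k5"))      # 4.5, shown to subscribers
 print(group_rating(comment, "troll-central"))   # 1.0, shown to that group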

I love this idea. I think it could be a technically feasible way to at least approach the ideal of individually weighted ratings. Your analysis is right: the current system basically makes almost no one really happy.

Of course, this would require a fairly large refactoring of the comment code. That'll happen eventually, but I don't know when. So consider this filed in the TODO list, and you'll probably see something very like it sooner or later. --RustyFoster

I hate it ;-) It's complex, confusing, likely unworkable, and above all, unnecessary. Though rating buddies would be a good thing.

Regarding complexity: the essential problem is that you're now stuck with ensuring that someone who claims to be acting impartially actually is acting impartially.

Regarding unnecessary: it really doesn't matter whether someone's representing themselves fairly or not. What you're interested in is whether their moderations (or other behavior) are useful predictors of your own tastes. Note that this can be total disagreement, so long as it's consistent disagreement. In a true CollaborativeFiltering system, a method of establishing these relationships would be supported by the system itself. Explicit categorization by users isn't necessary, as this would be evident on an empirical basis from actual behavior. -- KarstenSelf 8 Apr 2001
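
To illustrate the point that consistent disagreement is as predictive as agreement, here is a sketch using Pearson correlation over co-rated comments (Python 3.10+ for statistics.correlation; purely illustrative -- a real CollaborativeFiltering system would involve much more):

 from statistics import correlation   # Python 3.10+

 def predictiveness(mine, theirs):
     """Correlate two users' ratings on comments both have rated.
     Near +1 or -1 is a useful predictor; near 0 tells me nothing."""
     shared = sorted(set(mine) & set(theirs))
     if len(shared) < 2:
         return None
     return correlation([mine[c] for c in shared],
                        [theirs[c] for c in shared])

 me         = {1: 5, 2: 1, 3: 4, 4: 2}
 contrarian = {1: 1, 2: 5, 3: 2, 4: 4}
 print(predictiveness(me, contrarian))   # -1.0: perfectly predictive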

I love it - value always varies across domains and localities, and this must be accounted for. To find out about and understand a topic in depth, it is often best to start by reading an attempt at objectivity (bearing in mind that you cannot not be you, but AssumeGoodFaith), and then read opposing subjective arguments. It'll always be handy to have an all-k5 benchmark though, to help decide which objective/subjective articles I choose to read. -- JosefDaviesCoates


CategoryRatingSystem CategoryWebTechnology CategoryKuroshin

