A common problem with rating/ranking systems is that people have conflicting goals and interpretations of acceptable behavior within the system. Even when given fairly clear guidance like the [Kuro5hin rating guidelines], people will vary in their interpretation of those guidelines. (The K5 guidelines themselves are unclear on whether "normal" comments should be rated "2" or "3".) Even in cases where the guidelines are clear, like not rating based on disagreement, some people will openly refuse to comply with them.
Some sites handle these conflicts with "MetaModeration" which attempts to correct or expose deviations from the community guidelines. SlashDot has a random MetaModeration system which can reduce the ability of deviant/unpopular moderators to moderate in the future. KuroShin has recently [late 2000] chosen to expose the identity of raters, allowing people to see which accounts are responsible for the ratings given. Both of these approaches lead to problems with retaliatory ratings.
Rating groups could provide much of the quality control of other approaches while being resistant to retaliatory rating. The basic idea is to form a group of people who agree to a group charter which gives explicit guidelines for determining ratings and resolving conflicts. Groups could be either invitation-only (following the charter) or open-membership. Invitation-only groups might require recommendations, a history of good posted content, or even a popular vote for membership. Group members would always be free to leave a group, but they should only be expelled/banned from a group by a process in the charter. (The process could be as simple as a decision by an arbitrator or leader.)
Users of a site could choose to "subscribe" to a rating group rather than use the default/community ratings. (Optionally, a user might allow a rating group to override default ratings, but also use the default ratings if the group hasn't rated a comment. This would be useful for rating groups that correct overrated/underrated comments. Later, a complex system of group layers and overrides could be implemented.) Perhaps some feedback could be implemented to send messages to the rating group, like "I think this message should be a 4, even though it has a few spelling errors".
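The override behavior described above can be sketched in a few lines. This is a minimal illustration, not any site's actual implementation; the function name and the dictionary-based data model are assumptions made for the example.

```python
# Sketch: resolve the rating a subscriber sees for a comment.
# Hypothetical data model: group_ratings and default_ratings map
# comment IDs to averaged scores; a missing key means "not rated".

def effective_rating(comment_id, group_ratings, default_ratings):
    """Prefer the subscribed group's rating; fall back to the
    default/community rating if the group hasn't rated the comment."""
    if comment_id in group_ratings:
        return group_ratings[comment_id]
    return default_ratings.get(comment_id)

group = {"c1": 4.0}            # the group corrected an overrated comment
community = {"c1": 4.8, "c2": 3.0}

print(effective_rating("c1", group, community))  # 4.0 (group override)
print(effective_rating("c2", group, community))  # 3.0 (community fallback)
```

A layered system of several groups would just chain this lookup through each group in priority order before reaching the default.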
In order to allow maximum freedom of rating charters, some groups might keep their ratings separate from the overall community rating. For instance, a group might form which explicitly allows personal disagreement as a valid reason for a negative rating. Rather than struggle against the community rating (or compromise the group's charter), the group could decide that their ratings should not be added to the default/community average. In this case, the only people affected by the group's ratings will be the explicit subscribers to that group.
One good analogy for this conflict is a meal made for many people. Most communities try to make a single "soup" of content that will satisfy most visitors. This involves compromise, which often shuts out more unusual tastes. (See [this comment on K5] for more on this analogy.) Separating more extreme tastes allows those with unusual tastes to participate without compromise. I wouldn't eat a dinner featuring soup with Jalapeno (very hot/spicy) peppers, but I would have no problem with an ordinary soup dinner with optional peppers.
In some ways rating groups are similar to AffinityGroups or CollaborativeFiltering. One crucial difference is that rating groups would be explicitly chosen, rather than requiring extensive computation to define groups and subscriptions. An advantage of choosing a group explicitly is that a user can benefit from it almost immediately, without first having to rate content. One could choose from descriptions of groups that emphasize how they differ from other groups. For example, one group might never give a "very high" rating to content with serious grammar/spelling errors, while another group might largely ignore those issues but be very harsh on bad HTML formatting.
See the KuroShin story ["The Embrace of Death"] and the following comments for several related ideas, especially the affinity-group-like "scent" idea. --CliffordAdams
If you believe VotingIsEvil, try a more direct approach with WebLogDigests.
For example, using the additive filtering, you could subscribe to the "TourBus" group and the "OneBigWiki" group and see ONLY changes which are on either the TourBus pages or the OneBigWiki pages. Or, using the subtractive filtering, you could subscribe to the "no TourBus" group and the "no OneBigWiki" group and see every change except for changes on the TourBus pages or the OneBigWiki pages. Of course, you should be able to mix the two also (start with nothing, add TourBus changes, and then subtract OneBigWiki changes, for example).
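The additive/subtractive mixing above can be sketched as follows. The function name, the representation of a group as a set of page names, and the sample page lists are all illustrative assumptions, not existing wiki code.

```python
# Sketch of additive/subtractive change filtering. A "change" is
# represented by the name of the page it touched; a group is a set
# of page names.

def filter_changes(changes, add_groups=(), subtract_groups=()):
    """Additive mode: keep only changes on pages in some added group.
    Subtractive mode: drop changes on pages in some subtracted group.
    With both, start from the additive result, then subtract."""
    kept = list(changes)
    if add_groups:
        allowed = set().union(*add_groups)
        kept = [c for c in kept if c in allowed]
    if subtract_groups:
        blocked = set().union(*subtract_groups)
        kept = [c for c in kept if c not in blocked]
    return kept

tour_bus = {"TourBusStop", "TourBusMap"}
one_big_wiki = {"OneBigWiki"}
recent = ["TourBusStop", "OneBigWiki", "RandomPage"]

# Additive: only TourBus or OneBigWiki pages.
print(filter_changes(recent, add_groups=[tour_bus, one_big_wiki]))
# Subtractive: everything except TourBus pages.
print(filter_changes(recent, subtract_groups=[tour_bus]))
# Mixed: add both groups, then subtract OneBigWiki changes.
print(filter_changes(recent, add_groups=[tour_bus, one_big_wiki],
                     subtract_groups=[one_big_wiki]))
```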
A more difficult example to implement: you could subscribe to "Sunir's picks" and see only changes which have been flagged as interesting by Sunir. The difference is that in this example, Sunir explicitly rated each Change that the group syndicates. In the previous examples, the groups explicitly rated pages rather than individual Changes.
We may as well make filtering more like PageClusters. Have an option so that if ANY pages are filtered by the filter, instead of showing nothing, show a note like "some TourBus pages were filtered". You could click on the notification to get a list of all the TourBus changes.
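A rough sketch of that notification behavior, under the same illustrative assumptions as above (group names mapped to sets of page names; function and message wording are hypothetical):

```python
# Sketch: instead of silently dropping filtered changes, count how
# many were hidden per group so a clickable note can be shown.

def summarize_filtered(changes, named_groups):
    """named_groups maps a group name (e.g. "TourBus") to its set of
    page names. Returns (visible_changes, notes)."""
    visible = []
    counts = {name: 0 for name in named_groups}
    for change in changes:
        hits = [name for name, pages in named_groups.items()
                if change in pages]
        if hits:
            for name in hits:
                counts[name] += 1
        else:
            visible.append(change)
    notes = ["some %s pages were filtered (%d change(s))" % (name, n)
             for name, n in counts.items() if n]
    return visible, notes

visible, notes = summarize_filtered(
    ["TourBusStop", "RandomPage"], {"TourBus": {"TourBusStop"}})
print(visible)  # ['RandomPage']
print(notes)    # ['some TourBus pages were filtered (1 change(s))']
```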
Multiple "&filter=" options would also be scanned and would just be added to the end of the Perl filter list (you don't even have to prune duplicates).
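Scanning repeated &filter= parameters could look like this. The parameter name comes from the text above; the parsing helper is the Python standard library's, used here only to illustrate the idea (the actual UseModWiki code would be Perl).

```python
# Sketch: gather every &filter= value from a query string, in order,
# without pruning duplicates, as the note above suggests.
from urllib.parse import parse_qs

def collect_filters(query_string):
    """Return all values of the repeated "filter" parameter."""
    return parse_qs(query_string, keep_blank_values=True).get("filter", [])

qs = "action=rc&filter=TourBus&showedit=0&filter=OneBigWiki&filter=TourBus"
print(collect_filters(qs))  # ['TourBus', 'OneBigWiki', 'TourBus']
```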
Now, just as Preferences allows users to have, in effect, a &showedit=0 appended to their RecentChanges requests, use the same mechanism to allow them to append, in effect, a &filter=NoTourBusControlPage?? (or whatever page they specify).
Note that this simple filtering can scale to more complex schemes later.
-- BayleShanks
See also ViewPoint.
[CategoryRatingSystem] [CategoryConflict] [CategoryUnimplementedWikiTechnology]