MeatballWiki | RecentChanges | Random Page | Indices | Categories

Check out http://www.geocrawler.com/lists/3/SourceForge/3222/25/4121891/ for RustyFoster's idea of "Mojo", which is SlashDot karma with a twist. In fact, it looks a lot better than karma and aims to protect against DoS and spam attacks.

The basic gist is that it is a time-decayed weighted average of the ratings of your last 30 comments, or the last 60 days, whichever comes first. So, the last comment you posted is weighted 30 times as heavily as the comment 30 posts ago. From [1] I quote:

You take the rating from the last 30 *rated* comments you've posted, or all rated comments from the past 60 days (whichever comes first). The number of comments and number of days is configurable in Scoop. Order them by timestamp, most recent to least recent. Then you run the following algorithm:

r = rating
w = weighting factor
s = total weighted rating sum
n = total weighted number of comments

Start with:
w = 30
n = 0
s = 0

for each rating r:
s = (r * w) + s
n = n + w
w = (w - 1)
next r

Mojo = s / n

So, the upshot is that the first comment gets weighted as if you had posted thirty comments at that rating, the second counts like 29, the third 28, and so on. The final mojo is the average of all these values. Thus, the system reacts a lot more strongly to newer comments than older ones, which is why you can bounce in and out of trusted so fast.
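The quoted algorithm can be sketched as a short Python function. This is only an illustration of the description above, not Scoop's actual code (Scoop is written in Perl, and the limits are configurable there); the function name and the empty-input handling are my own.

```python
def mojo(ratings, max_comments=30):
    """Time-decayed weighted average of recent comment ratings.

    `ratings` must be ordered most recent first: the newest rating
    gets weight `max_comments` (30 by default), the next 29, and so
    on down to weight 1 for the oldest counted comment.
    """
    s = 0.0  # total weighted rating sum
    n = 0    # total weighted number of comments
    w = max_comments
    for r in ratings[:max_comments]:
        s += r * w
        n += w
        w -= 1
    return s / n if n else 0.0

# Bob's example from below, newest first: ratings 3, 2, 5
print(round(mojo([3, 2, 5]), 2))  # 3.31
```

A single rated comment simply returns its own rating, since the one weight cancels out of the average.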

The other important bit is that to be trusted, you must have a mojo, as calculated above, of >3.5, AND you must have at least 10 rated comments contributing to that. You could have a mojo of 5.00, but with only 6 comments, and you'd not be trusted. To be untrusted, you have to have a mojo <1, with at least 5 contributing rated comments.

And to alter the example from the mailing list above to fit the current system:

Example: Bob posted three comments in the last month. One was rated 3, one was rated 2 and one was rated 5. Bob's mojo is:

((3*30) + (2*29) + (5*28)) / (30 + 29 + 28) = (90 + 58 + 140) / 87 = 3.31

To put it another way, which might make more sense to the less mathematically inclined: the newest rated comment you posted counts as 30 comments at that rating in the average, the next as 29 comments, and so on down to the comment 30 posts (or 60 days) ago, which counts as one comment. The average is the total of all ratings divided by the total number of comments, where ratings and comments are both multiplied by the weighting factor.

The post goes on to discuss why this is a good thing.

More in depth discussion on MojoAnalysis (due to DeepLink from kuro5hin story).

I've been trying to come to grips with which posts garner Mojo and which don't. It seems fairly arbitrary to me. I frequently find what I consider good (reasoned, contentful, sincere) posts at 1.0 and terrible (rambly, fluffy, demagogic) posts at 4.5. Indeed, there doesn't seem to be much distinguishing the posts between 2.0 and 4.0. Well, as far as I can tell. I read newest-first, flat and unfiltered. On Kuro5hin, almost all the posts are worth reading. What do you think? -- SunirShah

Heartily agree. I've actually come to the point of not bothering to look at the comment rating at all.

Trivially, Mojo is working since there seems to be no spam. :-) (I wonder how many, if any, comments are actually being rejected.) However, I think this is a result of the fact that the average participant is fairly mature. Therefore, even though the Mojo selection process for "super-moderation" (or whatever Kuro5hin calls it) ends up being arbitrary, as long as the pool of candidates is good, the results will be effective. It's when the quality of the average participant degrades that an arbitrary selection process starts crumbling. -- anon

There's really good and really bad. These tend to be pretty accurate. Things that are just spam, or are excessively obnoxious, drop off the page very quickly. There's a handy "Review HiddenComments?" link for trusted users now, as well, to check for abuse with the zero-rating. There isn't much, but from time to time someone will decide that they should troll SignalEleven?'s diary or what have you. In that sense, the system works great.

In the middle-range, there is a lot of disagreement about what's a 2 and what's a 4. This is ok, I think, because people rate according to different criteria. People seem to latch onto that fact with great vehemence, believing that rating can (or should!) be done always according to some kind of site-wide objective standard, but I actually don't think it should be.

Think of it this way. If some people rate according to agreement/disagreement, some rate according to "literary quality", and some others according to helpfulness or informativeness, then, if they all rate one comment, what you end up with is a pretty good measure of the intersection of those things. I think it's healthy to have people rating on different qualities, and it ought to produce more accurate ratings overall.

The other thing to note is that at any given moment, there are lots of comments with ratings that aren't very accurate. The system is convergent, so if you see an inaccurate rating, fix it! That's the whole idea. But it does mean that any snapshot "I saw this comment with a totally unfair 1 rating" is not going to be a very good indicator of whether it's working or not.

What's broken? A couple things. First, ratings have got to be public. Right now the ability to rate in secret (no one knows who rated which comment what) makes it too easy to rate unfairly. You'll never get caught. In the near future, the list of who rated what on any given comment will be freely available. As well as, possibly, listing ratings by user-- "User Foo rated the following comments:". Secrecy breeds abuse, IMO.

The other broken thing is that a comment that one person rated "5" counts as a five in your mojo. A comment that 12 people rated, which converged on 3.46, counts as 3.46. The second of those is a much more accurate rating, but it counts the same as the totally untrustworthy single 5 rating. I think the Mojo calculation will be amended to also weight by the number of rating points that brought a comment to its current score. So, say the two comments above were the last comment you posted, and the next to last. The 5 would act like ((5 * 30) * 1) comments, and the 3.46 would be ((3.46 * 29) * 12). So even if they are "decayed" further, comments with more data points will be able to push their weight around much more in figuring Mojo. This ought to tend to stabilize the system a bit, and should make it more accurate. -- RustyFoster
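Rusty's proposed amendment can be sketched as a variant of the mojo calculation where each comment's decay weight is multiplied by the number of people who rated it. The function name, the (rating, raters) pair representation, and the exact combination are my guesses from the description above, not Scoop's actual implementation.

```python
def mojo_with_rater_weight(rated_comments, max_comments=30):
    """Mojo variant that also weights by the number of raters.

    `rated_comments` is a list of (average_rating, num_raters) pairs,
    ordered newest first. A comment rated by 12 people pushes the
    average around 12 times harder than one rated by a single person
    at the same decay position.
    """
    s = 0.0  # weighted rating sum
    n = 0.0  # total weight
    w = max_comments
    for rating, num_raters in rated_comments[:max_comments]:
        s += rating * w * num_raters
        n += w * num_raters
        w -= 1
    return s / n if n else 0.0

# Rusty's example: newest comment rated 5 by one person, next
# rated 3.46 (converged) by twelve people.
print(round(mojo_with_rater_weight([(5, 1), (3.46, 12)]), 2))
```

With these numbers the converged 3.46 dominates, pulling the result well below the midpoint of the two ratings, which is the stabilizing effect Rusty describes.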

One of the best features of Mojo is that it mostly silences the ongoing debate about Kuro5hin's moderation systems (at least for a while). The mojo system seems like it works fairly well for the current community. Personally, I hope Kuro5hin considers adding individual/reader-centered moderation systems as an optional alternative to the default community mojo system. For instance, if I respect the opinions of a certain moderator group, I'd like to have the ability to base my reading on their opinions rather than a popular vote. I'd also like to have individual filter/highlight options--some people simply aren't worth reading, and some people are consistently good. I think RustyFoster has more than enough "good ideas" for now, so I'll let him work in peace (for this year at least).

[The following might be moved/reorganized later.] My view is that the community is not well served by having people read things they would rather not see. A "community values" approach often leads to pressure against unpopular content, which may be categorized as noise simply because most people don't want to deal with it. (This has happened several times with the C2 wiki.) Individual or small-group solutions could allow these less-popular groups (Microsoft NT-lovers? ;-) to have high-quality locally-valued conversations without the "popular vote" being imposed on them. My relatively extreme view is that "If it's worth typing, it's worth keeping.", but also "Authors don't have a right to make anyone read their work." (If someone types in spam, it might be worth keeping but not worth showing to anyone.)

For a BigBlueRoom analogy, think of "alternative" (or even "classical") music. Those people who like that style can listen to their CDs and shop in separate sections of music stores without disturbing the MTV masses. People don't usually have to vote on what kinds of music should be available. In many limited-resource cases like radio stations or music publishing the popular (or culturally respected) music may crowd out other forms. The Internet is removing many of the old resource limits. Businesses like mp3.com and technologies like streaming audio have enabled many tiny groups to gain exposure to their fans.

Sometimes I think too much focus is placed on the "average", "overall", or "community" qualities of discussion forums, and not enough on individual differences. One example specific to Kuro5hin was a recent submitted article about "spark.org"--a very controversial website. After a full week in the moderation queue, the story had 360+ votes in favor of posting, 310+ votes against, and 200+ votes of "don't care". Many comments were posted to the pending story, some of which seemed worth keeping. Apparently either the story expired or the negative votes overcame the positive ones, and the story was removed. (I could still see the comments at [2] (scroll down), but the story seems to be gone.)

In this case the opinions of 360+ people were disregarded in order to fit a (likely) minority of people who rejected the story. (Over 200 people didn't care, so unless there were at least 250 more reject-votes in 3 days (which seems unlikely), the rejecting voters were a minority of the total voters (which may be a minority of Kuro5hin readers).) In my opinion, a better fix would be to have a "controversial" section as suggested elsewhere. Non-accepted stories would be placed here as soon as they have 100 positive votes, regardless of the negative votes. These stories may be moved out of the "controversial" section if they are finally accepted. Even if the final vote goes against these stories, they are probably worth keeping because at least 100 people voted for them. (The default settings might hide the section, but it should be easy to find these stories.) [Hmmm... Maybe I won't leave RustyFoster alone after all.]
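The proposed "controversial" section rule can be sketched as a simple placement check. The function, labels, and the way acceptance overrides the threshold are my own illustration of the suggestion above; Kuro5hin/Scoop has no such code.

```python
def story_section(yes_votes, no_votes, accepted):
    """Sketch of the proposed queue rule.

    A story that gathers at least 100 positive votes lands in a
    'controversial' section even if it is never accepted, so its
    comments are not lost; accepted stories are posted normally.
    """
    if accepted:
        return "posted"
    if yes_votes >= 100:
        return "controversial"
    return "dropped"

# The spark.org story: ~360 for, ~310 against, never accepted.
print(story_section(360, 310, accepted=False))  # controversial
```

Under this rule the negative votes still matter for final acceptance, but they can no longer erase a story that a substantial minority voted for.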

ViewPoint will try to explore the individual (or voluntary group) differences. In ViewPoint the "default" view may be more like a popular/community view, but it should also point people toward alternatives. (Indeed, I was thinking of having the initial new-visitor page say something like "Choose one of the following starter views: Open, Quality-Filtered, or Alternative-Menu", with a description of each basic starting view.) Of course, those people who want to be bound by community standards will be perfectly free to do so (by adopting rules or structures) within their particular view. --CliffordAdams

As the site grows, the queue should be kept moving so that rotting stories don't sit there and tie up the system. There's a week-old story in the queue right now about Microsoft extending the NT4 cert deadline. Whatever relevance it had a week ago may have been up for question, but it's no longer news after spending a week in the queue! -- AnonymousKarma?

A problem with the Karma/Mojo voting system, for me, is the restriction to quality of write-up, which is what /. and Kuro5hin ask for in the vote. While an article may be well written, thoroughly backed up by external resources, and intelligently embedded into previous comments (thus qualifying for a high rating), it might still not match my personal opinion. So while Mojo/Karma achieve quality filtering in a general sense of good vs. bad, they cannot express democratic decision finding. How often does an opinion drown because it is disliked by the readers? This is one of the main drawbacks of internet-based communities vs. RL discussion.

[CategoryRatingSystem] [CategoryWebLog] [CategoryKuroshin]

