It's unlikely that any one of the above is singularly relevant, but in composite they may form a statistical index that can be used to detect trolls, or at least very odd behaviour. False positives are unacceptable; false negatives are also very problematic, as we should AvoidIllusion that the formula is actually a trustworthy defense.
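For concreteness, a minimal Python sketch of such a composite index. The particular signals (revert ratio, burstiness, breadth of activity), their weights, and the review threshold are all hypothetical; the only point is that a weighted sum of weak signals should flag someone for a human look, never act on its own.

 # Minimal sketch of a composite "oddness" index built from hypothetical
 # per-user signals with hand-picked weights; none of these numbers come
 # from real data, and the score should prompt review, not punishment.
 def oddness_score(revert_ratio, edits_per_hour, pages_touched):
     """Combine weak signals into one index; each alone proves little."""
     signals = [
         (revert_ratio,                    0.5),  # share of this user's edits reverted by others
         (min(edits_per_hour / 30.0, 1.0), 0.3),  # burstiness, capped at 30 edits/hour
         (min(pages_touched / 50.0, 1.0),  0.2),  # breadth of recent activity, capped at 50 pages
     ]
     return sum(value * weight for value, weight in signals)

 # Example: flag for human review above an arbitrary threshold, never auto-ban.
 if oddness_score(0.4, 25, 60) > 0.6:
     print("worth a human look")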
Giving the final arbitration to the public might be bad, as it personalizes the fight instead of depersonalizing it; systemizing it would eliminate most temporary biases. After all, the algorithm would apply to everyone.
To make this less loaded, regulate behaviours, not intentions. Negotiate the content, but limit the ability of a person to overwhelm the community's capacity for negotiation, say by preventing them from pouring too much energy into the site all at once. Such a person may not be a troll; they may not even be an attacker. They may simply be a klutz. -- SunirShah
How about a simple formula, based on what percentage of a user's wiki edits are reverted by other users? (We might tweak this by discounting reverts from a certain class of user.)
For example, if new user Blatheration is reverted 30% of the time, he keeps newbie status. But if new user Jim Dandy has no reverts (except from newbies), promote him to preferred status.
My only worry is that some highly motivated user will figure out a way to hack this system in order to subvert or destroy it. So at first, anyway, we should not make it automatic; but we could use the statistics when considering the granting of sysop rights.
But the good thing is that tracking each user's "revert count" would enable us to identify edit wars in progress or to identify Edit Warriors. (Yes, of course we'd have to find a way to account for reversions of 'simple vandalism' so this wouldn't "count against" someone.) -- EdPoor
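To make the revert-percentage proposal concrete, here is a rough Python sketch. It assumes a hypothetical Edit record that notes who (if anyone) reverted the edit and whether that reverter was a newbie; the thresholds and status names are made up, and the accounting for reversions of simple vandalism is left out. In keeping with the caution above, the result is advisory, something to consult when considering sysop rights rather than an automatic promotion.

 # A rough sketch of the revert-percentage idea, assuming a hypothetical edit
 # log where each edit records who (if anyone) reverted it and whether that
 # reverter was a newbie.  Thresholds and status names are made up.
 from dataclasses import dataclass
 from typing import Optional

 @dataclass
 class Edit:
     author: str
     reverted_by: Optional[str] = None    # user who reverted this edit, if any
     reverter_is_newbie: bool = False     # discount reverts from this class of user

 def revert_ratio(edits):
     """Fraction of a user's edits reverted by non-newbies."""
     if not edits:
         return 0.0
     reverted = sum(1 for e in edits
                    if e.reverted_by is not None and not e.reverter_is_newbie)
     return reverted / len(edits)

 def suggested_status(edits, demote_above=0.30, promote_below=0.05):
     """Advisory only: a human still decides, e.g. when granting sysop rights."""
     r = revert_ratio(edits)
     if r >= demote_above:
         return "newbie"      # like Blatheration, reverted 30% of the time
     if r <= promote_below:
         return "preferred"   # like Jim Dandy, no reverts except from newbies
     return "ordinary"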
You might need a trust metric to stop relatively untrustworthy users, those who have simply behaved well until then, from retroactively reverting all of a respectable user's posts and thus zeroing his trust.
Reverts aren't measures of trolling. An EditWar between a troll and a valiant (but untrained) defender will lock both out, although that may not be such a bad thing. Worse, a troll can revert a legitimate contributor's or even a VestedContributor's edits in a violent form of UserStalking. Also, I have a habit of reverting or deleting large swaths of legitimate contributions when trying to keep discussions contained and organized. -- SunirShah
I think there is no algorithmic measure of good or bad. Claiming there is one defeats everything that SoftSecurity stands for, which is the end of CryptoNautic control over the world. What we can instead control technologically are classes of behaviour, in a FormOverContent fashion, such as by making it difficult to edit and easy to revert (not a great idea, actually), or by allocating only so much bandwidth per IP block per day and thus limiting the amount of energy a person can put into the project all at once. It's impossible to make value judgments over content in a reliable way, though, without people. -- SunirShah
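As an illustration of the bandwidth idea, a minimal Python sketch of a per-IP-block daily quota. The choice of a /24 block and a 512 KB budget are arbitrary assumptions for illustration, not part of the proposal.

 # Minimal sketch of a per-IP-block daily quota, limiting how much energy one
 # person can pour into the site at once.  The /24 keying and the 512 KB/day
 # budget are arbitrary assumptions.
 import datetime

 DAILY_QUOTA_BYTES = 512 * 1024
 _usage = {}   # (ip_block, date) -> bytes accepted so far

 def ip_block(ip):
     """Collapse an IPv4 address to its /24 block, e.g. '192.0.2.7' -> '192.0.2'."""
     return ip.rsplit(".", 1)[0]

 def accept_edit(ip, payload_bytes):
     """Return True if this edit fits in today's quota for the submitter's block."""
     key = (ip_block(ip), datetime.date.today())
     used = _usage.get(key, 0)
     if used + payload_bytes > DAILY_QUOTA_BYTES:
         return False              # over budget: ask the user to come back tomorrow
     _usage[key] = used + payload_bytes
     return True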
It would be fun to let each visitor specify their own "training set" of who they consider definite trolls and definite NOT-trolls, and then have some A.I. algorithm guess whether that visitor would consider each other person a troll. If these statistics were released in a standardized interchange format, third-party A.I. classifiers could be built to do this. Not worth the effort, but just a fun idea. -- BayleShanks
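In that spirit, a toy Python sketch of the per-visitor classifier: each visitor labels a few users as trolls or not-trolls, and a nearest-centroid rule guesses the rest. The behavioural feature vector (revert ratio, edits per day, average reply length) and the idea that a wiki would export such numbers are assumptions; there is no feature scaling, so it really is just a toy.

 # Toy sketch of the per-visitor classifier: label a few users, then classify
 # the rest by whichever labelled centroid they sit closer to.  The features
 # and numbers are made up for illustration.
 def centroid(vectors):
     n = len(vectors)
     return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

 def distance_sq(a, b):
     return sum((x - y) ** 2 for x, y in zip(a, b))

 def guess_troll(features, labelled_trolls, labelled_ok):
     """True if `features` sits closer to this visitor's troll examples."""
     return (distance_sq(features, centroid(labelled_trolls))
             < distance_sq(features, centroid(labelled_ok)))

 # Example: one visitor's tiny training set (revert ratio, edits/day, avg reply length)
 trolls = [[0.6, 40, 15], [0.5, 25, 10]]
 ok     = [[0.05, 3, 120], [0.1, 5, 200]]
 print(guess_troll([0.4, 30, 20], trolls, ok))   # likely True for this visitor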
Detecting a troll is often a self-fulfilling prophecy. Some indications of trolling are found in a person's behaviour, and the person is handled accordingly. The person then becomes frustrated and gets more annoying, eventually becoming a real troll some day. -- HolgerBruns?