There is an interesting thread [in progress] over at DerekPowazek's DesignForCommunity site.
[Adding a consequences of bad behavior]
Well, the title of the thread isn't the greatest; it should be more like "empowering community members to apply a consequence to bad behaviour". While it's true that many online community systems (wiki, Slashdot, BBS, etc.) provide a feedback-loop mechanism, they are most often designed to work at a slow pace, over time.
What might be interesting to see play out is a user-mediated arrest system in wiki -- a CitizenArrest of sorts. It might work something like this: if you see a posting you believe to be way wrong (spam, massive deletions, vandalism, etc.), you can hit a button to vote that the offending user/IP# should be blocked from further posts. Of course the arrest shouldn't take effect immediately, but should instead require a number of votes from other members. Votes expire after a short time (e.g. one hour), and the arrest itself lasts only a short time (e.g. one hour), so the emphasis is on it being a temporary mechanism. If any votes disagree with the original call, it's ruled contentious and nothing happens -- something like this requires a quorum and a majority.
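The tallying rule above could be sketched roughly as follows. This is only an illustration of the idea, not anything proposed in the thread; the class name, thresholds, and the reading that any dissenting vote rules the call contentious are all assumptions:

```python
import time

# Illustrative parameters; the text suggests roughly one hour for both.
VOTE_TTL = 3600        # seconds before a vote expires
BLOCK_DURATION = 3600  # seconds a temporary arrest lasts
QUORUM = 3             # minimum live votes before any arrest takes effect

class CitizenArrest:
    """Tallies block votes against a user or IP# (hypothetical sketch)."""

    def __init__(self):
        self.votes = {}   # target -> list of (timestamp, in_favour)
        self.blocks = {}  # target -> time at which the block expires

    def vote(self, target, in_favour=True, now=None):
        now = time.time() if now is None else now
        self.votes.setdefault(target, []).append((now, in_favour))
        self._tally(target, now)

    def _tally(self, target, now):
        # Drop expired votes first; only recent votes count.
        live = [(t, f) for (t, f) in self.votes.get(target, [])
                if now - t < VOTE_TTL]
        self.votes[target] = live
        against = sum(1 for _, f in live if not f)
        # A quorum with no dissent triggers a temporary block; any
        # disagreement rules the call contentious and nothing happens.
        if len(live) >= QUORUM and against == 0:
            self.blocks[target] = now + BLOCK_DURATION

    def is_blocked(self, target, now=None):
        now = time.time() if now is None else now
        return self.blocks.get(target, 0) > now
```

Because every vote and every block carries an expiry, the whole mechanism self-corrects: a contested or stale alarm simply evaporates rather than leaving a permanent ban behind.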
RecentChanges could be modified to flag the changes that have caused others to ring the community alarm, and also to flag any other pages modified by the same person. To draw attention to the event taking place, a banner message could be inserted into each wiki page as it loads ("The WikiAlarm? has been rung on the EarlyPoetryAndWiki page"). This banner would not appear for the vandal; instead, the troublesome visitor sees an extra message inserted into every wiki page they visit: "Your recent edits have been noticed and you are being watched.", or words to that effect. Hopefully the sudden appearance of that message would be unsettling -- most online vandals operate on the assumption that it's them against an uncaring machine, and that no one can see what they are doing (until it's too late). I'm assuming that they wouldn't carry on like that in public, and that this antisocial detachment is due to the same forces that cause email flamewars to erupt -- lack of human context.
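The banner dispatch described above is simple enough to sketch. Everything here is hypothetical (function name, message wording as placeholders, how flagged users are tracked); the only behaviour taken from the text is that the suspected vandal sees the "being watched" warning while everyone else sees the alarm notice:

```python
ALARM_BANNER = "The WikiAlarm has been rung on the {page} page"
WATCHED_BANNER = "Your recent edits have been noticed and you are being watched."

def banner_for(visitor, active_alarms, flagged_users):
    """Pick the banner to insert into a wiki page as it loads.

    visitor       -- user name or IP# of the person loading the page
    active_alarms -- list of page names with a currently ringing alarm
    flagged_users -- set of user/IP# identifiers whose edits rang it
    """
    if visitor in flagged_users:
        # The troublesome visitor gets the warning on every page,
        # and never sees the alarm notice about themselves.
        return WATCHED_BANNER
    if active_alarms:
        return ALARM_BANNER.format(page=active_alarms[0])
    return None  # nothing ringing: no banner at all
```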
If you're a visitor and you see the WikiAlarm? message tattling on some other user, in real time, it communicates that "you are not alone", and it might even stop someone from vandalising so much as one page, knowing that others are watching.
A problem, though, is that some vandals will see that as a challenge (cf. LimitTemptation) and escalate their anti-social activity to see just what happens. Which is fine, as they won't last long then.
It goes almost without saying that this TechnologySolution has obvious flaws in a system (like wiki) where visitors are anonymous.
Any system could be abused. There would have to be some penalty for crying wolf or for false accusations.
One obvious penalty would be that people can register a warning against you if you post warnings that they view as bogus. That might be all that's necessary. Anyway, it seems rather un-wiki-like to start off worrying about the worst that could happen. Better, I think, to spend effort only on solving the problems that actually exist. --tb
The original introduction claimed this was SoftSecurity. I question how this is SoftSecurity; you're giving everyone a weapon. The example about voting (VotingIsEvil) people into CommunityExile is a perfect demonstration of how this is not SoftSecurity.