The only kind of non-soft security I can think of is authenticated user ids (usually a login/password system). These ids can then be used to grant access that fully anonymous/unknown users do not share. (Logins can allow pseudonyms.) Are there other kinds of non-soft security?
As I see it, in such a group the only kind of effective "punishment" is exile (banning by IP). The "bad" users can always create more pseudonyms to re-enter the community. Reputation-based criticism can actually backfire, as a few people may use it to support their "outcast/notorious" reputation. (Consider the people who are proud to be rejected by the SlashDot "crowd".)
One reason I object to pure "egalitarian" systems is that they force many of the hard decisions on their administrators (often a single person). For instance, a few wiki users might complain that another user is continually erasing their home pages. The erasing user could point out that everything is supposed to be editable by anyone. Now the administrator needs to make a decision, and usually take the criticism from those who disagree.
WardCunningham has had to make many of these decisions for the C2 wiki, and in the past I've applied quite a bit of pressure on him to decide some issues. Now that I have my own site, it's very easy to understand his reluctance to "interfere" in the community. Unfortunately, I think non-interference tends to turn into non-participation, since one's participation can "unfairly" influence the community. (Frankly, I don't feel fully free to discuss my ideas on wiki administration and security, since they can be interpreted as policy or editorial tone for this wiki (this interpretation has happened at least once).)
What I would *like* to do is to give most of my "administrative" abilities to "ordinary" leaders of the community. For instance, I trust Sunir (at least as far as I can throw him. ;-), and would like to give him the ability to take drastic action if needed. (For instance, if someone was erasing all the pages, I would like Sunir to be able to stop that user, or even temporarily make the wiki read-only.) I'd also give most of these abilities to anyone else who has shown a serious commitment to the site (such as authoring several good wiki pages). I am not willing to give these abilities to unknown people, however. (If someone else is willing, I'd love to watch their experiment.) My goal would be for the chosen admins to settle matters between themselves, and I would only interfere when there are serious conflicts between admins. (For a larger wiki, some respected people might volunteer to mediate such conflicts.)
Finally, although I'm skeptical of their usefulness, I'd like to encourage discussions about SoftSecurity. Perhaps SoftSecurity has a place in between groups of a few close people (who don't need security from each other), and societies of millions (which rely on "harder" security like locks, police, and laws). --CliffordAdams
I think that most (if not all) HardSecurity measures would require some sort of key to let you through the locked door. The easiest way to do that online is the login/password combination, but it may not be the only way. Some examples might be moderation of the SlashDot, KuroShin, UseNet or FidoNet variety. Or TrustMetrics. An example that doesn't require passwords would be an encrypted data stream to prevent outside snooping.
I would agree that the goal of HardSecurity is to give access to the site as equally as possible, but I would twist that to say "to give unrestricted access to the site." By unrestricted, I don't mean giving away root, but just keeping security from being "in your face," from feeling like a straitjacket. The bars that protect you are the bars that hold you in, after all. You don't want to lock down the site so hard that trusted users get annoyed. -- SunirShah
P.S. I appreciate the vote of confidence in my abilities. And just try to throw me... c'mon...
I guess, if you want to generalize, HardSecurity seeks to control in some way, provided you use the term "control" to mean impedance of the "natural flow." E.g. a locked door is control because I want to go through it but you prevent me; or corralling a riot impedes the "natural" chaos. Control is inputting energy to keep order (reduce entropy), but the laws of thermodynamics continuously work against you. This is similar to arguments used for and against "InformationWantsToBeFree." -- SunirShah
Three terms I'd like to introduce are "resistance" (like a passive wall), "force" (such as pushing someone away (possibly gently)), and "violence" (use of force to disrupt a system). A locked door is a resisting wall (that can be moved easily if you have the key). Even coercive force could be usefully distinguished from violence: consider a line of police with riot shields pushing back against a crowd. The use of force to push a crowd is much different than using violence (like shooting into the crowd).
I'm still thinking about the general PoliceForce issue. I do like the point that police do not create the rules, and they do not make final judgement of people accused of breaking them. The roles of police are similar to what I would like the "admins" to do--to temporarily enforce basic rules in volatile situations, without long-term power over the community structure. (The site administrator's powers are more like military force, which should be used rarely.) --CliffordAdams
I don't think "soft" is necessarily egalitarian. For example, I am interested in building a system which lets people vote on proposed edits for pages. This is "soft" in that people would be allowed to propose bad edits. It is not egalitarian in that the votes could be weighted. Votes and edits from unknown people could be given a very low weight by default.
Voting avoids some of the GodKing problems. If the community doesn't like the home-page edits, it can reject them without the site's owner having to lay down the law. Bad users get ignored rather than exiled. Of course, voting has problems of its own, notably ballot-stuffing. Hope may lie in some cunning mixture of hard and soft, with hard security over the user authentication and voting areas and soft everywhere else. -- DaveHarris
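As a concrete illustration of the weighted-voting idea above, here is a minimal sketch in Python. The function names, the trust weights, and the default weight for unknown users are all illustrative assumptions, not a description of any existing wiki engine.

```python
# Sketch of weighted voting on a proposed edit. The default weight of
# 0.1 for unknown users is an arbitrary illustrative choice: their
# votes count, but only a little.

DEFAULT_WEIGHT = 0.1  # unknown/new users are given a very low weight

def tally(votes, weights):
    """votes: dict of user_id -> +1 (accept) or -1 (reject).
    weights: dict of user_id -> trust weight for known users."""
    return sum(weights.get(user, DEFAULT_WEIGHT) * vote
               for user, vote in votes.items())

weights = {"DaveHarris": 1.0, "CliffordAdams": 1.0}
votes = {"DaveHarris": +1, "CliffordAdams": +1, "anon123": -1}

score = tally(votes, weights)  # 1.0 + 1.0 - 0.1
accept = score > 0             # the edit is accepted
```

Note that an unknown user's rejection barely dents the score, which matches the point that bad edits get ignored rather than triggering a confrontation.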
How do you avoid making the interface too cumbersome? It seems to me that the more structure you put into that sort of thing, the less people will want to use it. You need a massive ValueProposition in order to get people to ignore SeriousInconvenience?.
Voting on changes must necessarily mean reading both versions, actually thinking about the differences, and then choosing which is best. How do you manage this without polluting the experience to such an extent that users are put off? -- ErikDeBill
I'm hoping to put off voting until there is actually a conflict. While there are no conflicts, it could look much like a normal Wiki. -- DaveHarris
One cop-out answer is to tie votes to user IDs, and user IDs to real-world users, using hard security. For example, on a company intranet you could hope accounts were only issued to real employees and protected by passwords. You can then store votes with the ID of the person voting, and detect the same person voting several times. Transferring this to the internet is partly a matter of AnonymityVsPseudonymity. -- DaveHarris
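The duplicate-vote detection described above can be sketched very simply: store votes keyed by the authenticated user ID, so a second vote from the same ID replaces the first instead of stacking. The `VoteBox` class and its method names are invented for illustration.

```python
# Sketch: one vote per authenticated user ID. Keying the vote store by
# ID means the same person voting twice overwrites rather than stacks,
# and the duplicate attempt can be detected and logged if desired.

class VoteBox:
    def __init__(self):
        self._votes = {}  # user_id -> vote

    def cast(self, user_id, vote):
        previous = self._votes.get(user_id)
        self._votes[user_id] = vote
        return previous  # non-None signals a repeat vote by this ID

    def totals(self):
        return sum(self._votes.values())

box = VoteBox()
box.cast("alice", +1)
dup = box.cast("alice", +1)  # alice votes again; her vote still counts once
```

This only works to the extent that the hard-security layer really does map one ID to one person, which is exactly the AnonymityVsPseudonymity problem mentioned above.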
Most conflicts shouldn't require a vote, however. I'm hoping a few respectable users (not administrators) will volunteer to mediate conflicts before they require a vote. These mediators would listen to all sides and make a judgement. If one or more parties does not accept the judgement, they could call for a vote. (If a reasonable judgement is ignored, it will hurt one's chances of winning a vote.) I think most votes would follow along those lines.
As another option, I've done some thinking about creating an "administrator" group of people who would be given more power in exchange for community service. (The site administrator would not be a part of this group.) Simply posting good content would not be "community service"--the service would be administrative tasks like answering new-user questions, cleaning up other people's content, and dealing with vandals or careless users. Not all valued contributors would choose to become involved at the "admin" level. These administrators would have the technical abilities to enforce judgements. The site administrator would only become involved in cases where the regular administrators are deadlocked. I put some preliminary related ideas on MbTest:AccessLevelIdea if anyone's interested. --CliffordAdams
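The access-level idea might be sketched like this: each privileged action requires a minimum level, with the admin group sitting between ordinary users and the site administrator. The level numbers and action names here are illustrative assumptions, not the actual MbTest:AccessLevelIdea proposal.

```python
# Sketch of tiered access levels. Admins can take temporary enforcement
# actions; only the site administrator can change community structure.

LEVELS = {"anonymous": 0, "user": 1, "admin": 2, "site_admin": 3}

REQUIRED = {
    "edit_page": 1,        # any logged-in user
    "ban_user": 2,         # temporary enforcement by admins
    "set_read_only": 2,    # emergency measure, e.g. mass page erasure
    "change_policy": 3,    # reserved for the site administrator
}

def allowed(user_level, action):
    """Return True if a user at user_level may perform action."""
    return LEVELS[user_level] >= REQUIRED[action]
```

The point of the hierarchy is the same as in the text: admins get enough power to stabilize volatile situations, but no long-term power over the community's structure.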
Wouldn't it be better if judgements were enforced automatically, by the system? For me the difference between administrators is the difference between how their votes are weighted. -- DaveHarris
That's different to what I envisage. "Administration" to me would not mean setting policy. Administrators would not vote, they would implement decisions made by the community. Where possible that would be automated and the administrator would be a piece of software. Like an ideal civil servant.
Likewise "mediation", in the sense of resolving conflicts, would come down to a (possibly weighted) vote. No need for human intervention there, either. By "weighted" I include weights of 0 that effectively disenfranchise people. I would allow groups of policy makers, as in committee members or politicians, who might be appointed by the site's owner, periodically elected by the general community, or whatever. (I am quite interested in replicating common "clubs and societies" idioms.)
"Mediation" in the sense of finding compromises, would, like "advocacy" of specific causes, be left up to humans. I don't see that mediators need special powers. -- DaveHarris
On a community site, that kind of ValueProposition seems to require unique content for which there is significant demand, or a certain "trendiness". Wikis depend on user contribution to create their content - the value of that content is a balance between the quantity of good quality pages and the ease of accessing those pages (lots of really good pages with sparse links between them and poor search facilities are functionally equivalent to a few really good pages with good links and search).
Since value comes from the number of pages, and links between pages seem (in practice) to be encouraged by temporal locality, you need a substantial active user base to create the value that draws people to the Wiki. Imposing restrictions before reaching critical mass could preclude getting enough people.
Adding restrictions too late could fsck things up in other ways (people resist change, a large group accustomed to being unruly can cause massive problems).
Of course, if you find the holy grail of the perfect ui all objections are moot...
(interesting. I wasn't aware that using "space dash dash dash dash" instead of just "dash dash dash dash" as the horizontal rule would put this into < PRE > mode. parse error, or poor wiki knowledge on my part?)
I mostly agree. Part of the aim is to identify the minimum of security/bureaucracy we need, to find ways to make that minimum smaller, and to hide it from sight as much as possible. This is precisely because obtrusive security is bad.
Apart from that, I am not sure what you are saying. Do you think we need less security, or that we could hide it better? -- DaveHarris
What I see now is that all security on MeatBall comes from manual intervention by users. In essence, backup copies of each page, and the administrator's ability to resurrect pages from them, form the entirety of the security on MeatBall. In times of great emergency, something ad hoc can be done - block an IP address, set things read-only, etc. These aren't really built into the Wiki, and will always be there regardless.
I'd actually go for more security than we have now. I'd like a login/password (that's HardSecurity). Having to create a login (with a requirement of a unique and valid email address, maybe?) will keep the most casual vandals out. I'm not sure that anything else is needed.
The proposals for voting on proposed edits and such sound like good ways to annoy people. I just think that 99% of the time they aren't useful, so I'd hate to get them in people's way for that 1% of the time when they are. Of course, if you can find a way to add a feature without impeding the common case, then there's no problem. --ErikDeBill
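The registration gate suggested above (a login tied to a unique, valid email address) might look something like this sketch. The `Registry` class is invented for illustration, and the email pattern is a deliberately loose plausibility check, not a full RFC 5322 validator.

```python
# Sketch of login creation requiring a unique, plausible email address.
# This is the "keep the most casual vandals out" bar, nothing stronger.

import re

# Loose check: something@something.something. Real validation is harder.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

class Registry:
    def __init__(self):
        self._by_email = {}  # email -> (login, password_hash)

    def register(self, login, email, password_hash):
        if not EMAIL_RE.match(email):
            raise ValueError("invalid email address")
        if email in self._by_email:
            raise ValueError("email already in use")
        self._by_email[email] = (login, password_hash)

registry = Registry()
registry.register("NewUser", "new.user@example.org", "<hashed password>")
```

A determined vandal can of course mint throwaway addresses, so this only raises the cost of entry slightly; that matches the limited claim being made for it above.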
I agree that a login/password may be part of the minimal HardSecurity. Even so I think it will be enough to put off large classes of users. At least, it annoys me personally. (However, this could be fixed with better global infrastructure.) Anyway, given user IDs a lot of SoftSecurity can be built on top.
Part of the idea of voting is to allow judgements more gradual than the "keep it/delete it" choice of most Wikis. People don't like to delete stuff because doing that is too violent, too extreme. Voting something down gives you a way to express dislike, safe in the knowledge that it will be ignored if no-one agrees.
(Much of this page should probably be extracted to a separate page on voting.) -- DaveHarris
I wonder if anyone could provide some examples where there's enough contention that a vote is needed. The recent discussion about categorization at WikiWiki is perhaps an example, although categorization becomes more of a moot issue when Cliff implements sections. So we'll forget about categorization. :-)
In particular, what large-scale contentions does voting "solve" that splitting up a page doesn't? In cases where participants disagree, it is certainly feasible to create an OriginalTopicSeenFromViewpointOne and OriginalTopicSeenFromViewpointTwo (and a possible OriginalTopicDiscussion for a point-counterpoint view). This changes the conflict from one of "what should this page say" to "which viewpoint do you choose". -- anon.
I regretted voting on the latest round of Wiki:WikiOnWiki, but at least the target was the one requesting the vote. On WikiWiki, the vote can be used to bludgeon a victim who one particular person decides is doing something onerous. So, instead of politely asking the other to stop and suggesting a better solution along the WikiWay, the "demagogue" hides behind the faux authenticity of the vote. (That wasn't too cynical, was it?) I think one-sided judgmental votes are disruptive. In fact, on a wiki, it is so much better just to (try to) do the right thing instead of asking permission every step of the way. If you stumble, others will catch you. Just keep an open mind and listen. -- SunirShah
I have an example where voting may be useful. On Fidonet, each echo has a moderator who is ultimately responsible for the echo. The moderator has absolute power. Consequently, most echoes work on a term limit/voting system to choose their moderators.
Similarly, for a wiki, I am the "editor" of MeatballWiki right now. As the editor, I get to make policy decisions (like the MeatballWikiCopyright), pick the logo, as well as try to set the focus of discussion. I also get to do a lot of crap no one would want to do, but such is the price of power. If I eventually leave, someone else should become the editor. That person might be voted in. Or else Cliff might finally have to take responsibility for his own damned server. ;)
I am interested in taking policy statements, reifying them as documents, and causally connecting them to the governing software. E.g. the list of the IDs of users who have special status (as police, administrators, policy makers, moderators or whatever) could be a wiki page, and an edit to that page would be like proposing someone for membership of the group, and voting on the edit would be like voting in an election. I think a great deal of policy can be reified in this way, so that much of the Wiki's structure becomes meta-circular. -- DaveHarris
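The meta-circular idea above can be sketched: the software reads the privileged-user list straight out of the text of a wiki page, so that editing the page (and winning the vote on that edit) *is* the appointment process. The page format and parser here are invented examples.

```python
# Sketch: policy reified as a wiki page. The governing software parses
# the admin list from page text, so a voted-in edit to the page changes
# who holds power, with no separate configuration step.

ADMIN_PAGE = """\
These users have admin status:
* CliffordAdams
* SunirShah
"""

def parse_admins(page_text):
    """Collect user IDs from bullet lines of the policy page."""
    return {line[2:].strip()
            for line in page_text.splitlines()
            if line.startswith("* ")}

admins = parse_admins(ADMIN_PAGE)

def is_admin(user_id):
    return user_id in admins
```

The attraction is that the same soft-security machinery (visible edits, voting, reversion) then governs the hard-security configuration itself.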
Much later... I came to the conclusion that VotingIsEvil.