However, to an open community, LoginsAreEvil. It's almost the same as being IDed every time you walk into a store. Except logins aren't authenticating; they are pseudonymous. They provide no guarantees that someone is who they say they are. In fact, that's their chief appeal. There's not even a guarantee that there's one login per person, and one person per login. (cf. PublicAccount for a beneficial example.) For organizations that are more masquerade parties than work, it's a little fun to switch identities. (cf. WhatIsMultiplicity.)
Just don't expect the logins to keep the demons out. Anyone can create a new identity out here faster than you can blink. What reputation does a costume have?
See also OnlineRegistration for the one-to-many commercial model of tracking TheAudience, and the related SignUpForm.
CategorySoftSecurity CategoryIdentity CategoryWikiTechnology
Adapted from a post on the PeerToPeerJournalism list, March 13, 2001...
Logins don't mean anything online. They are cheap to make. They exist to give people a presence, but they aren't security. I think the logins are just a vanity thing to give people cool pseudonyms. As RustyFoster pointed out on #email@example.com, it's really easy to get trusted moderator status on KuroShin. Just create two logins, write a lot of comments in one of the diaries, and mod them all up to five with the other account. With a little more ingenuity, you can take out the whole site. Their SurgeProtector is far more useful in protecting them.
This is also why I have problems with PrettyGoodPrivacy signatures. Why would I believe that you are really you anyway? Without a network of social trust (which the WebOfTrust seeks to represent), digital IDs aren't worth the paper they aren't printed on. And the social trust is broken anyway. Remember the "esr" fiasco on AdvoGato, where people rated the three letters up to Master level without verifying the identity behind them. Or the blackbox tit-for-tat rater.
This brings up the point that the people who will do the most damage to an online community are the ones who are the most expert on it. For instance, in April, 2000, someone with a clear understanding of the WikiWiki scripts attacked the site from anonymizer.com, forging user names, messing with the edit histories, changing RecentChanges. But that was nothing. Another well-known individual was changing people's statements completely around to favour his arguments and erasing the VersionHistory of those changes. The attacker who took KuroShin out in Summer 2000 was aware of how the system worked, and by all indications, he was a pissed off member.
ZwikiClone got SlashDotted once, which attracted one jerk. WikiWiki gets attacked by its own membership all the time. All the time, like on a monthly basis. Which is the bigger concern?
As I said, WebLogs aren't immune to this just because they have complicated moderation systems. Consider TrollTalk. Last time I checked, the good folks on TrollTalk play nice with KuroShin, and that's because kuro5hin is laid back with respect to them. On SlashDot, RobMalda amongst others stoked the flames with the so-christened "KarmaWhores". The biggest "attacks" come from the membership because they are the ones who know the system the best and the ones who are the most emotionally committed to frustrating the rest of the community.
This phenomenon is ancient. MrBungle and LambdaMoo for instance. -- SunirShah
In the old MIT AI Lab there were, on purpose, no security mechanisms (unlocked doors, an open file system). Having read this again recently, I have to relate all of it to wikis. Before, I had thought only of universities and free systems (like we used to run on our servers). --ReiniUrban
(From [Hiroo Yamagata's interview with Richard Stallman], 1997.)
More on bots.
One way to improve the probability that a new account is being made by a human being is to have the reply e-mail require some non-uniform action, such as evaluating a handful of mathematical expressions written out in English. However, anything generated can be parsed (unless you generate ambiguous or information-losing material). Nonetheless, it's easier to generate instructions than it is to parse and evaluate them. Be warned, though, that this will increase the angst directed at your site and lower registration rates.
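A minimal sketch of that idea: emit an arithmetic question in English words (harder for a naive bot to parse than "3 + 4 = ?") and keep the numeric answer around for checking the reply. All names here are hypothetical illustrations, not from any particular wiki engine.

```python
import random

# Render small numbers and operators as English words, so a bot must
# actually parse prose rather than pattern-match "a OP b".
WORDS = {0: "zero", 1: "one", 2: "two", 3: "three", 4: "four",
         5: "five", 6: "six", 7: "seven", 8: "eight", 9: "nine"}
OPS = {"plus": lambda a, b: a + b, "times": lambda a, b: a * b}

def make_challenge(rng=random):
    """Return (question_text, expected_numeric_answer)."""
    a, b = rng.randint(0, 9), rng.randint(0, 9)
    op_name = rng.choice(list(OPS))
    question = "What is %s %s %s?" % (WORDS[a], op_name, WORDS[b])
    return question, OPS[op_name](a, b)
```

Of course, as the paragraph above notes, anything this regular can itself be parsed; it only raises the cost a little.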
Another solution is to fight fire with fire. If the only actions possible are ReversibleChanges, then a white hat can write another bot to undo all the attacking bot's actions. In this case, the logins are merely a speed bump; a better SurgeProtector may help more. Nonetheless, the trade-off is between security and accessibility. But bear in mind that in an attack, passive readers who do not have logins will also be barred or slowed from fixing the vandalism. It is in those dire situations that your hidden friends really show.
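The white-hat bot is mechanically simple if every change really is a ReversibleChange, i.e. each page keeps its revision history. A hedged sketch, with invented data shapes (a page is a list of revisions, newest last):

```python
def undo_attacker(pages, recent_changes, attacker):
    """Restore each page touched by `attacker` to its last revision
    not authored by the attacker. `pages` maps page names to revision
    lists; `recent_changes` is a list of {"page", "author"} records.
    These structures are hypothetical, for illustration only."""
    for change in recent_changes:
        if change["author"] != attacker:
            continue
        history = pages[change["page"]]
        # Find the newest revision the attacker didn't write.
        good = [rev for rev in history if rev["author"] != attacker]
        if good:
            # Append a restoring revision rather than deleting history,
            # so the undo is itself a ReversibleChange.
            history.append({"author": "whitehat-bot",
                            "text": good[-1]["text"]})
```

Note the bot never destroys history; its own reverts can be reverted, which keeps the PeerReview loop intact.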
One of the main points here seems to be the logins-are-cheap assumption. But there are - limited - situations where properly authenticated logins are part of a community solution (not CommunitySolution). Think employees or students on a campus.
This is ludicrous -- defending a login system so poorly written that it can be subverted without the application of any skill, because the alternative is what, some repressive big brother? Being ID'd when walking into the store, as I believe the argument puts it? Evil? Go look up "excluded middle" on c2 or wikipedia and review the argument again. I have a lot of bitter rejoinders to those who let ideology trump pragmatism, but in the big picture, I suppose I just shouldn't care. It's not my wiki after all. --ChuckAdams
First, I remind you, we do not have a login system. There is nowhere to 'log into'. Your preferences don't mean anything in our security model. The argument is mostly that logins are a Wiki:CargoCult. A login system doesn't do anything for us without a social policy about validating who is allowed to become part of our community. How would a login system stop a spammer from creating a new login account and just spamming again? We could do PrematureModeration? in some shape or form, or charge an AccessFee. Ultimately, the fear is that we devolve into a GatedCommunity. What is our social goal?
The ultimate goal is to create a quality text. What is the threat? A lot of dirt. What is the solution? A bigger broom. This is why CitizenArrest is better than a login system, since it attacks the problem as soon as it becomes known. We have no idea who is a spammer/troll/vandal before they interact with us, so for the majority of people who are good, it does not serve as a good start to make them prove to us they are not spammers. PrincipleOfFirstTrust. -- SunirShah
So am I to understand you want to have CitizenArrest without any accountability on the part of those using such power? You want equal destructive power given to vandals and trolls as to those who have been around for years? Does trust really mean anything when it's blindly extended to everyone? My latest wild supposition is, if the Preferences system enforced uniqueness of names, it would be a login system (with cleartext passwords, but that's just a detail), there would be no one tossing around terms like "cargo cult" (very scientific! quant suf!), and we wouldn't see this form of jujitsu, common to wiki culture, of justifying a gross technical inadequacy as some form of social strength. -- ChuckAdams
The lack of a login system is not a technical inadequacy. It's not the same as having a backup system in case the hardware fails. It's a governance inadequacy. A TechnologySolution may or may not be called for, but it is not the fundamental problem.
I think the miscommunication between us centres around "those who have been around for years." A login system, as you propose, would protect those who have been around for years, and that isn't many of us at all. I don't believe that trust is created by seniority either, as people pop their lids all the time, and friendly strangers are all around us. Trust in a social system is created by BalancingForces, so that structurally we outweigh any bad by overwhelming good. The spam problem is due to this becoming unbalanced.
However, a login system is at best a PricklyHedge (which won't stop spammers and offers only an illusion of safety, and we AvoidIllusion); at worst (e.g. coupled with a TrustMetric) it will turn Meatball into a GatedCommunity that fails to attract new members (cf. UsAndThem). Worse, my biggest fear is that I (or perhaps the RoyalWe, but mostly I, because it's best to fear oneself first) will abuse a login system to suit my own needs, like a GodKing. If Meatball began with such a system, it would be limited to my ('our') own small social network. That's fine if that was the goal, but it isn't the goal. The goal is to be open to people such as yourself taking us to task.
Moreover, as we have seen over and over again on the Internet, once there are technical tools in place that control access, HardBans become the preferred governance solution of an exasperated proprietor (community), who is both the primary target for attacks and the one with GodKing powers. We all have limits to how much abuse we can take (the limit is the point where the community no longer gives emotional support to the leader).
I admit that having a flood of spam, vandals, and trolls also scares off new members, but I don't feel like throwing the baby out with the bathwater.
If you take another look at CitizenArrest, part of my recommendation deals with abuse. However, the first brick in that wall is an OpenProxy defense, which is what I am working on, albeit slowly as I am tired. -- SunirShah
Leaders get exhausted giving support to the community, which generally gives support only in reciprocation. When the leader runs out of steam, the community gets disgruntled as well (cf. my reaction to the WikiWiki debacle). I suppose, however, I should take my own example and AssumeGoodFaith, at least on the part of the community leaders, and take the rhetoric down a notch (mind you, this assumption doesn't translate to real world politics). My tone's bitter and strident largely because I've seen all this before. Communities are built on trust, and where hostile elements exist, trust must be earned. If you find even pseudonymous accountability (where the unforgeable elements are things like creation time, karma, trust metrics) to be hostile to community, you're going to find the vandals even more detrimental in the long run. If open proxies are indeed a real problem, I can point to some reasonably effective countermeasures like the various DNSBLs. Anything done on your own is likely to be very retroactive in effect, banning after the abuse has taken place. --ChuckAdams
I am particularly concerned that communities are built on trust, but TrustMetrics are built on distrust. I wrote the PrincipleOfFirstTrust as a principle against the AntiPattern of using control mechanisms to 'validate' trust. Trust isn't meted out, earned, or contested for (WhatIsTrust). From group facilitation, I have found that trust is easily granted but faster removed. The secret, I think, is simply creating and enforcing a SafePlace?. While we both clearly hold this SuperordinateGoal, the question is how?
In practice, I have found that vandals are best dealt with by not blaming the person, but focusing on the impact to our ultimate goal, which is a high quality text and community in the service of our overall MeatballMission. It's a distraction to start going after individuals (especially bored 13 year olds) when we should be putting our energies into BarnRaising. Sometimes people come along and are nuisances, but it's critical to remain working on BusinessAsUsual. I don't want to spend time online defending and attacking people. Moreover, punitive approaches create retribution. It's so much easier (less energy) to be strong enough to brush off attacks than to be so weak as to have to go through the trouble of organizing and going on a counterattack. If you look at nature, this is the conflict model. If a species can AvoidConflict, they save precious energy (cf. MotivationEnergyAndCommunity). Most (every?) species has a set of WarningSignal?s as well as PassiveDefence?s and an ImmuneSystem? because they have to engage with the world; they cannot withdraw from it. ActiveDefence?s, like porcupine needles and skunk juice, are so expensive that species with them tend to have an even wider array of WarningSignal?s.
So, instead of going to the trouble to PunishReputation, it's better simply to take a SoftResponse to LimitDamage (e.g. through ReversibleChange). I'm not against going on the counterattack, but I want to make sure that it is so important that I'm willing to distract myself to make it happen.
Maybe a metaphor for attackers not interested in a FirstReading (e.g. spammers, teenage vandals, bots) is this: do we swat mosquitoes, drain their swamps, spray pesticides in the swamps, burn spruce needles, cover ourselves with mosquito netting, ingest quinine (against malaria), or move to a colder climate? -- SunirShah
I think we're in ViolentAgreement? here -- I don't believe in giving trolls and vandals the attention and validation they seek by regarding them as ravening menaces. But when the technology defeats the ability to effectively respond, and allows vandals so much more power because it requires so much more energy to combat them, and combat them continuously, they *are* a menace. The fact that I stole your name for one post should give you pause: I combat phishing for a living, and I can tell you that the damage done to banks' reputation and trust is vast when they cannot effectively assert even their identity. It has become a distrust-by-default situation precisely because there isn't much of anything on which to effectively base trust.
I am not a fan of trust metrics as they are implemented now, so I prefer more objective criteria for something like a wiki, such as number of edits, number of edits reverted (as a negative), creation time, pages created, and so on. The privileges of such seniority don't make it some kind of "ruling class", they're simply convenience, such as access to one-click reverts to LimitDamage (still peer-reviewed). Longtime contributors don't deserve to become content controllers, no, but I think they do deserve a little convenience in exchange for all that effort contributed. Abuse of authority could precipitate a leadership crisis like it would anywhere, and decisions concerning such abuse may ultimately devolve to a GodKing or BenevolentDictatorForLife? ... which is no different than how it is now with IP bans, I would like to point out. --ChuckAdams
Violent agreement (HealthyConflict?) is the Meatball way. ;) We all have the same SuperordinateGoal, but we disagree (often vehemently) on the way to get there, which is the best way to think about things as long as we keep having fun.
First, you may like TrollDetectionFormula for a discussion of abuse metrics.
The distrust by default is what I dislike, of course. But your example of signing my name falsely is attacking everything about wikis that make them wikis. You can sign my name all you want because all text is editable. The only protection is adequate PeerReview. It's also the best protection, for many reasons. The alternative is to separate one author's text from the control of others, and while that is good in many environments, I don't really want that here. I'd rather evaporate (not eject) problematic people.
Wikis and banks have different problems. A bank has to account for every transaction and for every person, and to do this it maintains a data structure called an account. Wikis, on the other hand, do not need specific control over every interaction, but rather as a system merely have to keep improving over time on average. A better analogy is a stock market: while a bank has to be precise, the stock market, although made up of precise interactions, merely has to grow over the long term. Of course, a volatile stock market is bad news, as is a volatile wiki.
My personal preferred TechnologySolution is TheGreatWallOfCornStarch?: automatic SurgeProtectors that progress to RegionalBans (based on NetworkDistance), informed by CitizenArrests. This is based more on the reality of MeatballWiki existing as a NetworkService?. It physically resides as some NetEstate, and from the wiki's perspective, it interacts only with other parts of the network. Our conceptual social trust model exists in a social domain outside the Internet, which is why it is difficult to prove identity on the Internet without some sort of bridge (e.g. credit card companies).
At a social policy level, our server will refuse to 'do business' with parts of the network topology (net.geography) that cause more harm than good. This approach views AnIndividual as part of a BalancingForce equation of flow, rather than tracking each TheIndividual specifically, which would be a security and ethical nightmare. The hope is that good neighbours in bad neighbourhoods will be pressured into dealing with their local problems. -- SunirShah
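The escalation from SurgeProtector to RegionalBan can be sketched very roughly. Here NetworkDistance is crudely approximated by the /24 prefix, and the thresholds are assumptions for illustration, not anything Meatball actually runs:

```python
from collections import Counter

# Assumed escalation thresholds (illustrative only).
ARREST_THRESHOLD_SLOW = 2   # this many CitizenArrests: slow the neighbourhood
ARREST_THRESHOLD_BAN = 5    # this many: RegionalBan

def network_of(ip):
    """Crude NetworkDistance grouping: the /24 prefix of an IPv4 address."""
    return ".".join(ip.split(".")[:3])

def policy_for(ip, arrested_ips):
    """Decide how to treat a request from `ip`, given the list of IPs
    named in CitizenArrests. Whole networks share the consequences,
    which is the 'good neighbours in bad neighbourhoods' pressure."""
    count = Counter(network_of(a) for a in arrested_ips)[network_of(ip)]
    if count >= ARREST_THRESHOLD_BAN:
        return "regional-ban"
    if count >= ARREST_THRESHOLD_SLOW:
        return "surge-protect"
    return "allow"
```

The ethical trade-off stated above is visible in the code: the decision never looks at who the individual is, only at the flow from their corner of the network.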
It's great that you and I have different enough conversational styles, that our login hosts are so completely different, and that you're well known enough that any attempt to forge your identity would more or less fall flat. We don't all enjoy this degree of reputation, though. I'm sad that it was seen as an attack on the nature of wiki rather than a demonstration of a bug, but I've made my point as clearly as I can. Nowhere have I advocated requiring logins to post, but merely to access the destructive features such as instant reverts or CitizenArrest. What I see happening with a proposal like CitizenArrest without logins to associate with it is a dangerous escalation of conflict in direct opposition to the notion of HarmReduction, and euphemistic postmodern neologisms to avoid using terms like "banning" do not change what it actually is. I see nothing preventing auto-banning large ISPs such as AOL, and with no mechanism to override it with registration, because ... well, look up at the page title. It's the same path as the non-solution of WardsWiki ... where apparently he had to destroy the community in order to save it. --ChuckAdams
On CitizenArrest, I recommended the use of PasswordlessLogins for using the feature, so at least we have a good idea of who is doing the banning (e-mail + IPs). Plus, the bans have ReversibleChange, albeit at a harsh penalty. I'm contemplating adding PasswordlessLogins to the site as an authentication system. If my e-mail address showed in RecentChanges, it would be clearer who I was. (SocialText, incidentally, uses e-mail addresses.) But that doesn't change the infinite ways to do Wiki:UnethicalEditing. PeerReview is still important, although I of course believe in an AuditTrail.
My piece of contemplation now is how PasswordlessLogins and LoginsAreEvil reconcile.
As to your belief that only 'trusted' members should be allowed to perform CitizenArrest, I disagree, since I have no time or patience to 'trust' someone (i.e. maintain a database of brownie points to determine who is good or bad on a given day). I'd rather wait and see if someone does something bad, and then respond by either teaching them a better way, fixing the problem, ConflictResolution, or forcing them into CommunityExile if need be. With CitizenArrest, at least there are the checks and balances of PeerReview and serious consequences to EnforceResponsibility, which means it's no longer important for a BenevolentDictator to take the burden and risk of being fair minded. -- SunirShah
Sunir, I don't know how closely you've been following the events on WikiWiki, but I find your arguments to be hopelessly and unsupportably idealistic. There are people who simply refuse to honor a wiki's mission and culture, and try to impose their will with any means available. I don't think there are any perfect solutions, but I'm becoming more and more convinced that some form of login is a necessary component of defense, as a means of identifying and at least temporarily shutting off misbehaving individuals. Banning large segments of the net does nothing but bewilder and annoy legitimate users. Anonymous proxies are available and are currently being used vigorously to prosecute EditWars in spite of the blocks on WikiWiki.
You are right to be concerned about the potential abuses of logins. I think a system that allows automated signup, but with significant delays in the approval process, may be a reasonable compromise. No doubt some determined individuals will store up a load of identities and unleash attacks through them all at once, but if people who are empowered to do so (another topic of debate, of course) can shut them down quickly, even this would be a containable ForestFire. I'm very sad to have to suggest this, because wiki does lose something in the process. But I'm currently watching WikiWiki burn without any signs that the perpetrators will ever let up, especially since they seem to have automated most of their attacks, and thus it costs them little to press on. -- DanMuller
The problems on c2 are mostly social. Meatball has a long and rich history complaining about c2. Many of the pages here respond to specific problems there. That being said, I'm not against HardBanning the hard eggs. I'm against the GodKing being the one to decide to HardBan, which makes no sense for c2. CitizenArrest is a good way to DevolvePower, I think. I don't think logins are as effective as IP bans because although technically both are logical SemanticSpaces, an attacker can easily create new logins (space), but cannot easily create new IPs. I've already banned AnonymousProxy usage here, and OpenProxy will follow.
CitizenArrest also has the advantage of not overreacting to a problem. HardBans are a last resort, not a first. -- SunirShah
The problems on C2 are at their basis entirely social, of course, but that doesn't necessarily mean that there is an effective and accessible social or psychological solution.
I thoroughly agree that HardBans should be a last resort, but we have cases on WikiWiki that have reached that point and gone well beyond it. New IPs are easily, even constantly and without intent, created by most home users. I dislike the idea of banning large ranges of dynamically assigned IPs. The notion of IP neighbors applying peer pressure to avoid being banned as collateral damage sounds nice, but the cases where hard banning is justified are also exactly the cases where peer pressure does nothing.
You can control how hard or easy it is to get logins. You have no control over how hard or easy it is to get new IPs.
In any case, banning proxies should be a component of a defense against vandals. How effective have you found the AnonymousProxy and OpenProxy banning mechanisms to be? I'll read up on them myself, of course, but I'm interested in your experiences. -- DanMuller
The only OpenProxy attacks came from RA just after he read here about how to do them, and he has recently employed them again. The AnonymousProxy bans are very important since lots of people use them.
I recognize I am going against the flow on the Internet by pursuing a program against anonymity and against neighbourly intervention, but I'm not exactly urban (e.g. 'BarnRaising' was my suggestion). Although not exactly rural either, I believe more in TheCollective than TheIndividual. If people want to participate here, they should be the type who take an interest in the social life on the Internet.
Pragmatically speaking, there are a lot more ways to identify someone specifically, such as cookies and measuring the clock skew on their IP packets' timestamps.
How do you control logins, realistically speaking? I've never seen it done well. -- SunirShah
I'm not thinking of anything terribly restrictive, but rather just a gate that allows the speed of flow to be mitigated. Perhaps a fully automated signup, with email verification, perhaps a delay on the verification, and a limit on logins granted per domain per diem. This makes no attempt to prevent anonymity, and excludes any human judgement in the granting of logins. But if individual logins can be quickly disabled by trusted wiki members, then it might at least slow vandals down enough to allow current manual detection and cleanup activities to prevail. The logins would of course only be necessary for saving edits, not for reading. I am very much in favor of keeping WikiWiki as open as possible, but a delay in signing up for write privileges, even as much as a day's worth, does not seem terribly onerous to me. If sandbox pages can be exempted from the restriction, so much the better, so that casual visitors can get their feet wet.
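The gate described above is easy to sketch. The delay and per-domain quota below are assumed knobs, and the class is a toy, not any wiki engine's actual signup code:

```python
import time

VERIFY_DELAY = 24 * 3600    # assumed: one day before write access
PER_DOMAIN_PER_DAY = 5      # assumed: signup quota per mail domain
DAY = 86400

class SignupGate:
    """Fully automated signup: no human judgement, just a delay on
    write privileges and a per-domain, per-diem cap on new logins."""

    def __init__(self):
        self.granted = {}   # mail domain -> [grant timestamps]
        self.accounts = {}  # email -> signup timestamp

    def request(self, email, now=None):
        """Attempt a signup; returns False if the domain quota is spent."""
        now = time.time() if now is None else now
        domain = email.rsplit("@", 1)[-1]
        recent = [t for t in self.granted.get(domain, []) if now - t < DAY]
        if len(recent) >= PER_DOMAIN_PER_DAY:
            return False
        self.granted[domain] = recent + [now]
        self.accounts[email] = now
        return True

    def may_write(self, email, now=None):
        """Write access only after the verification delay has elapsed."""
        now = time.time() if now is None else now
        t = self.accounts.get(email)
        return t is not None and now - t >= VERIFY_DELAY
```

A vandal can still store up identities in advance, as noted earlier on this page, but the quota bounds how fast they accumulate.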
Freely-granted cookies can be easily deleted. The vandals that cause all this concern are much too technically savvy to even consider that a bump in the road. As far as clock skew identification goes, it rings a bell, but I have no idea how accurate it is. -- DanMuller
One can play the same games with cookies without using logins. Just use a SurgeProtector against parts of the network that keep requesting cookies, and consider all machines that don't have a cookie to be the 'same' (i.e. limit the amount of totally anonymous activity). What's the point in having a person jump through hoops, when a more HumaneInterface would be for their computer to automatically 'login'? -- SunirShah
But what would trigger the granting of a new cookie? If it's too easy to get a new unique 'identity', the purpose is defeated. Putting some cost on acquiring write privileges, at the very least in time, is exactly the point. -- DanMuller
Argh, it gets hard to sort out the thread when no one makes their text distinctive or sets it off or anything... Anyway, any kind of speedbump or hoop to jump through to gain write access is probably going to have a very large negative impact on participation. I'm not sure it's desirable or even necessarily warranted. What I've been arguing is warranted, is requiring logins and some level of trustworthiness (such as simply having the account for a couple months) to access features like CitizenArrest. It's been made crystal clear to me that such a proposal won't fly, and that something more "innovative" needs to be considered (read: discussed to death). I started hacking on UseMod again to fix the login issue, but I've decided (again) that the codebase is simply unmaintainable, and we'd all be better off with Moin, Twiki, Instiki, PHPWiki, or JSPWiki (there's a choice for each major wiki implementation language) if we actually want something that's hackable. The community matters more than the technology, really.
Incidentally, clock skew identification is a gimmick; it will be easily defeated. The recent paper claiming the ability to fingerprint any node anywhere was highly overblown and inaccurate (at least the writeup was). --ChuckAdams
Chuck, dismissing someone else's idea as unwarranted, then claiming that your idea is warranted just because, and then claiming that our process is broken because your idea isn't instantly accepted as Truth, is not helpful. I am not yet convinced your suggestion is warranted. There are many reasons why many of us do not want to create a cadre of wizards or SysOps on MeatballWiki, and it's not just because we are mindlessly following 'wiki culture'. The political problems that will inevitably result from that will exhaust us more than the occasional spam attack. While everyone can at least agree that spam is bad, not so who should get power, who should be impeached, and all those organizational issues that I don't see much need for in a small environment like Meatball. We have three system administrators as it is. The fact that we do not have time to hack UseModWiki is because we are busy trying to eat and enjoying our lives. The code is OpenSource (though not a PublicScript), so patches are welcome. If you don't want to do it, that's fine, but that doesn't mean that nothing useful is being done. -- SunirShah