The idea is to protect the system and its users from harm, in gentle and unobtrusive ways. The opposite of HardSecurity. It follows NonViolence. Instead of using violence, it works architecturally in defense, dissuading people from attacking and helping to LimitDamage. It works socially in offense, convincing people to be friendly and to get out of the way of people adding value. Soft security is difficult. It often requires you to grow as a person, sometimes painfully so. This by itself makes it valuable.
SoftSecurity is like water. It bends under attack, only to rush in from all directions to fill the gaps. It's strong over time yet adaptable to any shape. It seeks to influence and encourage, not control and enforce.
See also an [excerpt] by Sir Arthur Conan Doyle that Neal selected to show these ideas aren't new. [The link seems broken; there is an [archived copy] and a [summary].] Stephenson and Clarke seem to misinterpret Doyle, since he is writing that the town (slang for London) is safer than the country because of social pressure, which the country is too sparse to provide.
SoftSecurity is a collective solution, whereas HardSecurity is often an individual solution. It's important to remember that although the Patterns below are written as prescriptions for you to follow, they are meant as notices for everyone to follow. When SoftSecurity becomes unilaterally enforced, it fails. This is a chicken-and-egg statement. When SoftSecurity fails--when TheCollective fails to act--only a few heroes try to keep it working. When only one person defends TheCollective, the defense loses its effectiveness and believability. One, the target of the defense will not know the hero speaks for the RoyalWe, and thus will attempt to undermine the hero's authority to act. Two, it may be the case that the CommunityDoesNotAgree and the hero is acting out VigilanteJustice; acting alone should give you good pause to reconsider what you are doing. Finally, while you think you ModelDesiredBehaviour, you are not providing space for others to act themselves, and so the real message you are sending is that they should not act.
SoftSecurity follows from the principles of
See also
You may also be interested in SunirShah's Powerpoint [presentation] on SoftSecurity from OReillyPeerToPeer East 2001.
CategoryWikiTechnology CategoryWikiConventions CategorySoftSecurity
Some ideas.
A couple of the banks I've been to have floor-to-ceiling glass walls around all the offices -- even the door is mostly glass. It's more soundproof than cubicles, and people sitting in the waiting room can see why they have to wait -- all the people with authority to give loans / draw up CDs / whatever are busy just now. I also think it's less intimidating than walking into an opaque office or cubicle. I'm not sure why. Is it because I've been observing it from the waiting room, so the room is no longer a complete unknown, someone else's turf? Is it because I've seen other people doing just what I'm about to do, and other people in the waiting room are witnessing what I'm doing, so I can be sure I'm about to have a civilized conversation, that they're not going to ask me to do something weird like flap my arms like a chicken? Or is there some other reason it's not so intimidating?
One bank I've been to had the security vault in the same room as, and directly opposite, the desk with several tellers. They usually left the door standing wide open during normal business hours. There are two different soft-security things going on here: (a) While you were waiting for a free teller, you could look at the incredible thickness of the door, and at the "portholes" on the inside of the door that let you see thick, massive, strong-looking metal gears, and be impressed by how difficult it must be to break in when the door is locked at night. (b) Although it was standing wide open, I never saw anyone actually walk in -- perhaps because they knew that all the tellers faced the security vault. Even when every teller was busy talking to someone, it would be impossible to walk in without the tellers seeing it happen over the shoulders of their customers. And even if someone did walk in, everyone could see that all the security boxes were locked into place.
People who find a place beneficial will lose that benefit if the place is closed down. It's easier to persuade those people to do certain things and not to do other things if you can convince them "If everyone did that, we'd have to shut down". For example, weight limits on airplane luggage -- there may be plenty of room for one person to bring a ton (2000 lbs, roughly 900 kg) of stuff on the airplane, but "if everyone did that, the airplane couldn't take off".
A little [anecdote] on a NetworkSoftSecurity case:
Any others? I feel there may be a PatternLanguage lurking here, if it could be filled out.
How about:
Another example: Putting the fire extinguisher behind a glass panel, then chaining a small hammer on the wall next to this.
Related real world soft security: What would the online analogy for these be?
(The envelopes, the wax seal, and the flimsy padlocks, none of them really stop anyone from doing whatever they want, but they make it obvious to everyone when security has failed. The hair-on-the-drawer and the paper-on-the-door seem similar, but they don't tell the honest people to stay out ahead of time (not a GuidePost), and after a security breach they don't tell the honest passerby when security has failed.)
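An online analogue might be a cryptographic digest used as a tamper-evident seal: like the wax seal, it doesn't stop anyone from changing the content, but it makes any change obvious to whoever checks afterwards. A minimal sketch in Python -- the file names and the idea of storing the digest alongside the content are only illustrative assumptions, not a prescription:

 import hashlib
 from pathlib import Path

 def seal(content_path: str, seal_path: str) -> None:
     """Record a SHA-256 digest of the content -- a digital 'wax seal'.
     It does not prevent edits; it only makes them detectable later."""
     digest = hashlib.sha256(Path(content_path).read_bytes()).hexdigest()
     Path(seal_path).write_text(digest)

 def seal_intact(content_path: str, seal_path: str) -> bool:
     """Check whether the content still matches the recorded seal."""
     digest = hashlib.sha256(Path(content_path).read_bytes()).hexdigest()
     return Path(seal_path).read_text().strip() == digest

 # Usage: seal("cash_box.txt", "cash_box.seal") when you lock up;
 # seal_intact("cash_box.txt", "cash_box.seal") tells you, like a broken
 # padlock, that someone has been in the box -- not who, and not how.

Like the hair on the drawer, this only works if someone actually checks the seal; unlike the hair, the fact that content is sealed can also be advertised up front, telling honest people it is meant to be left alone.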
"Padlocks only keep honest people out".
Not sure if this is SoftSecurity or HardSecurity, or something in between ... pathetically weak hard security, like two-bit padlocks on the petty cash tin (or the bikkie tin, more often raided). The point is that it keeps honest people out for very little cost, but would fail (pathetically) against a dishonest person.
It's hard security that can be ignored when necessary by honest people. Perhaps it's a sub-variation of GuidePosts and WarningSigns?, where FormFollowsFunction?. (What better way to say "don't open this" than to use a padlock? Beats the language barrier.)
Padlocks are also an audit mechanism (albeit a weak one).
There is [nominally] a small assurance against theft. As someone who's gone through more than one padlock, I can assure you that it's quite small.
More significantly, a padlock may serve as an audit mechanism of sorts. Most attacks against padlocked content result in a broken (or missing) lock. As the first step to recovery is admitting you've got a problem, coming back to a broken or missing lock is an indication of a burglary. The real problem is theft which leaves no signs.
Another argument is that a lock, even a small, trivially bypassed one, may help an honest man stay honest. I suppose the flipside is that it might also encourage a dishonest person to be dishonest.
-- KarstenSelf
Even the weakest padlock forces the thief to be consciously committing a crime. You can look in an unlocked box out of curiosity and find yourself taking something without ever making the conscious choice to steal. Also, any unauthorised person caught looking in the box can be punished without them having stolen anything, because they have had to break the lock to get into the box, which is in itself a crime.
However, the safest place to keep money is in the middle of a table with lots of honest people around. They all know the money is in danger and so will all keep an eye on it to prevent it being stolen. If it's in a box somewhere, they will all think it is safe and won't concern themselves over it.
Similar to the "steering lock principle" of car security: if I have a steering lock on my car, then it will be that much harder to steal, so the thief will probably move on to the next, easier target -- a deterrent -- although I've always wondered what happens when every car in the street has one...
Even flimsier than the padlock are the locks on most of the bathrooms in my city. There's a small hole in the handle on the outside, so anyone with a straightened paper clip can unlock the door and get in. Macintosh computers have the same sort of thing protecting the reset button.
Only wimps use tape backup: _real_ men just upload their important stuff on ftp, and let the rest of the world mirror it. -- Linus Torvalds, about his failing hard drive on linux.cs.helsinki.fi
Linus wants his files "secure", in the sense that he doesn't want them corrupted or irretrievably lost. Too many people confuse this with "not allowing other people to read the files". (Is this different from "not allowing other people to edit the files"?)
There's an interesting idea at Distributed Proofreaders http://www.pgdp.net/
Um... I'm getting caught up in the details here, making it too complex. How can I generalize this idea?
The "2 different days" is a kind of SurgeProtector. But the rest doesn't seem to match any of the above SoftSecurity categories; perhaps there's a general pattern we can extract here? What is a good name for this?
That's HardSecurity. It controls access. See WikiAccessLevels. It is not a good idea, as you can game the system; it is completely intolerant to failure. If I sufficiently hated you, I could proofread fifty documents flawlessly, and then turn into the proofreader from hell for the rest. Or even better, deliberately introduce subtle, hard-to-detect errors after that, like mood shifts and racist innuendo.
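To make the point about gaming concrete, here is a toy sketch -- entirely hypothetical, and not how DistributedProofreading? necessarily works -- of a threshold-based access rule and the "sleeper" strategy that defeats it:

 def has_trusted_access(flawless_proofs: int, threshold: int = 50) -> bool:
     """A naive HardSecurity gate: access is granted after N flawless
     proofreadings and never revisited afterwards."""
     return flawless_proofs >= threshold

 # The sleeper attack: behave perfectly until the gate opens...
 history = ["flawless"] * 50
 print(has_trusted_access(len(history)))  # True -- trusted forever after

 # ...then introduce subtle errors. The gate has no answer, because it
 # measured past behaviour once instead of continuing to review the work
 # (PeerReview), which is the SoftSecurity alternative.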
I may be mistaken, but I think DistributedProofreading? is using this as much to BuildCompetence? as to BuildWalls?.
I'm reading the case (*) of a feminist forum that prided itself on inclusiveness and got trolled nearly to the ground by an anti-feminist troll; they did not want to ban the person on philosophical grounds, due to their ideal of RadicalInclusiveness. This left them open to two months of vitriol until the forum hosts banned him. Similarly, MrBungle was finally banned when a wizard stepped in and acted unilaterally, after a long argument over free speech. Likewise, our adamant principle of SoftSecurity here leaves us open to trolling with the implicit goal of getting us to ban someone's IP. The more famous our philosophy becomes, the more we open ourselves to "career" trolls who will spend a ridiculous amount of time here getting us to ban their IP to prove we are wrong. I think that since the cyberstalking laws have changed, and since invariably these guys are in America, Canada, Australia, or the UK, we should investigate what criminal charges we can lay, as that seems to be the only truly effective means of exiling someone. It would be worthwhile to know, if only to make the threat of a LegalThreat enough of a deterrent. Meanwhile, it seems we are hitting the limits of our current architectural defenses, so we should dream up more.
Additionally, we could ban a few IPs now and then just for the hell of it. A perfect record is always going to be trolled. Best to be WabiSabi?, perhaps. -- SunirShah
The problem with such sophisticated methods is the sheer amount of information about the rules; beginners can easily break some of them without meaning to. This is in sharp contrast with HardSecurity, where the newcomer immediately gets feedback on his actions. I believe much more effort should be exerted in developing user-friendly techniques for teaching the SoftSecurity rules than in developing new ones. An immediate example would be giving, somewhere at the top of each wiki page, explicit links to the rules meant to govern that page (something like 'This is a ThreadMode page'). -- ZbigniewLukasiak
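A rough sketch of what that suggestion might look like in wiki software -- the page metadata, rule names, and functions below are invented for illustration and don't come from any real engine:

 # A toy GuidePost: prepend explicit links to the conventions governing a
 # page, so newcomers see the relevant SoftSecurity rules up front instead
 # of discovering them only after breaking one.
 PAGE_RULES = {
     "ThreadMode": "Conventions for threaded discussion pages",
     "DocumentMode": "Conventions for collaboratively edited documents",
 }

 def guidepost_banner(page_modes: list[str]) -> str:
     links = ", ".join(f"[[{mode}]]" for mode in page_modes if mode in PAGE_RULES)
     return f"This page follows: {links}" if links else ""

 def render(page_modes: list[str], body: str) -> str:
     banner = guidepost_banner(page_modes)
     return (banner + "\n\n" + body) if banner else body

 print(render(["ThreadMode"], "...page text..."))
 # -> "This page follows: [[ThreadMode]]" printed above the page body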
While SoftSecurity is a powerful concept per se, it is important to apply it in a way that provides long-term security. Experience from offline communities applying SoftSecurity shows that it often works on a small scale while providing a breeding ground for organized crime through a wide network of loopholes. To avoid this, it is important to distinguish between rules that are only rules, whose observance cannot be verified, and rules that are verifiable. The latter effectively LimitTemptation, because everyone will see the offense.
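One way to read "verifiable rules" in wiki terms is that every action leaves a publicly visible record, the way RecentChanges does, so the whole community can see an offense rather than having to trust that a rule was followed. A minimal sketch, with invented names, of such an append-only edit log:

 from datetime import datetime, timezone

 edit_log: list[dict] = []  # append-only; never edited or pruned

 def record_edit(page: str, author: str, summary: str) -> None:
     """Log every change where anyone can read it, so breaking a rule is
     visible to everyone -- the verifiability that LimitTemptation relies on."""
     edit_log.append({
         "when": datetime.now(timezone.utc).isoformat(),
         "page": page,
         "author": author,
         "summary": summary,
     })

 def recent_changes(n: int = 20) -> list[dict]:
     """A RecentChanges-style view: newest edits first."""
     return list(reversed(edit_log[-n:]))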
ZbigniewLukasiak talked above about how newcomers or novices, who cannot yet grasp a complex and lengthy rule set, might need immediate feedback and/or explicit rules. This makes sense. I agree with you, ZbigniewLukasiak. I will add, though, that this can be overextended: the focus on rigid social norms can turn into a kind of leverage for creating a PeckingOrder, as we see with WikiPedia. -SamRose
Offline communities often have problems creating an effective rule system because of privacy considerations, whereas online communities have the chance to create something new, not held back by this dead weight that offline communities have to cope with. -- Anon
To add to the "Structures: Or Why Things Don't Fall Down" quote above -- the author (J.E. Gordon) regularly, throughout the text, argues against the federalization of rigid structures (and associated materials: iron, steel) and mechanical parts, in favor of the soft, tough, flexible, and structural. He mentions that the reason motor engines are so rigid and mechanical is not that metal is virtuous, but rather that the soft and flexible materials couldn't withstand the high temperatures that come with frequent explosions.
This seems related to me to NealStephenson's insight into our lack of understanding about the social fabric.
-- LionKimbro