The most obvious problem is security. People might make changes to the script that damage the server, its users, or even the network at large. One would also want to ensure that an attacker couldn't change the Wiki code in such a way that the change was concealed or irreversible.
PeerReview would alleviate this risk, provided the community has a suitable number of technically trained people reading over the changes.
A second mechanism to provide security would be to wrap (sandbox) the main script in a "loader" script, and to give the actual script very restricted privileges (it can't delete the database backups, it can't flood-ping another site, etc.).
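As a rough sketch of the loader idea (the names, timeout, and environment here are illustrative assumptions, not a real implementation), the loader could run the community-editable script in a child process with a stripped-down environment, so a damaging change cannot read credentials from environment variables or hang the server indefinitely. Real sandboxing (chroot, OS-level privilege dropping) would go much further.

```python
import subprocess
import sys

# Hypothetical loader environment: no secrets, no HOME, minimal PATH.
SAFE_ENV = {"PATH": "/usr/bin:/bin"}

def run_wiki_script(script_path):
    """Run the community-editable script with restricted privileges.

    Returns the child's exit code; a runaway script is killed by the timeout.
    """
    result = subprocess.run(
        [sys.executable, script_path],
        env=SAFE_ENV,          # the script sees only the safe environment
        timeout=30,            # a looping script cannot block the server forever
        capture_output=True,   # its output is captured, not dumped to the server log
    )
    return result.returncode
```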
Here is a list of languages with known security/SandBoxing? features:
Perhaps with sufficient PeerReview sandboxing would not be necessary.
Another security risk is the release of sensitive information, like passwords or file locations. Solving this perfectly requires some complicated cryptographic maneuvers. Of course, the simplest solution would be just to keep the sensitive information in a separate file, such as a "loader" script.
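A minimal sketch of the separate-file idea, assuming a JSON config (the filename and keys are illustrative): the loader reads the secrets itself and passes the editable script only what it needs, so the credentials never appear in community-editable code.

```python
import json

def load_secrets(path="wiki_secrets.json"):
    """Read sensitive settings from a file outside the editable script.

    Example contents: {"db_password": "...", "backup_dir": "..."}
    """
    with open(path) as f:
        return json.load(f)
```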
Alternatively, rely on those who say PrivacyRequired to PeerReview any script changes that might adversely effect their pricacy. (or am I missing the point?) --MartinHarper
A much more dangerous problem, however, is a catastrophic error. A buggy script may blow away the database. A merely broken script won't even run, preventing access to the entire site, thereby "crashing the universe" so to speak. Either of these scenarios is fatal, but fortunately we know how to deal with fatal errors. The database can be protected by regular backups. Once again, the script could be wrapped in a "loader" that, in the event of a crash, would allow the user to replace the script with the last known good version.
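The crash-recovery loader could look something like this sketch (file names and the revert policy are assumptions): try the current script; if it exits with an error, restore the last known good copy and run that instead; if it succeeds, promote it to be the new last known good version.

```python
import shutil
import subprocess
import sys

def run_with_fallback(current="wiki.py", last_good="wiki.last_good.py"):
    """Run the current script, reverting to the last known good copy on crash."""
    proc = subprocess.run([sys.executable, current])
    if proc.returncode != 0:
        # The current script crashed: restore the last known good version.
        shutil.copy(last_good, current)
        proc = subprocess.run([sys.executable, current])
    else:
        # The current script works: it becomes the new last known good version.
        shutil.copy(current, last_good)
    return proc.returncode
```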
This is necessary, but not sufficient: a bug could manifest as being able to run the script, but unable to edit pages - perhaps due to a bug in the EditConflict or AccessLevels or FileReplacement code.
It would also be very advisable to use a two-pass system whereby the development script is first tested on a dummy site before being adopted for the mainstream site.
It might help to run a sort of "DebuggingWiki?" like UseMod that can edit the same pages as the SelfProgrammingWiki, but which is running on unchangeable code. SecurityHawks in the community could periodically view the code through the DebuggingWiki? to check for any subversion. The DebuggingWiki? would also be useful for debugging.
Another problem is that code, unlike English, is very sensitive to small changes. Bugs are much more severe than poor writing, and they are much harder for a PeerReviewer to catch by eye. A good CommunityExpectation to maintain would be to write strong Wiki:UnitTests for every change.
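To make the expectation concrete, here is the kind of UnitTest that could accompany a change; the helper function and its rule are purely hypothetical, standing in for whatever piece of the WikiEngine was edited.

```python
import re
import unittest

def is_wiki_word(text):
    """True if text is a WikiWord: two or more capitalised runs, e.g. PeerReview."""
    return re.fullmatch(r"(?:[A-Z][a-z]+){2,}", text) is not None

class TestIsWikiWord(unittest.TestCase):
    def test_accepts_wiki_word(self):
        self.assertTrue(is_wiki_word("PeerReview"))

    def test_rejects_plain_word(self):
        self.assertFalse(is_wiki_word("review"))
```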
The final problem is probably the hardest to solve. Many webhosting services would probably not appreciate having a site with a PublicScript because they would be afraid it might damage their system. This is not unreasonable, of course, but if you want to try this, it's up to you to convince them it's safe. In fact, it may be a good impetus to check and double check the robustness of your strategy.
Another solution would be to have the main script on a normal hosting service and a community programmable variant of that script on a special CommunityProgrammableWiki-friendly hosting service (like someone's PC). The variant script would be a WikiWindow; that is, it would be programmed to use the main site's PageDatabase in place of its own.
People would use the main site most of the time, but the community could still use the community programmable sistersite's features to collectively determine how to modify the script. Those modifications would only be automatically applied to the special community programmable sistersite. However, periodically, say once every two months, the host could ask the sistersite to generate a non-reprogrammable version of itself, and then he or she could manually copy that code onto the main site. So, agreed-upon patches would eventually make it into the main site's code with a minimum of effort (the host need only do one "ftp" every two months), but with a delay.
This also might alleviate any lingering security or complexity fears, since the site's host would have a chance to manually check over any changes the community makes before copying them into the main site's code. This approach would also require a good separation between content and display, which is good practice anyway.
Perhaps it would be wise in this situation to implement an AdvoGato style WebOfTrust; each change would have to be reviewed by a "trusted community member" (not just a unique IP).
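A sketch of such a gate, under the assumption that the trust list itself lives outside the editable wiki code (the member names here are only examples): a change goes live once someone on the trust list, rather than merely a distinct IP address, has signed off.

```python
# Illustrative trust list; in practice this would be managed outside the
# community-editable script, e.g. by the site host or an AdvoGato-style metric.
TRUSTED_MEMBERS = {"MartinHarper", "MitchellNCharity"}

def change_approved(reviewers):
    """True if at least one reviewer is on the externally managed trust list."""
    return any(r in TRUSTED_MEMBERS for r in reviewers)
```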
The system could also have a built-in bias for older versions; i.e., FileReplacement would only restrict replacement when going from an earlier version to a later one. In the case of a bug, any user could revert to an earlier version. Earlier versions could be marked "insecure" to prevent this, however.
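The rule above can be sketched in a few lines (version numbers and the flagged set are illustrative assumptions): reverting to an older version is always free, while moving forward, or onto a version flagged insecure, requires review.

```python
# Versions that have been marked "insecure" and may no longer be reverted to.
INSECURE_VERSIONS = {3}

def replacement_needs_review(current_version, target_version):
    """Going backward is free, unless the target has been marked insecure."""
    if target_version in INSECURE_VERSIONS:
        return True
    return target_version > current_version
```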
A problem with any PeerReview system is that it makes it impossible to do anything quickly; one would basically have to debug a proposed change oneself because of the FileReplacement lag. Maybe sufficiently "trusted members" should be able to make changes without PeerReview.
The trusting mechanism itself could be separate (i.e. not changeable through the wiki), at least to start out.
FYI, for the moment there is a minor discussion in the TWiki development coffeebreak about combining wiki with viewCVS, LXR, etc.
-- MitchellNCharity? - 12 May 2003
Here's a question: what do the various Smalltalk systems do, particularly for the debugging issues? It's been a while since I toyed with Squeak, but I'm pretty sure you're able to edit the base GUI objects, etc. How does Squeak prevent itself from imploding if you introduce a syntax error; or a higher-level error that, for example, prevents any object from displaying?
It doesn't. If you screw with the core functionality, you can hose your entire image, and you will need to revert to a backup. A syntax error isn't likely to compile, of course.
PeerReview is not sufficient protection for a CommunityProgrammableWiki, because any sufficiently reflective SPW would be able to have the very peer review mechanisms themselves edited -- and potentially removed or subverted. A system like MOO would be more amenable to a wiki system, but MOO is unfortunately sort of baroque and crufty when it comes to I/O and pattern matching.
Speaking of MOOs and programmable environments, folks interested in this subject might be interested to know about mooix (http://mooix.net). I don't know if it could be made to be self-programming, though, in the sense used here.
Seems to border on a DistributedOperatingSystem? (er, one whose filesystem has no write protection). Clarifications as to the similarities and differences would be welcome.
Wiki software already deals with the main security issues here: users are not allowed to insert raw HTML into Wiki pages. Granted, raw HTML is a much smaller problem, but it's fundamentally the same problem. Users should not be able to insert damaging code into the WikiEngine.
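That existing defence in miniature (using Python's standard `html` module as the example): neutralise raw HTML in user-submitted page text before rendering, so a visitor cannot inject script tags.

```python
import html

def render_page_text(user_text):
    """Escape raw HTML in user-submitted wiki text before rendering it."""
    return html.escape(user_text)
```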
Another important problem is FaultTolerance?. Modern programming languages are much less forgiving of syntax errors than web browsers are. It should be possible to add faulty code to the engine without taking down the entire site. Perhaps the engine should confirm the validity of code before making it live, and always run on the last known good version?
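A minimal sketch of that validity check, for a wiki engine written in Python: refuse submissions that do not even parse, and keep serving the last known good version until a change passes. Note that `compile()` catches only syntax errors, not logic bugs, so this complements rather than replaces testing and review.

```python
def is_syntactically_valid(source):
    """True if the submitted source at least parses as Python code."""
    try:
        compile(source, "<submitted change>", "exec")
        return True
    except SyntaxError:
        return False
```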