MeatballWiki | RecentChanges | Random Page | Indices | Categories

From MeetForceWithForce

The best means of achieving this would likely be to create an autonomous AI agent to monitor the wiki. We could program or teach it to recognize potential community threats and respond by restoring pages to their prior contents. It would keep a log of everything it did so that whoever operated it could overrule it later.
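A minimal sketch of such an agent's loop, assuming a hypothetical wiki-engine interface (the `recent_changes` and `restore` hooks are invented for illustration) and a deliberately crude threat check:

```python
def looks_like_vandalism(change):
    # Crude placeholder heuristic: flag edits that delete most of a page.
    # A real agent would need far better threat recognition than this.
    return len(change["new_text"]) < 0.2 * len(change["old_text"])

def run_agent(wiki, log):
    # `wiki` is a hypothetical engine interface; `log` is any writable file.
    for change in wiki.recent_changes():
        if looks_like_vandalism(change):
            # Restore the page to its prior contents...
            wiki.restore(change["page"], change["old_revision"])
            # ...and log the action so the operator can overrule it later.
            log.write(f"reverted {change['page']} to rev {change['old_revision']}\n")
```

The point of keeping the log is exactly the overrule case: every automatic action remains reversible by a human.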

I've not really spent any quality time on IRC (is that an oxymoron? ;-), but my impression is that a ChannelBot's primary purpose is to simply maintain operator privilege for the channel op in the event that the op leaves the channel or loses connection. I imagine they perform logging, too. This would be a bit greater in scope.

ChannelBots can do a variety of things. Some are net.nannies, searching for inflammatory text and automatically kickbanning people. Others act like a pet phoenix perched on the shoulder of an op, swooping down to wreak vengeance when asked. Others are FAQbots, automatically answering questions for newbies. Others maintain complicated strategy games. The latter ones are the coolest. But either way, a ChannelBot is basically a program on a given channel, either scanning the conversation or responding to private messages, or (more likely) both. Very useful things.
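The two modes above--scanning the channel and responding to private messages--can be sketched as a single event handler. The event shape, the FAQ table, and the responses here are invented for illustration, not any real IRC library:

```python
def handle_event(event, faq):
    # event is a hypothetical (kind, sender, text) triple from the IRC layer.
    kind, sender, text = event
    if kind == "privmsg" and text in faq:
        # FAQbot mode: answer a known question in private.
        return ("reply", sender, faq[text])
    if kind == "channel" and "flamebait" in text.lower():
        # net.nanny mode: react to inflammatory text in the channel.
        return ("kickban", sender, "inflammatory text")
    return None  # most traffic is ignored
```

A real bot would wrap this in a connection loop and act on the returned tuples.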


The best means of achieving this would likely be to create an autonomous AI agent to monitor the wiki.

At least you didn't say "easiest means". :-) (For a moment I wondered if Scott really wrote that message.) I guess I'm much more skeptical about "AI" claims because I used to really believe in it. Now I tend to think "IA", or "Intelligence Augmentation" will be more important for at least the next 20-50 years. Some traditionally "AI" areas such as speech recognition have made incredible gains--modern speech recognition uses advanced models of language and grammar as part of the recognition process.

"this" = dealing with attackers in kind rather than through HardSecurity: for every script they write, I have a script that bites back. Now that I re-read it, it does sound like a much grander claim. :-) -- anon.

To start with, it might be useful to separate a "role" of administrator from the person (or entity) filling that role. Presumably it would be best to have a mature, thoughtful, and perfectly impartial person filling the admin role 24 hours a day with nearly-instant responses to every threatening action. Since these perfect people are in short supply, an agent/bot/AI might be called on to fill in that role. Some things like SurgeProtectors will likely be built into the system (very little judgement is required to block a single site accessing 100 pages/minute). Other actions that require more judgement could be implemented as capabilities and roles which could be performed by people or other agents.
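The low-judgement SurgeProtector case above--blocking a single site that fetches, say, 100 pages a minute--amounts to a sliding-window rate limiter. This sketch assumes nothing about the wiki engine; the class name just echoes the term used here:

```python
from collections import defaultdict, deque
import time

class SurgeProtector:
    def __init__(self, limit=100, window=60.0):
        self.limit = limit      # max requests per window
        self.window = window    # window length in seconds
        self.hits = defaultdict(deque)

    def allow(self, client, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[client]
        # Forget hits that have fallen out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False  # surge detected: block the request
        q.append(now)
        return True
```

The "very little judgement" claim holds: the only decisions are two numbers, the limit and the window.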

For the design of such agents, I suggest clearly separating the decision-making process from the actions (like hiding or removing a post). One can then test many decision-agents and compare their results. For instance, you could compare a keyword-matching program against a punctuation measurement and see which one catches "!!!MAKE MONEY FAST!!!" type messages better. (Hopefully any agents won't delete this page. ;-)
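As a toy illustration of that separation, each decision agent can be a bare predicate, scored against the same labelled samples before it is wired up to any action. The threshold and punctuation set below are arbitrary choices for the sketch:

```python
def keyword_agent(text):
    # Decision only: flag a known spam phrase. No action taken here.
    return "make money fast" in text.lower()

def punctuation_agent(text):
    # Rival decision agent: flag messages heavy in shouting punctuation.
    if not text:
        return False
    punct = sum(c in "!?$*" for c in text)
    return punct / len(text) > 0.1

def score(agent, samples):
    # Fraction of (text, is_spam) samples the agent labels correctly.
    return sum(agent(t) == label for t, label in samples) / len(samples)
```

Because the agents never touch the pages themselves, you can run `score` over archived messages and pick the better detector before granting either one the power to hide a post.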

Even a less-than-perfect agent may be useful as an advisor, especially with an opt-in system such as PersonalCategories (eventually). For instance, a user might choose to hide pages marked LanguageBot:StrongProfanity or FlameBot:FlameWar, and it would be the user's decision whether the bot was accurate enough for their purposes. -- CliffordAdams (or is he Memorex?)
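The opt-in mechanics are simple once bots only attach labels. The label strings below are the ones suggested above, and the function is a hypothetical filter a user's personal view could apply:

```python
def page_hidden(page_labels, user_hidden_labels):
    # The bot only labels; the user decides which labels hide a page.
    return bool(set(page_labels) & set(user_hidden_labels))
```

An inaccurate bot then costs its subscribers only what they chose to risk: unsubscribing from its labels undoes everything.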

I think any bot would serve best only as an adviser to people. It should not make edits. If Microsoft and their natural-language parsing team (just full to the brim with experts in the field) cannot get AutoFormat & AutoCorrect to work in MS Office, the bot here won't either.

See also TuringTest

