I've not really spent any quality time on IRC (is that an oxymoron? ;-), but my impression is that a ChannelBot's primary purpose is simply to maintain operator privilege for the channel op in the event that the op leaves the channel or loses the connection. I imagine they perform logging, too. What's proposed here would be somewhat greater in scope.
At least you didn't say "easiest means". :-) (For a moment I wondered if Scott really wrote that message.) I guess I'm much more skeptical about "AI" claims because I used to really believe in it. Now I tend to think "IA", or "Intelligence Augmentation" will be more important for at least the next 20-50 years. Some traditionally "AI" areas such as speech recognition have made incredible gains--modern speech recognition uses advanced models of language and grammar as part of the recognition process.
To start with, it might be useful to separate a "role" of administrator from the person (or entity) filling that role. Presumably it would be best to have a mature, thoughtful, and perfectly impartial person filling the admin role 24 hours a day with nearly-instant responses to every threatening action. Since these perfect people are in short supply, an agent/bot/AI might be called on to fill in that role. Some things like SurgeProtectors will likely be built into the system (very little judgement is required to block a single site accessing 100 pages/minute). Other actions that require more judgement could be implemented as capabilities and roles which could be performed by people or other agents.
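The SurgeProtector case above really does need almost no judgement, and could be sketched as a simple sliding-window rate check. The limit and window below just mirror the "100 pages/minute" example; they're illustrative assumptions, not a tested policy, and `allow_request` is a hypothetical name.

```python
import time
from collections import defaultdict, deque

# Hypothetical surge protector: block any single site that requests
# more than LIMIT pages within WINDOW seconds. Values mirror the
# "100 pages/minute" example above.
LIMIT = 100
WINDOW = 60.0

_requests = defaultdict(deque)  # site -> timestamps of recent requests

def allow_request(site, now=None):
    """Return True if the request is allowed, False if the site is surging."""
    now = time.time() if now is None else now
    times = _requests[site]
    # Drop timestamps that have fallen outside the sliding window.
    while times and now - times[0] > WINDOW:
        times.popleft()
    if len(times) >= LIMIT:
        return False  # over the limit: block without human judgement
    times.append(now)
    return True
```

A more judgement-heavy action would sit behind the same kind of interface, but be routed to a person (or a smarter agent) instead of a hard-coded threshold.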
For the design of such agents, I suggest clearly separating the decision-making process from the actions (like hiding or removing a post). One can then test many decision-agents and compare their results. For instance, you could compare a keyword-matching program against a punctuation measurement and see which one catches "!!!MAKE MONEY FAST!!!" type messages better. (Hopefully any agents won't delete this page. ;-)
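The separation above can be sketched by treating each decision-agent as a plain predicate and scoring them against the same labeled messages; no agent ever touches a post directly. The keywords and punctuation threshold here are illustrative guesses, not a tuned filter.

```python
# Each decision-agent is just a function: message text in, "looks like
# spam?" out. Actions (hiding, removing) would live elsewhere.

SPAM_KEYWORDS = ("make money fast", "free cash")  # illustrative list

def keyword_agent(text):
    """Flag messages containing any known spam phrase."""
    lowered = text.lower()
    return any(k in lowered for k in SPAM_KEYWORDS)

def punctuation_agent(text, threshold=0.2):
    """Flag messages where 'shouting' punctuation dominates the text."""
    if not text:
        return False
    punct = sum(1 for c in text if c in "!?$*")
    return punct / len(text) > threshold

def evaluate(agents, labeled_messages):
    """Score each agent: fraction of (text, is_spam) pairs it gets right."""
    return {name: sum(agent(text) == is_spam
                      for text, is_spam in labeled_messages) / len(labeled_messages)
            for name, agent in agents.items()}
```

Both toy agents happen to catch "!!!MAKE MONEY FAST!!!"; the point is that `evaluate` lets you compare any number of candidate agents on the same test set before trusting one with a real action.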
Even a less-than-perfect agent may be useful as an advisor, especially with an opt-in system such as PersonalCategories (eventually). For instance, a user might choose to hide pages marked LanguageBot:StrongProfanity or FlameBot:FlameWar, and it would be the user's decision whether the bot was accurate enough for their purposes. -- CliffordAdams (or is he Memorex?)
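The opt-in advisory model amounts to: bots only attach labels, and each user decides which labels to hide. A minimal sketch, assuming labels follow the Bot:Category form used above (`visible_pages` is a hypothetical helper, not part of any existing wiki engine):

```python
# Bots annotate pages with labels like "LanguageBot:StrongProfanity";
# hiding is a per-user choice applied at display time.

def visible_pages(pages, hidden_labels):
    """pages: dict of title -> set of bot-applied labels.
    hidden_labels: the set of labels this user chose to hide."""
    return [title for title, labels in pages.items()
            if not labels & hidden_labels]
```

Because the bot's judgement is only advisory, a misfiring bot costs one user some hidden pages rather than deleting anything for everyone.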
I think any bot would serve best as an adviser to people; it should not make edits. If Microsoft and their natural-language parsing team (just full to the brim with experts in the field) cannot get AutoFormat and AutoCorrect to work reliably in MS Office, a bot here won't fare any better.