I think you'll find most AI researchers are quite annoyed at the misuse of the term "agent" these days.
I worked with one in school, busily trying to create an intelligent agent to find useful investment advice.
This comment might have more to do with ArtificialIntelligence or CritiquesOfArtificialIntelligence, but it seems to me that as soon as a subject of AI research gains some sort of success, it stops being associated with AI. Meanwhile, AI always promises that the results of the research will arrive in ten years. If I understand it right, the ten-year result threshold has been the one constant in AI research.
To make my point more clearly: as more and more people have worked with computers, they have found their solutions, and those solutions are tailored both to the problem set and to the computer. The computer works well for event-driven or scheduled tasks, but less so when working on its own timetable. An agent, by definition, is something that acts or has authority to act. In an event-driven or scheduled system, the program reacts to the event or to the time prompting (which is an event, actually). So a StupidAgent isn't an agent in this sense, as it reacts rather than acts. Am I understanding the AI definition correctly? So, while slocal is a solution, it isn't an agent.
I'm fine with that. In my computing environment, what matters about these agents is the results, not the means of achieving them: my email ending up in the right directories, my reading DaveBarry before anyone else, my knowing when new updates arrive at the local FTP server.
(An agent in a hacker/cracker sense would be the guy with a gun arresting you for DoS-ing the Pentagon. B) )
So, a better definition might be "An application which works for you but requires manual configuration, and is incapable of expanding upon the rigid instructions included in its configuration." procmail(1) fits this bill, as do sendmail(8), my cron script to back up and encrypt important files, and all the other traditional tools on a UNIX system.
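For illustration, the cron-driven backup-and-encrypt script mentioned above might be as small as a single crontab entry (the paths, schedule, and GPG key ID here are all hypothetical):

```crontab
# Nightly at 02:30: archive ~/important and encrypt it with GPG.
# Note that % must be escaped as \% inside a crontab entry.
30 2 * * * tar czf - "$HOME/important" | gpg --encrypt -r backup-key -o "$HOME/backups/important-$(date +\%F).tar.gz.gpg"
```

This is exactly a StupidAgent in the sense defined: it runs reliably on its rigid schedule, and it will never do anything the one line doesn't say.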
On the other hand, if you integrated procmail with a MUA, you could have an alternate delete method for use on spam. procmail could then "learn" to identify spam (based on observation of what you do and do not mark as spam) and take proactive measures to filter it out. That (learning what you think is spam) would make it a SmartAgent.
You'd want it to be able to examine everything from source address to subject and even content. Unfortunately, a folder where all the spam goes will not get read. You'd probably want a training period when it just presents its own guesses, and then after that a folder where things wait a week (or however long) before deletion. But don't write a whole new mail reader just for this functionality - hack it onto mutt or some other mail reader. No sense creating Yet Another Mail Reader.
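The "learning" step described above can be sketched very crudely. This is a toy word-frequency scorer, a minimal stand-in for real Bayesian filtering; the class and method names are invented for illustration:

```python
from collections import Counter

class SpamScorer:
    """Toy spam scorer: learns from what the user does and
    does not mark as spam, then guesses about new mail."""

    def __init__(self):
        self.spam_words = Counter()
        self.ham_words = Counter()

    def train(self, text, is_spam):
        # Count words from messages the user has classified.
        words = text.lower().split()
        (self.spam_words if is_spam else self.ham_words).update(words)

    def looks_like_spam(self, text):
        # Score each word by how much more often it appeared in
        # spam than in legitimate mail; positive total => spam.
        score = sum(self.spam_words[w] - self.ham_words[w]
                    for w in text.lower().split())
        return score > 0
```

A real filter would weigh headers (source address, subject) as well as content, as suggested above, but even this sketch shows the shape of the idea: the agent's rules come from observation rather than hand-written configuration.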
StupidAgent. Smart would not require you to configure it to run for each site - it would just know. I think SmartAgents should be distinguished by not needing anything but very general configuration ("put your data files here. This is my proxy. My name is Bob. Be extra careful with my mail so I don't lose anything important - I don't mind an occasional spam."). StupidAgents, on the other hand, need very specific config: "move any incoming mail originally sent to bugtraq@* into folder bugtraq" or "watch the following sites: http://www.usemod.com/cgi-bin/mb.pl?RecentChanges, http://www.brunching.com, .... and tell me when any one of them changes". Of course, I doubt there will ever be a sharp line between the two. What is and is not smart will always be in flux. --ErikDeBill
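The bugtraq example above maps almost directly onto a procmail recipe (standard procmail syntax; the folder name is just the one from the example):

```procmail
# ~/.procmailrc fragment: file mail originally sent to bugtraq@*
# into the "bugtraq" folder.  ^TO_ is procmail's magic token that
# matches the usual destination headers (To:, Cc:, etc.).
:0:
* ^TO_bugtraq@
bugtraq
```

One hand-written recipe per mailing list: the very definition of a StupidAgent.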
My strong preference is for a stupid functional layer, with an optional smart layer on top of it. I want to be able to generate configuration files manually (or copy them from other machines, or restore them from backups). One crucial feature is the ability to disable any "smart" processing, either globally, or in special situations. Many people have learned to dislike agents because they couldn't easily (and permanently) get rid of annoyances like the Microsoft Office Paperclip-assistant.
I'd also like to see more smart suggestions and wizards rather than automatic actions. These suggestions don't require any kind of AI--they simply require observation of how people really interact with programs. For instance, consider a mail-handling wizard which first asks whether a message is from a mailing list or an individual. If the mail is from a mailing list, a message rule might look for To: and CC: headers (rather than the From: line used for individual email). After all the information is gathered, the rule could be shown and explained to the user, who could accept or reject it. There are many frequent tasks that could be greatly aided by a bit more human intelligence.
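Such a wizard's rule proposal could be sketched like this (a rough illustration only; the function, the rule dictionary, and the folder-naming convention are all invented, while the To:/Cc:-versus-From: distinction comes from the paragraph above):

```python
def propose_rule(headers, is_mailing_list):
    """Given a message's headers and the user's answer to 'is this
    from a mailing list?', propose a filing rule that the user can
    then inspect, accept, or reject."""
    if is_mailing_list:
        # List traffic is usually addressed to the list, not to you,
        # so match on To:/Cc: rather than From:.
        target = headers.get("To") or headers.get("Cc", "")
        return {"match_header": "To/Cc",
                "pattern": target,
                "action": "file into folder '%s'" % target.split("@")[0]}
    # Individual mail: match on the sender instead.
    sender = headers.get("From", "")
    return {"match_header": "From",
            "pattern": sender,
            "action": "file into folder '%s'" % sender.split("@")[0]}
```

The key point is the last step: the proposed rule is shown and explained, and nothing happens until the user accepts it - a suggestion, not an automatic action.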
Also, on the newsreader-writing topic, a major point of the article was to add functionality to an existing newsreader rather than recreating the whole package. On the other hand, sometimes it is better to start over. For instance, I wouldn't recommend that someone start with rn/trn/strn if they were writing a graphical newsreader--the old *rn codebase has a huge number of rather dated assumptions about character terminals. Finally, writing your own newsreader (or any other common project) is a fine hobby, and can be far more fun than just sitting around watching TV. It's unlikely to be more than that, however, unless you put a large amount of non-fun work into it. --CliffordAdams (author of strn, who now uses [Xnews] (for Win32) on the rare occasions he reads UseNet)