There are examples of IntelligentAgents, such as the price-finder of Best Book Buys [2] or the suggestion features of Amazon or BN.com, but since they are reactive rather than proactive, I don't think they can be considered autonomous, and so they aren't really IntelligentAgents. Perhaps StupidAgents, but that's fine with me. Some of my best agents are stupid. --DaveJacoby
There was a large sect of philosophy and psychology earlier this century (cf. B. F. Skinner) that believed that minds are purely reactive (well, stimulus-response-reinforcement). Nowadays, I don't think anyone believes them.
Here's a paper discussing what is and isn't an agent, as defined by ArtificialIntelligence.
In reading the above paper, I find a comment about anthropomorphism that defines an agent [3] (Dictionary:anthropomorphism), and it offers a predictive mail-sorter that decides what to throw away based on what you do as an example of a non-agent. Since e-mail sorting is pretty much my definitional StupidAgent, I propose that we can separate computer-aided agentry into two groups, FunctionalAgents and ConversationalAgents. Of course they would get defined on those pages, should they get defined at all, but in a nutshell, a FunctionalAgent is an agent that performs a function, while a ConversationalAgent is one where rich interaction with a person is required. A MUD bot (such as Julia in the paper) or something like Eliza++, taking computerized Rogerian therapy one step further, would be a ConversationalAgent. Similarly, an ExpertSystem uses ArtificialIntelligence techniques to present information and insight to someone, but except in Star Trek: Voyager, you can't wire an ExpertSystem up to be an expert. --DaveJacoby
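
To make the FunctionalAgent / ConversationalAgent split concrete, here is a minimal Python sketch. It is purely illustrative: the class names, keyword rules, and reply patterns are invented for this page, not taken from the paper. A rule-based mail-sorter stands in for a FunctionalAgent, and a tiny Eliza-style responder stands in for a ConversationalAgent.

 import re

 class MailSorterAgent:
     """FunctionalAgent: performs a task (sorting mail); no dialogue needed."""

     def __init__(self, junk_keywords):
         # Hand-written rules here; a real sorter would learn them from what you do.
         self.junk_keywords = junk_keywords

     def sort(self, message):
         body = message.lower()
         if any(word in body for word in self.junk_keywords):
             return "trash"
         return "inbox"

 class ElizaAgent:
     """ConversationalAgent: its whole job is rich interaction with a person."""

     # A couple of Rogerian-style reflection patterns.
     patterns = [
         (re.compile(r"\bi am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
         (re.compile(r"\bi feel (.+)", re.IGNORECASE), "Tell me more about feeling {0}."),
     ]

     def reply(self, utterance):
         for pattern, template in self.patterns:
             match = pattern.search(utterance)
             if match:
                 return template.format(match.group(1))
         return "Please go on."

 if __name__ == "__main__":
     sorter = MailSorterAgent(junk_keywords=["viagra", "lottery"])
     print(sorter.sort("You may already have won the lottery!"))  # -> trash
     print(sorter.sort("Meeting moved to 3 pm"))                  # -> inbox

     eliza = ElizaAgent()
     print(eliza.reply("I am tired of sorting mail by hand"))
     # -> Why do you say you are tired of sorting mail by hand?

The point of the contrast is that the sorter is judged only by what it does with your mail, while the Eliza-style agent is judged by how the conversation feels.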