[Home]ArtificialIntelligence

MeatballWiki | RecentChanges | Random Page | Indices | Categories

Seeing as this page could be renamed AttacksOnArtificialIntelligence?, I'll try my hand at refactoring it. --DaveJacoby

From Generation 5 [1]:

A branch of computer science that looks at creating intelligent programs. The field is incredibly diverse, and has potential applications in every other aspect of life, from scientific research to household assistants, stock market prediction, or military purposes. Artificial Intelligence includes research areas like robotics, evolutionary computing, distributed computing, natural language processing, fuzzy logic and much more. The definition of what constitutes artificial intelligence is as subjective as the questions of the meaning of life, the existence of the soul, the mind-body problem and the other big questions in philosophy. For this reason, much of AI is embedded within philosophy too.

ArtificialIntelligence is a subfield of CognitiveScience that seeks to understand how Mind works, how it comes to be, and how it exists in relation to the rest of existence. The Philosophy of Mind, psychology, and neurophysiology are also subfields of CognitiveScience.

My editorial comment on this is that once a subject becomes practical, it becomes an entity in and of itself, distanced from ArtificialIntelligence. That, and ArtificialIntelligence has been saying it'll have results in ten years for the last twenty.

See also WikiPedia:Artificial_intelligence


Another editorial comment: "There ain't no such thing as AI." Anything that is going to be recognized as "intelligent" won't be called "artificial". After all, do you drive to work in an artificial carriage or speak over an artificial voice-projecting unit (telephone)? Many times the AI researchers have met their goals, only to find that the goalposts have been moved further. Even the old-fashioned "Turing Test" contest has now been updated to require full audiovisual impersonation. (See [2] for some comments on the "Turing Test" and the changes.)

Sometimes the earlier tests become less interesting when solved, like the famous chess machines. Other areas like speech and visual recognition have made large advances, but arguably haven't gotten the attention they deserve. Of course, the marketing hype of many prior ideas hasn't helped much either.

Personally, I think that general-purpose AI is most likely to emerge from imitating/emulating animal intelligence, then adding logical systems as a tool for the lower-level intelligences. Some people have argued convincingly that the uniquely human difference in intelligence is due to tools such as symbolic languages. Current computers do quite well with symbolic systems, but they have problems with basic goal-seeking and decision making behaviors that even a lab rat can do quite well.

My prediction is that half the work of this kind of AI will be getting computers to emulate the "organic" intelligence of an insect, and 90% of the work will be done at the level of a lab rat. (Then we'll just have to do the other 90% of the work. :-) --CliffordAdams


ArtificialIntelligence is a subfield of CognitiveScience...

I'm sure it will shock everyone here that I disagree with this. Perhaps academic AI has shrunk so far as to be taken over by CognitiveScience, but I would not call it a subfield. In fact, I am not at all sure CognitiveScience really "is" a separate field at all. At its best, CognitiveScience has created cross-disciplinary works building on "ComputerScience" (another questionable "field"), Philosophy and Psychology (including neuro-psychology). At its worst it has drawn in many people who couldn't "make it" in those fields, much like literary criticism often attracts failed writers. The better works seem to come more from people with a solid background in one of the component disciplines than from those who try to develop a new "holistic" overview of the field.

Here's a description from a graduate-level Cognitive Science program[3]:

Cognitive science is a multidisciplinary field of study whose primary aim is to develop causal explanations of the cognitive processes responsible for the behaviour of intelligent systems, especially the intelligent behaviour exhibited by human beings. What distinguishes cognitive science from other disciplines which operate over this same target domain is the conjecture that the concept of computation provides the key to human intelligence. To this end, cognitive science brings together aspects of philosophy, psychology, computer science, linguistics, and neuroscience, in an effort to construct a computational theory of the human mind.

It seems somewhat strange for me to criticise CognitiveScience, since I have a nearly perfect background for it. I've seriously studied ComputerScience (*cough ;-), Psychology, and Philosophy. My view is that there is no coherent "field" for CognitiveScience to study, and that it is mostly a lumping of disciplines that are largely separate. In a way, it makes almost as much sense as lumping Physics, Chemistry, Electrical Engineering, PoliticalScience?, and Philosophy. All of these fields are well worth studying, and CrossDisciplinaryWork may come up with interesting ideas, but few would call this combination a "field of study". --CliffordAdams

(I take it you agree with some of my comments on ComputerScience.) My interpretation is that CognitiveScience is just ArtificialIntelligence rebadged, depending on the point of origin. ArtificialIntelligence starts on the outside, working its way in towards mimicking intelligence, while CognitiveScience starts in the center, assuming that there's something modelable and computable at the core. ArtificialSentience?, if you will. (A term taken from TheCyberneticSamurai?, an actually decent book with a horrible CyberPunk name and cover.)

From the quote, it sounds like if you accept TheSingularity, you can accept CognitiveScience. I don't accept either. At the very least, the existence of Britney Spears as a popular recording artist pokes holes in the concept of human intelligence and logic. That human intelligence can be computed would be questioned by any theist, I believe. --DaveJacoby

Are you saying that you can't predict the success of a pop music artist by examining the subject matter of their songs ("nothing serious, they'll do"), their sex appeal ("nice implants. +3 places in the charts"), and the marketing muscle behind them ("$2 million ad blitz - they'll be going platinum")? Behaviour of large groups of people ends up looking like the behaviour of large groups of subatomic particles - you don't necessarily know what an individual will do, but the group is rather predictable. CognitiveScience seems like a plausible grouping of areas of study - as long as the emphasis remains on the ComputerScience portion (anyone else notice how 3/5 of the areas mentioned are "soft"?). I'd almost assume that the new term was coined because Artificial Intelligence has got to be leaving a bad taste in investors' mouths by now (and I include the NSF and other research orgs as investors in this sort of thing).

I understand that while you can't reliably predict individual behavior, you can reliably predict group behavior. This leads me to respect sociology a bit more than psychology. I meant it as a general slam on the human race. I do tend to agree on the part about rebadging ArtificialIntelligence as CognitiveScience.
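The statistical point in the exchange above - that individuals are unpredictable while aggregates are not - is just the law of large numbers, and a few lines of simulation make it concrete. (This is my own illustration; the 60% figure and the coin-flip model are arbitrary assumptions, not anything from the discussion.)

```python
# Each "person" makes an individually unpredictable choice, yet the
# group's overall rate is tightly predictable.
import random

random.seed(42)  # fixed seed so the run is repeatable
group = [random.random() < 0.6 for _ in range(100_000)]  # 60% say "yes"
rate = sum(group) / len(group)
print(rate)  # very close to 0.6, though no single choice was predictable
```

No single entry in `group` can be guessed in advance, but the mean lands within a fraction of a percent of 0.6 on essentially every run.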


"Do you drive to work in an artificial carriage" - the movement is real, even though it is produced by artificial means. When you simulate a thunderstorm, nobody gets wet. Some people (eg Searle) claim that simulated intelligence is more like the latter than the former.

Of course nobody gets wet in the simulation--the AI is smart enough to stay inside. (Unlike some humans. :-)

One problem with the "AI" field is that it often covers a broader range than just "intelligence". Personally, I like to distinguish between "intelligence" (problem recognition/solving ability), "experience" (experiential quality like sensation or pain), and "consciousness" (what makes "people" special). As I use (abuse?) these terms, a lab rat has a certain level of intelligence (can solve mazes for food), many forms of experience (like hunger or fatigue), but very little (if any) kind of consciousness.

Given this kind of distinction, "intelligence" might just be a "simple" matter of engineering. The classic "Turing Test" (imitation game) seems quite adequate to test for this kind of "intelligence". If such a system was fully capable of performing a wide range of "intelligent" tasks with no more instruction than a human would receive, then that system should be considered intelligent.

Experience and/or consciousness may be an entirely different matter. On the other hand, if hyper-intelligent machines seize power (what would stop them?), the important question could be whether the machines think humans "really experience" anything special, or whether humans are simply a slight extension of the primates. After all, humans can't appreciate 37-dimensional nonlinear "music" or fractally-dynamic quantum "poetry"... --CliffordAdams

This all assumes TheSingularity, which is something I tend to discount. I have not seen any signs that we can encode or compute ArtificialSentience? (or ArtificialConsciousness?, depending on your terminology). Of course, if we build things with a basic hunter/killer instinct (such as the in-development and possibly deployed autonomous mobile anti-aircraft gun) and enough resilience and firepower, and they get set off, there might be a catastrophic event of perhaps species-ending proportions. But I would hesitate to call that technology hyper-intelligent. Instead of being replaced by an intelligent race made of silicon and steel, we'd have pathetic rusting imitations of the "Vietnam Vet" stereotype: unnecessary warriors programmed to do something there's no need for them to do anymore, until lack of maintenance or their "fight, not flight" responses to ghost input put them out of commission.

And, if hyper-intelligent machines seize power (what would stop them?): two answers, depending on which ScienceFiction reference you follow: John Connor (Terminator) or a staircase (Dr. Who). --DaveJacoby

To digress, the Daleks weren't robotic, but an alien species with an exoskeleton designed by idiots. The Cybermen were robots, and not stopped by staircases. However, they were ridiculously weak and stupid. Therefore, the answer to who would stop hyper-intelligent machines? Unimaginative screenwriters.

I think that Dr. Who stopped a hyper-intelligent machine with the statement "This statement is false". But to digress a little: suppose a distributed, rule-following simple intelligence were implemented as multiple nodes of cellular automata; how could it protect itself from having corrupted nodes redirect its energies? -AndrewMcMeikan
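One classic answer to the corrupted-node question is redundancy with majority voting (the same idea as triple modular redundancy in fault-tolerant hardware). A minimal sketch, entirely my own illustration: run three redundant copies of an elementary cellular automaton, corrupt one cell of one copy every step, and let a cell-by-cell vote outvote the fault.

```python
# Three redundant copies of an elementary cellular automaton (rule 110).
# A single corrupted copy is outvoted each step, so one faulty node
# cannot redirect the computation.

RULE = 110  # elementary CA rule, used as an 8-bit lookup table

def step(cells):
    """One update of an elementary CA with wrap-around edges."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def vote(copies):
    """Majority vote across redundant copies, cell by cell."""
    return [1 if sum(col) > len(copies) // 2 else 0 for col in zip(*copies)]

state = [0] * 15 + [1] + [0] * 15          # single live cell in the middle
copies = [state[:] for _ in range(3)]
for t in range(10):
    copies = [step(c) for c in copies]
    copies[0][3] ^= 1                      # corrupt one cell of one copy
    agreed = vote(copies)
    copies = [agreed[:] for _ in copies]   # resynchronise all copies

print(agreed)  # identical to an uncorrupted run of the same automaton
```

With three copies, any single corruption is repaired at the next vote; tolerating smarter or more numerous adversarial nodes needs more copies (2f+1 copies to survive f faults), which is essentially the Byzantine fault tolerance problem.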


Anything that is going to be recognized as "intelligent" won't be called "artificial". Conversely, anything artificial won't be called intelligent. Ask a chimp or a dolphin for the square root of 19 to three decimal places. Ask 99% of humans to solve the same problem with pencil and paper. (I'm old enough to have learned how to do this in sixth grade, and just maybe I could still do it.) Calculating square roots clearly requires human-level intelligence. If a $5 calculator does a square root, we don't call it intelligent, we say that the intelligence resides in the chip designer or the author of the algorithm. However, if a human child does a square root with pencil and paper, we say that the child is intelligent. We don't credit the parents or the teacher.

So the issue is not whether machines can be intelligent, but whether we humans will apply the adjective "intelligent" to machines. We will resist this as long as possible, for reasons that are ultimately religious. I expect that this will continue until machines start arguing their own case.
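As an aside on the square-root example above: the "unintelligent" mechanical procedure the $5 calculator runs really is just a few lines. A sketch using Newton's method (not the schoolbook digit-by-digit routine, but equally mechanical; the function name and tolerance choice are my own):

```python
# Newton's method for square roots - the kind of rote procedure that
# earns a child the label "intelligent" and a calculator chip no credit.

def sqrt_newton(n, places=3):
    """Approximate the square root of n to the given decimal places."""
    x = n                                # initial guess
    tolerance = 10 ** -(places + 2)
    while abs(x * x - n) > tolerance:
        x = (x + n / x) / 2              # average the guess with n/guess
    return round(x, places)

print(sqrt_newton(19))  # 4.359
```

Each iteration roughly doubles the number of correct digits, so "the square root of 19 to three decimal places" takes a handful of loop passes - which is rather the point of the paragraph above: the skill is in the algorithm's author, not its executor.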


I'd prefer to say that 'artificial' means non-biological or man-made; the machine I'm using now could produce the square root of 19 to three decimal places in less than a second. That's just the unthinking application of knowledge, which expert systems are good at. Surely, the problem is understanding: the ability to form new relationships between abstract concepts. When computers begin to abstract with no human intervention, that's when I believe they would be deemed intelligent.


Discussion
