Artificial Intelligence

Intelligence means capability for analysis (I have no qualifications in Western philosophy). What capacity we have has not been created in us by anyone else. That is what I meant when I said that we are not artificial intelligence. I did mean that we are only some intelligence or consciousness.
 
Intelligence means capability for analysis .... I did mean that we are only some intelligence or consciousness.
Some?

from Merriam-Webster...
Definition of INTELLIGENCE

1 a (1) : the ability to learn or understand or to deal with new or trying situations : reason; also : the skilled use of reason (2) : the ability to apply knowledge to manipulate one's environment or to think abstractly as measured by objective criteria (as tests) b Christian Science : the basic eternal quality of divine Mind c : mental acuteness : shrewdness

2 a : an intelligent entity; especially : angel b : intelligent minds or mind <cosmic intelligence>

3 : the act of understanding : comprehension

4 a : information, news b : information concerning an enemy or possible enemy or an area; also : an agency engaged in obtaining such information

5 : the ability to perform computer functions
wheee....

5...the ability to perform computer functions.... would this make all computers intelligent? AI?

3...comprehension? understanding? Is there an animal/being that does NOT have intelligence??

2...angels? cosmic intelligence??
 
Again, Penrose and a lot of heavy hitters agree with me when we say "won't ever happen".

Maybe these heavy hitters have over-inflated egos? But in reality they could easily be replaced by a simple shell script :rolleyes:
 
1...the ability to perform computer functions.... would this make all computers intelligent? AI?
2...comprehension? understanding? is there an animal/being that does NOT have intelligence??
3...angels? cosmic intelligence??
I do not go by any dictionary. Let me put in my definitions.

1. A computer performs many functions which the human brain does not normally do. Can a brain do image manipulation? (A toy sketch of what I mean appears after this list.) At the same time, the brain does many things which computers have so far not been able to do. 'Jump find', in simple words.
2. All animals which have brains have sufficient intelligence for their needs.
3. Bullshit.
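
To make point 1 a bit more concrete, here is a minimal Python sketch of pixel-by-pixel image manipulation, the sort of bulk number-crunching a computer does trivially and a brain does not; the tiny 2x2 "image" is invented purely for illustration.

```python
# A tiny 2x2 "image": each pixel is an (R, G, B) triple of 0-255 intensities.
image = [
    [(255, 0, 0), (0, 255, 0)],
    [(0, 0, 255), (255, 255, 255)],
]

def invert(img):
    """Flip every channel of every pixel: 255 becomes 0, 0 becomes 255, and so on."""
    return [[tuple(255 - c for c in pixel) for pixel in row] for row in img]

for row in invert(image):
    print(row)   # e.g. the red pixel (255, 0, 0) comes out as cyan (0, 255, 255)
```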
 
Well, NCOT. I really want to know how you get around the problems of causality, cognition and consciousness. See, if one can cause a hand to be raised or lowered or turned over from palm to back at the behest of will, where is this first cause in the instance of a total brain replacement? Does one component somehow magically raise a computer from the status of a manipulator of 1s and 0s to causal agent?

A computer quickly manipulates things, but it does not understand the way a bilingual human being understands both (say) English and Hopi and manipulates symbols in both, or from one to the other. He or she can go beyond the symbols -- say, create a new word in Hopi for time, or independently recreate Gödel's Theorem. This understanding (cognition) is a kind of meta-analysis not available to a computer (which by definition cannot be greater than the sum of its programming); a toy sketch of this kind of blind symbol shuffling appears at the end of this post.

Finally, there is the hard problem of consciousness. Even if we know where in a brain something is done (as when neuroscientists follow risk analysis), and we can see that this area and that area are firing (via an fMRI), and we can infer that risk is a matter of acceptance/avoidance, probability, and impact/consequence (because of the verbal correlates the subjects give us), we still do not share the subject's experience. Consciousness is private. We do not know how a bat's consciousness works. We know precisely how a computer works, and can share the limited experience when a bit flips.

For these three big reasons (causality, cognition and consciousness), it is not possible to have "strong" AI (the creation of an artificial, human-like consciousness). "Weak" AI has been around for a long time. We have many machines that pass the Turing test, but none that think in the Minskian or HAL mode. It is the loops and the gaps between the material and mental worlds that forbid it (or at least really stack the deck).
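
To make concrete what "manipulating 1s and 0s" without understanding looks like, here is a minimal Python sketch of a lookup-table "translator"; the word list and stand-in tokens are invented for this example and are not real Hopi vocabulary.

```python
# A toy "translator" that only shuffles symbols: it swaps each known English
# word for a stand-in token from a lookup table, producing plausible output
# while understanding nothing about either language.
LEXICON = {
    "sun": "token_sun",       # placeholder tokens, not actual Hopi words
    "water": "token_water",
    "house": "token_house",
}

def translate(sentence: str) -> str:
    """Replace each word found in LEXICON with its token; pass the rest through."""
    return " ".join(LEXICON.get(word, word) for word in sentence.lower().split())

print(translate("The sun warms the water"))   # -> "the token_sun warms the token_water"
```

However large the lookup table grows, nothing in the program can coin a new word or notice what it is talking about; it remains exactly the sum of its table and its loop.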
 
I do not go by any dictionary.

3. Bullshit.

My definitions often vary from the conventional dictionary ones as well.

However, having a shared basis of understanding for the words we are using increases the likelihood that the interaction we call communication will have a modicum of value.
 
Well, NCOT. I really want to know how you get around the problems of causality, cognition and consciousness.... For these three big reasons (causality, cognition and consciousness) it is not possible to have "strong" AI.

It's not there yet, but it won't be long. And it's potentially just part of our evolution.
 
Forget bots and the Turing test: no one has even the vaguest theoretical notion of how to go about solving the frame problem, which is just one of many steps to AI.
 