Further to my post above:
The A.I. enthusiasts are all touting the amazing power that we will have when our hardware catches up to the processing power of the brain.
It is implied that the barrier to The Human Computer is purely a hardware one.
Also, it always seems that they want to "jump in at the deep end". Why start at the "pinnacle"...why try to emulate human intelligence?
If the barrier to "proper" A.I. is purely that of computational power, then why don't we have semi-intelligent computers around?
I mean, our current computers certainly far surpass the processing power of simpler animals' brains...yet we are just as incompetent at emulating their intelligence.
Why don't we have a believable robot dog at the moment? Sony's toy notwithstanding.
Take a look at the robotic "tinker toys" being used in A.I. research at the moment. Not to take away from that research...some very important work is being done...but the "intelligence" of these robots pales in comparison to something as simple as an insect.
If you look at current A.I. research, the vast majority of what you come across could probably be described as misguided at best.
Now, I am not a Luddite. I am not a nay-sayer. I love technology, and am always fascinated by what it has to offer the world. I have always been interested in how technology can redefine our experience of the world. I am studying Computer Science in college at the moment for precisely that reason.
I think the reason that I am so sceptical of the grandiose predictions of A.I. enthusiasts is that they are constantly focused on the speed/memory of the available hardware. It is assumed that the human computer is inevitable...it's just a matter of time.
They commit the damnable logical fallacy of building their whole case on a completely unfounded assumption.
What makes them so sure that intelligence can be recreated even in principle by a computer? This issue is very rarely, if ever, addressed by these people. Instead we are constantly quoted figures of Gigaflops and statements of Moore's Law. This to me is the epitome of being misguided...not seeing the wood for the trees.
Also, another major problem with the current A.I. phenomenon is the confusion in terms that it generates. A.I. research is one of the "In Vogue" and sexy areas of research, especially when it comes to looking for grants.
Because of this, many research projects, perfectly viable in their own right, end up being "dressed up" as A.I. research. This adds much unneeded confusion to the matter.
The biggest offender has got to be research on Expert Systems. To me, A.I. and Expert Systems are two fundamentally different topics, with nothing more than a tentative link between them.
People researching in these fields don't seem to possess the clarity of thought required to see what their ultimate goal is. Many people who believe they are working on A.I. are really only working on an over-hyped Expert System.
For instance, one of the great success stories of A.I. is Deep Blue, the chess machine that beat Garry Kasparov in 1997.
To me this is an impressive feat, and it really shows the incredible things that can be done with these machines. But it is hardly intelligent, is it?
I mean, you can’t claim that Deep Blue had any understanding of the game of chess.
The only thing that Deep Blue had was speed. It could search through enormous numbers of possible continuations of the game, and then make its choice based on which move evaluated as the most advantageous. It didn't play chess in an "intelligent" way. It didn't think, "Oh look...Garry is after moving his bishop...I think that he's about to try and take my rook...I had better move it out of that square", etc. You get my point.
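Just to make concrete what I mean by "only speed": the heart of a program like that is brute-force game-tree search, of which minimax is the textbook version. Below is a minimal sketch in Python. The Position class and its methods (legal_moves, apply, evaluate, is_terminal) are stand-ins I have invented for a real engine's internals, and a real program adds alpha-beta pruning, opening books and so on. Notice that nothing in it resembles "understanding"...it is exhaustive look-ahead plus a numeric score.

# Hypothetical interface: Position and its methods legal_moves(),
# apply() and evaluate() are made-up stand-ins for a real engine.
def minimax(position, depth, maximizing):
    # No understanding of chess here: just exhaustive look-ahead
    # to a fixed depth, then a numeric score of the resulting board.
    if depth == 0 or position.is_terminal():
        return position.evaluate()
    scores = [minimax(position.apply(m), depth - 1, not maximizing)
              for m in position.legal_moves()]
    return max(scores) if maximizing else min(scores)

def best_move(position, depth):
    # Pick the move whose subtree scores best for the side to move.
    return max(position.legal_moves(),
               key=lambda m: minimax(position.apply(m), depth - 1, False))

All of the "skill" lives in the evaluate() function and in how deep the machine can afford to search...which is exactly why raw speed was what mattered.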
I believe that creating human intelligence on a computer is not just difficult in practice, but impossible in principle. I do not believe that our brains are merely organic digital computers. That being said, I do not believe that it would be impossible, in principle, to create an intelligent machine. I don't see that there is anything "magical" in human intelligence. We should be able to create an artificial brain, at least in principle. However, this machine would not be a Turing Machine. It would have to take advantage of the various non-computational aspects of nature.
Whether or not we will actually be able to create such a hypothetical machine is, however, a different question. Certainly it will not happen in the immediate future. We are currently very much in the dark about so many of the vital aspects that would be involved in the construction of such a machine.
The first and most obvious problem is that we know so little about the operation of the brain. Now I'm not trying to claim that our aim should be to "reverse engineer" this machine; rather, A.I. research needs to be strongly coupled with neurophysiology research. Not enough cross-fertilization of ideas is going on at present, mainly because the disciplines are so very different in their current forms.
Secondly, and perhaps more importantly, there is our lack of knowledge of physics. Why do we assume that we currently have the physics needed to describe the actions of the brain? I think that we will need to come up with a unified physical theory before we can put any serious consideration into creating intelligence in the lab. In particular, we need a fuller theory of quantum physics, which, in my opinion, is in its current form nothing more than a "working model" that we can use until we come to a more fundamental understanding. (Such a claim would not make me popular among the current batch of quantum physicists, but there you go.) I would liken our current quantum theory to Newtonian gravity, which turned out to be only a limiting case of General Relativity.
For more on this point of view, that we will need a Grand Unified Theory before we can hope to fully understand the workings of the mind, I strongly recommend The Emperor’s New Mind by Roger Penrose.