For the moment I will present my views on the subject, without argument. Later I might post again with my reasons for these views.
Before I begin, some things that need to be cleared up:
First of all, I would like to make an important distinction between *computer* and *machine*. A computer is a type of machine; as a result, the terms are not synonymous. This is very important.
Second, I believe that referring to past achievements in Artificial Intelligence when discussing the possibilities of sentient machines is a poor way of looking at things.
When I look at these programs, I have to see them in two different lights:
I, the programmer, see them as incredible achievements in computer science: wonderfully insightful and subtle.
I, the conscious human, see them as extraordinarily crude. By no stretch of the imagination are these mechanical, unrefined, glorified calculators similar in any way to my mind.
For this reason I would avoid speaking of the achievements of AI (as impressive as they are) in discussions such as these
except, maybe, by way of analogy only.
Third, it is by no means certain that human brains are deterministic, even from a physicalist/materialist point of view. Quantum effects may come into play (see Roger Penrose and Stuart Hameroff for some interesting views on this). This would not really change anything from a free-will point of view, as I have previously argued on this board. So a better word than determinism would perhaps be causality, or something similar.
Fourth, determinism does not imply large-scale predictability. Even if this world operated through entirely known deterministic laws (which it doesn't), it does not follow that we could predict the future (even in principle). As a thought experiment, consider a super-computer which is fed a "snap-shot" of every particle in the entire universe, along with its position and momentum at that moment. (Already we are well beyond absurdity...regardless, this is only a thought experiment...) The computer is to carry out the deterministic laws on this incredible array of particles, and so predict the future. But you must bear in mind that this computer is a part of the universe, and so must contain a model of itself in its memory, and predict its own future. Of course THAT model must contain a model of itself, and so on to an infinite regress. In short, we can only predict the future of entirely closed systems of which we are not a part (and that is before we get into the 'details' of chaos theory, quantum theory, and the sheer intractability of such a massive calculation). Therefore *Minority Report*-style paradoxes are not something we need to worry about.
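To make the regress vivid, here is a minimal sketch in Python. Everything in it is made up for illustration: the two-particle "universe", the predict function, and the "+1" stand-in for the laws of physics.

```python
def predict(universe):
    """Advance every particle one tick under the stand-in 'laws of physics'."""
    next_particles = {name: state + 1
                      for name, state in universe["particles"].items()}
    # The predictor is itself part of the universe it models, so a complete
    # prediction must also contain the predictor's own prediction, and that
    # prediction must contain another, and so on without end.
    inner_prediction = predict(universe)
    return {"particles": next_particles, "predictor": inner_prediction}

universe = {"particles": {"p1": 0, "p2": 5}, "predictor": None}

try:
    predict(universe)
except RecursionError:
    print("The prediction never bottoms out: the model must contain itself.")
```

Of course a real predictor wouldn't literally recurse like this, but the point stands: any faithful model of a universe that contains the modeller must model the modeller modelling.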
Fifth, I have taken a slight liberty in the poll. The question is "Will man ever create thinking machines?". In my view it is pointless trying to predict the future. Man will certainly not create thinking machines any time "soon" (say, within the next two or three centuries), and attempting to predict what state our technology will be in at even longer time-scales is unbelievably futile. So, I have to remain agnostic on the issue of whether or not man will ever build such machines (it is very likely that we won't even see 2300, given the worldwide state of nuclear armaments!).
So, I have interpreted the question slightly differently, as: "Would it be possible for man to create thinking machines?", and my answer to that is a strong affirmative.
This of course results in an obvious question to pose to those people who voted no....how would you answer my question?
With these things cleared up, I will now give my reactions to the various questions that were posed.
First of all, I do not have any religious baggage, so the various religious questions that were raised pose no problem for me.
I believe that we are, as you say, "biological robots". Anyone who says otherwise is not taking the very pertinent implications of Darwinism seriously, or else is arguing from a religious point of view. If the former, then unless you have some incredible insight, and heaps of evidence, with which to bowl over all of established biological theory, you have no argument. If the latter...well, I don't want to go down that road YET AGAIN. But to me, saying "yeah well, my religion says this..." does not constitute an argument.
I believe that sentience is a necessary condition for passing the Turing Test; as such, if a machine could pass it, then I believe that we should accept that it is indeed conscious. However, it is plausible that we could have conscious machines before they can actually pass the Turing Test. After all, a dog, or an ape, or a very young child would not be able to pass such a test, and we all assume that they are indeed conscious.
I think that the viewpoint of consciousness as a black-and-white issue...splitting the world into conscious things and non-conscious things...is a very poor way to see things. In my mind, consciousness comes in all shades of grey. So an ape would most definitely be conscious, but not to the same extent as myself; a cat even less so; an insect just barely; and a bacterium not at all. There is no "charmed circle" of consciousness.
Consider times when you do indeed experience being "less" conscious...times when you are very drunk, or incredibly tired. You are still conscious, though not to the same extent that you are under normal circumstances. I am of course not suggesting that dogs run around in a drunken stupor, but rather simply pointing out that we can get at least some idea of consciousness coming in different shades of grey.
I think that concerning yourself with Asimov's laws with regards to conscious machines is way off. It trivialises how the mechanical mind would work. Thinking in terms of simple algorithms such as this is not the way to go about it. Ultimately I think that all of an intelligent machine's behaviour will be emergent phenomena...not explicitly coded rules (see the sketch below).
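To illustrate what I mean by emergence, here is a toy example in Python: Conway's Game of Life. It has nothing to do with how an actual machine mind would be built; the point is that nothing in the two update rules mentions a "glider", yet run them and a glider crawls diagonally across the grid.

```python
from collections import Counter

def step(live):
    """Apply the Game of Life rules to a set of live (x, y) cells."""
    # Count how many live neighbours each cell has.
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0))
    # A cell lives if it has 3 neighbours, or 2 and it was already alive.
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "glider": the rules never mention it, yet it moves across the grid.
cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for generation in range(5):
    print(f"generation {generation}: {sorted(cells)}")
    cells = step(cells)
```

The glider is nowhere in the code; it emerges from the rules. I expect the interesting behaviour of an intelligent machine to stand in the same relation to its low-level mechanism.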
Perhaps it would indeed be possible to enforce Asimov-style laws...as a high-level after-thought. In other words, taking the fully operational, intelligent machine, and then adding on some kind of inhibitor device. This indeed would be immoral, and we need not think of it only in terms of machines. Precisely the same thing could be done (in principle, of course) to a human being...crack open the skull and start rewiring neurons. Again, equally immoral. (The movie *A Clockwork Orange* deals with a similar idea.)
"Is biologically sentient life superior to synthetic sentient life?"
No. I see no reason to assume such a thing. Of course, from a religious point of view, this might seem hard to accept...
we were created by a perfect omnipotent god, unlike these vulgar machines.
Of course many theists had a hard enough time accepting the fact that we were not superior to all of the other "beasts". The thought that we were related to vulgar apes was incredibly blasphemous. Of course these days, all educated people accept the fact that not only are we related to apes, we in fact *are* apes.
"Would conscious machines be our property?"
Is your child your property? After all, you did 'produce' this child. You did not consciously design it, but regardless you built it from raw materials according to a genetic blueprint.
No, neither your child, nor your intelligent machine would be your property.