View Poll Results: Will man ever create thinking machines?
Yes: 10 (71.43%)
No: 4 (28.57%)
Voters: 14. You may not vote on this poll
#1
Insane
Location: Pennsylvania
The Philosophy of AI
The American Heritage Dictionary defines artificial intelligence as the ability of a computer or other machine to perform those activities that are normally thought to require intelligence. While, as far as the general populace knows, we have yet to actually create a computer that possesses artificial intelligence, computer scientists around the world seek to do exactly that. In fact, talk-bots (programs designed to converse in natural language) and a variety of other primitive AI systems have been around for some time (1). Many philosophical issues arise out of this field, primarily where ethics is concerned, but there is also plenty to be discussed in epistemology and metaphysics.
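The talk-bots mentioned above are far simpler than they appear in conversation: ELIZA-style programs work by matching patterns against the user's input and filling response templates, with no understanding at all. A minimal sketch of the technique in Python (the rules here are illustrative stand-ins, not ELIZA's actual script):

```python
import re

# Illustrative ELIZA-style rules: a regex pattern paired with a response
# template. The real ELIZA script used ranked keywords and richer
# reassembly rules; this is only the core idea.
RULES = [
    (re.compile(r"\bI am (.*)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.*)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bbecause\b", re.IGNORECASE), "Is that the real reason?"),
]
DEFAULT = "Please tell me more."

def respond(utterance: str) -> str:
    """Return the response for the first rule whose pattern matches."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return DEFAULT
```

For example, `respond("I am worried about machines")` turns the matched fragment back into a question, which is exactly why such programs feel eerily conversational while remaining, mechanically, trivial.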
Speaking of metaphysics, that is a good place to start. Perhaps the most obvious metaphysical assumption of artificial intelligence is that sentience exists; if it did not, it would be futile to try to replicate it. Furthermore, it is also assumed that not everything which exists has sentience, for it would be foolish to try to instill sentience in something that already had it. The last of the big assumptions mirrors the first: just as it would be futile to replicate something which does not exist, it would also be a waste of time to try to reproduce sentience if it cannot be manufactured.

With those assumptions safely dealt with, consider the metaphysical implications of artificial intelligence. If it should come to pass that sentience cannot be replicated, it raises a very important question: is the whole greater than the sum of its parts? Clearly, if machinery alone cannot make an ersatz sentient being, there is something more to it than the physical. Moreover, if that is so, then determinism might be kaput. After all, if you admit that human awareness is not solely composed of physical components, how can you say that man is enslaved by the laws of physics? On the other hand, if we do truly create artificial intelligence, we are forced to question our views of self. Are we just biological robots? If we are, then determinism is perhaps true after all, in which case all sorts of theological questions come to the forefront, and from them rises a panoply of ethical questions which will be discussed later.

Before moving into ethics, though, a couple of epistemological issues should be addressed. First of all, it must be assumed that one can differentiate between sentient and non-sentient life. While you could perhaps create sentient life without knowing it, the computer scientists of the world would surely assert that you would be able to distinguish it at some point, even if not immediately.
In fact, the renowned Alan Turing proposed a test that he called "The Imitation Game" (2). This test, sometimes referred to as the Turing Test, is alleged to do exactly that: distinguish the sentient from the non-sentient. Whether or not it is truly possible to determine sentience, without such a determination we may never say with confidence that we have created a thinking machine.

Now, with formalities dealt with, the most interesting portion of this discourse can begin. There are untold numbers of ethical issues which surround artificial intelligence like clouds surround a mountaintop. Perhaps one of the greatest contributors to this field is author Isaac Asimov; a great many of his books and short stories revolve around robots and the ethics surrounding them. The most obvious question is this: having created thinking machines, are we within our rights to place additional restraints upon them? Asimov placed three such restraints on his robots. The so-called Three Laws of Robotics placed human life before robotic life, essentially making robots slaves. This raises the question: is biologically sentient life superior to synthetic sentient life, and if so, why? On one hand, being manufactured, robots are, in a sense, property to begin with; but does sentience entitle them to the same inalienable rights which men are credited as having from birth? In other words, is being sentient the same as having a soul?

Back to an earlier question, though: what if sentience is deterministic? The first question is, does this preclude the existence of divinity? If so, then it is an inescapable conclusion that morality is a socially constructed system, and thus bereft, in any true sense, of sound reasoning other than the perpetuation of society itself. More esoterically, if sentient beings are deterministic in nature, then we can, without error, calculate the future. This so-called psychohistory raises questions asked in the movie Minority Report.
Knowing the future, are we obligated to change it? I know not the answer to that, because changing the future seems to defy the concept of determinism.

There, in a nutshell, is the philosophy of artificial intelligence. Insofar as metaphysics and epistemology are concerned, most of the issues are rather straightforward, virtually to the point of common sense, although questions about the truth of determinism form a nice segue into all sorts of interesting ethical questions. Also in the field of ethics are questions about the equality of man and machine. Will there, some time from now, be suffrage for people of the digital persuasion? Only time will tell.

1. ELIZA, the original talk-bot, can be found here: http://www-ai.ijs.si/eliza/eliza.html and Mr. Mind, a talk-bot I find very realistic, can be found here: http://www.mrmind.com/
2. For the full text of Turing's original article, "Computing Machinery and Intelligence," visit this website: http://cogprints.ecs.soton.ac.uk/arc...00/turing.html
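The imitation game discussed above has a simple protocol structure: a judge converses with two hidden respondents, one human and one machine, and must say which is which; the machine succeeds when the judge guesses wrong. A hedged sketch of that structure (the function names and the judge interface are invented here for illustration, not Turing's own formulation):

```python
import random

def imitation_game(human_reply, machine_reply, judge, questions, rng=random):
    """Run one round of a Turing-style imitation game.

    The judge sees a transcript from two unlabeled respondents, A and B,
    and must name which one it believes is the machine. The machine
    'passes' the round if the judge names the wrong respondent.
    """
    # Hide the labels by randomizing which respondent is the machine.
    machine_is_a = rng.random() < 0.5
    reply_a = machine_reply if machine_is_a else human_reply
    reply_b = human_reply if machine_is_a else machine_reply

    transcript = [(q, reply_a(q), reply_b(q)) for q in questions]
    judge_says_a = judge(transcript)  # True means the judge names A as the machine
    return judge_says_a != machine_is_a  # True: the machine fooled the judge
```

Note that the test is purely behavioural: nothing in the protocol inspects what the respondents are made of, which is exactly why it sidesteps the question of detecting sentience directly.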
#3
Sky Piercer
Location: Ireland
For the moment I will present my views on the subject without argument. Later, I might post again with my reasons for these views.
Before I begin, some things need to be cleared up.

First of all, I would like to make an important distinction between *computer* and *machine*. A computer is a type of machine; as a result, the terms are not synonymous. This is very important.

Second, I believe that referring to past achievements in Artificial Intelligence when discussing the possibilities of sentient machines is a poor way of looking at things. When I look at these programs, I have to see them in two kinds of light. The programmer in me sees them as incredible achievements in computer science, wonderfully insightful and subtle. The conscious human in me sees them as extraordinarily crude: by no stretch of the imagination are these mechanical, unrefined, glorified calculators similar in any way to my mind. For this reason I would avoid speaking of the achievements of AI (as impressive as they are) in discussions such as these, except, maybe, by way of analogy only.

Third, it is by no means certain that human brains are deterministic, even from a physicalist/materialist point of view. Quantum effects may come into play (see Roger Penrose and Stuart Hameroff for some interesting views on this). This would not really change anything from a free will point of view, as I have previously argued on this board. So a better word than determinism would perhaps be causality or something similar.

Fourth, determinism does not imply large-scale predictability. Even if this world operated through entirely known deterministic laws (which it doesn't), it does not follow that we could predict the future, even in principle. As a thought experiment, consider a super-computer which is fed a "snap-shot" of every particle in the entire universe, along with each particle's position and momentum at that moment. (Already we are well beyond absurdity... regardless, this is only a thought experiment.) The computer is to carry out the deterministic laws on this incredible array of particles, and so predict the future.
But you must bear in mind that this computer is a part of the universe, and so must contain a model of itself in its memory and predict its own future. Of course THAT computer must have a model of itself, and so on to an infinite regress. In short, we can only predict the future of entirely closed systems of which we are not a part (and that is before we get into the 'details' of chaos theory, quantum theory, and the sheer intractability of such a massive calculation). Therefore Minority Report-like paradoxes are not something we need to worry about.

Fifth, I have taken a slight liberty with the poll. The question is "Will man ever create thinking machines?". In my view it is pointless trying to predict the future. Man will certainly not create thinking machines any time "soon" (say, within the next two or three centuries), and attempting to predict what state our technology will be in on even longer time-scales is unbelievably futile. So I have to remain agnostic on the issue of whether or not man will ever build such machines (it is very likely that we won't even see 2300, given the worldwide state of nuclear armaments!). I have therefore interpreted the question slightly differently, as: "Would it be possible for man to create thinking machines?", and my answer to that is a strong affirmative. This of course leads to an obvious question to pose to those people who voted no: how would you answer my question?

With these things cleared up, I will now give my reactions to the various questions that were posed. First of all, I do not have any religious baggage, so the various religious questions that were posed present no problem for me. I believe that we are, as you say, "biological robots". Anyone who says otherwise is not taking the very pertinent implications of Darwinism seriously, or else is arguing from a religious point of view.
If the former, then unless you have some incredible insight, and heaps of evidence with which to bowl over all of established biological theory, you have no argument. If the latter... well, I don't want to go down that road YET AGAIN. But to me, saying "yeah well, my religion says this..." does not constitute an argument.

I believe that sentience is a necessary condition for passing the Turing Test, so if a machine could pass it, then I believe we should accept that it is indeed conscious. However, it is plausible that we could have conscious machines before they can actually pass the Turing Test; after all, a dog, or an ape, or a very young child would not be able to pass such a test, and we all assume that they are indeed conscious.

I think that the viewpoint of consciousness as a black-and-white issue, splitting the world into conscious things and non-conscious things, is a very poor way to see things. In my mind, consciousness comes in all shades of grey. So an ape would most definitely be conscious, but not to the same extent as myself; a cat even less so; an insect just barely; and a bacterium not at all. There is no "charmed circle" of consciousness. Consider times when you do indeed experience being "less" conscious: times when you are very drunk, or incredibly tired. You are still conscious, though not to the same extent that you are under normal circumstances. I am of course not suggesting that dogs run around in a drunken stupor, but rather simply pointing out that we can get at least some idea of consciousness coming in different shades of grey.

I think that concerning yourself with Asimov's laws with regard to conscious machines is way off. It trivialises how the machine mind would work. Thinking in terms of simple algorithms such as this is not the way to go about it. Ultimately, I think that all of an intelligent machine's behaviour will be emergent phenomena, not explicitly coded rules.
Perhaps it would indeed be possible to enforce something like this as a high-level after-thought: taking the fully operational, intelligent machine and then adding on some kind of inhibitor device. This would indeed be immoral, and we need not think of it in terms of machines only. Precisely the same thing could be done (in principle, of course) to a human being: crack open the skull and start rewiring neurons. Again, equally immoral. (The movie A Clockwork Orange deals with a similar idea.)

"Is biologically sentient life superior to synthetic sentient life?" No. I see no reason to assume such a thing. Of course, from a religious point of view, this might seem hard to accept: we were created by a perfect omnipotent god, unlike these vulgar machines. But many theists had a hard enough time accepting the fact that we were not superior to all of the other "beasts". The thought that we were related to vulgar apes was incredibly blasphemous. These days, all educated people accept the fact that not only are we related to apes, we in fact are apes.

Would conscious machines be our property? Is your child your property? After all, you did 'produce' this child. You did not consciously design it, but regardless you built it from raw materials according to a genetic blueprint. No: neither your child nor your intelligent machine would be your property.
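As a footnote to the prediction thought experiment earlier in this post: the regress can be made concrete in a few lines of code. Any predictor whose model must include the predictor itself recurses without bound, while a closed system it stands outside of poses no problem. This is a toy sketch under obvious simplifying assumptions (the "universe" is just a dictionary of particle counters):

```python
def predict(universe):
    """Advance a toy 'universe' one step. If the predictor is itself part
    of the universe, it must also simulate its own act of prediction,
    which never bottoms out."""
    next_state = {name: count + 1 for name, count in universe["particles"].items()}
    if universe["contains_predictor"]:
        # To model a universe that contains the predictor, the model must
        # include the predictor predicting... an infinite regress.
        predict(universe)
    return next_state

# A closed system the predictor stands outside of: prediction works fine.
closed_system = {"particles": {"p1": 0, "p2": 5}, "contains_predictor": False}
print(predict(closed_system))  # {'p1': 1, 'p2': 6}

# An 'open' system that includes the predictor: the regress blows up.
open_system = {"particles": {"p1": 0}, "contains_predictor": True}
try:
    predict(open_system)
except RecursionError:
    print("regress: a predictor inside the universe cannot finish predicting it")
```

The `RecursionError` is Python's stack limit standing in for the in-principle regress; the point is structural, not computational.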
Last edited by CSflim; 03-18-2004 at 03:29 PM.
Tags |
philosophy |