Tilted Forum Project Discussion Community  

Old 05-16-2003, 04:25 AM   #1 (permalink)
I change
 
 
Location: USA
Your Matrix Life

What I like about this sort of exercise is that it tends to crack open the doors of perception just a bit more. Perceptually, conceptually, and experientially, there's no way to ascertain the degree of "authenticity" of our lives.

........................................

The Simulation Argument:
Why the Probability that You Are Living in a Matrix is Quite High


Nick Bostrom

Times Higher Education Supplement, May 16, 2003

The Matrix got many otherwise not-so-philosophical minds ruminating on the nature of reality. But the scenario depicted in the movie is ridiculous: human brains being kept in tanks by intelligent machines just to produce power.

There is, however, a related scenario that is more plausible and a serious line of reasoning that leads from the possibility of this scenario to a striking conclusion about the world we live in. I call this the simulation argument. Perhaps its most startling lesson is that there is a significant probability that you are living in a computer simulation. I mean this literally: if the simulation hypothesis is true, you exist in a virtual reality simulated in a computer built by some advanced civilisation. Your brain, too, is merely a part of that simulation. What grounds could we have for taking this hypothesis seriously? Before getting to the gist of the simulation argument, let us consider some of its preliminaries. One of these is the assumption of “substrate independence”. This is the idea that conscious minds could in principle be implemented not only on carbon-based biological neurons (such as those inside your head) but also on some other computational substrate such as silicon-based processors.

Of course, the computers we have today are not powerful enough to run the computational processes that take place in your brain. Even if they were, we wouldn’t know how to program them to do it. But ultimately, what allows you to have conscious experiences is not the fact that your brain is made of squishy, biological matter but rather that it implements a certain computational architecture. This assumption is quite widely (although not universally) accepted among cognitive scientists and philosophers of mind. For the purposes of this article, we shall take it for granted.

Given substrate independence, it is in principle possible to implement a human mind on a sufficiently fast computer. Doing so would require very powerful hardware that we do not yet have. It would also require advanced programming abilities, or sophisticated ways of making a very detailed scan of a human brain that could then be uploaded to the computer. Although we will not be able to do this in the near future, the difficulty appears to be merely technical. There is no known physical law or material constraint that would prevent a sufficiently technologically advanced civilisation from implementing human minds in computers.

Our second preliminary is that we can estimate, at least roughly, how much computing power it would take to implement a human mind along with a virtual reality that would seem completely realistic for it to interact with. Furthermore, we can establish lower bounds on how powerful the computers of an advanced civilisation could be. Technological futurists have already produced designs for physically possible computers that could be built using advanced molecular manufacturing technology. The upshot of such an analysis is that a technologically mature civilisation that has developed at least those technologies that we already know are physically possible, would be able to build computers powerful enough to run an astronomical number of human-like minds, even if only a tiny fraction of their resources was used for that purpose.

If you are such a simulated mind, there might be no direct observational way for you to tell; the virtual reality that you would be living in would look and feel perfectly real. But all that this shows, so far, is that you could never be completely sure that you are not living in a simulation. This result is only moderately interesting. You could still regard the simulation hypothesis as too improbable to be taken seriously.

Now we get to the core of the simulation argument. This does not purport to demonstrate that you are in a simulation. Instead, it shows that we should accept as true at least one of the following three propositions:

(1) The chance that a species at our current level of development can avoid going extinct before becoming technologically mature is negligibly small

(2) Almost no technologically mature civilisations are interested in running computer simulations of minds like ours

(3) You are almost certainly in a simulation.

Each of these three propositions may be prima facie implausible; yet, if the simulation argument is correct, at least one is true (it does not tell us which).

While the full simulation argument employs some probability theory and formalism, the gist of it can be understood in intuitive terms. Suppose that proposition (1) is false. Then a significant fraction of all species at our level of development eventually becomes technologically mature. Suppose, further, that (2) is false, too. Then some significant fraction of these species that have become technologically mature will use some portion of their computational resources to run computer simulations of minds like ours. But, as we saw earlier, the number of simulated minds that any such technologically mature civilisation could run is astronomically huge.

Therefore, if both (1) and (2) are false, there will be an astronomically huge number of simulated minds like ours. If we work out the numbers, we find that there would be vastly many more such simulated minds than there would be non-simulated minds running on organic brains. In other words, almost all minds like yours, having the kinds of experiences that you have, would be simulated rather than biological. Therefore, by a very weak principle of indifference, you would have to think that you are probably one of these simulated minds rather than one of the exceptional ones that are running on biological neurons.
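The shape of this indifference reasoning can be made concrete with a toy calculation. The numbers below are hypothetical placeholders chosen only to illustrate how lopsided the ratio becomes:

```python
# Toy illustration of the indifference reasoning (all numbers hypothetical).
biological_minds = 100e9       # rough count of humans who have ever lived
simulations_run = 1_000_000    # assumed number of ancestor-simulations
simulated_minds = simulations_run * biological_minds

# Your credence that you are simulated equals the fraction of all
# minds like yours that are simulated rather than biological.
p_simulated = simulated_minds / (simulated_minds + biological_minds)
print(p_simulated)  # ≈ 0.999999: overwhelmingly likely to be simulated
```

Even modest assumptions about how many simulations get run make the simulated minds outnumber the biological ones by many orders of magnitude, which is all the argument needs.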

So if you think that (1) and (2) are both false, you should accept (3). It is not coherent to reject all three propositions. In reality, we do not have much specific information to tell us which of the three propositions might be true. In this situation, it might be reasonable to distribute our credence roughly evenly between the three possibilities, giving each of them a substantial probability.

Let us consider the options in a little more detail. Possibility (1) is relatively straightforward. For example, maybe there is some highly dangerous technology that every sufficiently advanced civilisation develops, and which then destroys them. Let us hope that this is not the case.

Possibility (2) requires that there is a strong convergence among all sufficiently advanced civilisations: almost none of them is interested in running computer simulations of minds like ours, and almost none of them contains any relatively wealthy individuals who are interested in doing that and are free to act on their desires. One can imagine various reasons that may lead some civilisations to forgo running simulations, but for (2) to obtain, virtually all civilisations would have to do that. If this were true, it would constitute an interesting constraint on the future evolution of advanced intelligent life.

The third possibility is the philosophically most intriguing. If (3) is correct, you are almost certainly now living in a computer simulation that was created by some advanced civilisation. What kind of empirical implications would this have? How should it change the way you live your life?

Your first reaction might be to think that if (3) is true, then all bets are off, and that one would go crazy if one seriously thought that one was living in a simulation.

To reason thus would be an error. Even if we were in a simulation, the best way to predict what will happen next in our simulation is still to use the ordinary methods – extrapolation of past trends, scientific modelling, common sense and so on. To a first approximation, if you thought you were in a simulation, you should get on with your life in much the same way as if you were convinced that you were living a non-simulated life at the bottom level of reality.

The simulation hypothesis, however, may have some subtle effects on rational everyday behaviour. To the extent that you think that you understand the motives of the simulators, you can use that understanding to predict what will happen in the simulated world they created. If you think that there is a chance that the simulator of this world happens to be, say, a true-to-faith descendant of some contemporary Christian fundamentalist, you might conjecture that he or she has set up the simulation in such a way that the simulated beings will be rewarded or punished according to Christian moral criteria. An afterlife would, of course, be a real possibility for a simulated creature (who could either be continued in a different simulation after her death or even be “uploaded” into the simulator’s universe and perhaps be provided with an artificial body there). Your fate in that afterlife could be made to depend on how you behaved in your present simulated incarnation. Other possible reasons for running simulations include the artistic, scientific or recreational. In the absence of grounds for expecting one kind of simulation rather than another, however, we have to fall back on the ordinary empirical methods for getting about in the world.

If we are in a simulation, is it possible that we could know that for certain? If the simulators don’t want us to find out, we probably never will. But if they choose to reveal themselves, they could certainly do so. Maybe a window informing you of the fact would pop up in front of you, or maybe they would “upload” you into their world. Another event that would let us conclude with a very high degree of confidence that we are in a simulation is if we ever reach the point where we are about to switch on our own simulations. If we start running simulations, that would be very strong evidence against (1) and (2). That would leave us with only (3).

Nick Bostrom is a British Academy postdoctoral fellow in the philosophy faculty at Oxford University. His simulation argument is published in The Philosophical Quarterly. A preprint of the original paper is available at http://www.simulation-argument.com .

.................................

Fancy that!

Speculation abounds.
It's good, though, to see a logical structure elaborated with perhaps more persuasiveness than a hypothetically melodramatic film plot. Of course, both have their charm...
__________________
create evolution
ARTelevision is offline  
Old 05-16-2003, 05:16 AM   #2 (permalink)
42, baby!
 
 
Location: The Netherlands
Re: Your Matrix Life

Quote:
Originally posted by ARTelevision
Now we get to the core of the simulation argument. This does not purport to demonstrate that you are in a simulation. Instead, it shows that we should accept as true at least one of the following three propositions:

(1) The chance that a species at our current level of development can avoid going extinct before becoming technologically mature is negligibly small

(2) Almost no technologically mature civilisations are interested in running computer simulations of minds like ours

(3) You are almost certainly in a simulation.

Each of these three propositions may be prima facie implausible; yet, if the simulation argument is correct, at least one is true (it does not tell us which).
I think this list is incomplete. I propose a fourth option: "You are not in a simulation *and* there are many advanced civilisations that are interested in simulations of minds." The options are not mutually exclusive. Writing off the first two options does not automatically mean there is a near-infinite number of simulated minds, nor does it mean that *we* are simulated minds (not even probably!).

And this whole discussion isn't that new or revolutionary. It's a very old philosophical problem: how do you know that the world is real, if reality is always filtered and adjusted by your own mind? This simulation argument merely adds the idea that your mind might not be real either.

...which leads to the obvious question: does it really matter?
If we are in a simulation, we'll never know about it unless our designers (gods) performed some direct intervention, which would ruin the hypothetical simulation. Unless the whole simulation was aimed at testing what happens when simulated humans find out they're simulated, of course...

Even if this latter option was true, it is unlikely we'll ever see this, given the length of the simulation run, and the relative length of our simulated lives. Unless the simulation only started a few seconds ago, and every memory was planted by the designers...

(Dear designers: please insert Object_Girlfriend(Rich,Sexy,Smart,Nice) into my simulated life. Thank you.)
Dragonlich is offline  
Old 05-16-2003, 05:32 AM   #3 (permalink)
Tilted Cat Head
 
 
Administrator
Location: Manhattan, NY
ouch... my brain hurts....
__________________
I don't care if you are black, white, purple, green, Chinese, Japanese, Korean, hippie, cop, bum, admin, user, English, Irish, French, Catholic, Protestant, Jewish, Buddhist, Muslim, indian, cowboy, tall, short, fat, skinny, emo, punk, mod, rocker, straight, gay, lesbian, jock, nerd, geek, Democrat, Republican, Libertarian, Independent, driver, pedestrian, or bicyclist, either you're an asshole or you're not.
Cynthetiq is offline  
Old 05-16-2003, 05:53 AM   #4 (permalink)
Psycho
 
Location: Drifting.
Fascinating read, but what he is saying is somewhat impractical.

Dragonlich, as usual, has addressed the two things I wanted to pick on: primarily that this idea that we are a figment of someone else's imagination has been around for many years, and that his potential scenarios are somewhat incomplete.

Anyway, with regards to technology -

Maybe a very large bleeding edge distributed computer network could successfully simulate a single brain with its full functional capability. Factor in 6 billion such brains, plus some sort of massive background system that applies all the laws of our universe to everything, and you'd have one incredibly powerful system requirement... Something that I don't think we could achieve for a very very long time. Then again, creating color graphics on a desktop workstation was impossible 20 years ago =)

Actually, come to think of it, it's a good explanation for all the supernatural tales circling around... bugs in the code =)
Loki is offline  
Old 05-16-2003, 07:17 AM   #5 (permalink)
Insane
 
Location: The Local Group
This is a little off topic, but I think it plays in with the "simulation" theory.

A subheading to all of this is the idea of free will. Do we really have free will/free choice? Physicists will say no, because quantum theory states that everything is neither free nor deterministic; rather, everything is probabilistic.

We see a solution/route that we perceive as a way out, but even that route contains more roots of control. I guess what I'm trying to say is that the solution itself may be a means of control, and that’s where I am placing my bets on the plot of the movie.

Free will as we know it may only be a low-level manifestation of another “fake reality”, and as mentioned above, it really does not matter because we cannot ever know the real reality.
__________________
If liberty means anything at all, it means the right to tell people what they do not want to hear.
Simple_Min is offline  
Old 05-16-2003, 07:35 AM   #6 (permalink)
Insane
 
Location: Finland
That was scary, in a way.
alpha is offline  
Old 05-16-2003, 08:33 AM   #7 (permalink)
Insane
 
The premise seems kinda similar to the movie "The 13th Floor".

Interesting read, thanks.
Unknown Poster is offline  
Old 05-16-2003, 08:37 AM   #8 (permalink)
I change
 
 
Location: USA
OK, that was the Matrix 101 Version, Here's the Matrix 401 Explanation

Alrighty then, got your thinking caps on?

This is the original version of Bostrom's arguments.
It's a bit detailed, but it's nice to have access to the fully fleshed out version of our potentially non-corporeal lives.

.............................................................................
ARE YOU LIVING IN A COMPUTER SIMULATION?

BY NICK BOSTROM
Department of Philosophy, Oxford University

Homepage: http://www.nickbostrom.com
[First version: May, 2001; Final version July 2002]
This is a preprint of the final version which appeared in Philosophical Quarterly (2003), Vol. 53, No. 211, pp. 243-255.
[This document is located at http://www.simulation-argument.com]

ABSTRACT
This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation. A number of other consequences of this result are also discussed.

I. INTRODUCTION
Many works of science fiction as well as some forecasts by serious technologists and futurologists predict that enormous amounts of computing power will be available in the future. Let us suppose for a moment that these predictions are correct. One thing that later generations might do with their super-powerful computers is run detailed simulations of their forebears or of people like their forebears. Because their computers would be so powerful, they could run a great many such simulations. Suppose that these simulated people are conscious (as they would be if the simulations were sufficiently fine-grained and if a certain quite widely accepted position in the philosophy of mind is correct). Then it could be the case that the vast majority of minds like ours do not belong to the original race but rather to people simulated by the advanced descendants of an original race. It is then possible to argue that, if this were the case, we would be rational to think that we are likely among the simulated minds rather than among the original biological ones. Therefore, if we don’t think that we are currently living in a computer simulation, we are not entitled to believe that we will have descendants who will run lots of such simulations of their forebears. That is the basic idea. The rest of this paper will spell it out more carefully.
Apart from the interest this thesis may hold for those who are engaged in futuristic speculation, there are also more purely theoretical rewards. The argument provides a stimulus for formulating some methodological and metaphysical questions, and it suggests naturalistic analogies to certain traditional religious conceptions, which some may find amusing or thought-provoking.
The structure of the paper is as follows. First, we formulate an assumption that we need to import from the philosophy of mind in order to get the argument started. Second, we consider some empirical reasons for thinking that running vastly many simulations of human minds would be within the capability of a future civilization that has developed many of those technologies that can already be shown to be compatible with known physical laws and engineering constraints. This part is not philosophically necessary but it provides an incentive for paying attention to the rest. Then follows the core of the argument, which makes use of some simple probability theory, and a section providing support for a weak indifference principle that the argument employs. Lastly, we discuss some interpretations of the disjunction, mentioned in the abstract, that forms the conclusion of the simulation argument.

II. THE ASSUMPTION OF SUBSTRATE-INDEPENDENCE
A common assumption in the philosophy of mind is that of substrate-independence. The idea is that mental states can supervene on any of a broad class of physical substrates. Provided a system implements the right sort of computational structures and processes, it can be associated with conscious experiences. It is not an essential property of consciousness that it is implemented on carbon-based biological neural networks inside a cranium: silicon-based processors inside a computer could in principle do the trick as well.
Arguments for this thesis have been given in the literature, and although it is not entirely uncontroversial, we shall here take it as a given.
The argument we shall present does not, however, depend on any very strong version of functionalism or computationalism. For example, we need not assume that the thesis of substrate-independence is necessarily true (either analytically or metaphysically) – just that, in fact, a computer running a suitable program would be conscious. Moreover, we need not assume that in order to create a mind on a computer it would be sufficient to program it in such a way that it behaves like a human in all situations, including passing the Turing test etc. We need only the weaker assumption that it would suffice for the generation of subjective experiences that the computational processes of a human brain are structurally replicated in suitably fine-grained detail, such as on the level of individual synapses. This attenuated version of substrate-independence is quite widely accepted.
Neurotransmitters, nerve growth factors, and other chemicals that are smaller than a synapse clearly play a role in human cognition and learning. The substrate-independence thesis is not that the effects of these chemicals are small or irrelevant, but rather that they affect subjective experience only via their direct or indirect influence on computational activities. For example, if there can be no difference in subjective experience without there also being a difference in synaptic discharges, then the requisite detail of simulation is at the synaptic level (or higher).

III. THE TECHNOLOGICAL LIMITS OF COMPUTATION
At our current stage of technological development, we have neither sufficiently powerful hardware nor the requisite software to create conscious minds in computers. But persuasive arguments have been given to the effect that if technological progress continues unabated then these shortcomings will eventually be overcome. Some authors argue that this stage may be only a few decades away.[1] Yet present purposes require no assumptions about the time-scale. The simulation argument works equally well for those who think that it will take hundreds of thousands of years to reach a “posthuman” stage of civilization, where humankind has acquired most of the technological capabilities that one can currently show to be consistent with physical laws and with material and energy constraints.
Such a mature stage of technological development will make it possible to convert planets and other astronomical resources into enormously powerful computers. It is currently hard to be confident in any upper bound on the computing power that may be available to posthuman civilizations. As we are still lacking a “theory of everything”, we cannot rule out the possibility that novel physical phenomena, not allowed for in current physical theories, may be utilized to transcend those constraints[2] that in our current understanding impose theoretical limits on the information processing attainable in a given lump of matter. We can with much greater confidence establish lower bounds on posthuman computation, by assuming only mechanisms that are already understood. For example, Eric Drexler has outlined a design for a system the size of a sugar cube (excluding cooling and power supply) that would perform 10^21 instructions per second.[3] Another author gives a rough estimate of 10^42 operations per second for a computer with a mass on the order of a large planet.[4] (If we could create quantum computers, or learn to build computers out of nuclear matter or plasma, we could push closer to the theoretical limits. Seth Lloyd calculates an upper bound for a 1 kg computer of 5*10^50 logical operations per second carried out on ~10^31 bits.[5] However, it suffices for our purposes to use the more conservative estimate that presupposes only currently known design-principles.)
The amount of computing power needed to emulate a human mind can likewise be roughly estimated. One estimate, based on how computationally expensive it is to replicate the functionality of a piece of nervous tissue that we have already understood and whose functionality has been replicated in silico (contrast enhancement in the retina), yields a figure of ~10^14 operations per second for the entire human brain.[6] An alternative estimate, based on the number of synapses in the brain and their firing frequency, gives a figure of ~10^16-10^17 operations per second.[7] Conceivably, even more could be required if we want to simulate in detail the internal workings of synapses and dendritic trees. However, it is likely that the human central nervous system has a high degree of redundancy on the microscale to compensate for the unreliability and noisiness of its neuronal components. One would therefore expect a substantial efficiency gain when using more reliable and versatile non-biological processors.
Memory seems to be no more stringent a constraint than processing power.[8] Moreover, since the maximum human sensory bandwidth is ~10^8 bits per second, simulating all sensory events incurs a negligible cost compared to simulating the cortical activity. We can therefore use the processing power required to simulate the central nervous system as an estimate of the total computational cost of simulating a human mind.
If the environment is included in the simulation, this will require additional computing power – how much depends on the scope and granularity of the simulation. Simulating the entire universe down to the quantum level is obviously infeasible, unless radically new physics is discovered. But in order to get a realistic simulation of human experience, much less is needed – only whatever is required to ensure that the simulated humans, interacting in normal human ways with their simulated environment, don’t notice any irregularities. The microscopic structure of the inside of the Earth can be safely omitted. Distant astronomical objects can have highly compressed representations: verisimilitude need extend to the narrow band of properties that we can observe from our planet or solar system spacecraft. On the surface of Earth, macroscopic objects in inhabited areas may need to be continuously simulated, but microscopic phenomena could likely be filled in ad hoc. What you see through an electron microscope needs to look unsuspicious, but you usually have no way of confirming its coherence with unobserved parts of the microscopic world. Exceptions arise when we deliberately design systems to harness unobserved microscopic phenomena that operate in accordance with known principles to get results that we are able to independently verify. The paradigmatic case of this is a computer. The simulation may therefore need to include a continuous representation of computers down to the level of individual logic elements. This presents no problem, since our current computing power is negligible by posthuman standards.
Moreover, a posthuman simulator would have enough computing power to keep track of the detailed belief-states in all human brains at all times. Therefore, when it saw that a human was about to make an observation of the microscopic world, it could fill in sufficient detail in the simulation in the appropriate domain on an as-needed basis. Should any error occur, the director could easily edit the states of any brains that have become aware of an anomaly before it spoils the simulation. Alternatively, the director could skip back a few seconds and rerun the simulation in a way that avoids the problem.
It thus seems plausible that the main computational cost in creating simulations that are indistinguishable from physical reality for human minds in the simulation resides in simulating organic brains down to the neuronal or sub-neuronal level.[9] While it is not possible to get a very exact estimate of the cost of a realistic simulation of human history, we can use ~10^33 - 10^36 operations as a rough estimate[10]. As we gain more experience with virtual reality, we will get a better grasp of the computational requirements for making such worlds appear realistic to their visitors. But in any case, even if our estimate is off by several orders of magnitude, this does not matter much for our argument. We noted that a rough approximation of the computational power of a planetary-mass computer is 10^42 operations per second, and that assumes only already known nanotechnological designs, which are probably far from optimal. A single such computer could simulate the entire mental history of humankind (call this an ancestor-simulation) by using less than one millionth of its processing power for one second. A posthuman civilization may eventually build an astronomical number of such computers. We can conclude that the computing power available to a posthuman civilization is sufficient to run a huge number of ancestor-simulations even if it allocates only a minute fraction of its resources to that purpose. We can draw this conclusion even while leaving a substantial margin of error in all our estimates.
· Posthuman civilizations would have enough computing power to run hugely many ancestor-simulations even while using only a tiny fraction of their resources for that purpose.
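The back-of-the-envelope arithmetic in this section can be checked directly. The per-brain and per-computer figures are the ones quoted above; the life-count and lifespan are rough assumptions used only to reproduce the order of magnitude:

```python
# Figures quoted in the section above; LIVES and SECONDS_PER_LIFE are
# rough assumptions, used only to reproduce the order of magnitude.
OPS_PER_BRAIN_SECOND = 1e14      # lower estimate for one human brain, ops/s
LIVES = 100e9                    # ~100 billion humans ever lived (assumption)
SECONDS_PER_LIFE = 50 * 3.15e7   # ~50 years per life (assumption)

# Total cost of one ancestor-simulation: every brain-second ever lived.
ancestor_sim_cost = OPS_PER_BRAIN_SECOND * LIVES * SECONDS_PER_LIFE
print(f"ancestor-simulation cost: ~{ancestor_sim_cost:.0e} operations")

# Compare with one second of a planetary-mass computer's output.
PLANET_COMPUTER_OPS_PER_SECOND = 1e42
fraction = ancestor_sim_cost / PLANET_COMPUTER_OPS_PER_SECOND
print(f"fraction of one second of planetary compute: ~{fraction:.0e}")
```

With these assumptions the total lands around 10^34 operations, inside the paper's 10^33-10^36 range, and well under one millionth of a second of planetary-mass compute.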

IV. THE CORE OF THE SIMULATION ARGUMENT
The basic idea of this paper can be expressed roughly as follows: if there is a substantial chance that our civilization will ever get to the posthuman stage and run many ancestor-simulations, then how come you are not living in such a simulation?
We shall develop this idea into a rigorous argument. Let us introduce the following notation:
f_P: Fraction of all human-level technological civilizations that survive to reach a posthuman stage
N: Average number of ancestor-simulations run by a posthuman civilization
H: Average number of individuals that have lived in a civilization before it reaches a posthuman stage
The actual fraction of all observers with human-type experiences that live in simulations is then

f_sim = (f_P * N * H) / ((f_P * N * H) + H)

Writing f_I for the fraction of posthuman civilizations that are interested in running ancestor-simulations (or that contain at least some individuals who are interested in that and have sufficient resources to run a significant number of such simulations), and N_I for the average number of ancestor-simulations run by such interested civilizations, we have

N = f_I * N_I

and thus:

(*) f_sim = (f_P * f_I * N_I) / ((f_P * f_I * N_I) + 1)

Because of the immense computing power of posthuman civilizations, N_I is extremely large, as we saw in the previous section. By inspecting (*) we can then see that at least one of the following three propositions must be true:
(1) f_P ≈ 0
(2) f_I ≈ 0
(3) f_sim ≈ 1
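The behaviour of this fraction can be sketched numerically. In the sketch below, f_P is the fraction of civilisations reaching posthumanity, f_I the fraction of those interested in running ancestor-simulations, and N_I the number of simulations run by interested civilisations; the input values are arbitrary placeholders for illustration only:

```python
def f_sim(f_P: float, f_I: float, N_I: float) -> float:
    """Fraction of human-type observers living in simulations:
    f_P  - fraction of civilisations that reach a posthuman stage
    f_I  - fraction of posthuman civilisations interested in
           running ancestor-simulations
    N_I  - average number of simulations run by interested ones
    """
    x = f_P * f_I * N_I
    return x / (x + 1)

# Unless f_P or f_I is vanishingly small, a huge N_I drives f_sim toward 1.
print(f_sim(0.01, 0.01, 1e12))   # ≈ 1: almost all observers are simulated
print(f_sim(1e-15, 0.01, 1e12))  # ≈ 1e-5: proposition (1) blocks (3)
```

Whatever placeholder values one picks, the function makes the disjunctive structure visible: only driving f_P or f_I toward zero keeps f_sim away from one.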

V. A BLAND INDIFFERENCE PRINCIPLE
We can take a further step and conclude that conditional on the truth of (3), one’s credence in the hypothesis that one is in a simulation should be close to unity. More generally, if we knew that a fraction x of all observers with human-type experiences live in simulations, and we don’t have any information that indicates that our own particular experiences are any more or less likely than other human-type experiences to have been implemented in vivo rather than in machina, then our credence that we are in a simulation should equal x:

(#) Cr(SIM | f_sim = x) = x
This step is sanctioned by a very weak indifference principle. Let us distinguish two cases. The first case, which is the easiest, is where all the minds in question are like your own in the sense that they are exactly qualitatively identical to yours: they have exactly the same information and the same experiences that you have. The second case is where the minds are “like” each other only in the loose sense of being the sort of minds that are typical of human creatures, but they are qualitatively distinct from one another and each has a distinct set of experiences. I maintain that even in the latter case, where the minds are qualitatively different, the simulation argument still works, provided that you have no information that bears on the question of which of the various minds are simulated and which are implemented biologically.
A detailed defense of a stronger principle, which implies the above stance for both cases as trivial special instances, has been given in the literature.[11] Space does not permit a recapitulation of that defense here, but we can bring out one of the underlying intuitions by attending to an analogous situation of a more familiar kind. Suppose that x% of the population has a certain genetic sequence S within the part of their DNA commonly designated as “junk DNA”. Suppose, further, that there are no manifestations of S (short of what would turn up in a gene assay) and that there are no known correlations between having S and any observable characteristic. Then, quite clearly, unless you have had your DNA sequenced, it is rational to assign a credence of x% to the hypothesis that you have S. And this is so quite irrespective of the fact that the people who have S have qualitatively different minds and experiences from the people who don’t have S. (They are different simply because all humans have different experiences from one another, not because of any known link between S and what kind of experiences one has.)
The same reasoning holds if S is not the property of having a certain genetic sequence but instead the property of being in a simulation, assuming only that we have no information that enables us to predict any differences between the experiences of simulated minds and those of the original biological minds.
It should be stressed that the bland indifference principle expressed by (#) prescribes indifference only between hypotheses about which observer you are, when you have no information about which of these observers you are. It does not in general prescribe indifference between hypotheses when you lack specific information about which of the hypotheses is true. In contrast to Laplacean and other more ambitious principles of indifference, it is therefore immune to Bertrand’s paradox and similar predicaments that tend to plague indifference principles of unrestricted scope.
Readers familiar with the Doomsday argument[12] may worry that the bland principle of indifference invoked here is the same assumption that is responsible for getting the Doomsday argument off the ground, and that the counterintuitiveness of some of the implications of the latter incriminates or casts doubt on the validity of the former. This is not so. The Doomsday argument rests on a much stronger and more controversial premiss, namely that one should reason as if one were a random sample from the set of all people who will ever have lived (past, present, and future) even though we know that we are living in the early twenty-first century rather than at some point in the distant past or the future. The bland indifference principle, by contrast, applies only to cases where we have no information about which group of people we belong to.
If betting odds provide some guidance to rational belief, it may also be worth pondering that if everybody were to place a bet on whether they are in a simulation or not, then, if people use the bland principle of indifference and consequently place their money on being in a simulation whenever they know that that is where almost all people are, almost everyone will win their bets. If they bet on not being in a simulation, then almost everyone will lose. It seems better that the bland indifference principle be heeded.
Further, one can consider a sequence of possible situations in which an increasing fraction of all people live in simulations: 98%, 99%, 99.9%, 99.9999%, and so on. As one approaches the limiting case in which everybody is in a simulation (from which one can deductively infer that one is in a simulation oneself), it is plausible to require that the credence one assigns to being in a simulation gradually approach the limiting case of complete certainty in a matching manner.
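The betting intuition can be sketched as a toy simulation. The 99% figure and the observer count are arbitrary stand-ins for "almost all observers are simulated", not values from the argument itself:

```python
import random

random.seed(0)
SIM_FRACTION = 0.99   # hypothetical: fraction of observers who are simulated
N_OBSERVERS = 100_000

# Each observer follows the bland indifference principle: knowing that a
# fraction SIM_FRACTION of observers are simulated, each bets "I am simulated".
# An observer wins the bet exactly when they really are simulated.
wins = sum(random.random() < SIM_FRACTION for _ in range(N_OBSERVERS))

print(f"bet 'simulated':     about {wins / N_OBSERVERS:.0%} win their bets")
print(f"bet 'not simulated': about {1 - wins / N_OBSERVERS:.0%} win their bets")
```

As the text says: whatever heuristic makes almost everyone win their bets looks like the rational one to adopt.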

VI. INTERPRETATION
The possibility represented by proposition (1) is fairly straightforward. If (1) is true, then humankind will almost certainly fail to reach a posthuman level; for virtually no species at our level of development become posthuman, and it is hard to see any justification for thinking that our own species will be especially privileged or protected from future disasters. Conditional on (1), therefore, we must give a high credence to DOOM, the hypothesis that humankind will go extinct before reaching a posthuman level:

Cr(DOOM | f_P ≈ 0) ≈ 1
One can imagine hypothetical situations where we have such evidence as would trump knowledge of f_P. For example, if we discovered that we were about to be hit by a giant meteor, this might suggest that we had been exceptionally unlucky. We could then assign a credence to DOOM larger than our expectation of the fraction of human-level civilizations that fail to reach posthumanity. In the actual case, however, we seem to lack evidence for thinking that we are special in this regard, for better or worse.
Proposition (1) doesn’t by itself imply that we are likely to go extinct soon, only that we are unlikely to reach a posthuman stage. This possibility is compatible with us remaining at, or somewhat above, our current level of technological development for a long time before going extinct. Another way for (1) to be true is if it is likely that technological civilization will collapse. Primitive human societies might then remain on Earth indefinitely.
There are many ways in which humanity could become extinct before reaching posthumanity. Perhaps the most natural interpretation of (1) is that we are likely to go extinct as a result of the development of some powerful but dangerous technology.[13] One candidate is molecular nanotechnology, which in its mature stage would enable the construction of self-replicating nanobots capable of feeding on dirt and organic matter – a kind of mechanical bacteria. Such nanobots, designed for malicious ends, could cause the extinction of all life on our planet.[14]
The second alternative in the simulation argument’s conclusion is that the fraction of posthuman civilizations that are interested in running ancestor-simulations is negligibly small. In order for (2) to be true, there must be a strong convergence among the courses of advanced civilizations. If the number of ancestor-simulations created by the interested civilizations is extremely large, the rarity of such civilizations must be correspondingly extreme. Virtually no posthuman civilizations decide to use their resources to run large numbers of ancestor-simulations. Furthermore, virtually all posthuman civilizations lack individuals who have sufficient resources and interest to run ancestor-simulations; or else they have reliably enforced laws that prevent such individuals from acting on their desires.
What force could bring about such convergence? One can speculate that advanced civilizations all develop along a trajectory that leads to the recognition of an ethical prohibition against running ancestor-simulations because of the suffering that is inflicted on the inhabitants of the simulation. However, from our present point of view, it is not clear that creating a human race is immoral. On the contrary, we tend to view the existence of our race as constituting a great ethical value. Moreover, convergence on an ethical view of the immorality of running ancestor-simulations is not enough: it must be combined with convergence on a civilization-wide social structure that enables activities considered immoral to be effectively banned.

Another possible convergence point is that almost all individual posthumans in virtually all posthuman civilizations develop in a direction where they lose their desires to run ancestor-simulations. This would require significant changes to the motivations driving their human predecessors, for there are certainly many humans who would like to run ancestor-simulations if they could afford to do so. But perhaps many of our human desires will be regarded as silly by anyone who becomes a posthuman. Maybe the scientific value of ancestor-simulations to a posthuman civilization is negligible (which is not too implausible given its unfathomable intellectual superiority), and maybe posthumans regard recreational activities as merely a very inefficient way of getting pleasure – which can be obtained much more cheaply by direct stimulation of the brain’s reward centers. One conclusion that follows from (2) is that posthuman societies will be very different from human societies: they will not contain relatively wealthy independent agents who have the full gamut of human-like desires and are free to act on them.
The possibility expressed by alternative (3) is the conceptually most intriguing one. If we are living in a simulation, then the cosmos that we are observing is just a tiny piece of the totality of physical existence. The physics in the universe where the computer is situated that is running the simulation may or may not resemble the physics of the world that we observe. While the world we see is in some sense “real”, it is not located at the fundamental level of reality.
It may be possible for simulated civilizations to become posthuman. They may then run their own ancestor-simulations on powerful computers they build in their simulated universe. Such computers would be “virtual machines”, a familiar concept in computer science. (Java applets, for instance, run on a virtual machine – a simulated computer – inside your desktop.) Virtual machines can be stacked: it’s possible to simulate a machine simulating another machine, and so on, in arbitrarily many steps of iteration. If we do go on to create our own ancestor-simulations, this would be strong evidence against (1) and (2), and we would therefore have to conclude that we live in a simulation. Moreover, we would have to suspect that the posthumans running our simulation are themselves simulated beings; and their creators, in turn, may also be simulated beings.
Reality may thus contain many levels. Even if it is necessary for the hierarchy to bottom out at some stage – the metaphysical status of this claim is somewhat obscure – there may be room for a large number of levels of reality, and the number could be increasing over time. (One consideration that counts against the multi-level hypothesis is that the computational cost for the basement-level simulators would be very great. Simulating even a single posthuman civilization might be prohibitively expensive. If so, then we should expect our simulation to be terminated when we are about to become posthuman.)
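The cost worry can be made concrete with a back-of-the-envelope model. If each level of the hierarchy runs some number of simulations of a civilization like the one above it, the basement level ultimately pays for every branch of the whole tree. The branching factor and per-simulation cost below are purely illustrative:

```python
def basement_cost(levels: int, sims_per_level: int, cost_per_sim: float) -> float:
    """Total compute the basement level must supply for a tree of nested
    simulations: each level spawns `sims_per_level` simulations, down to
    `levels` layers deep. (All parameters are hypothetical.)"""
    total = 0.0
    for depth in range(1, levels + 1):
        # number of simulations at this depth = branching factor ** depth
        total += (sims_per_level ** depth) * cost_per_sim
    return total

# Cost grows geometrically with depth: 10 + 100 + 1000 simulations' worth.
print(basement_cost(levels=3, sims_per_level=10, cost_per_sim=1.0))  # 1110.0
```

The geometric growth is the point: even modest branching makes deep hierarchies expensive for whoever sits at the bottom, which is why the text suggests a simulation might be terminated before its inhabitants become simulators themselves.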
Although all the elements of such a system can be naturalistic, even physical, it is possible to draw some loose analogies with religious conceptions of the world. In some ways, the posthumans running a simulation are like gods in relation to the people inhabiting the simulation: the posthumans created the world we see; they are of superior intelligence; they are “omnipotent” in the sense that they can interfere in the workings of our world even in ways that violate its physical laws; and they are “omniscient” in the sense that they can monitor everything that happens. However, all the demigods except those at the fundamental level of reality are subject to sanctions by the more powerful gods living at lower levels.
Further rumination on these themes could climax in a naturalistic theogony that would study the structure of this hierarchy, and the constraints imposed on its inhabitants by the possibility that their actions on their own level may affect the treatment they receive from dwellers of deeper levels. For example, if nobody can be sure that they are at the basement-level, then everybody would have to consider the possibility that their actions will be rewarded or punished, based perhaps on moral criteria, by their simulators. An afterlife would be a real possibility. Because of this fundamental uncertainty, even the basement civilization may have a reason to behave ethically. The fact that it has such a reason for moral behavior would of course add to everybody else’s reason for behaving morally, and so on, in a truly virtuous circle. One might get a kind of universal ethical imperative, which it would be in everybody’s self-interest to obey, as it were “from nowhere”.
In addition to ancestor-simulations, one may also consider the possibility of more selective simulations that include only a small group of humans or a single individual. The rest of humanity would then be zombies or “shadow-people” – humans simulated only at a level sufficient for the fully simulated people not to notice anything suspicious. It is not clear how much cheaper shadow-people would be to simulate than real people. It is not even obvious that it is possible for an entity to behave indistinguishably from a real human and yet lack conscious experience. Even if there are such selective simulations, you should not think that you are in one of them unless you think they are much more numerous than complete simulations. There would have to be about 100 billion times as many “me-simulations” (simulations of the life of only a single mind) as there are ancestor-simulations in order for most simulated persons to be in me-simulations.
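The 100 billion figure follows directly from counting minds: a full ancestor-simulation contains roughly the ~100 billion humans estimated to have ever lived, whereas a me-simulation contains just one fully simulated mind. A quick check of that arithmetic (the population figure is the round estimate the text relies on):

```python
MINDS_PER_ANCESTOR_SIM = 100e9  # ~100 billion humans who have ever lived
MINDS_PER_ME_SIM = 1            # one fully simulated mind per me-simulation

# For most simulated persons to live in me-simulations, me-simulations must
# outnumber ancestor-simulations by at least the per-simulation mind ratio:
ratio = MINDS_PER_ANCESTOR_SIM / MINDS_PER_ME_SIM
print(f"{ratio:.0e}")  # 1e+11, i.e. ~100 billion me-sims per ancestor-sim
```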
There is also the possibility of simulators abridging certain parts of the mental lives of simulated beings and giving them false memories of the sort of experiences that they would typically have had during the omitted interval. If so, one can consider the following (farfetched) solution to the problem of evil: that there is no suffering in the world and all memories of suffering are illusions. Of course, this hypothesis can be seriously entertained only at those times when you are not currently suffering.
Supposing we live in a simulation, what are the implications for us humans? The foregoing remarks notwithstanding, the implications are not all that radical. Our best guide to how our posthuman creators have chosen to set up our world is the standard empirical study of the universe we see. The revisions to most parts of our belief networks would be rather slight and subtle – in proportion to our lack of confidence in our ability to understand the ways of posthumans. Properly understood, therefore, the truth of (3) should have no tendency to make us “go crazy” or to prevent us from going about our business and making plans and predictions for tomorrow. The chief empirical importance of (3) at the current time seems to lie in its role in the tripartite conclusion established above.[15] We may hope that (3) is true since that would decrease the probability of (1), although if computational constraints make it likely that simulators would terminate a simulation before it reaches a posthuman level, then our best hope would be that (2) is true.
If we learn more about posthuman motivations and resource constraints, maybe as a result of developing towards becoming posthumans ourselves, then the hypothesis that we are simulated will come to have a much richer set of empirical implications.

VII. CONCLUSION
A technologically mature “posthuman” civilization would have enormous computing power. Based on this empirical fact, the simulation argument shows that at least one of the following propositions is true: (1) The fraction of human-level civilizations that reach a posthuman stage is very close to zero; (2) The fraction of posthuman civilizations that are interested in running ancestor-simulations is very close to zero; (3) The fraction of all people with our kind of experiences that are living in a simulation is very close to one.
If (1) is true, then we will almost certainly go extinct before reaching posthumanity. If (2) is true, then there must be a strong convergence among the courses of advanced civilizations so that virtually none contains any relatively wealthy individuals who desire to run ancestor-simulations and are free to do so. If (3) is true, then we almost certainly live in a simulation. In the dark forest of our current ignorance, it seems sensible to apportion one’s credence roughly evenly between (1), (2), and (3).
Unless we are now living in a simulation, our descendants will almost certainly never run an ancestor-simulation.
........................

Take as much time as you like.
If you have any questions, I'm sure they will be good ones!
__________________
create evolution
ARTelevision is offline  
Old 05-16-2003, 10:31 AM   #9 (permalink)
42, baby!
 
Dragonlich's Avatar
 
Location: The Netherlands
1) Simple_Min: quantum mechanics does not preclude free will - in fact, it suggests that there *must* be free will. After all, even God cannot bypass quantum mechanics' probabilistic nature, and thus cannot predict/guide our actions.

2) About processing power: during a university course on computer theory, I think I heard the teacher say that the theoretical maximum number of calculations per second is about 10^50, given minimum distances (between memory locations) of one electron. Quantum computers could easily exceed that upper bound.

3) The writer assumes some advanced civ would build many computers (logical), but then assumes they'll run many simulations, as the required computing power is small. Question is: why the hell would they run his "ancestor simulations" so many times? One would assume they have better things to do. But perhaps it's part of their history lesson: a simulation of human life from beginning to current days (future for us), all compressed; we wouldn't know we're compressed, of course.

4) if *we* are in a simulation, wouldn't our simulators be in a simulator too, logically speaking? In fact, according to his logic, practically everyone should be in a simulation, again leading to the question: why? (And to the simulators: who?) The writer addresses this problem by stating that our simulators will likely be simulations too. Nice, but probability doesn't work that way. If it did, one could argue that *everyone* is in fact a simulation - after all, the odds of anyone being real is negligible...

5) Does all of this really matter? As with the older theories about reality being some sort of dream or something, the answer is obvious: no! One has to act as if the simulation is real, in order to survive in the simulation.

6) Finally, the writer brings in religion, and states simulators *in* simulations (er...) will have to assume there's an afterlife, and they'd be punished for being bad to *their* simulations. This cannot reasonably be true: a simulation will not have an afterlife at all, because a) the survival of simulated humans would ruin future simulations, and b) a simulated afterlife is irrelevant to the simulation. Also, simulated evil, why punish that? It's part of the simulation, after all!

7) Finally, my original comment still stands: advanced civs with the capability to run such simulations do not mean we are likely to *be* simulations. That's only true if you accept that the number of simulated humans is way higher than the number of real humans. Which then leads to the obvious question: "what *is* real", from Matrix 1...
Dragonlich is offline  
Old 05-26-2003, 03:34 PM   #10 (permalink)
Psycho
 
Location: in a deep, dark hole where rainbow creatures attack me to eat my fingernails.
AHH!! BRAIN ACHE!! CAN'T... THINK!! AAARRGGHHH...

dude, i'm sorry, but shit like that kind of scares me. it's interesting and i would love to learn about it more, but the thought of not having control of my own life upsets me very much. i feel trapped inside myself, only i'm not there...

i will forever love mind games like that, no matter how much they may scare me, but i must say, to know that i don't control my own life? i'm sorry, but ignorance is bliss in that scenario.
scarebearjinx is offline  
Old 05-26-2003, 07:14 PM   #11 (permalink)
Psycho
 
Location: British Columbia
I feel exactly the same way, scarebear. Mind riddles like this are cool and all, but it's scary thinking that perhaps everything around us that we see, feel, hear, etc., is an illusion, and that we ourselves are just an illusion.
Eviltree is offline  
 
