Reaching Morality in Non-Top-Level Realities
I wish to caution that this is just a draft, and really needs to be enhanced and made more rigorous. I think, however, that the approach has some strong potential.
This was originally posted elsewhere at the request of a friend, but I'm reposting it here to open it up for more people to chew on.
I'm also posting it to 'link' to my post in the Burka thread.
My goal here is to show that we *can* build the foundation for a cohesive theory of morality even without absolute knowledge of objective reality. This permits secular moral theories to survive nesting of realities (think The Matrix).
There is no need for moral relativism.
Edit: Anyhow, what I'd like is for you to chew on this and see what you find. Currently, I think the biggest weakness is in the vagueness of its definition of personhood, but I imagine there's a variety of other things people from other perspectives will find and point out.
I created this out of a desire to escape moral relativism and radical skepticism as underminers for a moral theory, and I think this is a promising approach. Thoughts?
---
Is this the true reality?
At some point, that no longer matters.
The more appropriate question is, "is this reality useful?"
Can I observe this reality?
Can I make predictions about this reality, using those observations?
Can I test those predictions about this reality?
Can I make models based on the results of those tests?
Can I manipulate this reality using those models?
Can I change my mental state by manipulating this reality?
If the answer to all six questions is yes, then this reality is sufficiently real to be useful.
If, at some point, the answer to one of these questions becomes no, then it is no longer sufficiently real to be useful.
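The six-question test above is, in effect, a decision procedure: a reality counts as useful only while every answer remains yes. As an illustration only (the question labels below are my shorthand, not terminology from the post), it can be sketched like this:

```python
# Hypothetical sketch: the six-question usefulness test as a predicate.
# A reality is "sufficiently real to be useful" only while every answer is yes.

REALITY_QUESTIONS = [
    "observe",      # Can I observe this reality?
    "predict",      # Can I make predictions using those observations?
    "test",         # Can I test those predictions?
    "model",        # Can I make models based on the test results?
    "manipulate",   # Can I manipulate this reality using those models?
    "affect_mind",  # Can I change my mental state by manipulating it?
]

def is_usefully_real(answers: dict) -> bool:
    """True iff all six questions are answered yes."""
    return all(answers.get(q, False) for q in REALITY_QUESTIONS)

# An ordinary, fully workable reality:
everyday = {q: True for q in REALITY_QUESTIONS}
print(is_usefully_real(everyday))  # True

# If any single answer flips to no, usefulness is lost:
everyday["manipulate"] = False
print(is_usefully_real(everyday))  # False
```

The point of the sketch is the `all(...)`: usefulness is a conjunction, so a single "no" is enough to disqualify a reality.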
We may make inferences based on data, and operate on them as if they were true, until such time as new data suggests either a better inference or that the existing inference is invalid.
---
Nested (or Linked) Realities
If the contents of a reality can be altered from some other reality outside of it, these realities are linked.
The more a linked reality can be altered from within the current reality, the more useful the current reality is relative to the linked one.
---
Other People
A system cannot be perfectly modeled by any system less complex than the minimum complexity it can be reduced to.
That is to say:
+ There is some minimum complexity a system can be reduced to, and still produce the same results.
+ A different system which is less complex than that minimum cannot produce or predict exactly the same results.
If we cannot completely model the actions of another apparent individual internally, then they must be more complex than our most powerful possible internal model of them.
(For human beings, we can offload some of our simulation work onto our dedicated human-modeling hardware, which means our models of other humans can be even more advanced. That means this other person is a pretty complex system!)
This presents two possibilities:
+ They are a subcomponent of the same system of which we are also a subcomponent.
+ They are the whole or a part of a system separate from ourselves.
In either case, something other than our immediate self exists.
Accordingly, if a separate apparent entity acts like a person, with the complexity of a person, we have reason to believe it is a person, and we should treat it as one.
---
Value
That which pleases us has value.
Why? Because we like it.
Value is subjectively experienced. It cannot be proven absolutely to originate in a particular reality.
Suppose, for example, that we have physical brains in some reality. If the reality we witness is simulated, but affects that higher-level reality (imagine an injury in a video game triggering drugs injected into your real body), then what we're really measuring when we examine the physical brain in our perceived/simulated reality is correlation of mood, not causation of mood. We cannot prove absolutely that the reality we're observing is the one our physical brains exist in. This is why, as per the next section, we must rely on agent reports of value.
---
Spreading Value to Others
We are a system of a certain complexity and experience subjective value.
If a separate entity of sufficient complexity to qualify as a person expresses that it experiences subjective value, we have no logical reason to deny it, as it is so complex that we cannot fully model it internally. (That is to say, we do not have grounds to reject its assertion, and we cannot reject its assertion without calling our own personhood into question.)
We must rely on an entity's own reports to determine its subjective value-experience, although we can judge whether a given report is likely to be false based on the truthfulness of the entity's past reports on verifiable data.
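This track-record rule can be made concrete. The following is purely an illustrative sketch of one way to operationalize it (the threshold and the default-to-trust choice are my assumptions, not claims from the post): score an entity's past reports on independently verifiable claims, and accept its unverifiable value-reports in proportion to that record.

```python
# Hypothetical sketch: trust an entity's unverifiable value-reports in
# proportion to its track record on independently verifiable claims.

def truthfulness(past_reports: list) -> float:
    """Fraction of past verifiable reports that turned out true.

    With no track record, default to trusting the report, per the
    argument above: we lack grounds to reject it."""
    if not past_reports:
        return 1.0
    return sum(past_reports) / len(past_reports)

def accept_value_report(past_reports: list, threshold: float = 0.5) -> bool:
    """Accept a report of subjective value-experience unless the entity's
    record makes the report sufficiently likely to be false."""
    return truthfulness(past_reports) >= threshold

print(accept_value_report([]))                    # True: no grounds to reject
print(accept_value_report([True, True, False]))   # True: mostly truthful
print(accept_value_report([False, False, True]))  # False: mostly untruthful
```

The 0.5 threshold is arbitrary; the structure of the rule (trust by default, discount by demonstrated unreliability) is what mirrors the argument in the text.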
If an entity does not express its subjective value-experience, and we have no reason to believe it has one, it is not useful to attempt to appease it. This is because we cannot usefully predict what its subjective value-experience will be.
(We might infer, for example, that a mute, living human who does not use our language experiences value, despite his inability to articulate it to us. We might also infer that a rock experiences no value.)
Last edited by cypher197; 04-25-2011 at 05:23 AM..