Quote:
Originally Posted by cypher197
It does not matter if the ball and its trajectory are virtual constructs of a machine. That is part of the point of the theory.
|
It matters from a physical standpoint, since a virtual ball, even when experienced within its own reality, would differ from one in the real world.
Or do you disregard our reality as the standard against which we measure all other realities?
Quote:
I think you misunderstood. That requirement is not for a reality, but a criterion for whether some perceived reality is useful. A reality which it is impossible to make predictions about makes consequentialism effectively impossible to implement, for reasons that should be obvious. You cannot deliberately undertake some action to bring about some intended consequence if you cannot make predictions about the world. In such a world, a moral theory, which exists to guide actions, is useless.
|
Thanks for clearing that up.
That just leaves the matter of what kind of value that reality has, though I believe that to be subjective and in most cases non-measurable. If so, the consequences of nested realities, and the actions following from them, can't be objectively measured. Moreover, the moral theory would change with our relation to each reality. Where would a reality that has low value, and is in no way quantifiable, fit in your theory?
Quote:
On the contrary. This theory builds its moral framework by pooling subjective experiences. Value is, after all, subjectively experienced.
1. And I can say, definitively, that I exist. That is absolutely true at this point in time.
2. And, I can say that I experience emotions, including positive ones. (Positive emotions being positive value.)
3. And I can say that there is a minimum complexity to which any system can be reduced while still producing identical results/outputs given identical inputs.
#1 and #2 may change in the future, but #3 cannot.
---------- Post added at 11:14 PM ---------- Previous post was at 10:53 PM ----------
I'd like to clearly state that, for any given situation, the probability that an actor governed by this system will pick the absolutely perfect response is pretty much zero, because they do not have perfect knowledge.
The goal for the system is not to find the absolute best action for every possible situation, but to attempt to find the best answer given the available resources; an answer which may involve doing less computation in order to conserve those resources. (This is an irony in Utilitarianism, but remember that reaching a local maximum is better than triggering a local minimum by insisting on a perfect global maximum.)
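As a rough sketch of that "best answer given available resources" idea (this example and all its names are my own hypothetical illustration, not part of the original post), an agent could score only as many candidate actions as its budget allows and act on the best one found:

```python
import random

def choose_action(candidates, evaluate, budget):
    """Return the best action found within a fixed evaluation budget.

    `evaluate` stands in for the agent's imperfect model of an action's
    value; `budget` caps how many candidates the agent can afford to
    score, so it settles for the best answer its resources allow rather
    than a guaranteed global optimum.
    """
    pool = list(candidates)
    best_action, best_value = None, float("-inf")
    for action in random.sample(pool, min(budget, len(pool))):
        value = evaluate(action)  # each evaluation costs resources
        if value > best_value:
            best_action, best_value = action, value
    return best_action

# Toy usage: the true optimum is 42, but with budget=5 the agent
# usually lands on a good-enough action rather than the perfect one.
print(choose_action(range(100), lambda a: -(a - 42) ** 2, budget=5))
```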
It is entirely possible that two different actors, confronted with the same situation, may choose different courses of action because their past experiences have led them to form different models of the same world. If their two realities are linked, it is far more important that agents are able to evaluate the morality of potential actions and positions reasonably, based on available evidence, than that they reach exactly the same conclusion every time.
edit: There is a big difference between believing all penguins to be black based on previous observation, and believing all penguins to be black as a matter of dogma.
Suppose a green penguin exists. In the former case, it's possible to convince someone by showing them actual evidence of the green penguin. In the latter, they will either deny the green penguin's existence contrary to all evidence, or undergo a moral system failure, which requires rebuilding or abandoning the belief system that was dogmatically committed to all penguins being black.
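To put toy numbers on the penguin case (the figures below are illustrative assumptions of mine, not claims from this discussion): under Bayes' rule, an observer with any nonzero prior for green penguins is moved by a credible sighting, while a dogmatic prior of exactly zero can never be moved, no matter how strong the evidence:

```python
def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """Bayes' rule for the hypothesis 'green penguins exist'."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# Evidence: a credible sighting of a green penguin.
# Assumed likelihoods: P(sighting | they exist) = 0.9,
#                      P(sighting | hoax or error) = 0.01.
empirical = posterior(0.05, 0.9, 0.01)  # skeptical but nonzero prior
dogmatic = posterior(0.0, 0.9, 0.01)    # dogmatic certainty: prior of zero

print(f"empirical believer: {empirical:.2f}")  # ~0.83 -- convinced by evidence
print(f"dogmatic believer:  {dogmatic:.2f}")   # 0.00 -- cannot update at all
```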
|
From a moral standpoint this is difficult to define, for the aforementioned reason that morals change with our relation to each reality: the limitations of that reality set the perceivable moral boundaries. Even if the realities are linked, we will most likely continue to exist on this plane of existence. So, without the ability to physically transcend reality, how useful would this ethic be, aside from providing a deconstruction of moral frameworks?