Quote:
Originally Posted by Orogun01
It's just part of formality, to state one's views on the subject. Your main focuses seem to be the greater moral good and the nature of reality.
But your definition of reality is vague enough to invite any interpretation of reality. To follow your analogy, there is no telling that the ball and its trajectory are not virtual constructs of a machine.
|
It does not matter if the ball and its trajectory are virtual constructs of a machine. That is part of the point of the theory.
Quote:
In your opening post you set "Can I make predictions about this reality, using those observations?" as a requirement for your definition of reality.
But it doesn't account for the physical differences that the same object may have in different states of "reality". Take, say, a comparison between a ball and a virtual ball.
|
I think you misunderstood. That requirement is not a test for whether something is a reality, but a criterion for whether some perceived reality is useful. A reality about which it is impossible to make predictions renders consequentialism effectively impossible to implement, for reasons that should be obvious: you cannot deliberately undertake an action to bring about an intended consequence if you cannot predict anything about the world. In such a world, a moral theory, which exists to guide actions, is useless.
Quote:
The problem with a cohesive morality theory is that it fails to account for different subjective views. Metaphysical and even mystical experiences vary from person to person, making it difficult to establish a moral "absolute". Especially considering that the "true" state of nature is without morality.
|
On the contrary. This theory builds its moral framework by pooling subjective experiences. Value is, after all, subjectively experienced.
1. I can say, definitively, that I exist. That is absolutely true at this point in time.
2. I can say that I experience emotions, including positive ones. (Positive emotions being positive value.)
3. I can say that there is a minimum complexity to which any system can be reduced while still producing identical results/outputs given identical inputs. (A toy illustration of this follows below.)
#1 and #2 may change in the future, but #3 cannot.
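To make #3 a little more concrete, here is a toy sketch, purely my own illustration with invented function names: two programs with very different amounts of machinery that nevertheless count as the same system under #3, because they produce identical outputs for identical inputs.
Code:
# Two behaviorally identical "systems": identical outputs for identical inputs.
def elaborate_double(x):
    # Needless machinery: build the answer by repeated accumulation.
    total = 0
    for _ in range(x):
        total += 2
    return total

def minimal_double(x):
    # The reduced form of the same system.
    return 2 * x

# Both agree on every (non-negative integer) input, so under #3 they are
# the same system. The point of the claim is that this kind of reduction
# has a floor: some minimal program exists below which the same
# input/output behaviour cannot be reproduced.
assert all(elaborate_double(n) == minimal_double(n) for n in range(100))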
I'd like to state clearly that, for any given situation, the probability that an actor governed by this system will pick the absolutely perfect response is pretty much zero, because they do not have perfect knowledge.
The goal of the system is not to find the absolute best action for every possible situation, but to find the best answer given the available resources; an answer which may involve doing less computation in order to conserve some of those resources. (There is an irony for Utilitarianism here, but remember that achieving a local maximum is better than triggering a local minimum by failing to achieve the perfect maximum. A sketch of this kind of budget-limited search follows.)
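As a minimal sketch of the idea (entirely my own illustration; `expected_value`, the candidate `actions`, and the time budget are invented stand-ins): an anytime search that stops when its resource budget runs out and settles for the best action found so far.
Code:
import time

def choose_action(actions, expected_value, budget_seconds):
    # Anytime search: evaluate candidates until the computation budget
    # runs out, then return the best action found so far. A good-enough
    # local maximum beats burning every resource chasing a perfect
    # answer that imperfect knowledge could never certify anyway.
    deadline = time.monotonic() + budget_seconds
    best_action, best_score = None, float("-inf")
    for action in actions:
        if time.monotonic() >= deadline:
            break  # out of resources: settle for what we have
        score = expected_value(action)  # an imperfect estimate, not omniscience
        if score > best_score:
            best_action, best_score = action, score
    return best_action

# Hypothetical usage: a rough scoring model over ten thousand candidate acts.
best = choose_action(range(10_000), lambda a: -(a - 42) ** 2, budget_seconds=0.01)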
It is entirely possible that two different actors, confronted with the same situation, may choose different courses of action because their past experiences have led them to form different models of the same world. If their two realities are linked, it is far more important that agents are able to evaluate the morality of potential actions and positions reasonably, based on the available evidence, than that they reach exactly the same conclusion every time.
edit: There is a big difference between believing all penguins to be black based on previous observation, and believing all penguins to be black as a matter of dogma.
Suppose a green penguin exists. In the former case, it is possible to convince someone by showing them actual evidence of the green penguin. In the latter, they will either deny the green penguin's existence contrary to all evidence, or undergo a moral system failure, which requires rebuilding or abandoning the belief system that was dogmatically committed to all penguins being black. A toy sketch of the contrast is below.
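Putting the contrast in code (again purely my own illustration; the believer functions and the list of sightings are invented):
Code:
def empirical_belief(observed_penguins):
    # Belief derived from observation: it revises itself when the evidence does.
    return all(colour == "black" for colour in observed_penguins)

def dogmatic_belief(observed_penguins):
    # Belief held as dogma: the evidence is never consulted at all.
    return True  # "all penguins are black", no matter what turns up

sightings = ["black", "black", "green"]  # a green penguin is observed
print(empirical_belief(sightings))  # False -- the belief updates to fit the world
print(dogmatic_belief(sightings))   # True  -- the evidence is simply ignored
The second function can only ever be "fixed" by rewriting it wholesale, which is exactly the moral system failure described above.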