Tom Jump’s moral theory

0. Introduction

Tom Jump is an atheist YouTuber with a prolific output, making videos several times a week, mostly debating Christian philosophers. Recently, he did a debate with Ask Yourself in which they discussed whether there are moral facts or not, with Jump arguing that there are. Jump’s position is that there is something like a ‘moral law’, which has similarities to physical laws. Their debate descended into a squabble over whether a specific statement of Jump’s expressed a proposition or not. I don’t want to follow that part of the discussion, but I do want to look at Jump’s theory as (I think) he intends it, and to point out some of the problems it has.

1. What is the theory?

As I said above, Jump thinks that morality is objective, in a similar way to physical laws. Objective in this context means that it exists independently of any minds. We take physical laws, like the law of gravitation, to obtain in the universe regardless of whether there are any minds present. If a moral law is objective, then it too would obtain without any minds in existence.

Jump’s idea is that objective morality is defined in relation to the notion of involuntary impositions. I think we can have a go at reconstructing his idea as follows:

Action x is morally wrong iff x is an involuntary imposition on A, for some agent A.
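To make the logical form explicit (this is my own rendering of the reconstruction above, not Jump’s wording), the definition can be written as a biconditional quantifying over agents:

$$\forall x\,\big(\mathrm{Wrong}(x) \leftrightarrow \exists A\,(\mathrm{Agent}(A) \wedge \mathrm{InvImp}(x,A))\big)$$

where $\mathrm{InvImp}(x,A)$ reads ‘x is an involuntary imposition on A’.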

Part of the problem here is that we need to get clearer on what it means for something to be involuntary, because on various ways of understanding the term we run into trouble.

2. First go

On one way of thinking about it, involuntary means something like ‘not actively consented to’. On this reading, when things happen to someone that they have not specifically chosen, those things are immoral. And sometimes that is right; sometimes things we haven’t actively consented to are immoral.

But this definition of ‘involuntary’ cannot be the right one for an account of what it means to be immoral, because if it were then it would classify things as immoral that are obviously not. For example, surprise birthday parties are not immoral, yet the recipient has ‘not actively consented to’ them happening. So it isn’t the case that being immoral is a matter of being an involuntary imposition on someone, if involuntary means ‘not actively consented to’.

3. Second go

One might think that the problem with the surprise birthday case is that it isn’t involuntary unless you have actively stated that you do not want a surprise party. So maybe we could improve things by changing the definition of ‘involuntary’ from ‘not actively consented to’ to something like ‘against stated preference’. So before, the surprise party was immoral just because you didn’t say anything about the party, but now it would only be immoral if it went against your actively stated preferences. Assuming you have never actively expressed a preference for not having any surprise parties, it would not be an involuntary imposition on you for your friends to throw one for you, and so not immoral. This is an improvement, because it doesn’t misclassify surprise birthday parties.

And there is something fairly intuitive about this idea. Certainly, sometimes things that go against our actively stated preferences are immoral. If someone tells you they do not want to have sex with you, but you continue to try to have sex with them, then this would be a case of sexual assault, and clearly immoral.

But again, despite this partial alignment, this definition of involuntary cannot be what it means to be immoral. That’s because there are obvious cases where things go against our stated preferences, and are thus ‘involuntary’ in that sense, but are not immoral. Imagine I go into a bar and order a beer. After I have finished it I state that my preference is for it to be on the house. If the bartender insisted that I have to pay for it, this would make it an ‘involuntary imposition’, because it is against my stated preference. But it is not an immoral thing for the bartender to do; he is quite within his moral rights to charge me for the beer, regardless of whether I stated that I would prefer not to pay for it. So there are obvious cases of things that are against my stated preferences which are not immoral.

4. Third go

As another try, we might say that something is involuntary if it is ‘against my desires’. We might think that the problem with our previous two tries to define ‘involuntary’ is that they were about whether we do, or do not, say something in particular. In contrast, we might think that it is about whether we have a desire or not, and not about what we say at all. So let’s define ‘involuntary’ as ‘against our desires’.

This would help with the surprise party example, as follows. Assuming I am the sort of person who enjoys surprise parties, then even though I haven’t actively stated that I consent to it, it wouldn’t be involuntary as such, because it wouldn’t be against my desires. It is the sort of thing I would have consented to had I known about it, because I desire that sort of thing to happen. So it is not involuntary, and so not immoral. So far, so good.

However, this is no help in the bartender case. I might just order a drink and desire for it to be on the house, but not say anything out loud. Is the bartender doing something immoral by charging me for the beer? No, clearly he is not. So this definition faces the same problem: sometimes things happen that we don’t want to happen which are not immoral. Too bad.

This version also has problems from the other direction too. The problem is that sometimes people have immoral desires. Take a heroin addict who asks for his doctor to prescribe him some heroin. Clearly, the addict desires the heroin. Prescribing it to the addict wouldn’t be an involuntary imposition on him. But it is at least of dubious moral value for the doctor to do, if not outright immoral, even if the doctor wants the money being offered. Take another example: maybe some unstable (North Korean?) dictator asks an advanced country (the UK?) if he can buy some tanks from them. Clearly, it wouldn’t be an involuntary imposition on him to sell him the weapons, and maybe the other country wants the money. Still, just because both parties desire it doesn’t mean it is not immoral. We can easily iterate these examples.

5. Fourth go

One standard way to respond to these sorts of objections is to retreat from what people actually desire, to what they would desire if they were in some idealised state; if they were perfectly rational, etc. We might think that the heroin addict happens to desire another hit, but that he is just suffering from a lack of rationality. If he were being perfectly rational, then he would not desire to have more heroin; he would desire to get clean instead. And there is something intuitive about this particular example.

However, I think it is not so straightforward. The connection between rationality and desires is surprisingly complicated, and something debated at length by philosophers. One simple view, known as ‘Humeanism’ in the literature, is that someone is rational when their actions efficiently realise their desires. If I desire not to get wet, then knowingly walking into the rain without an umbrella is irrational. But, change the desire and the very same action becomes rational – if I want to get wet, then leaving my umbrella behind is rational.
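As a rough gloss (my own formalisation, not a quotation from the literature), the Humean picture can be put like this: an action is rational for an agent just in case, given the agent’s beliefs, it does at least as well at satisfying the agent’s desires as any available alternative:

$$\mathrm{Rational}_S(a) \;\leftrightarrow\; \forall a' \in \mathrm{Options}_S\; \big(\mathrm{Sat}_S(a) \geq \mathrm{Sat}_S(a')\big)$$

Here $\mathrm{Sat}_S(a)$ stands for how well $a$ serves $S$’s desires, assessed relative to $S$’s beliefs about the world. Nothing in this schema constrains the content of the desires themselves, which is exactly where the next problem comes from.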

The problem with this simple theory is that if you change the desire to, say, wanting to do something immoral, then the rational thing becomes whatever efficiently realises that desire, which would be to do something immoral. So if you want to kill someone, it might be rational to hit them over the head with a spade. Clearly, there is no guarantee that a perfectly rational person would have no immoral desires on this theory.

We could avoid this problem by abandoning the simple Humean theory. Instead of saying that only desires can motivate us, we could include beliefs too. Being rational might mean something like doing whatever realises your desires but is not believed to be immoral. So take someone who believes that it is wrong to murder people, but desires to kill you. He would be irrational if he hit you on the head with a spade because, although his action realised his desire, it contradicted his beliefs about what is immoral.

But if someone had Jump’s starting point, then this option would collapse the whole project into circularity. We would have been led down the following path: the definition of immorality involves involuntariness, which (on this idealised reading) involves rationality, which itself involves the notion of beliefs about immorality, and the whole thing becomes a circle. We were offered a definition of immorality which in turn used the notion of immorality.

Thus, Jump is left with a dilemma: either tacitly include the notion of immorality in the definition of rationality, leading to circularity, or stick with Humeanism, and the problem of immoral desires.

6. Final thought

Even if this huge problem were somehow avoided, there is another one that is perhaps even more pressing. The whole point of this theory was supposed to be that it was a theory of objective morality. That means that the moral law that Jump was trying to express (which was supposed to be a bit like a physical law) doesn’t depend on minds to be true. But that is not the case here. If something is immoral when it is involuntary, then it depends on the person having some kind of intentional state, some desire or ‘will’, for the action to run contrary to. In a world where there were no people, there would be no wills for any action to be in contrast with, and so nothing would be immoral. There would be no true proposition, such as ‘x is immoral’, just because there would be no person on whom x would be an involuntary imposition. Thus, this theory is blatantly a variety of subjectivism and not a version of objectivism at all.

 

42 thoughts on “Tom Jump’s moral theory”

  1. Another great post. Enjoyed reading this.

    However, I think your last point — about subjectivity — fails. Consider the statement S: ‘if action phi is F then phi is immoral’. Suppose that F is filled out by some mental characteristic, eg, contrary to desires, and that S is considered an ordinary conditional statement, eg, S is not a counterfactual. In that case, S might be true even if there are no minds. At worlds where there are no minds, S is trivially true. At worlds where there are minds, S is true if actions contrary to desires are immoral.

    Suppose that S is a counterfactual. In this case, S could be necessarily true if in every world where someone phis and phi has feature F, their action is immoral.

    So I think S can be a mind independent truth even though S talks about minds.

    As far as your other objections go, Jump’s theory reminds me of preference utilitarianism. But preference utilitarianism can handle objections that Jump’s theory cannot handle. For example, your bartender example seems to involve a tension between the patron’s preference for the bar to pay the tab and the bartender’s preference for the patron to pay. The preference utilitarian could argue that the patron should pay because, in this case, the bartender’s preference overrides the patron’s. Do you think Jump could fix some of the problems with his view by making his view into preference utilitarianism?

    1. Hi Dan! Glad you enjoyed this.

      On your first point, take some classic subjective theory, according to which x is good iff I desire x. If I don’t exist, that is trivially true. So doesn’t that mean that even this theory isn’t subjective? Is it possible for a theory to be subjective??

      I agree that “S can be a mind independent truth even though S talks about minds”. For instance, “It is wrong to cause any mind to feel pain”. That could be true even if there is no mind.

      I guess I was thinking of (especially the last version of) Jump’s theory as being dependent in a more significant way. Is eating a chocolate morally wrong? Only if doing so is an involuntary imposition on someone. But if it isn’t, then it isn’t. While you can say that involuntary impositions are wrong even if there are no minds, you can’t say that any given action is wrong without any minds existing. Does that address your point?

      And yeah, he could rescue his theory quite a lot by adopting something like preference utilitarianism.

      1. Alex — Glad to hear from you. I’m usually inclined to say that a statement is subjectively true if (roughly) the truth of the statement covaries with the subject who utters the claim. For example, indexical statements are subjective. Plausibly, for any given subjective statement of that sort, we can always give a truth functionally equivalent statement that is not subjective, e.g., “I have a headache” is truth functionally equivalent to “Dan has a headache”. Someone who wants to endorse a full-blooded subjectivism would presumably need to deny that all indexical statements are truth functionally equivalent to non-subjective statements. In any case, maybe I’m being idiosyncratic here and no one else shares my intuition.

        Perhaps the thought is that a statement is subjective just in case the truth of the statement varies across possible worlds according to whether a given world contains minds. In that case, a statement like “if I harm any mind then I have done wrong” would fail to be subjective because that statement does not vary in the right way across possible worlds.

        Here are two more ways to drive home the intuition that Jump does not appear to be a subjectivist.

        1. Given that Jump’s view seems fairly close to preference utilitarianism and I wouldn’t ordinarily be inclined to call preference utilitarianism a subjectivist theory, I’m inclined to say that Jump’s theory is not subjectivist.

        2. Jump’s theory is close to moral claims that might be made by other moral theories that are not considered subjectivist. For example, let’s imagine a Kantian who argues as follows. We could not consistently will for everyone to live by a maxim in which agents place involuntary impositions on other agents. Therefore, we have a categorical imperative not to place involuntary impositions on other agents. Set aside whether an argument like that really works and let’s suppose that a Kantian really can muster an argument like that. In that case, would we say that the Kantian has a subjectivist view? If not, notice that, like Jump, they have a universal law according to which we ought not place involuntary impositions on other agents. If Jump’s view is subjectivist, then why isn’t the Kantian’s?

      2. I take it that subjectivism in meta-ethics means something like that moral claims, “x is wrong” etc, are reducible to claims about someone’s attitudes, like “Dan dislikes x”. In this sense, you need people with the right sort of attitudes to exist for there to be any (non-trivial) moral truths. So Jump’s theory seems to be subjective in that sense; “x is wrong” means something like “x goes against Dan’s will” or something. If there were no agents, there would be no true (non-trivial) propositions like “x is wrong”.

        Another way to think about it is that what is wrong or not depends on the choice of agent. So if something can be wrong for me, but good for you, then it is subjective. This seems closer to your intuition. Then Jump’s theory looks like it meets this too, as x might be against my will, but not yours. Maybe you enjoy being punched in the face, but I don’t, etc.

        I wouldn’t call preference utilitarianism subjective as such, but then I would see it as a theory in normative ethics, not meta-ethics. So it helps us decide *which* actions are the most good (the ones that maximise preferences), but it doesn’t tell us *what it means* for an action to be good. Jump’s theory seemed to be an attempt at meta-ethics, as he put it forward as a *definition* of morality.

        Same thing with the Kantian. I think you can have objective and subjective versions of Kantianism too.

      3. Thanks for your reply!

        Your response was clear and insightful. I think you correctly diagnosed much of what was wrong with what I was saying.

        I’ve never listened to Jump, so I don’t know what he says. But your description of what he says makes him sound like someone who is trying to do both metaethics and normative ethics. (And this might make for good symmetry with his theistic interlocutor who is also trying to offer both.) On the metaethical side, Jump thinks that there is a law of nature, that this law has the same metaphysical status as the laws of physics, and that the content of this law is that it’s wrong to place involuntary impositions on others. Because this law is supposed to have the same metaphysical status as the laws of physics, and the laws of physics are ordinarily taken to be objective, I think Jump should be understood as trying to offer an objectivist theory.

    2. If TJump’s position is that “the will of a person is the most important factor in determining the morality of something”, what does his view have to say about suicide?

      If it is your will to kill yourself it would be immoral for anyone to stop you.

      If you do kill yourself you are eliminating your ability to will moving forward.

      What would be more important under his moral system? Letting them kill themselves if it is their will? Or stopping them so they can continue to exercise their will going forward? This example could even be expanded to: “what if the person wanting to commit suicide had kids who weren’t capable of caring for themselves?”

      Good post!

  2. I don’t think that it is TJump’s theory at all. As far as I have seen, when he uses it, it is just a counter to theists’ objective morality claims. It is as made up as god’s objective morality. If they can claim objective morality from god, then he can claim objective morality from some unknown natural law.

  3. I was your big fan until I witnessed the Jump debate. I don’t know, maybe you were primed by some vegan brat against him before the debate or something else. It’s really strange to me how you can be charitable and friendly with people like Slick, Darth or that jittery Craig’s minion from your other debate while seeming to be intentionally thick with Jump. It’s strange for a philosopher like you to not be willing to analyze your own hypothetical deeper than “colloquial” intuition and then get offended by colloquial usage of the word itself. Maybe ethics just isn’t your thing.

    1. I’m sorry you feel that way. I wasn’t primed by anything other than watching Tom’s previous debates. I’ve seen plenty of them, not just the one referred to above.

      Also, I don’t think it is fair to say that I wasn’t willing to analyse my folk concept of morality (or “colloquial intuition”). I think it is reasonable to ask whether the output of applying his algorithm is actually tracking the concept of morality, or if it is just a nearby concept. And if the latter, then he is just using the word ‘morality’ to rebadge something else. Whether my concept is incorrect, or whether he is just changing the subject, is not a straightforward question. It seems to be a question full of philosophical interest.

      I quite concede that I didn’t explore that as well as I would have liked to have done. But I feel like your account of the exchange doesn’t do my position justice.

  4. In everyday English, objectivity is often conflated with facts/statements that have a truth value, and subjectivity with opinions/statements with no truth value. I haven’t watched Tom Jump’s content but I suspect he would say that people’s preferences are themselves subjective (and lacking in truth value) but that whether someone has or doesn’t have a given preference is an objective fact/truth.

  5. All of these seem to be easy to answer:

    Bartender: Yes, it is immoral to force someone to pay if they don’t consent, or to punish them if they don’t pay. Just like any form of imprisonment is always immoral for any crime; it’s pragmatic for the function of the world, but it is always immoral.

    Taking/desiring heroin isn’t immoral

    final thoughts: gravity is objective even if there are no particles with mass for it to affect, same applies to morality… how is that a problem?

    None of these seem to be issues for my model at all.

    1. You didn’t understand the doctor/addict example. I said that it would be of at least dubious moral value to sell the addict heroin, even if the addict actively wants it.

      But all you do is bite the bullet every time. The bartender is immoral for charging the guy for a drink, but the heroin dealer is not immoral for dealing heroin. The problem is that these consequences are not what we would expect for an account of what morality is.

      You are like the guy who says “my definition of morality is: x is immoral iff x is a teapot”. When someone points out that the Nazi Holocaust isn’t a teapot, he just bites the bullet and says “ok then, I guess it’s not immoral”.

      You can apply an algorithm that spits out the result that charging someone for a beer is immoral but dealing heroin isn’t. But you are just as wrong as the teapot man if you think that’s a theory of morality.

      1. I’m not just biting the bullet, I’m claiming my answers are better. “Charging someone under the threat of prison” is not moral/amoral… it’s obviously immoral

        Just the act of creating a tab is amoral; sending them to prison if they refuse is immoral.

        your argument is:
        “2000 years ago we thought slavery (or women rights, lgbt right, w/e) was morally ok, your moral theory says its not… these consequences are not what we would expect for an account of what morality is.”

        clearly this isnt a problem with a moral theory that says “slavery is wrong”, for the same reason its also not a problem with mine.

        And I ‘bite the bullet’ and say, ya, you are wrong, it’s immoral… and based on the pattern of how moral progress works, everyone will see it as wrong in the future.
        Like in Star Trek, where drinks are all free for this same reason.

        How is this a problem?

        I’m answering the question of what the core of morality is, so we can understand what moral progress will be at any time; rejecting some beliefs today is not a bug, it’s a feature.

      2. You bring up the issue of sending people to prison, or “under threat of prison”. This is totally irrelevant, and honestly is a good example of how you aren’t following the nuance of the issues here.

        First thing: I didn’t mention prison. You brought it into this conversation. That should be a hint that you are sliding from the issue at hand towards something else.

        Second thing: I agree that sending someone to prison for refusing to pay for a beer would be immoral (because it is disproportionate). But so what? Another thing that would be immoral as a response to someone not paying for a beer: killing their whole family as revenge. Again, what is the significance of this? Is this supposed to be a rebuttal to what I was saying? We aren’t talking about what sort of violence the bartender can do to someone who doesn’t pay for a beer. I certainly didn’t say the bartender can enforce this with violence or prison. We are talking about whether charging for a beer is morally wrong when someone states that they don’t want to pay for it. It’s a different thing completely. You missed the nuance and went charging off down the garden path.

        What I actually said was that the bartender “is quite within his moral rights to charge me for the beer, regardless of whether I stated that I would prefer not to pay for it”. This has *nothing* to do with any threats he might or might not be using. You totally superimposed onto this that we were talking about something which involved the threat of prison. If I had said “the bartender is quite within his moral rights to send me to prison for not paying for the beer, regardless of whether I stated that I would prefer not to pay for it”, then you would be saying something relevant.

        The point of the dialectic, because your response makes it clear you are not following it, is that I’m trying to get clear on what you mean by involuntary. Does it mean ‘not actively consented to’? Does it mean ‘against stated preference’? Does it mean ‘against what I would have consented to had I known about it’? Each of these has different consequences.

        The bartender example came in when I was talking about the middle option – ‘against stated preference’. I walk into a bar, order a beer, drink it, and then say out loud ‘I don’t want to pay for that’. Does that mean that it would be immoral for the bartender to ask me to pay for it? Doing so imposes on my will to some extent, because by asking me to pay for it anyway he is mildly socially pressurising me (without violence or the threat of prison) to pay for the beer, despite my having expressed a preference not to pay for it. If involuntary did mean ‘against stated preference’, then he is asking me to do something that is involuntary. Yet, clearly I’m the one being a dick here. I just ordered a beer and drank it. I don’t get to just stamp my feet like a little crybaby and say ‘but I don’t wanna’ when it comes time to pay the tab. To paint the bartender as the moral villain here would be nuts. It’s another example of where your theory would misfire if that is the implication it had.

        But you didn’t grasp the dialectic here. You didn’t engage with the nuance of the idea in any way. You just pointed out that it would be wrong to send me to prison for not paying for a beer. Ok, but that is irrelevant. The relevant question is: would it be morally wrong for the bartender to say something like “I don’t care that you don’t want to pay for the beer. You came in and drank it, so you should pay me”? It’s not a moral conundrum. Nobody is confused about what the answer here is. It isn’t morally wrong for the bartender to do that. Not only that, I morally ought to pay for the beer, regardless of whatever I say about whether I want to. But your theory seems to say that the bartender is being immoral merely because I state my preference out loud.

        This isn’t a lethal blow to your theory. But it’s one of a thousand silly counterintuitive consequences it has. It’s death from a thousand cuts. In the end, the sheer weight of counterintuitive consequences just makes it a bad candidate for being an account of morality.

        You characterise ‘my argument’ as:

        “2000 years ago we thought slavery (or women rights, lgbt right, w/e) was morally ok, your moral theory says its not… these consequences are not what we would expect for an account of what morality is.”

        Then you say:

        “clearly this isnt a problem with a moral theory that says “slavery is wrong”, for the same reason its also not a problem with mine.”

        So you are trying to say that the consequences I’m highlighting for your theory are just like if we were (say) people in ancient Rome, and I am defending owning slaves while you argue that it isn’t ok even if our culture seems to think that it is. As if I’m just clinging unreflectively to the prejudices of the day and you are a far-sighted moral theorist ahead of his age.

        But look, the consequences of applying a moral theory *are* used as a way of showing difficulties for that theory. You don’t get to just brush them under the carpet by pretending you are ahead of your time. Consider Philippa Foot’s famous problem for utilitarianism, where there is a hospital that could save five people’s lives but only if they take the organs out of the body of an otherwise perfectly healthy person. According to some versions of utilitarianism, that action maximises utility, and so that is what they should do. But that is extremely counterintuitive. The problem is that we know enough about morality, even if we don’t have a formal theory in mind, to know that we shouldn’t kill the guy in that situation, even if the utilitarian algorithm says we should. The utility ratio is too crude an instrument to characterise all morality, even if it has something important to say about morality. Sure, a utilitarian could say “but in the future we will routinely kill people in that situation to maximise utility”. That guy might be a moral prophet foretelling the future, but he might just be a dick who refuses to see that his algorithm is flawed.

        That’s what I’m saying about your theory. You aren’t dispelling great miscarriages of moral justice with your diagnosis, like how we routinely mistreat animals, or people in far away countries, or whatever. Your theory is saying that dealing heroin is not morally wrong, that surprise birthday parties are morally wrong, and that if I drink a beer and refuse to pay for it I’m not in the wrong but the bartender who asks me to pay for it is. You are not the guy who is ahead of his time, calling out slavery when everyone else was being horrible to each other. You are the guy who says that the holocaust isn’t immoral because it’s not a teapot. Like Foot, I’m showing that the crude application of a simple algorithm misfires, and we can tell by looking at cases where we have a clear idea about what the answer is and the theory gives the opposite result.

        It’s as if you assume that if you say it then it must be right, and so the best way to explain why nobody else thinks the same thing as you is that you are tuned in to how philosophers of the future will think about things. I doubt I will puncture your ego enough for this to sink in, but that’s delusional man. Snap out of it.

      3. You are right, I wasn’t following at all what your point was, because you were conflating involuntary with immoral… You should have asked, “Forget whether or not it’s moral, is this involuntary: [bartender example]”

        moral and voluntary are not the same.

        involuntary means all of those:
        ‘not actively consented to’
        ‘against stated preference’
        ‘against what I would have consented to had I known about it’

        Involuntary is NOT immoral

        Immoral = involuntary impositions

        If it doesn’t restrict your freedoms or force you to do something, it’s not immoral because it’s not an imposition… even if it’s involuntary

        Surprise parties are NOT immoral because there is no restriction of your freedoms or forcing you to do something (no imposition)

        Surprise parties are NOT immoral… as i said in the convo so that would not be a counterintuitive result of my model at all

        you are bad at explaining your points in a way that is easy to understand where you’re going

        Yes, charging for drinks will be seen as immoral in the future, just like in Star Trek.

        I understand your argument is that my model is like the utilitarian one, “too crude”, but you haven’t provided any examples of it being too crude.

        The bartender example is conflating involuntary with immoral: yes it’s involuntary, no it’s not immoral.

        Surprise parties are not immoral on my model.

        Dealing heroin isn’t immoral; libertarianism agrees with that, so it’s not too counterintuitive, and we will not see it as immoral in the future.

  6. I greatly enjoyed your debate with Tom, Dr. Malpass. One thing I was wondering about Tom’s “Best of All Possible Worlds” is what if one person wants to move a rock but another person wants to keep that rock still? If one person can exercise their will, then the other person’s will is restricted, and vice versa. Either my will is imposed upon or their will is imposed upon.

  7. I really enjoyed this debate, Dr. Malpass. One thing I thought was interesting about Tom’s “Best of All Possible Worlds” was: what if two people have opposing wills? Let’s say person A wants to move a rock but person B wants the rock to stay still. Either A’s will is involuntarily imposed upon or B’s is.

  8. I will take a crack at your question Clayton Weaver
    “Let’s say person A wants to move a rock but person B wants the rock to stay still. Either A’s will is involuntarily imposed upon or B’s is.”

    a) In the real world, the happiness of persons A and B is constrained by the position of a rock. (If the rock moves from position X, A is happy, but if it stays there, then A is sad. And the opposite is true for B.)

    b) A constraint on happiness is an involuntary imposition of will/restriction of freedom.

    c) In the best of all possible worlds, there won’t be any involuntary imposition of will/restriction of freedom. So in the best of all possible worlds, one’s happiness cannot be constrained by the position of a rock.

    d) Therefore, the conflict of interest described in the above scenario won’t arise in the best of all possible worlds.

    e) In fact, no conflict of interest can arise in the best of all possible worlds because physical objects won’t be able to constrain your freedom/happiness. The only reason why conflicts of interest arise in the real world is that physical factors constrain our freedom/happiness.

    1. I think this just highlights the implausibility of TJ’s idea of the ‘best of all possible worlds’. If there is no involuntary imposition of will at all, then you (somehow) cannot throw your friend a surprise birthday party, or have the opportunity to save someone from an oncoming train, etc. When I talked to him about it he seemed to say that you just couldn’t interact with other agents in a morally significant way at all in the best of all possible worlds. So it’s a world with no morally good or bad actions.

      At the end of the day, it’s not clear we have to accept that there is a ‘best’ world. It’s assuming there is an ordering on the worlds which has a greatest element. But why think that? Like the natural numbers, they could be ordered in such a way that there is no greatest element. Or the relation could be partial, such that, for some pairs of worlds, there is no fact about which of the two is better. As far as I know TJ just adds it in to his theory without any argument (or even awareness).
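      To make the ordering point concrete with a toy example (mine, not TJ’s): suppose each world gets a score on two dimensions, say $w = (\text{autonomy}, \text{wellbeing})$, and $w \preceq w'$ only when $w'$ scores at least as high on both. Then $(3,1)$ and $(1,3)$ are simply incomparable, and a chain like $(1,1) \prec (2,2) \prec (3,3) \prec \dots$ has no greatest element. So nothing about such an ordering guarantees that a ‘best’ world exists.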

      1. apmalpass

        “If there is no involuntary imposition of will at all u won’t have the opportunity to save someone from an oncoming train”.

        Think of it from a doctor’s perspective. For a doctor, an ideal world is one where nobody gets sick. But what if someone came along and said, “But if no one gets sick, you won’t have the chance to cure anyone”. I imagine the doctor will say, “So what?”. The fact that you won’t have to cure anyone is a good thing. The fact that no one has to be saved from a train is also a good thing. I don’t see how it invalidates the concept of an ideal world.

        “It’s a world with no morally good or bad actions.”

        Its a world with only morally good actions and no bad actions. Because everything you do will only lead to desirable consequences (for you and also for others). So all actions are necessarily moral in the best of all possible worlds.

        I think of TJump’s ideal world as the Christian heaven, where u can do whatever u want but you won’t harm anyone as there won’t be a conflict of interest (as I explained in my previous comment). Anyone in Christian heaven will necessarily perceive it to be good. So we can think of the Christian heaven as an objectively good world. A Christian heaven does not and cannot exist in real life. But it is an abstraction/ideal, just like an ideal triangle. If we maximize the freedom/autonomy of individuals in the real world, we are moving closer to the ideal world, which is objectively good.

      2. “Its a world with only morally good actions and no bad actions. Because everything you do will only lead to desirable consequences (for you and also for others). So all actions are necessarily moral in the best of all possible worlds.”

        I think that’s too quick. In a ‘perfect’ world, one cannot do various types of morally good actions, such as charity for instance. You can give somebody something, but it isn’t going to be something they actually need, because it’s a perfect world, and presumably that means that people already have everything they need. If that’s right, then charity is like metaphysically impossible. Consider a whole family of morally virtuous actions – charity, sacrifice, forgiveness, generosity, etc – none of them is realisable in the perfect world, because they are all a sort of addressing of some imbalance (you give something you have to someone who needs it and doesn’t have it, or someone does something bad to you but you do not get even and accept them instead, etc). These sorts of things can’t happen in a ‘perfect’ world.

        All Tom means by ‘best’ world is just one where everyone is satisfying their will without constraint. This is a sort of egotistical fantasy world, but it has nothing to do with the sorts of things traditionally characterised as morality, which plausibly cannot even happen in a world without some kind of suffering.

        The interesting thing to consider here is that morally significant actions might be a good in themselves, and if that’s right and they cannot happen in a world without suffering (or thwarted wills) then there is always going to be a compromise somewhere in the ratio of good to bad we have in a world, and never a world with the maximum amount of good and no bad in it.

        Obvs, Tom isn’t interested in this because he just wants to rebadge desire satisfaction as ‘morally good’. As you might be able to tell, I’m not buying that.

  9. amalpass

    ” one cannot do various types of morally good actions, such as charity for instance. ”

    Based on what you just said, if we rid the world of poverty right now, it wouldn’t be a good thing, because we wouldn’t be able to practice charity. I think that goes against the average person’s moral intuition.

    Based on your statements, we should make sure that a certain percentage of the population remains poor and sick, so that the rest of the world can practice charity and medicine. We should not try to minimize poverty and sickness.

    1. No, that’s not what I’m saying. Let’s distinguish between actions and states of affairs. It is good if there are no needy people that would receive charity. That’s a state of affairs that would be good. My point is only that there are no good *actions* without some kind of imperfection in the world.

      You made the claim that in TJ’s perfect world there would only be good actions. I’m just pointing out that if the state of the world has no suffering (one type of perfection) then there are plausible reasons for thinking that various types of moral actions would be impossible. So no actions of those types would be morally good. I’m not actually hearing you disagree with this point.

      One *could* go further and argue that the state of there being moral actions is a good in itself (I’m not sure what to think about that for the record), but if one did think that then it would mean any simple idea of the ‘best’ world is out of the question. This is a further point though, just to illustrate how the idea is more nuanced than TJ wants to admit.

      1. “You made the claim that in TJ’s perfect world there would only be good actions”

        a) The very act of existing in the perfect world would be a moral action. Because if u exist in the perfect world, you are experiencing maximum autonomy, which defines morality. So even though u r not actually “doing” anything in the perfect world…u r existing and having conscious experiences. And I could argue that existing and generating consciousness are actions.

        b) Alternatively I could bite the bullet and tell you, “Fine moral actions don’t exist in the perfect world. So what?”…I mean actions are moral/immoral because of their consequences. For example stabbing someone can be a moral act provided u r performing surgery. Actions serve purposes. And when we reach the perfect world, moral actions will have served their purposes. So they won’t be needed anymore. I fail to see how that is a bad thing.

        “One *could* go further and argue that the state of there being moral actions is a good in itself “–I think I disagree for the reasons stated in point b.

      2. “The very act of existing in the perfect world would be a moral action. Because if u exist in the perfect world, you are experiencing maximum autonomy, which defines morality”

        I’m not sure that’s right either. In TJ’s perfect world, I wouldn’t be able to do stuff that I can do in the actual world. In the actual world I can throw you a surprise birthday party, for instance. Or punch you in the face. Etc. But in TJ’s perfect world I wouldn’t be able to do either of those things because they would be imposing on your will (it’s not clear what would stop me from trying in that world of course). So I would have less autonomy in that world than I do in this. Rather, by restricting everyone’s autonomy (so we can’t do things that impose on others’ wills) we merely get ‘freedom from’, at the cost of ‘freedom for’. Again, it’s far from clear that the world TJ tries to describe is one in which anyone has ‘maximal autonomy’.

        His formula has to be both that impositions on will are bad and that the unimpeded exercise of will is good. But, as I just pointed out, you can’t maximise both. The less I can impose on your will, the less I can express my will freely. You can’t have the maximum of both. So it’s not even clear there can be *a* maximally good world on his view, but just a bunch of messy compromises between how much we get to impose on one another. It starts to just look like the actual world to me when you think it through…

        “Actions serve purposes. And when we reach the perfect world, moral actions will have served their purposes. So they won’t be needed anymore. I fail to see how that is a bad thing.”

        Well, this is supposed to be the morally perfect world, right? I mean, let’s be clear about what it means to be perfect. I might think that the perfect world is where I get to run around killing people randomly. Maybe that’s my idea of a perfect world. But the concept of ‘perfect world’ we are trying to cash out here is presumably something to do with morality, and clearly that world is not maximal with respect to morality. A pretty straightforward thing to think is that in a morally perfect world something like the following is true: everyone always makes the morally best choice in any given situation. That is, the *morally* perfect world is one where everyone is perfectly moral all the time. The problem is that this super intuitive way of understanding the morally perfect world is excluded in principle from the world that TJ describes. That’s because one can never be in a position to do anything usually described as moral, like giving to charity or sacrificing for another person or whatever. If those actions are impossible in that world, how can it be a candidate for the morally perfect world? That’s the problem.

        Now, of course, we can just say “all ‘morality’ means here is ‘maximising will’ (or whatever)”. So then we can say that a world where I can do that more (or where the opposite happens to me less) is better. But what’s that got to do with anything we usually think of as moral? It has nothing to say about injustice or sacrifice or generosity or whatever. It’s just a game of rebadge a word (morality) as something else (desire satisfaction, or whatever) and pretend you haven’t changed the subject.

  10. “In the actual world I can throw you a surprise birthday party, for instance. Or punch you in the face. Etc. But in TJ’s perfect world I wouldn’t be able to do either of those things”

    The fact that people punch each other’s faces in the real world is not a demonstration of freedom but it is a demonstration of a lack of freedom (for both the puncher and the one getting punched). Imagine you punch people because watching people in pain makes you happy. The fact that you cannot achieve happiness without causing physical pain is a restriction of your freedom. In the perfect world, you won’t have to punch someone to get what u want. You will be free from that restriction. So you can punch someone but you don’t have to. And you are never going to. Because conscious creatures want to maximize pleasure by putting in the least amount of effort. And you don’t have to put in the effort of punching someone, it is an unnecessary effort in the perfect world. So you are not less free in the perfect world. But you are more free…More free from the shackles of sadism and psychopathy.

    There are 2 ways the real world can be moved closer to the perfect world for a sadistic person:

    a) We can give people behavior therapy so they are free from the shackles of sadism and psychopathy
    b) If we can’t cure the sadism, we can minimize the conflict of interest. (In the perfect world there is no conflict of interest)…The sadist can become a UFC fighter. So as to surround himself with people who consent to be punched in the face.

    ““all ‘morality’ means here is ‘maximizing will’ (or whatever)”. So then we can say that a world where I can do that more (or where the opposite happens to me less) is better. But what’s that got to do with anything we usually think of as moral? It has nothing to say about sacrifice or generosity or whatever.”

    A world in which freedom/autonomy/will has been maximized is a world where generosity, sacrifice, charity etc have served their purpose. These actions have reached their end goals. So it does have something to say about ” sacrifice or generosity or whatever”…That these actions have served their purpose and we have reached the desired purpose of performing those actions.

    1. “The fact that people punch each other’s faces in the real world is not a demonstration of freedom but it is a demonstration of a lack of freedom (for both the puncher and the one getting punched). Imagine you punch people because watching people in pain makes you happy. The fact that you cannot achieve happiness without causing physical pain is a restriction of your freedom”

      You are adding implausible conditions to the example that help your point. There is no reason to think that everyone who has the ability to punch someone in the face does so because they can’t be happy unless they cause others pain. The simple fact is that people without that weird psychological constraint have the ability to punch others in the face. Anyone with arms has that ability. So you can cook up some weird scenario where it looks like a lack of freedom, but unless that generalises to everyone in the class, it’s no help to the general point. And to make things even worse for you, the action was obviously arbitrarily chosen. Any imposition on will would do, not just punching. So for your reply to be salient you would have to show that every imposition of will is caused by some psychological condition that constitutes a lack of freedom for the individual. Good luck dreaming that up.

      On your final point, about my “what’s this got to do with morality?” question, your reply is that it shows that the concepts are redundant in the best possible world(s), and that is relevant to morality. But I think you are misunderstanding my question. I’m asking why this world should be considered the morally best world at all when what we usually think of as moral actions cannot even take place here. Pointing out that they don’t take place here doesn’t answer my question.

      Would I prefer to be in a world where charity was not needed? Of course I would. That would be preferable to the actual unequal world we are in.

      But maybe I would prefer even more to be in a world where I was king of the universe and everyone had to do what I say all the time.

      In general there’s no reason to think that the outcome I want the most is the thing that *should* (or *ought*) to happen the most. My desires might be for immoral things, for instance. In general there is often a big difference between what someone wants to do and what the right thing to do is.

      Saying that the ‘best’ situation is one where we all get to do what we want (without imposing on others) might be describing the most desirable outcome (although I don’t even buy that), but it’s not tracking with morality. The notion of ‘best’ is just ‘most desirable’, not ‘morally perfect’. I don’t know how else to say it.

      1. amalpass (sorry for the late reply)

        1) I think the root of our disagreement is that you think maximizing freedom is a self-defeating endeavor. Except that it is not. Even if I grant you that the absence of physical violence in the best of all possible worlds is an example of lack of freedom (which I don’t), it simply does not follow that you have more freedom in the real world than you do in the BPW. You won’t be punching people in the BPW, but there are an infinite number of other things you can do in the BPW that you cannot even imagine in the real world. You can do drugs forever, you can fly, you can gain infinite knowledge, you can be immortal, etc. It is undeniable that the net freedom in the BPW would be infinitely higher than that in the real world.

        2) You seem to imply that morality does not track with maximizing freedom and autonomy. In reality, the moral progress we have made in recent times is all rooted in maximizing freedom and autonomy. For example: abolishing slavery, legalizing gay marriage and abortion, the pushback against factory farming.

        3) The BPW is an objectively good place I think. Good and bad are conscious perceptions. BPW is a place where your ability to do what you want is maximum and conflict of interest with others is minimum. Any conscious creature in BPW would necessarily perceive it to be good. So in that sense we can say that BPW is objectively a good place. It’s not merely the fact that you are able to do what you want. It is that you are able to experience what you want, unrestricted, without harming yourself or others. (I know u will say, “But I can’t punch people who don’t want to be punched.” The answer is…you will have the ability to punch people, but you won’t have the desire. That is because there will be an infinite number of other things that will fulfill your desire more easily.)

      2. On your first point, I think it’s fair to say there are things you can do in this BPW that you can’t do in the actual world (fly, etc), and there are things you can do in the actual world that you can’t do in the BPW (punch unwilling victims in the face, etc). You say it’s “undeniable” that the net freedom is greater in the BPW than the actual world, but it’s not clear what the metric is. It seems to me there are infinitely many things like punching unwilling victims in the face (different angles, different velocities, whatever). So you probably just have a trade off where infinitely many things can happen in one world and can’t in another, and vice versa. Certainly, to show conclusively that there is more freedom in one than the other you need to do a lot more than what you have so far. It seems to me that for every thing you can name that you can do in the BPW I can name something you can’t do and it will end up a draw, and the same goes for the actual world. If that’s right then it’s a score draw.

        On your second point I just don’t agree that it’s a settled observation that autonomy is the thread that connects all examples of moral progress. TJ routinely says it, but that doesn’t mean it’s true. When I spoke with him I pointed out another interpretation, which has to do with not discriminating based on criteria that have no moral relevance (like skin colour or gender). That’s at least plausible. Plus the factory farming one doesn’t fit your pattern. Are you saying the animals’ autonomy is being violated? I would have thought the main problem with factory farming is the pain it inflicts on the animals. In the animal rights literature it’s common to distinguish between moral agents and moral patients. Autonomy is part of this picture, but it’s implausible to reduce everything to one variable. So I don’t buy TJ’s analysis of moral progress. He doesn’t have any arguments for it, he just says vague things about drawing a line on a graph. So I’m not down with this bit either.

        In the last bit you say that the BPW is ‘good’ but you just mean ‘desirable’, which is true. But given that the issue under dispute is whether there is more to morality than what is desired, this seems question begging. I mean, so what if it is desirable? We are thinking about whether it is the morally best world.

      3. amalpass

        This is a reply to your comment on June 6, 2020 at 2:53 pm (your most recent comment, that is). The reply option is not there for some reason, so I am replying here.

        1) “So you probably just have a trade off where infinitely many things can happen in one world and can’t in another, and vice versa.”
        That is again simply not true. You cannot do an infinite number of things in the actual world. You are limited by time, you will eventually die. You are limited by other physical factors. You also have limited knowledge and imagination. But these limits are not present in the best of all possible worlds. So even if I were to grant you that you are losing freedom as nobody is punching anyone (which I don’t)…you are trading limited freedom for infinite freedom.

        2) You implied that you can express all the moral progress we made using things other than respect for autonomy. For example, in the case of factory farming, we want to reduce pain and suffering for animals. That does not negate respect for autonomy as the central driving force necessarily. It could also mean that respect for autonomy and reduction of pain necessarily entail each other. We can define pain and suffering as “that which is not desired while being experienced”. I stole that definition from Alex O’Connor aka Cosmic Skeptic. Anyways, if you define it that way, then reduction of pain is a part of respecting the autonomy of animals.

        3) You said that just because a place is necessarily desirable does not mean it is good. You seem to be willing to grant me that BPW would necessarily and maximally be perceived by conscious creatures as desirable/good. But that does not make it objectively good. This is a fundamental disagreement that we have. “Good” and “bad” are conscious perceptions/psychological phenomena. If X necessarily and maximally generates the psychological state of pleasure/desirable/good, then I think it is fair to say that X is objectively good. Same way…sugar is sweet because it generates the conscious perception of “sweet” via taste buds. If conscious creatures necessarily perceived sugar to be sweet, it would be objectively sweet. Of course, that does not happen, as some conscious creatures might have their sense of taste impaired. But in the case of BPW, conscious creatures must necessarily and maximally perceive it as good. If you are going to claim that the conscious perception is not enough to qualify BPW as objectively good and that something else is required, then you have a burden of proof. You have to demonstrate that “good” and “bad” are tangible characteristics of an object like color, shape and size and can exist independent of a conscious mind. Or that it is something that can exist out there in the universe outside the brain/mind of a conscious creature.

      4. Take a sort of parody of TJ’s BPW – the ‘worst of all possible worlds’, or WPW – where one can do anything but only if it is an imposition on another’s will. In this world there are infinitely many things you can do. To see this just think about pairing the things you can do in the BPW with unwilling victims. If I can punch a willing partner in the face in the BPW, then I can punch an unwilling partner in the face in the WPW. If I can have sex with a willing partner in the BPW, then I can rape an unwilling partner in the WPW. I can do things like flying in the BPW, but presumably only if nobody else wills that I don’t fly. If they did, then my flying would be a violation of their will. So in the WPW, I can fly just so long as someone doesn’t want me to (maybe someone gets increasingly angry the longer I fly for or something). Likewise, I can know infinitely many things in the WPW, just so long as they are things people don’t want me to know (like their embarrassing secrets or whatever). Etc, etc. So, is the WPW one in which I am more free than in the actual world? I think you have to admit that I am. Am I more or less free in the WPW than the BPW? It seems it has to be a tie, because I can pair actions between them. If so, why think that the BPW is the end point of morality rather than the WPW? If we are just guided by maximising freedoms, then we should be indifferent between these two worlds.

        I anticipate the response is that in the WPW we routinely suffer at the hands of each other’s actions. Our wills are being violated all the time. So even though I have more freedoms, I’m going to have my will violated by everyone else doing things to me like torturing me or punching me in the face, etc. That’s fair enough.

        But then in the BPW, we can’t do a whole bunch of stuff that we can in the WPW. Anything, in fact, that imposes on someone else’s will. So I’m not free to do those things in the BPW. I don’t see why this doesn’t make up the gap between the two worlds though. The WPW includes a bunch of stuff that I don’t want (like me getting punched in the face), and the BPW doesn’t include a bunch of stuff I do want (like me punching you in the face). I don’t see why these don’t balance out perfectly.

        On two, I’m not sure I care too much about this. You could argue that moral progress is entirely reducible to increasing autonomy. I don’t think that’s right. I don’t think all cases of morality are reducible to anything simple, like autonomy or pleasure or whatever. I’m more of a particularist than a universalist about this, I think. But I can’t prove this is right. And I think you can’t prove you are right either. All I’m insisting on is humility about that. It might be true that moral progress is entirely to do with autonomy, but it’s not clear that it’s true. Animal welfare isn’t a clear case of increased autonomy. If you do what O’Connor does, you run into the problem of masochism – masochists desire pain (of a certain sort), but that should be logically impossible if the definition were right. You could spin a line to explain that, I guess, but I find that kind of tedious. There are experts on moral progress and a huge literature on it. I’m not an expert, and I dare say you aren’t either. Neither is TJ. All I’m insisting on is that it’s more complex than he and you seem to want to make it out to be.

        And I think this is my main issue here, at the end of the day. I’ve studied philosophy for about 20 years, and it is more complicated than TJ will admit. He just wants to do it all on his own and win some big intellectual arm wrestle. That’s not for me. Engaging with him brings that side of me out, and I don’t enjoy it. I would rather work with people who have an open attitude than with someone trying to be the big macho man all the time.

        I think I’m almost done with this topic tbh

      5. apmalpass says:
        Response to your comment of June 4, 2020 at 7:16 am (your most recent comment, that is).

        I am actually very grateful that you have interacted with me for so long. I am an admirer of your work, and I enjoyed your debate with William Lane Craig. I promise to make this my last comment; I have already taken up too much of your time.

        a) To respond to your worst of all possible worlds example: you cannot do as many things there as you can do in the BPW. In fact, you can’t even do as many things there as you can do in the real world. Why? Because everyone will constantly impose on each other’s wills, so pretty soon everyone will be dead, and that will put a stop to your activities. It is possible that you can list an infinite number of activities you could potentially do in the WPW, but you cannot actually do every single one of those, because you will be dead pretty soon. In the BPW, not only can you list an infinite number of activities, you can actually perform all of them.

        b) I don’t think I am the only one advocating for a simple foundation of morality. I am a bit familiar with Shelly Kagan, and in his book Normative Ethics he wrote about ideal contractarianism and ideal observer theory as two possible foundations for morality. In his debate against William Lane Craig, he defended ideal contractarianism as a foundation for morality. Ideal contractarianism (IC) is a theory on which the moral actions are the ones that would be agreed upon by perfectly rational bargainers setting up all the rules of society. IC is a relatively simple concept and seems to compare the real world to a hypothesized ideal world, which is similar to what TJump’s theory does. So I am curious as to what makes TJump’s model especially bad as opposed to any other model of objective morality.

        c) I am curious to know what your favorite moral theory is. You said during the TJump debate that you are somewhat of a moral realist, as far as I remember. So I was wondering: which moral theory do you think comes closest to explaining morality?

  11. My definition of consent: If Person X was omniscient and could look over all possible actions and put them on a list of “allowed” or “not allowed” to happen to them, the ones on the allowed list are consensual.

    Surprise birthday parties would presumably be on the “allowed” list, so you can still do them and they are consensual/not immoral.

    Charity is also on the allowed list, as well as all moral actions. You would need to choose to be impoverished and choose to allow people to assist you, but in the BPW you can do both, so moral actions are still possible in the BPW.

    Impositions are bad
    Assisting wills is good
    Non-impositions are just amoral

    The BPW is where everyone gets their own world and does exactly that: decides what actions are allowed in your (and only your) world. Your “will” only applies to yourself in your universe (you have maximal autonomy over those things only). So if there is a rock one person wants to move and one doesn’t, the morality is determined by whoever’s universe it is… if it’s your rock (in your universe), your will is the relevant one to the morality of it moving.

    If you choose to enter someone else’s universe you need to agree to their rules, in which case you may have voluntarily limited your autonomy.

    As long as the number of involuntary impositions of will = 0 (at any/all points of time in the world’s existence), I define any such world as the BPW. Even though there can be many different worlds that could meet that criterion, any/all of them are ‘the’ or ‘a’ BPW. It’s just a label.

    1. “If Person X was omniscient and could look over all possible actions and put them on a list of “allowed” or “not allowed” to happen to them, the ones on the allowed list are consensual.”

      I don’t understand what this is supposed to be doing. For a start, it’s surprising you are appealing to an omni property, as I thought you objected to them. But secondly, I don’t understand what criteria the omniscient person is supposed to be using. There are tons of ways we could split up all the possible worlds into two groups. What makes a world qualify for one pile and not the other? How is his being omniscient supposed to help? Does he already know which pile each world belongs in? Does he know which pile he will put the worlds in before he does it? What’s going on here? If theta is some condition that the omniscient person uses to distinguish an allowed world from a not allowed world, why not just give that and forget this pretend omniscient agent?

      “Surprise birthday parties would presumably be on the “allowed” list”

      But why think that? You ask us to imagine an agent splitting worlds up into two groups. Now it “presumably” follows that some given world is in one pile and not the other? Without some reason that links them together (which your definition doesn’t provide) we have no way of telling which pile any given scenario would go into.

      So there must be something going on in the background with your idea that you are not spelling out. Just pointing to an omniscient mind sorting worlds, but not saying how he does that sorting, doesn’t help at all.

      It might even make things worse. Can an omniscient agent be surprised by anything, including a birthday party? If not, how can he even exist in a scenario where he is being thrown a surprise birthday party? If the analysis means that some straightforward things like this are suddenly problematic, then how is it supposed to help us evaluate things like the morality of surprise birthday parties? It’s like you are saying “take moral situation X; the best way to evaluate it is to imagine an agent who cannot be in that situation”. How is that going to help?

      “You would need to choose to be impoverished and choose to allow people to assist you, but in the BPW you can do both, so moral actions are still possible in the BPW.”

      Choosing to be impoverished doesn’t qualify, I’m afraid. It’s just a sort of pretend poverty. If you live in a slum but at any point could click your fingers and be transported back to luxury, you are not really impoverished. You are just playing a role. And if I donate stuff to you while you are pretending to be poor, I haven’t really done anything to help you, because you could have done that at any point yourself by just willing it to be so. There is no real difference between you deciding to accept my offer and you deciding to give the same thing to yourself.

      In contrast, real poverty is bad partly because it is not a self-imposed thing that one can end at the snap of the fingers. Part of the value in helping someone in that situation is doing something for them that they couldn’t do themselves. There is a modality of moral actions that your pretend version lacks when everyone can do whatever they like all the time. So, no, I dispute your claim that moral actions are possible in your imagined world. Only play-acting versions of morality are possible. Just like if someone acts the role of a pauper in a play, that’s not actually poverty.

      1. Involuntary impositions of will/violations of consent are immoral.

        This definition of consent just outlines what a particular person wills/consents to, and doing anything to them that they decide is on the “not allowed” list is immoral.

        BPW = everyone gets their own world and decides on the list for themselves however they want.

        Hitler can choose to put murder on his world’s “allowed” list and murder would be possible in his world, but you could only murder people who consented to being in his world/consented to being subjected to being murdered.

        It’s moral to help someone carry their groceries even if they don’t need the help and could do it themselves… so yes, you can still do moral actions.

        A world with no involuntary poverty is morally superior to a world where you can give to the poor and, if you don’t, they die of hunger. If that kind of moral action isn’t possible in the BPW, good riddance.

      2. “This definition of consent just outlines what a particular person wills/consents to, and doing anything to them that they decide is on the “not allowed” list is immoral”

        Well, I’m not sure that’s how the definition you gave in the last post works. Let’s take an example. Suppose you say “I consent to you punching me in the face”. Suppose also that your will is such that you want me to punch you in the face (for some reason). But now suppose that it’s also true that, if you had been omniscient, given the special insight all that knowledge gives you, you would have put the “getting punched in the face” action into the ‘not allowed’ pile. In that case, it doesn’t meet the definition of consent you gave previously, even though, given what you put above, you obviously think it is a case of consent. My point is just that it’s logically possible for a regular person to will for something to happen and for an omniscient version of themselves to put it in the ‘not allowed’ pile. So then it looks like we have two logically distinct ideas floating around here. It doesn’t help; it just makes things more confusing.

        “Hitler can choose to put murder on his world’s “allowed” list and murder would be possible in his world, but you could only murder people who consented to being in his world/consented to being subjected to being murdered”

        This depends on what we mean by murder. In this example they would be killed, but not all killing is immoral, and not all killing is murder. Often the distinction between killing and murder is merely legal, but that won’t help here, as there is no legal system the controller of the world is subject to (they could presumably will for whatever legal system to be in place at any time, etc). So the question is just what the moral distinction between murder and killing is supposed to be. We could with some plausibility argue that killing in self defence isn’t murder (isn’t morally wrong) because self preservation is morally relevant. Presumably you like this idea because it imports easily into your will imposition model. Ok. But then consider whether a victim who actively and knowingly signs up for it to happen (like someone who allows himself to be killed so that the rest of the starving prisoners can eat his body and survive a bit longer, or whatever) is actually being ‘murdered’ at all. It seems at least as plausible as the self defence clause that if someone goes into it willingly then it isn’t really ‘murder’, but just killing. This seems to fit with your model as well. The only reason that this sort of killing can happen in your BPW is that the participants have agreed to it, meaning it isn’t immoral after all, meaning that it is merely killing and not ‘murder’.

        But that just shows that in your BPW no morally significant actions can happen. The participants could dress up like murderers, but they can’t actually murder anyone, because (by definition) there can’t be any violation of the other’s consent. So this helps my point, not yours.

        Basically nobody ‘consented to being murdered’ because that’s like ‘being a round square’.

        “It’s moral to help someone carry their groceries even if they don’t need the help and could do it themselves… so yes, you can still do moral actions”

        I’m not sure it is. At least, there is a clear difference between the following. Imagine we see two people exit the supermarket with identical bags of shopping: a massive muscle-bound bodybuilder and a frail old man. Clearly, the old man needs more help, and I think it is equally clear that helping him over the bodybuilder is the morally preferable thing to do. What this shows is that need is relevant to morality. Now, if we jump over to your fantasy world where the controller can do anything they want all the time, then if I offer to help him I am offering to help someone who is in even less need of my help than the bodybuilder (because the controller is essentially omnipotent and can make the shopping float home on its own if he wanted to). Offering to help that controller, it seems to me, is morally neutral. And that’s precisely because he has no need at all (not even a tiny bit) for the help.

        “A world with no involuntary poverty is morally superior to a world where you can give to the poor and, if you don’t, they die of hunger. If that kind of moral action isn’t possible in the BPW, good riddance.”

        So here is a sort of category mistake, it seems to me. On the one hand, I agree that there being no poverty is morally better than there being poverty. Poverty is bad. But where I disagree is that I don’t think worlds are the bearers of moral value as such. Rather, I think it is primarily *actions* (and possibly people, in virtue of their character, but that’s a rabbit hole) that are properly labelled good or bad.

        So if we were picking between sets of choices, like do set A or set B, and A leads to an eradication of poverty while B leaves things the same, then ceteris paribus we *ought* to pick A. So if the state of affairs of there being no poverty is the outcome of a set of actions, then those actions are the right ones.

        But now imagine a mini world that just pops into existence fully formed as your utopia with no impositions of will (no immorality at all), lasts for 5 minutes, and pops out of existence again. While that five minutes is qualitatively identical to the end stage of the first world (the one that arrived at utopia through people’s actions), it seems like a category mistake to say that this world is equally moral. Like, I guess it’s nice for those people who pop into existence in a utopia. Good for them, I suppose. But the existence of that world itself isn’t a morally significant fact. It just exists. This is where the category mistake comes in. What people do can be moral or immoral, but the existence of a world in its entirety isn’t the outcome of what someone does. I mean, a theist might try to say that the existence of the world is the outcome of God’s actions, but we don’t think that’s right. The world exists but isn’t the outcome of anyone’s actions. It just is.

        So it feels like we might just be talking past each other when you talk about the morally best possible world. I just think you are using the word ‘morally’ to talk about something else (desire satisfaction).

        It would be helpful, I think, if you settled whether your theory is one where you are saying “morality as traditionally conceived doesn’t exist, and instead all we have is desire satisfaction (or whatever)” rather than “morality as traditionally conceived is actually just desire satisfaction”. When I spoke with you I tried to get you to see the distinction but you somehow wanted your theory to be both. I think that’s a mistake. It can’t be both. You have to pick one of these and accept the consequences of that. Otherwise you are confusing things. I think this is probably why you think my complaints miss your point. Most of my criticisms are about the idea that this is supposed to be explaining what traditional morality really is. If you just bit the bullet and admitted that’s not your project a lot of this would go away.

  12. If you will to be punched in the face, it would be placed in the allowed list. This list is not immutable; you can will to change it/it’s a representation of your will.

    Yes, murder would not be possible: even if someone said it was allowed in their universe, you would need to consent to being murdered, which would be a contradiction. Immoral actions are impossible in the BPW; that’s the idea.

    Moral actions are still possible. Instead of groceries, think about video games: if I am playing a really hard video game and I don’t know how to beat the boss and am struggling, and a person helps me by explaining how to beat it, or we play co-op to beat it, that person is doing a moral action. I could just turn the cheat codes on and be invincible, but it’s still moral for someone to help me.

    That is possible in the BPW: voluntarily accepted challenges/difficulties, and others helping you to overcome them. You could choose to live in a world exactly like our earth, where the rules are the laws of physics, you are in poverty, it is very hard, and others can help you. The only difference being that you can choose to leave at any time. So moral actions are still possible.

    I’m taking the current view of morality and adding to it. So I accept it and say there is more.

    My view of morality is different from the current view. The current view is that morality applies to actions by agents… I see this as like a command to increase, like a + sign: “do +”. My view is that this idea of morality is incomplete/shallow, and we need to answer the questions of where we are, and what we are adding to.

    Our world has a moral value of 37, and we need to “do +” by helping someone… to get to 38, the ultimate goal being to get to 100 (the BPW).
