25 Comments
Allan Olley

I mean my worry would be that not only are there no moral facts, there are no egoistic facts, no facts about self-interest, either. The reasons for rejecting "self-interest" seem much like the reasons for rejecting "general interest" or "moral interest" to me.

You say "But as we know since Hume, normative reasons lack convergence among agents, hence rendering us incapable of defending categorical imperatives." As far as I can tell self-interested reasons lack convergence within the very same agent. Most obviously across time, but even at the same time.

The smoker smokes knowing he will probably get lung cancer, and then, when he gets lung cancer, regrets smoking. Is there really a definition of self-interest that can reconcile this? Can you say there is some sort of definite self-interest achieved or satisfied without saying either that they were wrong to smoke or wrong to regret smoking? I don't think so: either they were wrong to smoke (the immediate satisfaction of smoking was not good enough reason to risk lung cancer), wrong to regret smoking (the immediate satisfaction justified the risk and so the actuality of lung cancer), or there is not one self-interest; rather it is an incoherent concept, and it both was and was not in their self-interest. The case where we overrule one desire with another is just the kind of convergence we declared impossible between agents, and there is no way to recover it in this case that I can see (taking the non-convergence as given), no new aspect of this situation that would allow convergence here. In the incoherent case no one will consistently arrive at a set of answers in their deliberation; depending on where they start they will come to different conclusions. They can conclude that smoking is so good it is worth getting lung cancer for, and just as easily, with no change in desires or knowledge, conclude that lung cancer is so bad it easily overrides any satisfaction from smoking, and so on. Some may think their desires and evaluation of "self-interest" are coherent, but that hardly seems probative of anything; people incoherent in that way could easily feel that way.

I say this is most obvious across time, but it is also the case at a single time. I think this is plain. At the same moment the smoker may be moved by the desire to smoke and by fear of its consequences. They may tremble with the conflicting desires as they try to resist lighting the cigarette or as they take a drag of it. Basically random changes in the order or timing of deliberation would lead to different actions and outcomes; there is no coherent desire here. More generally, if I desire to survive I at once desire not to change (since if I change too much I am dead) but also to change (since if I stop changing I am also dead); even an individual agent's desires don't converge on anything.

If one can square the circle, smoothing over the desires that contradict themselves and the contradiction of different desires at different times within the same agent to come up with some unified self-interest that is fulfilled, then it should be straightforward to smooth over the conflicts between agents and derive a converging normative reason across them. To me both enterprises of convergence rise or fall together, for better or worse (if indeed those words can mean anything). I don't see much chance of solving one without solving the other.

Luckily, if self-interest is as chimerical as moral interest in this sense, there is no danger it will make us less well-off in some other sense, because given the incoherence of our deliberative judgements (and explicit desires etc.) the details clearly did not matter. Our behaviour would have been much the same (given the same circumstances). Clearly something other than the inferential reliability of our decision-making process was keeping us regular in our behaviour. Perhaps, as the economists like to say, it is "animal spirits"; that seems very Humean. If there is some kind of non-moral, non-self interest, then hopefully our animal spirits, or whatever is regulating our behaviour, responds to that.

Walter Veit

Thanks for the great comment! As a matter of fact, that's exactly what I think: self-interest/prudence in the sense of reasons across an agent's life is also a mistake (though there is much more consistency than across billions of humans). I am a present-aim theorist to use Parfit's terminology. What you have most reason to do depends on your current aims. And it's simply true for most people that their current aims often extend to their future states as well as those of others. But not always.

Neural Foundry

Outstanding argument on the disutility of moralizing language itself. The abortion example is spot-on: moral-facts people create impasses where preference-stackers would just talk tradeoffs. I worked on policy debates where invoking rights frameworks basically nuked any chance at pragmatic middle ground; people assumed we were arguing about cosmic truths instead of resource allocation.

conor king

I stumbled into this arcane debate via BS Brigade’s posts. Reading his posts and others I then encountered led me to the strong view that there is no good case for moral facts.

I could well be wrong. My main point is I cannot see how it matters.

Fact or preference - the fact advocates struggle to present a useful, agreed set of such facts, ones that would guide action on relevant matters.

I have used abortion as an example of something relevant. There are numerous coherent cases made that abortion is/is not OK. I cannot see that both outcomes can be facts - if only one is a fact how is that determined?

I say my position is my judgement.

What I get from the moral facts people is a need for certainty. They struggle with saying "I may be wrong but this is what I think I (or even other people) should do".

The Sacred Lazy One

You’re doing two moves at once, and they’re easy to miss if someone only hears the word nihilism and flinches.

A metaethical move: denying moral facts (especially the “categorical force” kind).

A practical/interface move: suggesting we may be better off dropping moralistic language because it tends to harden conflict, inflate certainty, and license punishment.

That’s a serious, readable attempt to separate care from cosmic authority, and it fits cleanly with the observation that EA communities can converge strongly on first-order priorities while splintering on metaethics.

This raises a question for me. If we imagine a “post-moral” EA that runs on consequence-tracking and negotiated constraints, what is the minimal shared vocabulary that still lets us say:

“This predictably increases suffering,”

“This is an unacceptable risk,”

“This tradeoff is being hidden,”

…without smuggling moral authority back in through the side door?

Small sidenote, since you invoked the movie: the “Dude says man 147 times” claim floats around, but it’s not stable across tellings. I’ve seen it asserted directly, and I’ve also seen other counts (including claims that dude is 147 and man is higher). The vibe is correct even when the integers wobble.

Jesús Zamora Bonilla

I'm glad to see the paper was published in the journal of my neighbouring university in Madrid. 😊

Ian Jobling

I find this article rather hard to understand, but what you seem to be saying is that Williams should be rejected because he would say that someone living in Nazi Germany is justified in acting according to Nazi morality. This is a complete misreading of him. First, Williams rejected all morality in the sense of objectively binding rules for behavior, so making him out to be defending any form of morality is already wrong-headed. But he would also have said that there was no sound deliberative route from the motivations of the typical German to actions like mass murder. He would have said that such actions were based on false beliefs of various sorts, like the notion that Jews were basically different from Germans. He would also have considered such actions bad because they had bad outcomes for everyone. And he would say that you could critique the German morality based on such considerations. For more on Williams, see my: https://open.substack.com/pub/eclecticinquiries/p/on-deliberative-subjectivism?r=4952v2&utm_campaign=post&utm_medium=web

Walter Veit

Thanks for the comment! I don't buy into the distinction Williams draws between ethics and morality. His arguments against moral objectivism, in my view, should have driven him to endorse moral nihilism as I use the term. Otherwise I read him as a case of "Have your cake and eat it too" - but I'm curious to see where you disagree with my moral nihilism! :)

Ian Jobling

I'd like to understand better how you think you can maintain effective altruism without appealing to morality. Because Singer is very clear that we're under a moral obligation to reduce suffering. If we're not under any moral obligations, why shouldn't we just do whatever we want, whether that involves reducing suffering or not? Are you saying that amoral people would naturally choose to devote significant portions of their incomes to effective charities? It's possible, but I don't think you seriously engage with these questions. See my: https://open.substack.com/pub/eclecticinquiries/p/effective-altruism-as-moral-slavery?r=4952v2&utm_campaign=post&utm_medium=web

Walter Veit

I support effective altruism because I care about the suffering of others.

Ian Jobling

Well, there are all kinds of different suffering, and effective altruism targets some of them rather than others. People suffer from romantic disappointments, but EAs don't say you should work to prevent romantic disappointments. Why do you support the specific kind of suffering reductions that EA targets? Also, is care about the suffering of others your only emotion? What about all your other emotions? Why shouldn't you act on them and ignore your concern for suffering?

Walter Veit

Is this meant as an objection to effective altruism? Firstly, I doubt anyone thinks romantic disappointments are the greatest source of suffering, compared to say starvation and factory farming. Secondly, even if it was, it's entirely unclear how I or anyone else in the world could effectively reduce it. So I am not sure what you're getting at?

Walter Veit

In regards to the second point, one can care more or less about different things. There is no ultimate appeal to authority beyond the subjective ‘values’ I hold. Nihilists can value all kinds of things, from animal welfare to pizza. Humans evolved to be prosocial and have altruistic desires - so it is unsurprising that many nihilists choose to pursue altruistic projects.

Ian Jobling

I don’t think anyone knows what the greatest source of suffering is. One problem I have with EA is that you people are so confident that the type of suffering that you care about is the type that matters the most. I think EA offers an easy and dumb vision of the ethical.

Matt Ball

This is fantastic, Walter. Although we disagree about utilitarianism, I am super-impressed with your willingness to question convention and take arguments seriously.

And I'm not just saying that because I agree with you on this.

(OK, maybe I am ;-)

One of my fav memes:

https://www.reddit.com/r/Existentialism/comments/dq7g4n/existentialism_what_people_think_vs_what_it_is/

Walter Veit

Thanks and fun meme! Perhaps I'll convince you of utilitarianism one day as an efficient tool to deal with the trade-offs emerging from billions of individuals with competing interests.

Matt Ball

That'd be great. As I write in "Losing My Religions" chapter "Biting the Philosophical Bullet," my life would be much better if I could be convinced back to being a utilitarian.

Alexander Goodman

This piece's argument rests on a false choice. It assumes morality must either be a set of mystical, unconditional commands that float above human life, or else be meaningless and discarded. Once the first option is rejected, the author jumps straight to abolition. That is not a discovery; it is an evasion of the alternative—that values can be grounded in reality without being mystical or arbitrary.

The core mistake is the idea that if moral principles depend on human needs or goals, they must be made up. That is simply false. Standards can be conditional and still objective. Medicine depends on the goal of health; engineering depends on the laws of physics. In the same way, principles about how to live depend on the factual requirements of human life. Calling that “subjective” is a category error.

Blaming morality itself for atrocities is another evasion. The problem was never judgment as such, but bad moral systems—tribalism, collectivism, obedience, and sacrifice treated as absolutes. Abolishing moral judgment does not prevent those ideas from arising; it just removes the rational tools needed to identify and reject them.

Most telling of all, the author keeps relying on words like “better,” “worse,” and “harmful” while denying that there are any real standards behind them. That is not honesty; it is smuggling values in through the back door while pretending to reject them.

The piece is right to reject mystical moral authority. But instead of doing the harder work of grounding values in reality, it throws morality out entirely. That move is not clarity or courage—it is philosophical evasion.

Jesús Zamora Bonilla

One must-read is Hanno Sauer’s “The Invention of Good and Evil”.

Fenrir Variable

And how much does Individualism play into this? Because why else are they motivated to create arbitrary moral systems to feel better about who they are?

Seems like a low self-esteem problem. Must be for all the high-IQ rationalists who forgot you can't have rationality without compassion and empathy. Effective Altruism is an excuse to be an exploitative menace using capitalism, so they can feel better about already ruining the world.

It must be that those who see their personalities as "unique special individuals with high agency" need these moral systems for coherence? Do they not know how to use their mirror neurons?

Is it so hard to feel the right half of their brain to understand what does harm to another person? They need to craft a whole philosophical school for what evolution built in?

I'm inclined to think Individualism is the whole problem, but whatever, those people assume every human works just like them. They're too delusional in their functional psychosis to actually care what's real. It's all about their feeling, their high agency and their high IQ. They should rationalize harder how being a capitalist makes them moral according to their own utilitarian system 😂

I don't disagree with your premise either; I think for Individualism, Moral Nihilism may be about the best their brain is capable of after roasting their ACC, insular cortex and rTPJ. Like maybe don't have the default mode network running self-referential scripts all day.

I hope you left that cult. They think they're summoning a god in their machine and they don't give a damn how much harm they do, all based on their feelings.

Walter Veit

Thanks for the comment! How do you think donating most of your money and time to, say, effective altruism charities aimed at improving animal welfare is exploitative/ruining the world? Also, not all effective altruists would consider themselves capitalists. The movement is pretty diverse, with a shared commitment to tackling some of the greatest harms, such as factory farming. In regards to individualism, effective altruism owes a lot to Derek Parfit, who opposed it, arguing that we should not treat others differently from how we would treat ourselves.

Fenrir Variable

I haven't posted my TESCREAL deep dive yet or I would link it to you.

Maybe you don't know enough about your own organization if you think all these people do is donate all their money to animal welfare and select charities they use their own arbitrary moral clauses to decide on.

I'm far from the only person who refers to EA as a cult. Anyone who has read about the ideology knows why it's actually psychological Egoism. Do you also think Mr. Beast is a saint for what he does?

How is it not harmful that EAs have the express purpose of wealth accumulation?

Especially beyond survival and to the point it's killing thousands to millions of people with poverty. Yep, that's super moral. Just harm people in your communities and country because you're going to give it all away someday.

What?

Do we just assume all people who live in the global south are uniquely deserving of the IMF to run their governments with austerity politics?

Are people of color just less deserving? They suck, so keep them in poverty now while EAs make more money? Worry about people a thousand years from now. Don't actually do anything to get rid of an oppressive system that causes poverty.

Most people would recognize this as a psychological coping mechanism for being a greedy, anxious, deeply damaged person who grew up in an authoritarian family. Making up new moral systems to feel righteous about being greedy.

EAs are no different than Christians; moral systems are about controlling the body politic. The EAs just believe their high IQs qualify them to make the mind-control rules for everyone else.