2024-10-19

Reason and Morality

Having established - at least to my own satisfaction - The Foundations of Morality, I can now play the fun game of deconstructing other people's ideas. And by happy chance Reason and Morality by Alan Gewirth falls into my hands^1. The contrast in language with Hazlitt is immediately obvious; this is academic philosophy. But casting that aspect aside, he's wrong, which is the important point. Wiki, naturally, can't bring itself to say this; partly because it just doesn't do that and partly because - all together now - philosophy doesn't do that, because philosophy is too scaredy-cat to make such sharp distinctions^3.

AG is keen to found morality on rationality; to this extent he is part of the Enlightenment Project and that is good. Unfortunately... well, read his words:

...every agent must claim, at least implicitly, that he has rights to freedom and well-being for the sufficient reason that he is a prospective purposive agent. From the content of this claim it follows, by the principle of universalisability, that all prospective purposive agents have rights to freedom and well-being. If the agent denies this generalization, he contradicts himself. For he would then be in the position of both affirming and denying that being a prospective purposive agent is a sufficient condition of having rights to freedom and well-being.

This is (I think; don't let me claim to have read the whole thing) the core of his argument: that if you claim rights to freedom and well-being (F+WB) for yourself, you're logically obliged to accept that all other prospective purposive agents have those rights too. And so:

...the statement that some person or group of persons has a certain right entails a correlative ought-judgment that all other persons ought at least to refrain from interfering with that to which the first person or group has the right. Since, then, the agent must accept the generalized rights-statement, All prospective purposive agents have rights to freedom and well-being, he must, on pain of contradiction, also accept the judgment, 'I ought at least to refrain from interfering with the freedom and well-being of any prospective purposive agent.' The transition here from 'all' to 'any' is warranted by the fact that the 'all' in the generalization is distributive, not collective: it refers to each and hence to any prospective purposive agent.

And there we have it; we've deduced a general duty to behave nicely to people - whoops, I mean agents^2.

The problem, though, is that this isn't morality; the morality that we all know and use isn't found in an absence of contradiction or in logical reasoning. Worse, what he has deduced is essentially just the Golden Rule: do (or refrain from doing) unto others what you would have them do (or refrain from doing) unto you.

His error is to attempt to apply rigorous logic to morality, where it doesn't belong. In something more like normal language, he is attempting to found morality on benevolence: he wants us to behave well to others - implicitly, at some cost to ourselves - having logically deduced that we "owe" that to them. Hazlitt is closer to founding morality on prudence - the moral rules experience has taught us show that being nice to people is not only good for them, but for us as well, over the long term. Hazlitt is congruent with human nature; Gewirth isn't.

AG's scheme (like Kant's; like Hazlitt's) isn't actually a moral code but a schema that moral codes must fit. In chapter four he looks at what he can actually deduce. Do-not-harm-people is his first deduction, in 4.5, but with an exception for self-defence, in 4.6. This doesn't work well: the problem is that although he "knows" there must be such an exception, his schema doesn't really provide for it; nonetheless he tortures it into doing so. This is, incidentally, yet another hint that his proposed moral principle is wrong: he is not really deducing morals from it, instead he is desperately trying to make things he knows to be moral fit into it. Similar things happen with the duty-to-rescue in section 4.7. In chapter five he realises that we actually live by various social and moral rules; but he still prioritises his principles and does not, as far as I can see, get to realising that those rules bind because they have "evolved" to, rather than because of his abstract principles.

Notes


1. I bought it second-hand for £15 from the Oxfam bookshop - it had been relegated (or stolen?) from the University of Lancaster philosophy department. If you're not from the UK, or are from some benighted part of the UK, Oxfam run a number of shops that sell only second-hand books; this works well in Oxford and Cambridge, though for Cambridge, Heffers is generally better for the heavyweight stuff.

2. But not to non-agents. Is animal cruelty bad, in his world? There's some whiffling around this (and mentally deficient persons) that I didn't have the patience to plough through; I sense he is uneasy on this point.

3. And partly of course because they don't even realise it is wrong, sadly.
