Saturday, May 28, 2011

What Is Moral Relativism?

Moral relativism is a term with several distinct uses. It helps to understand these distinctions because it's possible for a person to affirm one form of moral relativism without necessarily committing to other forms.

Current version: 2.2.0 — last updated June 1, 2011.

Actions and Principles

Before looking at what philosophers tend to consider moral relativism proper, let's look at action relativism.1 Is it always wrong to lie? Is it always wrong to cut off another human being's foot? Is it always wrong to commit suicide? If you think it can be right to lie to save lives, or cut off a foot to stop the spread of an even worse infection, or commit suicide to let someone else use the rest of the oxygen in a sealed chamber, then you affirm action relativism.

The reason action relativism isn't usually classified as a form of moral relativism is that a person can believe particular kinds of physical actions can be right or wrong depending on the wider situation, but still believe there is a single correct moral standard (or set of principles, or values).

Joseph Fletcher's situation ethics2 is a good example of action relativism. He was a priest who believed that love is the ultimate standard for moral action and that specific prohibitions usually, but not always, serve that basic principle, so in extreme situations it might be morally right to follow the principle rather than the usual prohibitions on particular acts. Classical utilitarianism also identifies a fundamental moral principle which can make the same kind of physical act come out right in one situation but wrong in another.

A weak argument against action relativism is that it's preposterous to claim murder might be morally right depending on the situation. But 'murder' is just defined as morally wrong killing of another person; it's a label for a type of physical action combined with a judgment about that action. An action relativist would instead claim that not all killing of other people is murder, which is a very widespread view!

Descriptive Relativism

Now we're to the first kind of relativism philosophers tend to worry about. Descriptive relativism3 is true if individuals do in fact make moral judgments according to varying fundamental moral standards, or principles, or values.

Part of what makes descriptive relativism interesting is that people tend to think it's either obviously true or obviously false. It can take a little work to show that it's a question worth closer study.

Obviously true!

It may seem moral values clearly vary, especially across cultures. But keep in mind that non-value beliefs shape how people apply their values. It could be the case, for example, that everyone's conception of moral rightness is based on doing what makes people happiest in the long term, even if it's unpleasant in the short term. Since many people believe actions in this life will affect long-term happiness in an afterlife, this same fundamental value could translate into a variety of action judgments. Maybe even torturing people to death, if the torturer believes it's in the torturee's best interest in the afterlife!

To prove descriptive relativism, an investigator must be able to show that people's moral judgments can in fact differ in the same situations when all relevant non-moral beliefs are the same.

Obviously false!

It may seem moral values clearly stay the same and any differences found by the kind of investigation described above aren't moral differences. After all, who has heard of a culture where killing anyone else at any time is considered ok? So maybe morality is that common core and any genuine differences in values are in non-moral values, like tastes in music.

The trouble with this project of finding universal human values is that it seems unlikely to produce much of a common core. Slavery, infanticide, and vendettas — to name a few practices — have been considered morally proper in various societies, and it's hard to see how these differences can all be explained by an appeal to common values tempered by different beliefs. And if the common core is small, who is going to grant that their own values outside the core aren't really moral values?

Metaethical Relativism

A person can believe descriptive relativism is true, yet also believe there is one correct moral standard; those who make moral judgments according to other standards are simply using incorrect standards!

Metaethical relativism goes beyond merely describing people as holding fundamentally different values by denying that there is such a thing as 'the correct set of values for morality' or 'the correct moral principles' or 'the correct moral standard.' Another way to describe metaethical relativism is to say moral claims like "Slavery is wrong." are incomplete. We have to fill in the blank: "Slavery is wrong according to standard S." before the claim can be true or false...and there is no privileged standard.

The details of how moral claims are relative to standards and what constitutes a standard can vary, which allows for different kinds of metaethical relativism. To illustrate these differences, here are three imaginary people considering a specific case of Charlene seeking an abortion:
Ada considers it morally wrong, and her culture considers it morally wrong.
Betty considers it morally neutral, but her culture considers it morally obligatory.
Charlene considers it morally obligatory, but her culture considers it morally neutral.
Agent relativism4 selects S (the standard) by reference to the person who performs the action. If S is selected by the agent's personal values, then Charlene's abortion is morally obligatory because of Charlene's own values. If S is instead selected by the agent's culture's values, then the abortion is morally neutral. We might call these personal agent relativism and cultural agent relativism, respectively.

Appraiser relativism (aka speaker relativism) selects S by reference to the person making the judgment. If S is selected by the appraiser's personal values, then — simultaneously! — Ada is correct to call it wrong, Betty is correct to call it neutral, and Charlene is correct to call it obligatory. And if S is selected by the appraiser's culture's values, then Ada is correct to call it wrong, Betty is incorrect to call it neutral (because she must call it obligatory to be correct), and Charlene is incorrect to call it obligatory (because she must call it neutral to be correct). We might call these personal appraiser relativism and cultural appraiser relativism, respectively.

Notice that agent relativism gives a single wrong/neutral/obligatory judgment for any given action because one agent translates into one definite S.5 Meanwhile, appraiser relativism's multiple appraisers can translate into multiple S's which may give different judgments about the same act. In both varieties S can vary by act, but we can further distinguish relativist theories which allow S to vary for the very same act.
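To make the bookkeeping explicit, here is a minimal sketch in Python (my own illustration, not drawn from the relativist literature); the two dictionaries simply restate the three imaginary people's views from the list above.

```python
# A minimal sketch: how agent and appraiser relativism each pick out the
# standard S for the Ada/Betty/Charlene case described above.

PERSONAL_VALUES = {"Ada": "wrong", "Betty": "neutral", "Charlene": "obligatory"}
CULTURAL_VALUES = {"Ada": "wrong", "Betty": "obligatory", "Charlene": "neutral"}

def agent_relativism(agent, standards):
    """One agent picks out one standard S, so the act gets a single verdict."""
    return standards[agent]

def appraiser_relativism(appraiser, standards):
    """Each appraiser brings her own S, so verdicts can differ per appraiser."""
    return standards[appraiser]

# Charlene is the agent, so agent relativism yields one verdict per theory:
print(agent_relativism("Charlene", PERSONAL_VALUES))  # obligatory (personal agent relativism)
print(agent_relativism("Charlene", CULTURAL_VALUES))  # neutral (cultural agent relativism)

# Appraiser relativism yields one verdict per appraiser:
for appraiser in ("Ada", "Betty", "Charlene"):
    print(appraiser, appraiser_relativism(appraiser, PERSONAL_VALUES))
    # Ada wrong / Betty neutral / Charlene obligatory -- all three speak correctly
```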

It may seem like a logical contradiction to say Ada is correct to call Charlene's abortion wrong and Charlene is correct to call it obligatory (therefore not wrong). But both can speak truly if Ada's 'wrong' means something different from Charlene's 'wrong.' For example if S1 represents Ada's moral values and S2 represents Charlene's moral values, then:
Ada: Charlene's abortion is wrong according to S1.   ...and...
Charlene: My abortion is not wrong according to S2.
need not be in logical conflict. Ada and Charlene are talking past each other. A common criticism of this kind of relativism is that disagreements about moral judgments seem to be conflicting assertions. People think they're disagreeing, not talking about different things! Relativists could respond by saying that moral judgment disagreements seem so intractable precisely because people confuse disagreements about standards and disagreements about whether an action conforms to a standard.

A Note on Relativism and Contextualism

Some philosophers would label the above description of metaethical relativism as metaethical contextualism to contrast it with ethical theory that uses semantic relativism.6 Under this terminology, relativism would be the view that Ada is asserting a proposition and Charlene is denying the very same proposition, but the proposition might be true from Ada's point of view and false from Charlene's point of view. Meanwhile, contextualism is the view that superficially similar moral language can express different propositions depending on context of use, which leaves the law of non-contradiction unchallenged.

Normative Relativism

Let's assume metaethical relativism is correct and we know it. What impact does this have on our own moral judgments? Does this mean normative relativism is right that we must tolerate rather than condemn practices in other cultures that we would consider morally wrong in our own? To use a common example in ethics literature, does this mean that we can't condemn female genital mutilation as practiced in parts of Africa and the Middle East? It depends on the type of metaethical relativism.

Cultural agent relativism — We can't correctly condemn it when the agent's culture supports it.
Personal agent relativism — We can't correctly condemn it if the agent's own values support it.

Cultural appraiser relativism — We can correctly condemn it because our culture condemns it, but cultures which support it can correctly support it.
Personal appraiser relativism — Anyone who condemns it or supports it is correct.7

So, no, normative relativism does not logically follow from metaethical relativism. Both kinds of appraiser relativism allow cross-cultural condemnation. And there may be other varieties of metaethical relativism which also allow cross-cultural condemnation. We just can't condemn other cultures for violating a single correct moral standard, because we've already forfeited that idea by accepting metaethical relativism.

Personal View

I affirm descriptive and metaethical relativism, and deny normative relativism. Specifically, I affirm something close to Gunnar Björnsson and Stephen Finlay's metaethical contextualism, which we might call ends relativism.
We believe that normative “ought” claims are doubly relative to context, being relativized both to (i) bodies of information and (ii) standards or ends.6
Contrary to agent and appraiser relativism as described above, this theory focuses on whichever S can be inferred from the context of a moral judgment, not on whoever is doing the acting or judging. For example, if I say "Slavery is wrong," some further investigation might conclude that I have the end of human freedom in mind and am claiming slavery thwarts that end. Now suppose I ask myself, "Was St. Paul right to send Onesimus back to his master?" I might answer 'yes,' if I'm now comparing Paul's action against some ends endorsed by ancient Jewish or Roman culture, or I might answer 'no,' if I'm still comparing his action against the end of human freedom. People tend to be referring to ends they themselves care about when they make moral claims, but they might not be.
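As a rough schema (my own paraphrase and invented notation, not Björnsson and Finlay's formal machinery), the double relativization might be written:

```latex
\mathrm{Ought}(A \mid I, e) \;\approx\; \text{``relative to the body of information } I,\ A \text{ is the best available way to promote the end } e\text{''}
```

On this reading, the Onesimus question comes out differently as e varies, even while the information I is held fixed.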

A reason to accept metaethical contextualism is that moral language is normative language, and normative language is well explained by end-relative semantics. Suppose your friend says, "I'm bored. What should I do?" Context strongly suggests an end for that 'should,' namely: your friend not being bored. Or suppose a scientist asks, "Should I run this experiment again?" The context suggests an end of knowing what results are possible...unless she says it in an exasperated tone and you know a funding organization encourages excessive testing, in which case the context suggests the end of satisfying the people with money.

Of course, it could be the case that all normative language uses end-relative semantics but there is still one correct morality composed of one correct end or set of ends, e.g. "maximizing the well-being of conscious creatures," "pleasing God," or "maintaining a stable society." However, since 'correct' is itself a normative term, it requires a prior end. At some point it seems we just need to pick an end to get started, and admit subsequent usage is qualified by that end.

1. 'Action relativism' term from Cornman, J. W., Lehrer, K., Pappas, G.S. (1992). Philosophical problems and arguments: an introduction. Indianapolis, Indiana: Hackett Publishing Company, Inc. p. 287
2. Fletcher, J. (1966). Situation ethics: the new morality. Louisville, Kentucky: Westminster John Knox Press.
3. 'Descriptive,' 'metaethical,' and 'normative' relativism terms from Brandt, R. (1967). Ethical Relativism. In P. Edwards (Ed.), The encyclopedia of philosophy: Vol. 3 (pp. 75-78). New York: Macmillan.
4. 'Agent' and 'appraiser' terms from Lyons, D. (1976). Ethical relativism and the problem of incoherence. Ethics 86. pp. 107-121. 
5. Assuming each person is a member of just one culture, which is a very questionable assumption.
6. Björnsson, G., Finlay, S. (2010). Metaethical contextualism defended. Ethics 121. pp. 7-36.
7. Assuming the individual's belief is correct that the action promotes or opposes her own principles. If Ada's opposition to abortion depends on her belief that Cartesian dualism is true, and it isn't, then she is misjudging relative to her own values.

Saturday, May 21, 2011

Scientific Method in Practice (Pt. 4)

In this series of posts, I'm re-reading Hugh G. Gauch, Jr.'s philosophy of science textbook Scientific Method in Practice (Google Books).

[Series Index]

Middle History

In his great work Mathematical Principles of Natural Philosophy, Isaac Newton gave 'four rules of reasoning in philosophy' which promoted parsimony — "Nature is pleased with simplicity, and affects not the pomp of superfluous causes" — and realism:
In experimental philosophy we are to look upon propositions collected by general induction from phenomena as accurately or very nearly true, notwithstanding any contrary hypotheses that may be imagined, till such time as other phenomena occur, by which they may either be made more accurate, or liable to exceptions.1
If all the 'philosophy' terminology sounds weird, I should point out that 'natural philosophy' was what people called science before they called it science.

Newton's views probably seem entirely reasonable, but there were (and are) radically different approaches to science. You see, philosophers are often very worried about the possibility that our perceptions are misleading about the true nature of reality. George Berkeley 'solved' this problem — in his own estimation — by undercutting an important assumption:
All this Scepticism follows, from our supposing a difference between Things and Ideas, and that the former have a Subsistence without the Mind, or unperceived.2
He simply denied that Things exist apart from Ideas, and so skeptics don't have anything to be skeptical about! As a bonus, his solution undercut 'all the impious Schemes of Atheism and Irreligion' as well as 'Idolatry' and 'Fatalists.' It is true that many notions assume the existence of an external world, so I suppose a person who feels no need of it himself can wipe out a sea of contrary opinions in one stroke by declaring idealism.

Between the positions of Newton and Berkeley, David Hume was agnostic about the question of an external world:
As to those impressions, which arise from the senses, their ultimate cause is, in my opinion, perfectly inexplicable by human reason, and ‘twill always be impossible to decide with certainty, whether they arise immediately from the object, or are produc’d by the creative power of the mind, or are deriv’d from the author of our being. Nor is such a question any way material to our present purpose. We may draw inferences from the coherence of our perceptions, whether they be true or false; whether they represent nature justly, or be mere illusions of the senses.3
Hume apparently didn't think it mattered whether our senses are connected to an external world, so long as they are consistent enough for us to learn from their patterns. This attitude still comes up among practicing scientists today who focus on constructing models to fit the data, without worrying whether the model describes external reality.

Thomas Reid was explicitly responding to both Berkeley and Hume when he wrote:
All reasoning must be from first principles; and for first principles no other reason can be given but this, that, by the constitution of our nature, we are under a necessity of assenting to them. […] reason can neither make nor destroy them; nor can it do any thing without them: it is like a telescope, which may help a man to see farther, who has eyes; but without eyes, a telescope shews nothing at all.4
He went on to draw an analogy from the way a mathematician must assume axioms, to the way a historian or witness must assume some trust for memory and senses, and to the way a natural philosopher (i.e. a scientist) must assume 'that the course of nature is steady and uniform.'

Gauch then spends a couple of pages discussing Immanuel Kant's contribution to the ongoing debate, but I'm having a really hard time understanding it even after reading other articles and excerpts. Apparently he thought space and time weren't part of external reality, but instead are the subjective medium upon which our sense experience is arranged. Or something. According to Gauch:
Such thinking was the beginning of constructivist or anti-realist views of science, that truth is constructed by us rather than discovered about nature.5
A quick overview of the logical empiricism (or logical positivism) movement of the early 20th century closes out this chapter. My short version of his short version is that the logical empiricists tried to restrict meaningful language to whatever could be empirically observed or logically concluded, hence the name. Sounds reasonable at first, but since this precludes any sort of metaphysical presuppositions — like there being an external world that we're experiencing — it became very detached from common sense.


1. http://en.wikisource.org/wiki/The_Mathematical_Principles_of_Natural_Philosophy_%281846%29/BookIII-Rules
2. http://en.wikisource.org/wiki/A_Treatise_Concerning_the_Principles_of_Human_Knowledge
3. http://en.wikisource.org/wiki/Treatise_of_Human_Nature/Book_1:_Of_the_understanding/Part_III
4. Reid, T. (2000). An inquiry into the human mind on the principles of common sense. (D. R. Brookes, Ed.) Edinburgh, Scotland: Edinburgh University Press Ltd. (Original work published 1764). p. 71
5. Gauch, H. G., Jr. (2006). Scientific method in practice. Cambridge: Cambridge University Press. p. 67

Sunday, May 15, 2011

Demarcation

What distinguishes a moral 'ought' from a non-moral 'ought'? Or, if you agree that all 'oughts' are relative to ends, what distinguishes a moral end from a non-moral end?

This is the question that has been bugging me as I fall asleep nights lately. I see two major contenders:

1. A distinctively moral end is one which everyone is expected (by someone) to weigh against any conflicting ends, perhaps even in an overriding way.

2. A distinctively moral end is one which has to do with the welfare of others.

The first option is attractive because it captures 'oughts' which don't seem to hinge on the welfare of others, but get clumped together in social discourse with 'oughts' that do. The second option is attractive because it tracks much more closely with what I personally consider the moral domain, or at least the morality I care about.

I suppose a third option would be to get even narrower than (2) and only classify a specific normative ethic as morality properly-so-called. This is a historically popular route, but the next person who touts another normative ethic will reject it.

I'm leaning toward (1) because I'm not trying to demarcate the ends people should consider moral, but which ends they already do consider moral. And when people do consider an end to be moral, they tend to expect others to take it into account (at least) in all circumstances.

Sunday, May 8, 2011

An Argument from Selfish Theistic Morality

One objection I've heard against non-theistic ethics is that it wouldn't necessarily be in every person's best interest to act morally. Even if it's usually in a person's best interest to act morally, there will be times this is not the case, which can lead to an argument like this:

1. M is the moral thing for me to do, and M is not in my best interest. (premise)
2. M is the moral thing for me to do. (1)
3. M is not in my best interest. (1)
4. It's the case that agent A ought to do X iff X is in A's best interest. (premise)
5. It's not the case that I ought to do M. (3 and 4)
6. It's the case that X is the moral thing for A to do iff A ought to do X. (premise)
7. It's not the case that M is the moral thing for me to do. (5 and 6)

Yikes! Something has gone wrong here, since (2) and (7) are in direct contradiction.

Denying the (Possibility of the) Premise in (1)

This is the route taken by some theists who believe it is essential to morality that God will step in and make it the case that acting morally is always (eventually) in our best interest, and vice-versa. Usually by means of a reward and punishment afterlife. Reincarnation schemes can work too.

I positively affirm that sometimes acting morally is not in one's best interest now or later, so I'll need to deny another premise.

Denying the Premise in (4)

This premise equates what one 'ought' to do with what is in one's best interest. But 'ought' in what sense? I maintain that 'ought' always takes an end, which might be in order to fulfill one's own best interest, or in order to comply with the law, or in order to improve the happiness of society, etc.

So I do deny the premise in (4), unless the end involved is something like in order to fulfill one's own best interest.

Denying the Premise in (6)

Finally, this premise equates what one 'ought' to do with what's the moral thing for one to do. I give the same answer as before: it will depend on which end the 'ought' takes.

I deny the premise in (6), unless the end is a moral end. Which ends are moral ends? No need to answer that in general now. There's a more specific issue at hand:
Is in order to fulfill one's own best interest a moral end?
Answering 'yes' to this would supply one of the missing premises which would then force me to deny the possibility of (1), even with my end-relational view of 'ought.'1 But I answer 'no' for the same reason I affirm the possibility of (1): I maintain that moral 'oughts' can be true even against an agent's own best interest.
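To make the equivocation explicit, here is a minimal sketch of the end-relational reading of 'ought' (my own toy illustration; the end labels and stipulated verdicts are assumptions chosen to match premises (2) and (3)):

```python
# A toy model of end-relational 'ought': the claim 'I ought to do M' is
# incomplete until an end is supplied. The verdicts below are stipulated
# to match premises (2) and (3) of the argument.

VERDICTS = {
    "M": {
        "my own best interest": False,                    # premise (3)
        "the best interests of everyone affected": True,  # premise (2), read morally
    }
}

def ought(action, end):
    """'A ought to do X' is evaluable only relative to an end."""
    return VERDICTS[action][end]

# Premise (4) reads 'ought' with the self-interest end; premise (6) reads it
# with a moral end. Once the ends are made explicit, (5) and (7) are claims
# about different things, and no contradiction follows.
print(ought("M", "my own best interest"))                     # False
print(ought("M", "the best interests of everyone affected"))  # True
```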

Saving a drowning child is still what I morally ought to do, even if I don't have a guarantee that my own best interest is thereby promoted. I invite opponents to openly disagree with this.

Wait a minute, why not just give the drowning child example up front instead of going through the whole argument analysis rigmarole? I wanted to show that affirming (1) doesn't lead to a contradiction by itself. The contradiction also requires affirming (4) and (6), and while that pair is consistent with denying the possibility of (1), it isn't forced on anyone by affirming (1).

Simply put, I can separate my own best interest from moral rightness...if I consistently distinguish my own best interest from moral rightness. I do so with the crazy notion that the moral thing for me to do has to do with the best interests of everyone affected, without privileging myself.


1. We would have to add that fulfilling one moral end gives the same result as fulfilling moral ends in general. The 'ought' in (4) would also need to be specified as having the best interest end I mentioned.

Friday, May 6, 2011

What Is Desirism?

In one sense, this is an easy question: Desirism is the moral system of Alonzo Fyfe as promoted on his blog1 and in his collaborative podcast with Luke Muehlhauser.2 At the same time, it's a hard question because Fyfe's views have changed significantly over time and no definitive sketch has been kept current. My purpose here is to sketch what I think Desirism is all about, and update this post when my own understanding of his moral system changes.

The following sketch will be in my own words, since I'm interested in the ideas, not Fyfe's phrasing or style of explanation. Nor should anyone take this as anything more than a bystander's attempt at describing Desirism. I am not speaking for or against the system here.

Current version: 1.3.2 — last updated May 10, 2011

Explaining Behavior

Proponents of moral systems tend to start with some aspect of morality which they have an insight about, then they try to account for other aspects by building on this core. Desirism's starting point is an explanation of human behavior, or as philosophers would say: an action theory.3

Action theory is relevant to moral theory, since the question of how voluntary action works can bear on the question of which actions, morally speaking, we ought to perform.

I won't get into all the details of Desirism's action theory, but basically: a person's attempted actions are determined by their desires and beliefs. If I want a cookie and I believe a cookie is in the left jar instead of the right jar, I will try to open the left jar...unless I have some stronger desire which outweighs my cookie desire.
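Here is a minimal sketch of that action theory as I understand it (a toy model of my own, not Fyfe's formulation; the desire strengths and action names are made up):

```python
# Toy belief-desire action selection: the agent attempts whichever action her
# beliefs say will fulfill the most (strongest) of her current desires.

desires = {"eat a cookie": 3, "avoid effort": 1}   # made-up strengths
beliefs = {                                        # action -> desires it would fulfill
    "open the left jar": ["eat a cookie"],
    "open the right jar": [],
    "do nothing": ["avoid effort"],
}

def attempted_action(desires, beliefs):
    def fulfillment(action):
        return sum(desires[d] for d in beliefs[action])
    return max(beliefs, key=fulfillment)

print(attempted_action(desires, beliefs))  # 'open the left jar'

# ...unless some stronger desire outweighs the cookie desire:
desires["stick to my diet"] = 5
beliefs["do nothing"].append("stick to my diet")
print(attempted_action(desires, beliefs))  # 'do nothing'
```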

This might sound a tad obvious and silly, but it is a substantial claim. It's opposed to the notion that we can act 'from duty' against all our desires, which is how Kant defined moral action.4

Shaping Behavior

It follows from the above that if we want someone to make a different choice in a given circumstance, we must either change her desires or her beliefs.

From what I know of Fyfe, he doesn't advocate making stuff up to shape the beliefs of those who buy into it. Instead, he focuses on the process of social praise and condemnation to shape the desires of those affected by such peer pressure.

Suppose most people in a population don't like rock music. Heck, they don't even like thinking about other people listening to rock music. What's going to happen? Some of them are going to condemn rock music and, probably, some others who like rock music will start to like it a little less. (This may backfire with teenagers.)

I see a parallel here with biological evolution as a way of describing the "change in the frequency of alleles within a gene pool from one generation to the next."5 Desirism can be seen as a model for the dynamics of held-desires in a population over time.
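The analogy can be pictured with a toy simulation (entirely my own invention; the numbers and the update rule are arbitrary assumptions, not anything Fyfe has specified):

```python
# Toy dynamics of a desire in a population under social condemnation:
# each round of condemnation weakens the condemned desire a little,
# the way selection shifts allele frequencies over generations.

fondness_for_rock = [0.9, 0.8, 0.2, 0.1, 0.05]  # made-up strengths per person
CONDEMNATION = 0.1                               # made-up per-round effect

def one_round(strengths, pressure):
    return [max(0.0, s * (1 - pressure)) for s in strengths]

for generation in range(3):
    fondness_for_rock = one_round(fondness_for_rock, CONDEMNATION)
    print(generation, [round(s, 2) for s in fondness_for_rock])
```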

Judging Behavior

What does it mean to say an action is 'morally wrong' or 'morally right'? This is where Desirism gets fuzzy, in part because the detailed podcast series hasn't gotten around to addressing this yet. Here's how Fyfe seems to handle normativity:

When we say a person 'ought' to do X or 'ought not' to do Y, we are implying there is a reason for taking (or not taking) an action. According to Desirism, desires are the only possible reasons for taking an action. (At least when mistaken belief isn't complicating things.) And therefore, strictly speaking, 'Sam ought to do X' is only true if X properly corresponds to Sam's desires.

Wait, what?! Doesn't this throw moral right and wrong out the window entirely if Sam only 'ought' to do whatever conforms to her personal desires?

Not so fast. Desirism locates normativity at a different level than usual. Instead of focusing on what agents ought to do in a given situation, Desirism focuses on which desires agents ought to have. The colloquial assertion 'Sam ought to do X' is taken as an elliptical form of, 'Anyone ought to have Z desire[s], which would lead Sam to do X.'

What are these desires 'anyone ought to have' and how can we justifiably claim people 'ought' to have them? We can answer both at once: the desires 'anyone ought to have' are those which — when held by individuals — fulfill other desires more than they thwart other desires. Just as the desires an individual happens to hold justify her taking desire-fulfilling action, the full set of desires held by desiring beings justifies their holding desire-fulfilling desires.
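A minimal sketch of that criterion (my own reading, with made-up desires and tallies; not a canonical statement of Desirism):

```python
# Score a candidate desire by how it affects the other desires in play when
# people hold it: positive entries are desires fulfilled, negative entries
# are desires thwarted. A 'good desire' nets out positive.

effects_of_holding = {
    "an aversion to lying": [+2, +1, +1, -1],               # mostly fulfills others' desires
    "a desire to take what I want by force": [-3, -2, +1],  # mostly thwarts them
}

def is_good_desire(candidate):
    return sum(effects_of_holding[candidate]) > 0

for desire in effects_of_holding:
    print(desire, "->", "encourage" if is_good_desire(desire) else "condemn")
```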

Of course there is one big difference between these kinds of 'ought.' An individual will invariably do what she 'ought' according to her own desires. That's just how people work. But an individual might not hold the desires she 'ought' to hold according to desires considered more generally. Furthermore, she's not going to be intrinsically motivated to hold these — as Fyfe calls them — good desires. She must be extrinsically motivated, and this happens through a process of peer pressure, as explained in the Shaping Behavior section above.

When we say a person did 'wrong,' we're applying social pressure on anyone listening not to desire doing the condemned thing. Vice-versa for calling an action 'right.' According to Desirism, there is an important sense in which such judgments can serve more than just an emotive role: they can be correct or incorrect insofar as the desires being encouraged (or discouraged) facilitate other desires.


1. http://atheistethicist.blogspot.com/
2. http://commonsenseatheism.com/?p=11626
3. http://en.wikipedia.org/wiki/Action_theory_%28philosophy%29
4. See about halfway through the First Section of Kant's Groundwork of the Metaphysics of Morals.
5. From http://www.talkorigins.org/faqs/evolution-definition.html

Wednesday, May 4, 2011

The Subject Matter of Ethics

This post's title is borrowed from the first chapter of Principia Ethica. Near the beginning of that chapter, G.E. Moore gave his answer:
I am using ['Ethics'] to cover an enquiry for which, at all events, there is no other word: the general enquiry into what is good.1
And he did mean it when he said 'general.' Though we often speak of 'moral good' and 'non-moral good,' Moore's view of ethics encompassed both. His problem — in my view — was failing to notice that 'good' is a term with a variable missing; all that is good is good for an end.2 When we distinguish between moral and non-moral good, we are distinguishing between moral and non-moral ends.

For example, we might say it's good to check more than Wikipedia because we aren't confident in reaching the epistemic end of having true beliefs by checking Wikipedia alone. Or we might say that it's good to avoid saturated fats. Why? Because the end of remaining healthy is endangered by consuming saturated fats in high quantity.
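The missing-variable point can be put schematically (my own notation, not Moore's):

```latex
\mathrm{good}(x) \text{ is elliptical for } \mathrm{good}(x, e) \;\approx\; x \text{ promotes the end } e
```

So the Wikipedia example fills in e with having true beliefs, and the saturated fats example fills in e with remaining healthy.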

What end is at stake when we claim things are morally good? I don't have the answer today, but I do have two suggestions:

First, because of the end-facilitating relationship between 'good' things and the sense in which they are good, we should be able to survey the things we are most certain are morally good or bad and try drawing an inference to the best explanation, i.e. which end most plausibly generates these 'good' and 'bad' judgments? Yes, this is an empirical approach to defining moral goodness.

Second, we shouldn't assume all moral judgments are based on one, fundamental end. There's good reason to think our judgments are drawn from multiple ends,3 and that moral goodness is a complex — perhaps even incoherent — concept for this very reason.


1. From http://fair-use.org/g-e-moore/principia-ethica/chapter-i
2. http://wordsideasandthings.blogspot.com/2011/03/answering-moore.html
3. See http://faculty.virginia.edu/haidtlab/mft/index.php

Sunday, May 1, 2011

On 'The Objectivist Ethics' (Pt 2)


Last time, I showed how Objectivist metaethics define 'good' and 'evil' according to what extends or shortens a living being's own life. In this post I'll explain how Ayn Rand applies this principle to specifically human action.


Plants Have It Easy
The simpler organisms, such as plants, can survive by means of their automatic physical functions. The higher organisms, such as animals and man, cannot: their needs are more complex and the range of their actions is wider. The physical functions of their bodies can perform automatically only the task of using fuel, but cannot obtain that fuel. To obtain it, the higher organisms need the faculty of consciousness. A plant can obtain its food from the soil in which it grows. An animal has to hunt for it. Man has to produce it.
So living beings without minds carry out their moral good automatically, just as humans automatically digest their food once they have eaten it. By contrast, thinking beings must have proper thoughts to fulfill their moral imperative to stay alive.

Rand goes on to argue that 'animals' (i.e. non-human animals) may have to think, but their thinking is automatic and rigid: 'an animal has no choice in the knowledge and the skills that it acquires; it can only repeat them generation after generation.' We might say that animals aren't able to think about their thinking, and then adjust their thinking.

Humans Must Choose To Think Properly
Man’s particular distinction from all other living species is the fact that his consciousness is volitional.
In order to fulfill their moral imperative to stay alive, humans require what Rand calls 'conceptual knowledge,' which doesn't come naturally but must be intentionally acquired. Basically, she means abstract reasoning. And not just accepting abstractions given by society, but making an effort to form the right abstractions, the ones which effectively lead to a longer life.

It's a lot like Sam Harris' claim that 'science can determine human values,' except with a completely self-oriented moral goal.

And Now For Something Completely Different

Up until this point, I've disagreed with Rand's definition of moral goodness, but could at least appreciate her consistency given her premise. Now she places an additional restriction on human good and evil which does not line up with her groundwork.
If some men attempt to survive by means of brute force or fraud, by looting, robbing, cheating or enslaving the men who produce, it still remains true that their survival is made possible only by their victims, only by the men who choose to think and to produce the goods which they, the looters, are seizing. Such looters are parasites incapable of survival, who exist by destroying those who are capable, those who are pursuing a course of action proper to man.
Now suddenly a human's good is not just whatever helps her live longer, but what helps her live longer...so long as she does not rely on other people for her survival. The problem with this is that relying on other people — to some extent — is one of the most effective ways for a human to live longer, especially if one can manage to take more than she gives.

If Rand were consistent she would say:
It's morally good for me to 'mooch' as much as I can from others.
It's morally good for others to 'mooch' as much as they can from me.

It's morally bad for me to allow others to 'mooch' from me.
It's morally bad for others to allow me to 'mooch' from them.
Instead, we get this from her:
The basic social principle of the Objectivist ethics is that just as life is an end in itself, so every living human being is an end in himself, not the means to the ends or the welfare of others—and, therefore, that man must live for his own sake, neither sacrificing himself to others nor sacrificing others to himself. To live for his own sake means that the achievement of his own happiness is man’s highest moral purpose.
Which makes me conclude that she's both incorrect about the meaning of morality and inconsistent in its application. Morality — if it means anything at all — has to do with taking the welfare of others into consideration, while Rand's metaethics explicitly deny this. The only place Objectivists manage to present any façade of morality is in that very part of Rand's normative ethics which flatly contradicts her (anti-)moral foundation.