Saturday, April 30, 2011

Defining Personhood

Personhood is a highly contested label placed on whatever more substantial concepts qualify an entity for direct moral concern.

For this reason, a popular way to go about giving a substantial definition of personhood is to identify which entities one is already directly morally concerned about, and then find some combination of criteria which picks out that same list and no more.

If this approach sounds scandalously circular, let me point out that it can still fail! It may very well be the case that one's intuitive feelings of 'this is a person' and 'that is not a person' can't be neatly justified by any consistent set of principles (beyond sheer intuition affirmation). So even this very ad hoc exercise can shake up intuitions and motivate one to accept a reformed definition of personhood, which is what happened to me.

A Proposed Personhood

Rather than work up to my own take on personhood, I'll present it and then defend it.
A person is an entity which has had conscious experience and desires, and is still capable of conscious experience and desires.
Or, a shortened version which I'm stipulating as taking the same meaning as the first version:
A person is an entity which has been sentient, and is still capable of sentience.
In catchier form:
Personhood begins with sentience; personhood ends with the loss of the capacity for sentience.
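As a toy formalization (mine, not part of the original argument), the two-part criterion can be sketched as a predicate over two facts about an entity. The boolean inputs are illustrative labels, not a claim about how such facts could actually be determined:

```python
# Toy sketch of the proposed criterion: an entity is a person iff it
# (1) has had conscious experience and desires at some point, and
# (2) is still capable of them.
def is_person(has_been_sentient: bool, capable_of_sentience: bool) -> bool:
    return has_been_sentient and capable_of_sentience

# The kinds of cases the objections turn on, encoded:
assert is_person(True, True)        # e.g. a sleeping or cryopreserved adult
assert not is_person(True, False)   # e.g. a body whose brain was destroyed
assert not is_person(False, True)   # e.g. an early embryo: sentience not yet begun
```

The conjunction is the whole point: neither past sentience alone nor future capacity alone suffices.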
Hopefully no one will claim this overall criterion is completely in the wrong neighborhood. I'm not calling the Grand Canyon a person or questioning the personhood of blog readers. But I do expect most will think it too exclusive or too inclusive about some important classes of entities.

The Basic Justification

By 'conscious experience' I mean having a first-person point of view. Inner experience. The thing David Chalmers tries to draw out as the 'hard problem' of consciousness.1 And by 'desires' I include pain and happiness, along with the notions of fulfillment, frustration, well-being, love, wishing, preferring, etc. Having any of these counts as having some desires. By putting conscious experience together with desires, we are talking about an entity to whom things matter. And entities to whom things matter are the proper targets of direct moral concern.

Though I would quibble with his vocabulary and intended scope, I think Kant was on a parallel track when he wrote:
Beings whose existence depends not on our will but on nature's, have nevertheless, if they are irrational beings, only a relative value as means, and are therefore called things; rational beings, on the contrary, are called persons, because their very nature points them out as ends in themselves, that is as something which must not be used merely as means, and so far therefore restricts freedom of action (and is an object of respect). These, therefore, are not merely subjective ends whose existence has a worth for us as an effect of our action, but objective ends, that is, things whose existence is an end in itself;2 (emphasis added)
Essentially, there is an all-important difference between making use of non-sentient entities to fulfill our desires, and using sentient entities to fulfill our desires. What is morality if it isn't taking the desires of others — or others with desires — into account?

'Too Limited!' Objections

It doesn't count sufficiently brain-damaged human beings!

Suppose a human's brain were entirely removed and destroyed, and the rest of the body kept alive by advanced, non-sentient machines. Would you count what's left as a person?

Now, there may be important questions about whether a given patient has lost all capacity for sentience. It may be ethically sound to play it safe when we're not sure, but extreme cases like total brain removal and destruction demonstrate the principle.

It doesn't count early stages of unborn human life!

That's correct, because we have good reason to believe an embryo has never had a first-person inner experience. It is partially analogous to the former example of a brainless adult body, except that it will likely start a new sentient existence in the future.

What about temporarily unconscious humans?

The above definition covers humans in dreamless sleep or in a cryogenic state. The difference between killing an embryo and killing a sleeping adult is, respectively, that of preventing a personal life from ever starting and that of permanently ending a personal life.

'Not Limited Enough!' Objections

It counts non-humans!

If you believe dogs have a first-person experience and can suffer when hurt, it shouldn't be difficult to accept that how we treat dogs is morally relevant for this very reason. Does a lobster feel pain like we do when it is boiled alive? I don't know, but I hope the answer would make a difference to you.

Also, peaceful aliens aren't going to land and share their secrets if we maintain a humans-only attitude like that!

It counts later stages of unborn human life!

That's also correct, because we have good reason to believe that from (roughly) 20 weeks and on, a developing human has first-person consciousness and the ability to suffer.

It counts sentient beings who lack rational thought, or moral sense, or awareness of self identity, or the concept of time, or language, etc!

These are all popular personhood criteria, which can be intuitively rejected in a single move. Imagine an adult human who lacks any one of these by birth or by accident, but is still sentient. Are you willing to write off this entity as a person?

What About Metaethics?

This personhood definition should be compatible with a variety of metaethical views, including but not limited to my own.

Whatever your deep view on the nature of morality, it comes down to whether you think it matters how you treat fellow sentient beings. Wait; scratch that. It matters regardless of what you think because it matters to them.

1. See
2. From Fundamental Principles of the Metaphysics of Morals.

Friday, April 29, 2011

On 'The Objectivist Ethics' (Pt 1)

Ayn Rand was one of those reforming moralists who thought there was something deeply wrong with popular morality. It would be easy to write her off as selfishly motivated in this, but since a lot of her point is that selfishness is improperly maligned, it's worth hearing her out.

The Objectivist Ethics — Ayn Rand (search link)

Value, a Product of Life
The concept “value” is not a primary; it presupposes an answer to the question: of value to whom and for what? It presupposes an entity capable of acting to achieve a goal in the face of an alternative. Where no alternative exists, no goals and no values are possible.
For a moral system named 'Objectivism,' I find it interesting how she starts by writing off the notion of just-out-there-in-the-world value. She is limiting 'value' to what is often called instrumental value, and only for goals acted on by living beings (as opposed to potential instrumental value).

Why the focus on living beings? Because, supposedly, living beings are the only entities which exist under threat of non-existence:
Matter is indestructible, it changes its forms, but it cannot cease to exist. It is only a living organism that faces a constant alternative: the issue of life or death.
She doesn't only mean life capable of intentional action, or even conscious life, but 'all living organisms, from the simple to the most complex' which includes 'the single cell of an amoeba.' Unless she thinks amoebas have minds, however, I don't see how she can claim her fundamental distinction is 'objective' rather than an arbitrary metaphysical view. Why is an amoeba under the threat of non-existence but a star is not? Why do the active processes of an amoeba count but those of a star do not?

Life, the Goal of Life
On the physical level, the functions of all living organisms [...] are actions generated by the organism itself and directed to a single goal: the maintenance of the organism’s life[....] It is only an ultimate goal, an end in itself, that makes the existence of values possible. Metaphysically, life is the only phenomenon that is an end in itself: a value gained and kept by a constant process of action.
This boils down to the claim that there is only one fundamental kind of goal in the universe: every living being's goal of remaining alive. All other goals are ancillary to that. And since value can only exist with reference to a goal, the only things of value to a living being are those which extend its life.
An organism’s life is its standard of value: that which furthers its life is the good, that which threatens it is the evil.
So there you have it. Objectivist metaethics hold that moral terms only properly refer to what helps or hinders a living being's own continued state of being alive.

How 'objective' vs. 'subjective' is this? These are fuzzy terms, but I will say Objectivism is objective in the sense of being attitude-independent and in the sense of there being a fact of the matter whether something will extend a given living being's life. Objectivism is subjective in the sense that which things are valuable, good, or evil is entirely relative to each individual living being, and in the sense that Rand's way of distinguishing life from non-life is idiosyncratic and arbitrary.

Next time, I'll look at how Objectivist ethics are applied to human action in particular.

Thursday, April 28, 2011

On 'Non-descriptivist Cognitivism'

In their paper 'Non-descriptivist Cognitivism: Framework for a New Metaethic,' Terry Horgan and Mark Timmons (H&T) challenge the widely held assumption that beliefs must be aimed at describing the world. They carve out space for genuine beliefs which aim at how the world ought to be, then argue that moral judgments are of this kind.

Non-descriptivist Cognitivism — Terry Horgan and Mark Timmons (search link)

Moral Judgments: Beliefs or Something Else?

One of the major divisions in moral philosophy is whether moral judgments express beliefs or, alternatively, something else like attitudes (emotivism) or demands (prescriptivism).

Cognitivism — moral judgments (primarily) express beliefs
Non-cognitivism — moral judgments (primarily) express something other than beliefs

Perhaps the strongest argument in favor of cognitivism is that moral judgments behave like beliefs in natural language as well as in logical form. H&T affirm that moral judgments have the 'logico-grammatical trappings of genuine beliefs' and give this example:
Either Jeeves had already mailed Uncle Willoughby's parcel or Bertie ought to mail it.
It's hard to see how 'Bertie ought to mail it' could be a simple attitude expression, since the speaker would either have the attitude or not, whatever the status of an unknown fact off somewhere else. This and other considerations about our use of moral language present a substantial obstacle for non-cognitivism.

Moral Judgments: Built-in or Contingent Motivation?

Another major division in moral philosophy is whether or not a person making a moral judgment is necessarily motivated by it, at least to some extent.

Internalism — accepted moral judgments necessarily motivate
Externalism — accepted moral judgments don't necessarily motivate

A strong argument in favor of internalism is that moral beliefs seem to have an unusually close connection to motivation. As H&T put it:
Typically, anyway, moral judgments directly dispose us toward appropriate action, independently of our pre-existing desires—whereas ordinary nonmoral beliefs only become action-oriented in combination with such prior desires.

Internalism pairs more easily with non-cognitivism; and externalism with cognitivism. It's easy to view beliefs as only contingently motivating and other kinds of expressions as having the motivation built-in.

But H&T are proposing a view that is both cognitivist (moral judgments express beliefs) and internalist (accepted moral judgments necessarily motivate). Taken individually, these views have a lot going for them; it's the combination that seems dubious. If they can argue convincingly for a way to harmonize them, their overall view becomes an alloy of two strong elements.

The Essence of Belief

According to H&T, a belief is 'a kind of psychological commitment state' with 'way-the-world-might-be content'. Notice the word 'might' in that definition. Usually, we think of beliefs as purporting to represent how the world is, not just might be. H&T are leaving the basic definition of belief open for two distinct subtypes of belief: descriptive beliefs and evaluative beliefs.

The belief that 'Bertie will mail the parcel' is descriptive.
The belief that 'Bertie ought to mail the parcel' is evaluative.

Is this concept of 'evaluative belief' legitimate? I'm undecided, but here is the proposed model:

Anatomy of a Descriptive Belief
<core descriptive content: 'that Bertie mail the parcel'> ← is-commitment

Anatomy of an Evaluative Belief
<core descriptive content: 'that Bertie mail the parcel'> ← ought-commitment

Note: Evaluative beliefs do share a core descriptive content, but the ought-commitment makes it a non-descriptive belief overall.
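A hypothetical sketch of this two-part model in code (the class and field names are mine, not H&T's): every belief pairs a core descriptive content with a commitment type, and only the commitment type distinguishes descriptive from evaluative beliefs.

```python
from dataclasses import dataclass
from typing import Literal

@dataclass(frozen=True)
class Belief:
    # Shared 'way-the-world-might-be' content, e.g. 'that Bertie mail the parcel'
    core_content: str
    # The commitment taken toward that content:
    # 'is' (descriptive belief) or 'ought' (evaluative belief)
    commitment: Literal["is", "ought"]

    @property
    def descriptive(self) -> bool:
        return self.commitment == "is"

will_mail = Belief("that Bertie mail the parcel", "is")
ought_mail = Belief("that Bertie mail the parcel", "ought")

# Same core descriptive content; the commitment alone makes the difference.
assert will_mail.core_content == ought_mail.core_content
assert will_mail.descriptive and not ought_mail.descriptive
```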

Tension Relief

I don't think it will be controversial that evaluative beliefs support internalism just as easily as non-cognitivism can. They have a built-in ought-commitment!

The real test is whether H&T's model solves the problems non-cognitivism has with the 'logico-grammatical trappings' of moral judgments. Let's go back to the earlier example:
Either Jeeves had already mailed Uncle Willoughby's parcel or Bertie ought to mail it.
How is this understood under Non-descriptivist Cognitivism?
On our view, such a belief is to be understood as a logically complex commitment state with respect to a sequence of core descriptive contents.
Specifically, the person uttering this complex moral belief is saying she is close to holding an ought-commitment to 'that Bertie mail the parcel' and will inferentially hold it if she comes to hold an is-commitment to 'that Jeeves had not already mailed the parcel.' (Or at least that's the best I can do to characterize their take on that particular kind of complex moral judgment; I'm not confident I have it right.)

Does This Work?

No idea. I'm having a hard time accepting evaluative beliefs as properly-so-called beliefs. Even if the 'logico-grammatical' stuff does work, I would be inclined to say this shows moral judgments don't need to express beliefs to get past that challenge. Why couldn't non-cognitivists use the same tools?

Nor do I accept that 'ought' is a distinct sibling to 'is,' because — following Stephen Finlay's end-relational theory — I consider 'ought' to be a matter of whether, given an implied end, an action is more likely than not to promote that end. But this is a minority externalist view which does not have the initial plausibility of internalism.

Mostly I wanted to explain where Horgan and Timmons are coming from. It's undoubtedly a thought-provoking paper.

Wednesday, April 27, 2011

Books I Want To Write

Titles subject to change if I come up with something snappier...

Morality Without the Mystery

About a year ago I became keenly interested in the question, "What is morality, anyway?" When I began to dig into moral philosophy, I found — to put it nicely — too many answers. Not only do moral philosophers hold all sorts of opposing ideas, they have conflicting tests for what counts as acceptable answers about the nature of morality. It's a mess.

Why add to the mess? Unfortunately we can't just ignore moral philosophy because such questions are central to personal and political life. I have a lot of sympathy for Sam Harris' tactic of largely ignoring metaethics in favor of presenting a moral system which captures much of what we already think about moral goodness. But this has been tried before and it's still worth asking why we should go with one well-meaning solution over another.

I want to write a book that is accessible to an interested general audience, but also lays out my view in the proper philosophical terms, as popular level physics books do when they include the rigorous mathematics without requiring general readers to understand the equations.

Why Christianity Is Probably Not True

If I had blogged through the mid 00's, this would have been my main focus rather than metaethics. It's no longer a burning question because I've gone through the dialectic process long enough to put it to rest, unless something new comes up.

I want to write the book I wish I'd found way back when I first started to consider whether my Christian beliefs were mere culture or how the world really is.

Again, why add to a well-trodden book category? As a skeptic of Christianity, I find most skeptical-of-Christianity books downright embarrassing. More than embarrassing: counterproductive. It's true that many Christians never challenge their beliefs by picking up a critical book, but what really concerns me are all the Christians who do make some honest effort to find out what unbelievers have to say...and find skeptics insisting — against secular history — that it's ridiculous to believe Jesus ever existed. That sort of thing.

My goal would be to present what I consider to be the best arguments against Christianity, without attacking the intelligence or good will of Christians in general. I'm not out to upset the lives and hopes of happy believers, so much as to engage with those interested in the critical question of truth and to show unhappy believers that there is reason to doubt.

The Useful Time Traveler: What To Know Before You Go

One of my favorite daydreams is to consider how I would explain some modern concept or bit of technology to, say, a renaissance scientist.

Last week I was reading Francis Bacon complaining that he could heat things up with fire but didn't have any way to cool things down drastically. I could tell him about refrigeration, but explaining how it works would be much better. And if I wanted to explain that fire isn't really an element, it would be nice if I could explain what fire is instead.

It's amazing how many things we think we understand, but we can't explain except in the most superficial way. The time travel thought experiment is really a way to spark interest in the great wealth of practical and theoretical knowledge we take for granted in the age of 'someone else knows how that works.'

Monday, April 25, 2011

Scientific Method in Practice (Pt. 3)

In this series of posts, I'm re-reading Hugh G. Gauch, Jr.'s philosophy of science textbook Scientific Method in Practice (Google Books).

[Series Index]

Philosophy of science — parts of it anyway — has been the subject of debate since classical times. I will examine three periods of this history in the next three posts.

Why bother with what people thought about science when they were so ignorant and mistaken about things we take for granted today? Because their discussions of scientific method are still relevant. If there's one thing I want everyone to take from this series, it's that science is primarily about how we find things out, not what we've found so far.

Three Extremes
There is an overarching theme regarding scientific method that runs throughout this entire history, and by being alerted to that theme from the outset, a reader is likely to gain twice or thrice as much insight. That overarching theme is the subtle and indecisive struggle over the centuries among empiricism, rationalism, and skepticism, caused by an underlying confusion about how to integrate science's evidence, logic, and presuppositions.1 (emphasis added)
The other major theme is the interplay of scientific discovery and divine revelation. What should we believe when there is tension between the discovered and the revealed?

Early History

Aristotle rejected his teacher Plato's idealism in favor of believing the world we experience is the real world (nutty I know). He outlined deduction and induction, and had very high expectations of scientific knowledge: "Consequently the proper object of unqualified scientific knowledge is something which cannot be other than it is" and "For indeed the conviction of pure science must be unshakable."2 The desire for certain knowledge about the natural world would shortly be reinforced by the success of Euclid's geometry treatise Elements which was widely taken to offer certainty of that domain's basics and much of what follows from it.

Gauch praises Aristotle as a pioneer who got things mostly right, then points out one general and one specific flaw in his approach:
Aristotle's choice of geometry as the standard of success and truth for the natural sciences amounts to asking deduction to do a job that can be done only by a scientific method that combines presuppositions, observational evidence, deduction, and induction. [....] The greatest specific deficiency of Aristotle's science was a profound distaste for manipulating nature to carry out experiments.3
This 'brief history of truth' — as Gauch calls it — moves from Aristotle in the 300s BCE to St. Augustine, who wrote around 400 CE. In Against the Academics, Augustine argued that skepticism need not make us think knowledge is unobtainable and judgment should be perpetually suspended. He also dealt with the relationship between Christian revelation and the findings of natural science. I would like to share two relevant passages not quoted by Gauch. The first expresses Augustine's view of truth discovered by 'heathens,' including facts about the natural world.
Moreover, if those who are called philosophers, and especially the Platonists, have said anything that is true and in harmony with our faith, we are not only not to shrink from it, but to claim it for our own use from those who have unlawful possession of it. For, as the Egyptians had not only the idols and heavy burdens which the people of Israel hated and fled from, but also vessels and ornaments of gold and silver, and garments [….] in the same way all branches of heathen learning have not only false and superstitious fancies [….] but they contain also liberal instruction which is better adapted to the use of the truth [….] These, therefore, the Christian, when he separates himself in spirit from the miserable fellowship of these men, ought to take away from them, and to devote to their proper use in preaching the gospel.4
Here the relationship is one of pre-commitment to religious teachings and plundering of natural science (and philosophy) only as far as agrees with and benefits these religious teachings. But Augustine was a man of many opinions over time. This second excerpt shows a much different perspective:
Usually, even a non-Christian knows something about the earth, the heavens, and the other elements of this world, about the motion and orbit of the stars and even their size and relative positions, about the predictable eclipses of the sun and moon, the cycles of the years and the seasons, about the kinds of animals, shrubs, stones, and so forth, and this knowledge he holds to as being certain from reason and experience. Now, it is a disgraceful and dangerous thing for an infidel to hear a Christian, presumably giving the meaning of Holy Scripture, talking nonsense on these topics; and we should take all means to prevent such an embarrassing situation, in which people show up vast ignorance in a Christian and laugh it to scorn.5
It's clear from context that Augustine is not only talking public relations but that he recognizes the competence of non-Christians in discovering truth about the natural world; truth which can even shed light on the meaning of scripture. He still holds that divine revelation is certain, but he's willing to place confidence in well-established natural science over human understanding of scripture, when the two are in conflict.

Albert the Great of the 13th century is the last stop in my abbreviated version of Gauch's abbreviated early history. It's my habit to look up citations and primary sources when convenient, but this time what little I found doesn't quite back up Gauch's explanation. So let's suppose he got it right for the sake of argument. And what is that called? Suppositional reasoning, appropriately enough!
Following Albertus's example, the statement "You are sitting" has suppositional truth given that "I see you sitting," given the presupposition of business as usual between us and the world [....] At any rate, what was so brilliant about suppositional reasoning was that it admitted that science had been in need of presuppositions, and yet it granted those presuppositions in the context of science's business as usual. [....] Furthermore, suppositional reasoning provides partial and yet substantial common ground for all scientists, regardless of whether an individual's worldview is Christianity, Islam, naturalism, or something else.6
The idea here is that scientists can set aside many deep metaphysical worries by presupposing something like common sense realism as a starting point. Today, the popular notion which is supposed to provide a separation between science and metaphysical debates is methodological naturalism, but I would probably find this objectionable if I were religious. I ask naturalist readers to consider how they would feel about the term 'methodological deism.' Can't we use suppositional reasoning that starts closer to the human level in a way that doesn't borrow language from a contentious 'deep' view like naturalism?

There's a whole chapter on scientific presuppositions coming, so more on this topic soon!

1. Gauch, H. G., Jr. (2006). Scientific method in practice. Cambridge: Cambridge University Press. p. 42
2. Aristotle. Posterior analytics. Book I, Part 2.
3. Gauch. p. 48 
4. St. Augustine. On christian doctrine. Book II, Chapter 40.
5. St. Augustine. The literal meaning of genesis. Book I, Chapter 19, Section 39.
6. Gauch. p. 54

Sunday, April 17, 2011

What Is Moral Realism?

There doesn't seem to be a single, widely accepted definition of moral realism. Let's look at some candidate definitions.

According to Sayre-McCord...

Geoffrey Sayre-McCord has a very inclusive definition of realism in general and moral realism in particular:
Wherever it is found, I'll argue, realism involves embracing just two theses: (1) the claims in question, when literally construed, are literally true or false (cognitivism), and (2) some are literally true. Nothing more.1
In other words, moral realism is a synonym for success theory. Picture a little flow chart:
  1. Do moral claims aim at truth?
    no — moral non-cognitivism
    yes — moral cognitivism, continue to the next step.
  2. Are moral claims sometimes true?
    no — error theory
    yes — success theory
The above breakdown is very standard terminology. As we'll see, other philosophers add further criteria (beyond success theory) for a view to count as moral realism.
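That flow chart can be rendered as a tiny classifier; a toy sketch of the standard taxonomy, with the two questions as boolean inputs:

```python
def classify(aims_at_truth: bool, sometimes_true: bool = False) -> str:
    """Classify a metaethical position by the two standard questions.
    sometimes_true only matters when aims_at_truth is True."""
    if not aims_at_truth:
        return "moral non-cognitivism"
    if not sometimes_true:
        return "error theory"
    # On Sayre-McCord's inclusive definition, success theory just is moral realism.
    return "success theory"

assert classify(False) == "moral non-cognitivism"
assert classify(True, False) == "error theory"
assert classify(True, True) == "success theory"
```

The philosophers discussed next would add further tests before the final branch counts as realism.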

To be fair, I should explain that Sayre-McCord is advancing a general definition of realism. He's not endorsing all success theories as viable options within moral philosophy. For example, he counts moral subjectivism as a form of moral realism, claiming the faults of subjectivism are for other reasons besides the realism/anti-realism distinction. I highly recommend his paper on the topic:

The Many Moral Realisms — Geoffrey Sayre-McCord (search link)

So, according to Sayre-McCord, moral realism is just the view that some moral statements are true.

According to Gray...

James Gray starts with Sayre-McCord's definition and adds one more restriction:
A moral realist believes that there is at least one moral fact, and moral facts are not reducible to nonmoral facts.2
He goes on to explain why:
Some moral realists might argue that morality is reducible to non-moral facts. [...] I don’t agree that this is moral realism because once we can reduce morality to non-moral facts, we can say, “We thought morality was real, but now we know we were talking about something else.” Morality at that point can be dispensed with.2
My objection to this is that we aren't so stringent about realism in general. I'm a realist about sandwiches, but a sandwich is nothing but bread, meat, mustard, etc. Should I say, "We thought sandwiches were real, but now we know we were talking about something else"?

I think what's going on here is that 'moral realism' is treated as a label for acceptable moral theories. Since Gray thinks non-reducibility (i.e. moral non-naturalism) is needed for morality to be as important as we commonly think it is, he's including it in his definition of moral realism. This would also explain the tendency for philosophers to include things like motivational internalism, the notion that anyone who accepts a moral judgment as true necessarily has some motivation to comply with it.

I'm more sympathetic with Sayre-McCord's approach of sticking to the question of realism and counting other considerations separately, but while Gray is too restrictive, Sayre-McCord is too permissive.

According to Brink...
A realistic view about ethics presumably asserts the existence of moral facts and true moral propositions. But a moral relativist who thinks that moral facts are constituted by an individual's or social group's moral beliefs is able to agree with this. Moral realism, it seems, is committed to moral facts and truths that are objective in some way. But in what way?3
Now we're getting somewhere! This 'objectivity' criterion is a popular addition to mere success theory, but it can be tricky to define the kind of objectivity needed. It's easy to exclude too much.

For example, we could say realism requires full independence from mental facts, but then there would be nothing objective in the science of psychology. Or we could say that anything caused by minds can't count as realism, but Brink points out this would stop us from being realists about tables, chairs, or anything else manufactured by thinking beings! (I'll add that Theists couldn't be realists about the natural world.)

Brink settles on the following distinction:
Whatever else realists might claim, they usually agree on the metaphysical claim that there are facts of a certain kind which are independent of our evidence for them. [...] Not only does ethics concern matters of fact; it concerns facts that hold independently of anyone's beliefs about what is right or wrong.4
I'm not sold on this 'independent of our evidence' phrasing, but I do think he's onto something with the criterion of belief-independence. Otherwise, "Everyone believes X is morally permissible, therefore X is morally permissible" would count as moral realism.

At the same time, psychological facts like pain can still play a role in moral realism. This is important because it seems likely morality is — at least in part — about mental facts. In "Naturalism, Theism, Obligation, and Supervenience" (search link), Alvin Plantinga mentions two thought experiments:
  1. Everyone in the world believes it is morally acceptable to torture people for the fun of it.
  2. Everyone in the world desires that the behavior of torturing people for the fun of it were more widely practiced.
Plantinga thinks any real moral facts must hold independently of both situations. Brink's distinction agrees for (1): torture would still be wrong even if everyone believed it were permissible. However, it is at least arguable that the psychological fact of being extremely undesirable to victims plays some role in torture being wrong. If everyone (including victims!) somehow desired an increase in torturing for fun, wouldn't that affect the real moral facts?

Brink's objectivity criterion might need further refining, but I'm in broad agreement that moral realism can take into account some mental facts even as it excludes others. His comparison to psychology is very apt. While psychological facts depend on minds (being the study of minds), we can still be mistaken about how our own minds work. Similar deal for morality.

The Big Lesson

If you do use the term 'moral realism,' please give relevant details on what you mean. For now, at least, it's not the clearest term in philosophy's word bank.

1. Sayre-McCord, G. (Ed.). (1988). Essays on moral realism. Cornell University Press.
2. From
3. Brink, D.O. (1989). Moral realism and the foundations of ethics. Cambridge University Press. p. 14
4. Ibid. pp. 15, 20

Thursday, April 14, 2011

Good Means Helpful

Let me boil down all this philosophical rambling about 'end-relational theory of normative terms' to a catch phrase:
Good means helpful.
That's the bulk of it. If I'm asked what it means to call something good, my response is, "It means that thing is helpful."

Of course there is a difference between 'good' and 'helpful.' People are much more likely to ask the obvious follow-up question, "Helpful for what?" than they are to ask "Good for what?" In some contexts, it's downright jarring for someone to ask the follow-up question about 'good,' e.g.:
"Helping the suffering is good."

"Good for what?"

"How can you ask that?!"
I suggest this difference doesn't come from the meaning of 'good' itself. Instead, we associate 'good' more strongly with certain conventional answers, while we don't associate 'helpful' so strongly with any particular answers. 'Helpful' is the more open-ended form of 'good.'

One of the major conventional answers to "Good for what?" is "reducing unnecessary suffering." This is why the above statement sounds so obvious. Spelled out, it would be:
"Helping the suffering is helpful [for reducing unnecessary suffering]."
If some idiot or philosopher asks, "Good for what?" in such an obvious case, we take her to be rejecting conventional (typically unstated) ends and asking for an unconventional end. Probably even a selfish end! This is what draws our ire.

To review, both 'good' and 'helpful' have the same meaning, the same definition. And both words have a blank space in this definition where an end needs to be plugged in, but we're much more likely to fill in the blank for 'good' without asking for clarification.
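Since the claim is that 'good' and 'helpful' share one definition with a blank space where an end gets plugged in, the idea can be sketched as a function with an end parameter. This is just my toy illustration of the catch phrase, not a piece of formal semantics; all the names here are made up:

```python
# Toy model: 'good' and 'helpful' share one definition with a slot for an end.
# (An illustration of the catch phrase, not formal semantics.)

# One of the major conventional answers to "Good for what?"
CONVENTIONAL_END = "reducing unnecessary suffering"

def helpful(thing, for_end):
    """'Helpful' wears its blank on its sleeve: the end must be supplied."""
    return f"{thing} is helpful for {for_end}"

def good(thing, for_end=None):
    """'Good' has the same blank, but when no end is supplied we silently
    fill it with a conventional answer instead of asking the follow-up."""
    return helpful(thing, for_end if for_end is not None else CONVENTIONAL_END)

print(good("helping the suffering"))
# -> helping the suffering is helpful for reducing unnecessary suffering
```

On this sketch, the only difference between the two words is the default argument.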

(Frankly, I'm surprised this isn't already a popular catch phrase in moral philosophy. Is there something embarrassingly wrong with it? I'll risk it. As Francis Bacon wrote: "[T]ruth emerges more readily from error than confusion.")1

1. From

Wednesday, April 13, 2011

Category Shuffle

Readers of this blog may notice the tag cloud is gone, replaced by a drop-down box for 'categories.' I spent some time the last few days re-thinking my tagging scheme. My research turned up a neat neologism 'folksonomy,'1 the much more organized Library of Congress Subject Headings,2 and some attempts to link up popular and formal categorization schemes.3

I also learned about the difference between categories and tags,4 as promoted on WordPress but not Blogger. Basically, categories are broad and tags are fine-grained. Some blogs use both, while others just use one. My former tags were an ungainly mix of broad and narrow, so I opted for a new categories-only scheme to keep things simple and neat.

(Besides, if I tried to tag things out, I'd probably feel compelled to go full blast with LoC Subject Headings.)

After deciding to use categories, the next challenge was defining the categories. I found a few lists of 'branches of philosophy,' but wasn't satisfied. What follows is my (starting) list of categories and some notes on what might be somewhat idiosyncratic understandings of them.

Epistemology — Knowledge, justification, truth, philosophy of science.
Metaethics — The meaning of moral language, moral ontology.
Normative Ethics — Methods of determining right and wrong.
Philosophy of Language — Meaning, truth-conditions, logic.
Philosophy of Mind — Consciousness, free will, action theory.
Political Philosophy — Civil liberties, law, penalties.
Value Theory — The meaning and methods of comparing value.
Misc. — Misc.

Perhaps the most questionable decision is separating Value Theory from ethics. One reason is that value, generally speaking, has broader uses than those invoked in ethics, e.g. mathematical, monetary, and aesthetic value.

You may have also noticed the lack of a Metaphysics category. Simply put, my metaphysics is scientific realism and my other philosophical views assume it. I might add a Metaphysics category later on, if I have occasion to treat it separately.

The final challenge was presentation. I wanted a drop-down box, but Blogger doesn't include a widget specifically for that. For my own future reference as much as anything, the procedure is: Design → Page Elements → Add a Gadget → HTML/JavaScript. Then paste the following and tweak as needed:
<div class="widget-content">
<select onchange="if (this.value) location.href = this.value;">
<!-- the onchange value above is the standard redirect-on-select pattern; adjust as needed -->
<option value="">- Categories -</option>
<option value="URL">CATEGORY</option>
<option value="URL">CATEGORY</option>
<option value="URL">CATEGORY</option>
</select>
</div>

Where each URL is the link for a category and each CATEGORY is its display name.

And now back to blogging rather than metablogging.


Tuesday, April 12, 2011

Hypothetical and Personal Reasons

As far as I can tell, ends don't have any source of importance besides their importance in realizing other ends or their importance to a person who desires them. (Or indirect importance to a person because of their importance in realizing other ends which that person desires.)

These two basic kinds of ends-importance generate two corresponding types of practical reasons:
Hypothetical reasons for action. Reasons which only take into account circumstance and an end (or a coherent aggregate of ends).

Personal reasons for action. Reasons which also take into account an agent's desires.
Hypothetical reasons for action only become reasons an agent has if appropriate desires are held. What I like about Stephen Finlay's semantics1 is that normative truths can hold independently of desires. I can deny that individuals necessarily have reason to act toward moral ends, yet affirm the truth of impersonal moral 'oughts.'

For example, I might affirm, 'It is wrong to steal' or 'It is wrong for you to steal,' but not 'Because it is wrong to steal, you necessarily have reason not to steal.'

I realize many philosophers want to compare ends in a way that's desire independent and necessarily gives personal reasons, but I'm very skeptical about that project.


Sunday, April 10, 2011

On 'The End-Relational Theory of 'Ought' and the Weight of Reasons'

This paper1 by Daan Evers begins with an explanation of Stephen Finlay's view of the meaning of 'ought,' followed by an extension of that view, and finally a criticism. Readers of this blog are encouraged to read Finlay's own explanation in 'Oughts and Ends,'2 my condensed explanation,3 Evers' condensed explanation,1 or — for bonus points — all three! But for now...

End-Relational Theory: Extra Short Version

'Ought' — when used in an imperative rather than merely predictive sense — breaks down into an implied end (the goal kind of end) and a claim that a particular action is the most likely available way to realize that end.

Suppose I say, "You ought to look both ways before crossing the street." There is an implied end here: not being hit by an automobile while crossing the street. Further, I'm claiming that looking both ways is the most likely available way to not be hit by an automobile while crossing the street.

If that sounds ridiculously obvious, good!
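On this reading, evaluating an 'ought' amounts to picking the available action most likely to realize the implied end. Here is a minimal sketch of that idea; the probability numbers are invented purely for illustration:

```python
# End-relational 'ought' as an argmax: the action you ought to take
# (relative to an implied end E) is the available action most likely
# to realize E. The probabilities below are made up for illustration.

def ought(p_end_given_action):
    """Return the available action with the highest probability of realizing the end."""
    return max(p_end_given_action, key=p_end_given_action.get)

# Implied end: not being hit by an automobile while crossing the street.
p_not_hit = {
    "look both ways": 0.999,
    "cross without looking": 0.95,
}
print(ought(p_not_hit))  # -> look both ways
```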

Reasons to Look Both Ways

According to Evers, 'You ought to look both ways' entails 'There are reasons for you to look both ways' as well as 'The collective weight of the reasons for you to look both ways is greater than the collective weight of reasons for you not to look both ways.'

Also, we should be able to start from 'The balance of reasons favors that you look both ways' and validly conclude 'You ought to look both ways.' (I'm inserting the street-crossing example into Evers' more abstract language.)

Sounds reasonable enough to me. Evers goes on to show how an end-relational understanding of 'ought' can be translated into 'the collective weight of reasons.' But then he questions whether it can be made to work in the other direction, i.e. whether we can start with 'the collective weight of reasons' and validly conclude with an end-relational 'ought.'

Ought —> Weight of Reasons

Evers constructs a theory of reasons in steps I won't reproduce here. Instead, I'll skip to his definition of what it means to say a person has 'most reason' to do something relative to an implied end. (Again, I'm paraphrasing and rendering his abstract language in more specific terms):
You have most reason to look both ways (relative to the goal of not being hit by an automobile while crossing the street) iff the reasons to look both ways collectively raise the probability of not being hit to a value higher than alternate options do.
So the claim "You ought to look both ways..." can be translated into "You have most reason to look both ways..." in a way that retains end-relational and probabilistic aspects.

Weight of Reasons —> Ought

Now suppose we start with:
You have most reason to never point a gun at a person you don't intend to shoot (relative to the goal of you not accidentally shooting a person).
Can we translate this to an 'ought'? Yes!
You ought to never point a gun at a person you don't intend to shoot (in order that you not accidentally shoot a person).
But if this seems to work, then what criticism is Evers raising?

A Problem?

Without paraphrasing this time, I'll let Evers introduce the problem...
So Most Reason generates intuitively plausible results for the weights of reasons relative to a common end. However, our reasons derive from many different ends, and a theory of weight should allow for comparisons of the strength of reasons derived from different ends. As I will argue now, Most Reason fails here.
Oh. He's appealing to the intuition that the weight of reasons to take one action rather than another is often not relative to a single end, but relative to a multiplicity of ends. Evers offers two strategies to save the theory of weight he previously explained:
  1. Deny that reasons properly 'derive their status as reasons from different ends.'
  2. Claim we can always construct a 'superend' composed of lower-level ends, and so preserve the single-end semantics which didn't cause a problem.
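Strategy (2), at least, is easy enough to state. In this sketch a 'superend' is just a weighted mix of lower-level ends, so the single-end semantics runs on it unchanged. The ends, weights, and probabilities here are all my invention for illustration:

```python
# Sketch of strategy (2): treat a weighted mix of ends as one 'superend',
# then apply the single-end semantics to it. Ends, weights, and
# probabilities are invented for illustration.

def superend_prob(action, p, weights):
    """p[end][action]: probability that `end` is realized given `action`."""
    return sum(weights[end] * p[end][action] for end in weights)

p = {
    "safety":      {"look both ways": 0.999, "dash across": 0.90},
    "punctuality": {"look both ways": 0.95,  "dash across": 0.97},
}
weights = {"safety": 0.8, "punctuality": 0.2}

# The action with 'most reason' relative to the superend:
best = max(p["safety"], key=lambda a: superend_prob(a, p, weights))
print(best)  # -> look both ways
```

The sticking point, of course, is whether those weights can be fixed non-arbitrarily.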
Evers doesn't think either strategy is 'feasible,' but then he only argues against (2) without explaining why he thinks (1) is infeasible. Maybe he just didn't think anyone would go for (1), but I genuinely do think (1) is the correct solution.

My argument for (1) will be by analogy.

Philosophers have, at times, argued for the existence of a simple ought or 'ought simpliciter' or 'ought all things considered.' Though it plays the rebel in rejecting this notion, Finlay's end-relational theory 'reveals remarkable unity both within normative language, and between it and related nonnormative constructions.' Additionally, as shown in this paper, end-relational theory 'generates intuitively plausible results for the weights of reasons relative to a common end.'

So why not consider playing the rebel again and reject the notion of simple reasons or 'reasons all things considered'? Or, more to the point, 'all ends considered'? In other words, why not affirm that having most reason for an action implies a specific end, just as affirming that one ought to perform an action implies a specific end?

3. (preprint PDF)

Thursday, April 7, 2011

Scientific Method in Practice (Pt. 2)

In this series of posts, I'm re-reading Hugh G. Gauch, Jr.'s philosophy of science textbook Scientific Method in Practice (Google Books).

[Series Index]

Four Bold Claims
[J]ust what about science is good and worthy of respect? [...] For the sake of concreteness, any little exemplar of scientific thinking will suffice. So envision a scientist declaring that "Table salt is composed of sodium and chlorine." What claims attend this statement?1
It may seem a bit silly to stop and question a scientific fact as trivial and uncontroversial as the chemicals in table salt, but it is an amazing claim in its own right. We live in a privileged age that takes it for granted that the vast array of physical substances we encounter are composed of a mere hundred or so more basic substances (elements), which produce very different effects in combination. Heck, in pure form, chlorine is a poison and sodium easily explodes! In an earlier age, a person acquainted with table salt, chlorine, and sodium may well have rejected the above claim as utterly ridiculous.

Let's start again. Why should you or I believe this modern scientific story about what table salt is really made of? Gauch identifies four 'bold' claims of scientific method in general which attend specific claims, then gives his answer in a nutshell:
The full force of science's claims results from the joint assertion of all four: rationality, truth, objectivity, and realism. Science claims to have a rational method that provides humans with objective truths about physical reality.2
This is what (allegedly) sets science apart from mere cultural tales about the nature of the world. It's also why everyone wants science on their side if they can manage it, and disparages science only when they can't.

Bold Claim #1: Rationality

I'll jump straight to Gauch's template for rational knowledge claims:
I hold belief X for reasons R with confidence level C, where inquiry into X is within the domain of competence of method M that accesses the relevant aspects of reality. A rational belief is not an arbitrary guess, but rather is a justified conclusion based on specific reasons and evidence.3
The composition of table salt, for example, is believed for specific reasons within a framework of high probability according to a method competent in discovering facts about physical reality. Gauch does not claim science is the only rational method of inquiry, citing common sense and philosophy as additional methods. As we'll see in detail later, scientific method relies on these other methods for its presuppositions.
This business of giving reasons R for belief X must eventually stop somewhere, however, so not quite all knowledge claims can follow this formula. Rather, some must follow an alternative formula: I hold belief X because of presuppositions P.3
(On the occasion Gauch touches on moral philosophy — like in the rationality section — I try not to notice.)

Bold Claim #2: Truth
Truth is a property of a statement, namely, that the statement corresponds with reality.4
Well, that's obvious! What are the alternatives? For one, it could be argued that scientific truth is only concerned with bringing greater coherence to our beliefs. For another, it could be argued that current scientific theory is just one way of organizing thoughts in a way that provides useful technology.

While scientific method does increase coherence in our beliefs and it does organize thoughts in a way that often provides useful technology, these are both side effects of scientific method aiming at truth; truth is coherent and often useful.
Indeed, every kind and variety of anti-scientific philosophy has, as an essential part of its machinery, a defective notion of truth that assists in the sad task of making truth elusive.5
Bold Claim #3: Objectivity

Gauch points out that "Table salt is composed of sodium and chlorine" says nothing about people who might hold this belief. An 'objective belief,' then, is one which concerns the object of the belief and not the believer or the state of believing. I like Gauch's repeated phrase 'priority of reality over beliefs.'

While the bumper sticker "It's a Jeep thing, you wouldn't understand" may forever be closed to further investigation to those of us who don't already 'get it,' an important aspect of science is that the reasons R mentioned in the rationality section above are open to anyone who cares enough to dig deeper.
Accordingly, the scientific attitude is not "I am a superior and unique person who alone knows fact X," but rather, "I know X, and so can any other human who cares to make the effort required to learn it." There is an essential humility in the understanding that science is public and shared.6
And yet the stereotype persists that scientists are some kind of belief-dictating elitists.

Bold Claim #4: Realism

I think the earlier three claims covered this one well enough indirectly.


While the discussion of these four claims hasn't done much to justify thinking science is successful at 'provid[ing] humans with objective truths about physical reality,' I hope the significance of scientific claims is clearer. It's not that science tells one story among other, equally valid cultural stories; science aims at being the correct story, so far as it goes.

1. Gauch, H. G., Jr. (2006). Scientific method in practice. Cambridge: Cambridge University Press. pp. 29-30
2. Ibid. p. 40
3. Ibid. p. 30
4. Ibid. p. 31
5. Ibid. p. 34
6. Ibid. p. 35

Monday, April 4, 2011

Stop Calling Everything 'Naturalistic Fallacy'

These days, just about any ethical argument which isn't based on pure feeling risks the following cursory dismissal:

"You are committing the naturalistic fallacy!"

Unfortunately a number of distinct things have been called 'naturalistic fallacy' which aren't what G.E. Moore, the fellow who coined the term, originally meant. Here are some things which aren't the naturalistic fallacy.

Impostor #1
There are two fundamentally different types of statement: statements of fact which describe the way that the world is, and statements of value which describe the way that the world ought to be. The naturalistic fallacy is the alleged fallacy of inferring a statement of the latter kind from a statement of the former kind.1
It was Hume, not Moore, who pointed out that arguments from what is the case to what ought to be the case must include a step that connects the former to the latter. This is commonly called the 'Is-Ought Problem,' though it should really be called the 'Is-Ought Reminder.'

Impostor #2
The fight against the naturalistic fallacy was supposed to be over. One of the great achievements of modern philosophy was to undermine arguments from Nature. No longer would educated people argue that homosexuality, powered flight, and the education of women were unnatural.2
I don't support the idea that whatever is 'natural' is good and whatever is 'artificial' or 'unnatural' is bad (or at least suspicious), as I'm quite fond of some unnatural technology and avoid natural poison whenever I can!

But this isn't the meaning of 'naturalistic fallacy' in modern philosophy.

Impostor #3
Do not fall victim to the naturalistic fallacy. Simply because something evolved does not mean that it is right or justifiable. Nor does it mean that it cannot be changed.3
A common variation, in which evolution rather than traditional prejudice defines 'natural' and 'unnatural.'

The Genuine Article
What Professor Moore means by the 'naturalistic fallacy' is the assumption that because some quality or combination of qualities invariably and necessarily accompanies the quality of goodness, or is invariably and necessarily accompanied by it, or both, this quality or combination of qualities is identical with goodness.

If, for example, it is believed that whatever is pleasant is and must be good, or that whatever is good is and must be pleasant, or both, it is committing the naturalistic fallacy to infer from this that goodness and pleasantness are one and the same quality.4
What's most striking about this genuine definition of 'naturalistic fallacy' is that any of the above impostors can be invoked without committing the naturalistic fallacy! For example, it could turn out to be the case that whatever naturally evolved is also good and nothing is good unless it has naturally evolved. We could then make correct judgments about goodness by checking whether a thing has naturally evolved or not. And, so long as we don't claim 'naturally evolved' and 'good' must therefore be one and the same quality, we avoid the naturalistic fallacy.

Another striking difference is that Moore's point concerned metaethics, while impostors tend to focus on normative ethics. Moore did have a lot to say about normative ethics, but he didn't write off other views as necessarily fallacious. Instead, he claimed that advocates of those other views tend to commit the (genuine) naturalistic fallacy and so they might reconsider their views once they realize this fault:
Now, I do not wish the importance I assign to this fallacy to be misunderstood. The discovery of it does not at all refute Bentham’s contention that greatest happiness is the proper end of human action, if that be understood as an ethical proposition, as he undoubtedly intended it. That principle may be true all the same; we shall consider whether it is so in the succeeding chapters. Bentham might have maintained it, as Prof. Sidgwick does, even if the fallacy had been pointed out to him. What I am maintaining is that the reasons which he actually gives for his ethical proposition are fallacious ones so far as they consist in a definition of right. What I suggest is that he did not perceive them to be fallacious; that, if he had done so, he would have been led to seek for other reasons in support of his Utilitarianism; and that, had he sought for other reasons, he might have found none which he thought to be sufficient. In that case he would have changed his whole system—a most important consequence.5
So, like Hume, Moore is often misunderstood as claiming whole classes of ethical views are invalid. Both of them were really just highlighting a consideration moral philosophers should take into account which had previously been overlooked.

1. from
2. from
3. Davis, S.F., & Buskist, W. (2008). 21st century psychology: a reference handbook. London: SAGE Publications, Ltd. p. 259
4. from 
5. from

Sunday, April 3, 2011

Scientific Method in Practice (Pt. 1) and Index

In this series of posts, I'm re-reading Hugh G. Gauch, Jr.'s philosophy of science textbook Scientific Method in Practice (Google Books).


Pt. 1 — Introduction, Science in Perspective, The Risk and the Reward
Pt. 2 — Four Bold Claims
Pt. 3 — Three Extremes, Early History
Pt. 4 — Middle History
Pt. 5 — Recent History
Pt. 6 — Presuppositions
Pt. 7 — Worldviews
Pt. 8 — [series is on indefinite hold]

Chapter One: Introduction
The central thesis of this book is that scientific methodology has two components, the general principles of scientific method and the specialized techniques of a given speciality, and the winning combination for scientists is strength in both.1
Gauch illustrates this view with a diagram of Astronomy, Chemistry, Geology, Microbiology, and Psychology sharing a common core, 'principles of scientific method.' This is no trivial point. Among non-scientist academics — as well as among some scientists — the notion that any 'methods of enquiry apply with equal efficacy to atoms and stars and genes' has been challenged. Gauch's first-pass answer to such challenges works well as a survey of the main topics in his textbook:
"Do astronomers use deductive logic, but not microbiologists? Do psychologists use inductive logic (including statistics) to draw conclusions from data, but not geologists? Are probability concepts and calculations used in biology, but not in sociology? Do medical researchers care about parsimonious models and explanations, but not electrical engineers? Does physics have presuppositions about the existence and comprehensibility of the physical world, but not genetics?"2
The two major benefits Gauch believes will follow from improved education in core scientific methodology are, first, increased productivity and, second, enhanced perspective.3

This first benefit is highly practical and one Gauch understands from personal experience. As I mentioned in the 'Preview' post of this series, he made a breakthrough of sorts in his field of agricultural science merely by applying the principle of parsimony to crop yield statistics. How many other specific disciplines are neglecting useful general methods?

The second benefit, enhanced perspective of scientific method in 'philosophical and historical context,' makes science 'more interesting' and also 'better integrated.' I take this to mean science won't be seen as a niche concern for scientists, lacking immediate relevance for everyone else. 'Such perspective will also facilitate realistic claims, neither timid nor aggrandized, about science's powers and prospects.'4 Precisely what we need in a time when the role of science is both widely underestimated and widely overestimated!

Chapter Two: Science in Perspective

This chapter begins with the claim that science is a liberal art. I admit this struck me as strange, since the two are often contrasted in phrases like 'liberal arts and science' or 'bachelor of arts / bachelor of science.' It took several re-reads of this section to understand what Gauch — and the American Association for the Advancement of Science (AAAS) — are trying to say when they make this strange-sounding classification.

Maybe it will help to start with the example Gauch pulls from M.R. Matthews' Science Teaching.
To teach Boyle's Law without reflection on what 'law' means in science, without considering what constitutes evidence for a law in science, and without attention to who Boyle was, when he lived, and what he did, is to teach in a truncated way.5
Gauch calls this 'humanities-rich science' as opposed to 'humanities-poor science,' which would only teach Boyle's formula as fact and leave it at that. Notice the elements of liberal arts education in teaching Boyle's Law from this wider perspective: philosophical foundations, historical context, and the significance of scientific discovery in society. Matthews also makes the catchy distinction between merely 'training in science' and 'education about science.'6 This may be a bit off-topic, but I'm struck by the analogy of mere 'Bible study' vs. broader 'study about the Bible,' and the enlightening perspective on the former that comes from the latter.

The Risk and the Reward

As anyone acquainted with twentieth-century academic trends will know, the humanities include radically opposing views on truth, meaning, and science itself. Maybe science has kept away from the humanities just to stay sane! Or maybe this distance has contributed to the lack of grounding and the misuse of scientific-sounding terminology in certain movements. Gauch believes the rewards of re-joining science with the humanities outweigh the risks of subjecting science to criticism (which is happening anyway).
True, there are enough troubles in the humanities that a wanton relationship could weaken science. But much more importantly, there are enough insights and glories in the humanities that a discerning relationship could greatly strengthen science.7
In other words, well-understood science can handle skeptical attacks and flourish within a broader perspective.

1. Gauch, H. G., Jr. (2006). Scientific method in practice. Cambridge: Cambridge University Press. p. 2
2. Ibid. p. 4
3. Ibid. p. 7
4. Ibid. p. 8
5. Matthews, M.R. (1994). Science teaching: the role of history and philosophy of science. London: Routledge. p. 3
6. Ibid. pp. 2-4
7. Gauch (2006). p. 26