Saturday, April 30, 2011

Defining Personhood

Personhood is a highly contested label placed on whichever more substantial concepts qualify an entity for direct moral concern.

For this reason, a popular way to go about giving a substantial definition of personhood is to identify which entities one is already directly morally concerned about, and find any combination of criteria which kick out that same list and no more.

If this approach sounds scandalously circular, let me point out that it can still fail! It may very well be the case that one's intuitive feelings of 'this is a person' and 'that is not a person' can't be neatly justified by any set of principles, even when any consistent set of principles (short of sheer intuition affirmation) would count as success. So even this very ad hoc exercise can shake up intuitions and motivate one to accept a reformed definition of personhood, which is what happened to me.

A Proposed Personhood

Rather than work up to my own take on personhood, I'll present it and then defend it.
A person is an entity which has had conscious experience and desires, and is still capable of conscious experience and desires.
Or, a shortened version which I'm stipulating as taking the same meaning as the first version:
A person is an entity which has been sentient, and is still capable of sentience.
In catchier form:
Personhood begins with sentience; personhood ends with the loss of the capacity for sentience.
Hopefully no one will claim this overall criterion is completely in the wrong neighborhood. I'm not calling the Grand Canyon a person or questioning the personhood of blog readers. But I do expect most will think it too exclusive or too inclusive about some important classes of entities.

The Basic Justification

By 'conscious experience' I mean having a first-person point of view. Inner experience. The thing David Chalmers tries to draw out as the 'hard problem' of consciousness.1 And by 'desires' I include pain and happiness, along with the notions of fulfillment, frustration, well-being, love, wishing, preferring, etc. Having any of these counts as having some desires. By putting conscious experience together with desires, we are talking about an entity to whom things matter. And entities to whom things matter are the proper targets of direct moral concern.

Though I would quibble with his vocabulary and intended scope, I think Kant was on a parallel track when he wrote:
Beings whose existence depends not on our will but on nature's, have nevertheless, if they are irrational beings, only a relative value as means, and are therefore called things; rational beings, on the contrary, are called persons, because their very nature points them out as ends in themselves, that is as something which must not be used merely as means, and so far therefore restricts freedom of action (and is an object of respect). These, therefore, are not merely subjective ends whose existence has a worth for us as an effect of our action, but objective ends, that is, things whose existence is an end in itself;2 (emphasis added)
Essentially, there is an all-important difference between making use of non-sentient entities to fulfill our desires, and using sentient entities to fulfill our desires. What is morality if it isn't taking the desires of others — or others with desires — into account?

'Too Limited!' Objections

It doesn't count sufficiently brain-damaged human beings!

Suppose a human's brain were entirely removed and destroyed, and the rest of the body kept alive by advanced, non-sentient machines. Would you count what's left as a person?

Now, there may be important questions about whether a given patient has lost all capacity for sentience. It may be ethically sound to play it safe when we're not sure, but extreme cases like total brain removal and destruction demonstrate the principle.

It doesn't count early stages of unborn human life!

That's correct, because we have good reason to believe an embryo has never had a first-person inner experience. It is partially analogous to the earlier example of a brainless adult body, except that an embryo will likely begin a sentient existence in the future.

What about temporarily unconscious humans?

The above definition covers humans in dreamless sleep or in a cryogenic state. The difference between killing an embryo and killing a sleeping adult is, respectively, that of preventing a personal life from ever starting and that of permanently ending a personal life.

'Not Limited Enough!' Objections

It counts non-humans!

If you believe dogs have a first-person experience and can suffer when hurt, it shouldn't be difficult to accept that how we treat dogs is morally relevant for this very reason. Does a lobster feel pain like we do when it is boiled alive? I don't know, but I hope the answer would make a difference to you.

Also, peaceful aliens aren't going to land and share their secrets if we maintain a humans-only attitude like that!

It counts later stages of unborn human life!

That's also correct, because we have good reason to believe that from (roughly) 20 weeks and on, a developing human has first-person consciousness and the ability to suffer.

It counts sentient beings who lack rational thought, or moral sense, or awareness of self identity, or the concept of time, or language, etc!

These are all popular personhood criteria, which can be intuitively rejected in a single move. Imagine an adult human who lacks any one of these by birth or by accident, but is still sentient. Are you willing to write off this entity as a person?

What About Metaethics?

This personhood definition should be compatible with a variety of metaethical views, including but not limited to my own.

Whatever your deep view on the nature of morality, it comes down to whether you think it matters how you treat fellow sentient beings. Wait; scratch that. It matters regardless of what you think because it matters to them.


1. See http://consc.net/papers/facing.html
2. From Fundamental Principles of the Metaphysics of Morals.

13 comments:

  1. I actually agree with you fully on this. I never really found any of the objections to this focus on consciousness particularly convincing and I definitely agree that other animals who have sentient experiences deserve more rights than most humans give them.

    The one key objection that I think needs to be analyzed, however, is whether we should give more rights/moral precedence to things based on higher conscious experience. For example, it seems clear that the conscious experience of a cow is not the same as that of a crow, an incredibly social and intelligent animal. The reason this is important is this: suppose there's a gorilla and a human baby. Supposing the gorilla has a "higher" level of consciousness than the baby, morally speaking, what should be done?

    Just my two cents on this, but otherwise, full agreement.

  2. I'm leaving open questions about how we should treat different species and individuals, as there are many possibilities which still recognize the same set of beings as the proper targets of moral consideration.

    For example, recognizing the personhood of later-stage unborn humans doesn't automatically imply they must be treated exactly the same as a healthy adult human. A good number of Pro-Lifers consider abortion justified in the case that it comes down to the mother or child dying, though they fully accept the personhood of the child.

    So I don't take your concern so much as an objection as an important area I didn't address.

    The problem with consciousness is that we have only a weak grasp of how to estimate it beyond the human sphere. Is a cow really less conscious because it is less intelligent? As far as we know, it may very well be that a cow feels far more deeply than we do and has a far stronger subjective sense, with powerful feelings of contentment. So, maybe we should be spending our energies feeding cows tasty munch....

    Even if you think the latter isn't likely, I hope the thought experiment suggests to you that the depth of someone else's subjective feelings is not necessarily the key consideration we are weighing here.

    I'm not sure what we do consider. I'm not even sure if just consciousness counts. We are used to thinking of distinct conscious persons and unconscious inanimate stuff, but what if things are almost the opposite, with everything being highly conscious and people hoarding in a bound, limited consciousness? Would we really want to save the rock instead of the man, just because the rock is more conscious? Indeed, imagine real unconscious but rational agents seriously. Suppose we come up with highly intelligent computers, with self-preservation programming and all - but not, for this thought experiment, consciousness. Somehow, not saving a highly intelligent computer that interacts with humans and everything feels about as wrong as not saving a conscious but relatively stupid, short-lived, and short-sighted human. Or is that just my intuition?

    At any rate, as a tentative panpsychist (or rather, pan-protoexperienciest), I'm more attracted to seeing a "person" as a unity and coherence of feeling and rationality than as sheer consciousness.

    Yair

  4. Evening Yair,

    I agree it's very possible we're underestimating the consciousness of non-human animals, but I don't take this as an objection so much as a good reason — should we discover it — to change our moral behavior toward them. And we won't be in a position of having to commit all our lives to other species because it would still be true that things matter to us too. Moral consideration doesn't require selflessness.

    .."I'm not sure what we do consider. I'm not even sure if just consciousness counts. We are used to think of distinct conscious persons and unconscious inanimate stuff, but what if things are almost opposite, with everything being highly-conscious and people hoarding in a bound, limited consciousness? Would we really want to save the rock instead of the man, just because the rock is more conscious? Indeed, imagine real unconscious but rational agents seriously."

    It sounds like you're considering pure consciousness on the one hand and desiring (if it can be called that) without consciousness on the other. That's why I combined the two. If a rock is conscious but has no desires, then there is a first-person 'it' but nothing matters to it (it can't suffer or be fulfilled, etc). If a computer has desires but no consciousness, then things matter to its behavior but there is no first-person 'it' to whom these things matter. We might not want to see the computer destroyed, but we're the ones pained by the idea, not it.

    Is this 'pan-protoexperienciest' notion anything like David Chalmers' property dualism?

    Yair, I think we are all working on the basis of some sort of Argument from Analogy. It could very well be that whenever somebody is laughing, they are actually in severe pain and we would never know. Indeed, whenever I pet my cat and it begins to wag its tail, what I take for excitement may indeed turn out to be something completely different.

    I'm also not particularly convinced by this idea of cows feeling much more than humans, mostly because nothing about their interactions suggests equal or greater "consciousness." On the other hand, suppose that a Spanish Conquistador had taken the time to analyze how the natives reacted and interacted amongst themselves. It seems abundantly clear that they too were human beings with the EXACT SAME sort of conscious experience as any other human being; even if we had no common language to verify this, their very behavior suggests the breadth of conscious experience. Also think about crows. I think it's justified to think they have a much higher order of consciousness than cows because they reason more and even react to the death of a fellow crow, compared to a cow, whose reactions are very different.

    All in all, I think that based on behavior we can more or less estimate conscious experience. Your thought experiment about rocks having more consciousness also seems to me to be written in a way such that our intuition makes it absurd, but, upon consideration, I'm not sure that is the case.

    The reason we don't react to a rock being in some sort of danger is because it in no way responds to stimuli or does anything that comes even close to suggesting consciousness. But IF a rock were to have higher consciousness, I think it would be completely different and, in a way, we would be capable of connecting with and having some sort of exchange with these rocks--maybe even be aware of their higher sophistication. Given all these possible changes in our thinking of rocks (that is, not as the inanimate lumps of atoms they are), my intuition actually leans toward preserving those beings with higher consciousness and potential for experiencing life.

    This brings us to the last point you raise, of a rational agent (here I take it you mean something like epistemologically rational) lacking consciousness. To alter your analogy a bit, I considered a super-calculator which could solve any mathematical equation or problem with its huge database. The only reason I'd have for preserving this marvelous piece of machinery is for the benefits it would bring to conscious beings--indeed the improvements it could bring to conscious beings (intelligence, it seems, is a by-product of sophisticated consciousness, so I'll treat it as such) is what makes me want to preserve it. Nothing about it being particularly advanced really makes me want to take it under special consideration over, say, a mentally retarded (not PC, but...) person fully capable of enduring pain. On the other hand, what gives me much more incentive to save your highly-"intelligent" computer is this key phrase: "that interacts with humans." This seems to me to be like the sort of thing we derive analogy from to say something is conscious. Even self-preservation now seems to suggest that this computer has some sort of worry about its own welfare. In my opinion, for all intents and purposes, a computer this sophisticated would be pretty darn close to being conscious, if not conscious already. I don't really know what else to ascribe to consciousness other than what we observe from analogy; and, if you're not satisfied with calling that conscious (and I confess my intuition is not really convinced on this), would it suffice to say that your intuition wants to preserve the computer because it shares many characteristics of conscious entities? And, really, how do we know something is truly "rational" in execution other than by the end results?

    All in all, I feel that we have nothing else to go by other than analogy and I can't imagine any context when that isn't the case. The instant I could examine somebody else's "internal" consciousness, it would cease to be somebody else's and become part of my own. This lack of first-hand access to other conscious experiences seems to suggest that analogy is more than enough to guide us--everything else is incoherent anyway.

    -Hope that made sense and didn't ramble on

  7. Esteban R:
    All in all, I feel that we have nothing else to go by other than analogy and I can't imagine any context when that isn't the case.

    While I agree that analogy is our main tool, I am taking another clue from evolution. In short - combining the idea that we're conscious with the idea that we got there by evolution seems to me to lead to the idea that just about any information-processing system is conscious, and since every physical process is an information-processing process it follows that first-person consciousness is everywhere. This does not, of course, mean that rocks are coherent persons with memories, reasoning capacities, or even any semblance of psychological unity; rather, it just means that every physical system "feels like" something to its constituents. Only a few such systems (such as a human brain) develop mental coherence and rationality.

    This is the very tentative view that I called pan-proto-experientism.

    The problem is that this is all very "philosophical". I don't have any way to test this theory of mine, and I don't even have a good idea on how to delineate this mental coherence.

    I'm also not particularly convinced by this idea of cows feeling much more than humans, mostly because nothing about their interactions suggests equal or greater "consciousness."

    Going from the neuroscience-evolutionary perspective, however, it appears clear to me that our feelings are related to our primitive brains rather than to our higher cognitive abilities. I therefore see no a priori reason to suspect cows feel less than we do. Indeed, it appears to me that all mammals have our "self" modules, but the cow's isn't nearly as occupied as ours is with abstract notions and other aspects of our higher rationality - so if anything, I'd suspect the cow feels its emotions more fully, not being as distracted as we are by other things on our minds.

    Of course, the content of the cow's thoughts is also qualitatively different. It appears a cow isn't aware that it's about to die in a butchery, for example, nor does it feel grief for the loss of a fellow cow. And I certainly do consider that morally significant.

    Where we seem to part ways in this is that you attribute deep feelings to sophistication. I don't. Something can feel extremely powerfully, but not be sophisticated at all. A brain-impaired person might be flooded with a feeling of constant powerful euphoria, rendering him completely oblivious to everything else and incapable of thought - does this mean that we should seek to preserve this person above all others, since he is feeling so strongly? I think the answer is clearly "no". There is more to morality than that.

    ...

  8. ...

    I considered a super-calculator which could solve any mathematical equation or problem with its huge database. The only reason I'd have for preserving this marvelous piece of machinery is for the benefits it would bring to conscious beings ...

    Again, maybe my intuitions on this are unorthodox. But, for the sake of argument, consider a sophisticated system that does "share many characteristics of conscious entities", but is not conscious. It not only calculates stuff - it constructs space colonies, populating them with new copies of itself and other forms of artificial intelligence; it constructs elaborate scientific instruments to explore nature, gathering data and learning even though it has no "self-preservation" utility, as if seeking knowledge for its own sake; it constructs works of art and discusses the literary merits of its fellows' writings; and so on. But it isn't conscious. Wouldn't the spread of knowledge, of beauty, of this artificial life, be of some value, even if it is not conscious?

    Yair

  9. Garren Hochstetler:

    And we won't be in a position of having to commit all our lives to other species because it would still be true that things matter to us too. Moral consideration doesn't require selflessness.

    Perhaps so. But this is just my point - that the intensity of the first-person feeling is not the sole issue to be considered here.

    I'd note another problem here - moral equivalence. Suppose I artificially create a new human species, H+. H+ is just like humans - intellectually, artistically, the works - but with a key difference: it feels twice as strongly as we do. Is saving the life of an H+ human twice as important as saving the life of a normal human? Should we examine how intensely people feel when we weigh moral decisions, or should every person be morally equal to the other? I'm not pretending to know a very good answer to this question; I'm just saying my intuition that every person is morally equal to the other doesn't jibe well with the intuition that different people feel things more or less intensely.

    It sounds like you're considering pure consciousness on the one hand and desiring (if it can be called that) without consciousness on the other. That's why I combined the two. If a rock is conscious but has no desires, then there is a first-person 'it' but nothing matters to it (it can't suffer or be fulfilled, etc).

    Not quite. Its consciousness can (in this highly hypothetical scenario) change, so it can suffer - perhaps, say, if it's scratched or broken. It would lack the intellect to understand that its consciousness has changed or why it has - but we would understand. And causing suffering should, presumably, be avoided even if the sufferer can't understand its cause.

    If a computer has desires but no consciousness, then things matter to its behavior but there is no first-person 'it' to whom these things matter. We might not want to see the computer destroyed, but we're the ones pained by the idea, not it.

    Yes. What I'm not sure about is whether we, indeed, won't want to see this computer destroyed. I'm inclined to say so.

    Is this 'pan-protoexperienciest' notion anything like David Chalmers' property dualism?

    Very much so. As I explained above my motivation is rather more evolutionary, but I end up at essentially this position. The only problem is that I'm not quite sure how to interpret all properties in terms of their conscious content in a systematic way that's simple yet will produce the "correct" subjective feelings that we know, from our personal experience, that our brain has.

    Yair

  10. I like this.

    Here's an objection to add to your list (not actually sure whether it belongs in the "too limited" or "not limited enough" category).

    I'm assuming you agree that it is prima facie wrong to kill a "person" (I say "prima facie" because I personally accept that killing can be justified under certain conditions and I presume others do too). I'm also assuming that it is the fact that a being possesses the property of personhood that makes the killing wrong (maybe you'll disagree with that).

    Now here's a scenario:

    Suppose in the not too distant future doctors invent a "memory-wiping drug" that can be used to completely wipe out all memories from the human brain (I'm including explicit and implicit memories in this). In other words, it can wipe the mental slate clean. The drug does not affect the capacity to have experiences in the future.

    Question: would it be wrong to use the memory-wiping drug on a healthy adult?

    (If you don't like considering that simpliciter, i.e. with no conditions, you could add a condition to the scenario stipulating that it was the only way to cure some illness, e.g. PTSD. I don't like doing this since it involves conflicting considerations and I want to consider the value of personhood on its own.)

    On your definition of personhood it seems like there would be no objection to using the drug. The definition you offered focuses solely on having had experiences in the past, and the capacity for having experiences in the future. It seems to say nothing about continuity between past and future.

    I'm not sure if this is a problem. It all depends on your intuitions about the memory wiping case but most people seem to think it is wrong to use the drug in the no-conditions case.

  11. John D.,

    Have you been watching Dollhouse? If not, you should! Great two-season drama that raises similar philosophy of mind stuff for popular audiences.

    At any rate, you raise a good point. I want to address it by backing off a bit on placing the 'personhood' label on 'whichever more substantial concepts qualify an entity for direct moral concern,' but maintain that sentience is the concept which fits that bill. So…

    sentience — qualifies entities for direct moral concern
    personhood — might not track with sentience after all; has to do with the growth and maintenance of a personality which the imaginary drug would erase

    Murder might then be something like: destruction of a sentient person. This doesn't mean sentient non-persons can be hurt or destroyed without any moral consideration, but it would explain why we think there's more to destroy than sentience. Sentience can be a sine qua non for moral concern without being the only factor.

  12. Yair:

    Sorry, I've been really busy and probably will be for a few more weeks. I'll see if I can get back to you later. Skimmed your replies and they seem pretty interesting, so thanks for the response (and forgive how unfamiliar I am with all of this neuroscience stuff)!

  13. Garren,

    I haven't been watching Dollhouse; it sounds interesting though. I have, however, been reading stuff on abortion and personhood arguments recently and I think that influenced my comment.

    I think you might be interested in the approach offered by Michael Tooley in his book Abortion and Infanticide. He argues that our definition of person should appeal to descriptive properties, but that the properties that are selected would be selected for their evaluative functions (might sound a little weird, but he makes a reasonable case for this in the book). Specifically, he thinks we should ask ourselves the question:

    What relatively permanent, non-potential properties, possibly in conjunction with other, less permanent features of an entity, make it intrinsically wrong to destroy an entity and do so independently of its intrinsic value? (Tooley, 1985, p. 87)

    There's some technical terminology here that Tooley spends a good deal of time unpacking, but I think you can see where it is going: Whatever properties we specify in answer to the above question would supply us with the definition of a person. That's why I mentioned the temporal relation property, it seems like that might be essential for making it wrong to destroy an entity.

    I think the problem, which you seem to acknowledge, with your original post is that "person" denotes an entity with greater moral relevance and entitlements than just "qualifies for direct moral concern". A person bears rights, must fulfil obligations, etc.; this seems to go beyond mere qualification for moral concern (which I presume means something like "must be taken into consideration in moral calculations or decision-making").

    I'm also not sure that sentience is the only property that qualifies an entity for moral concern. Something like the last man argument could be employed to test that intuition.
