Experimental Ethics: An Even Greater Challenge to the Doctrine of Double Effect

In his article Neuroethics: A New Way of Doing Ethics, Neil Levy (2011) argues that “experimental results from the sciences of the mind suggest that appeal to [the Doctrine of Double Effect] might be question-begging.” As Levy frames it, the Doctrine is a moral principle meant to ground the intuitive moral difference between effects that are brought about intentionally and those that are merely foreseen. More specifically, the Doctrine is supposed to ground the intuition that, when certain conditions are met, it is morally permissible to bring about a bad outcome that is merely foreseen, but, under those same conditions, it would not be morally permissible to bring about that bad outcome intentionally. Put another way, the Doctrine claims that it takes more to justify causing harm intentionally than it takes to justify causing harm as a merely foreseen side effect (Sinnott-Armstrong, Mallon, McCoy, & Hull, 2008).








The intellectual roots of the Doctrine of Double Effect trace back to St. Thomas Aquinas and St. Augustine. The Doctrine has since played a central role in moral theorizing both within the Catholic Church and within secular moral philosophy.





Intuitive illustrations of the Doctrine include the following (adapted from McIntyre, 2011):



1. In a military campaign, it is typically judged impermissible to target civilians, but it is often judged permissible to target a legitimate military target (e.g., a WMD factory) even if the attack on the military target is foreseen to lead to civilian casualties.



2. Someone who thinks abortion is wrong, even in circumstances where the abortion would save the mother’s life, might nevertheless consistently believe that it is permissible to perform a hysterectomy on a pregnant woman with cancer, even if it is foreseen that the hysterectomy will lead to the death of the fetus.



However, as Levy points out, there is a great deal of evidence suggesting that if one judges an effect to be morally bad, one is more likely to judge that effect to have been brought about intentionally (and, thus, not merely foreseen). Much of this evidence comes from a series of studies conducted by Joshua Knobe and others (for an overview, see Knobe, 2010). In the earliest of these studies, Knobe (2003) randomly assigned participants to read one of two stories about a chairman of a company who instituted a new profit-generating program. The only difference between the two stories was that the foreseen side effect of the program would either harm the environment (a morally negative outcome) or help the environment (a morally positive outcome).



The harm version read as follows:




The vice president of a company went to the chairman of the board and said, “We are thinking of starting a new program. It will help us increase profits, but it will also harm the environment.”



The chairman of the board answered, “I don’t care at all about harming the environment. I just want to make as much profit as I can. Let’s start the new program.”



They started the new program. Sure enough, the environment was harmed.



The “help” version was identical to the “harm” version except that “harm” was replaced with “help”.





After reading the stories, participants were asked whether the chairman intentionally harmed [helped] the environment. What Knobe found was that the vast majority of participants were willing to say that the chairman intentionally harmed the environment, but very few were willing to say that the chairman intentionally helped the environment. These findings have led Knobe and others to claim that judgments of intentionality are sensitive to moral considerations. This pattern of asymmetric attribution of intentionality (and other state-of-mind attributions) based (ostensibly) on manipulations of moral considerations has become known as the side-effect effect, or the Knobe effect.








When is it OK to harm the environment in the name of economic growth? Well, according to one reading of the Doctrine, under certain conditions, it may be permissible to harm the environment if the harm was merely foreseen (and, thus, non-intentional). But when is harming the environment construed as merely foreseen? According to the Knobe effect, probably not often.







If Levy’s construal of the Doctrine is correct, and if people’s judgments of intentionality are sensitive to moral considerations, then the Doctrine of Double Effect is circular and would be an unreliable guide for grounding judgments about the permissibility of actions.



For example, if one already thought bringing about civilian casualties was impermissible, then one would be more likely to judge that the foreseen bringing about of civilian deaths was intentional, even in cases where the civilian deaths were a side effect of attacking a legitimate military target. Since the civilian deaths would be judged to be intentional, according to Levy’s construal of the Doctrine of Double Effect it would be impermissible to attack the legitimate military target. Impermissibility judgments feed into intentionality judgments, which feed back into impermissibility judgments. Thus, the Doctrine is circular and question-begging!



However, the Doctrine of Double Effect is not always primarily construed as depending on the distinction between intentionally bringing about an outcome and merely foreseeing that an outcome will be brought about. Rather, under many traditional formulations of the Doctrine of Double Effect, the morally relevant distinction depends primarily on whether an act or outcome is a side effect or a non-side effect (i.e., a means or a goal).[1]



To build an even stronger case against the Doctrine of Double Effect, one would also need evidence that people more readily construe bad outcomes as non-side effects. To the best of my knowledge, there is currently no published study showing that moral considerations can affect people’s classification of an outcome as a side effect or a non-side effect. However, my lab is currently exploring just this possibility, and the early results are in: it looks like moral considerations do have an impact on whether people classify an outcome as a side effect or a non-side effect.



For example, we find that people overwhelmingly (about 82%) classify HELPING THE ENVIRONMENT in Knobe’s helping-the-environment version of the chairman case (see above) as a SIDE EFFECT. However, only 42% of people classify HARMING THE ENVIRONMENT in Knobe’s harming-the-environment version of the chairman case as a side effect.
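
For readers curious how such a contrast might be evaluated statistically, here is a minimal sketch of a chi-square test of independence on the two classification proportions. This is purely illustrative, not our actual analysis: the counts are hypothetical, back-calculated from the reported 82% and 42% figures under the assumption of roughly 100 participants per condition.

```python
# Illustrative sketch only -- the counts are hypothetical (assuming ~100
# participants per condition); this is not the lab's actual analysis.
from scipy.stats import chi2_contingency

# Rows: help condition, harm condition.
# Columns: classified as "side effect", classified as "non-side effect".
table = [
    [82, 18],  # help version: ~82% classified the outcome as a side effect
    [42, 58],  # harm version: ~42% classified the outcome as a side effect
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4g}")
```

On these assumed counts, the test simply makes the informal claim precise: the help/harm asymmetry in side-effect classification is much larger than one would expect from chance variation alone.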



To make sure this finding was not simply a consequence of the exact details of Knobe’s chairman cases, we also tested seven other cases modeled on them.



For example, in one of the cases, a scientist (instead of a chairman) is deciding whether to implement a new methodology (instead of a new program) that would help her get the results she wanted (instead of generating more profits). In one version of the story, the new methodology would also violate ethical guidelines (instead of harm the environment), and in the other version of the story, the new methodology would also conform to ethical guidelines (instead of help the environment).



In another example, a ship captain is deciding whether to take a new route that would help her arrive at her destination more quickly. In one version of the story, the new route was dangerous and would put the crew in extreme danger; in the other version, the new route was very safe and would ensure the crew’s safety. The basic structure of Knobe’s original chairman case was preserved, but the actor and outcomes were varied.



What we found was that in six of the eight cases (including Knobe’s chairman case), people were more willing to say that the bad outcome was not a side effect than to say that the good outcome was not a side effect. To put this differently, when the outcome was good (e.g., helping the environment, conforming to ethical guidelines, ensuring the crew’s safety), people overwhelmingly judged the outcome to be a side effect. But when the outcome was bad (e.g., harming the environment, violating ethical guidelines, putting the crew in extreme danger), people tended to be split on whether the outcome was a side effect or a non-side effect.



Thus, our initial evidence suggests that people’s classification of outcomes as side effects or non-side effects may, in part, depend on moral considerations.[2] If this is right, then even when the Doctrine is construed as being primarily concerned with the distinction between side effects and non-side effects, the evidence suggests that the Doctrine of Double Effect is circular and would be an unreliable guide for grounding judgments about the permissibility of actions.







Want to cite this post?

Shepard, J. (2012). Experimental Ethics: An Even Greater Challenge to the Doctrine of Double Effect. The Neuroethics Blog. Retrieved from http://www.theneuroethicsblog.com/2012/08/experimental-ethics-even-greater.html



_____________________________________________________________________________

Notes



[1] While it is true that talk of intentions is present in almost all discussions of the Doctrine of Double Effect (even those that construe the primary distinction as side effects versus non-side effects), I take it that discussion of intentions plays a role in the Doctrine insofar as intentions are a guide to distinguishing which outcomes should count as side effects and which as non-side effects.



[2] In my view, technically, people’s asymmetric classification of outcomes as side effects/non-side effects is not dependent on moral considerations, but rather on non-moral considerations that typically (though not always) correlate with the evaluative valence of an outcome. Getting into the details of my view is beyond the scope of this post and is unimportant for the particular point at hand. (On my view, it would turn out that the Doctrine would be circular for a large portion of cases due to the nature of the correlation between the non-moral considerations and the evaluative valence of outcomes.)



Works cited



Knobe, J. (2003). Intentional action and side effects in ordinary language. Analysis, 63(3), 190–194.



Knobe, J. (2010). Person as scientist, person as moralist. Behavioral and Brain Sciences, 33(4), 315–329; discussion 329–365.



Levy, N. (2011). Neuroethics: A new way of doing ethics. AJOB Neuroscience, 2(2), 3–9.



McIntyre, A. (2011). Doctrine of double effect. Stanford Encyclopedia of Philosophy. Retrieved from http://plato.stanford.edu/entries/double-effect/



Sinnott-Armstrong, W., Mallon, R., McCoy, T., & Hull, J. G. (2008). Intention, temporal order, and moral judgments. Mind & Language, 23(1), 90–106.

Comments

  1. Very cool to see the up-to-the-minute updates on what the psychological research says on the subject. Thanks Jason!
    I'm curious if perhaps one of the issues with the DoDE is language, specifically multiple aspects of the word "intention" - since western society has been so influenced by Aquinas, we are now only comfortable assigning guilt if an action is called 'intended.' For example, I remember being very surprised as a child to learn that "you did that on purpose" didn't mean the same thing as "you're in trouble," as I had only heard the phrase in the context of determining my guilt. Alternatively (though this is way out of my field), I'm guessing the neuroscience/psych definitions of "intention" are much closer to "what action you wanted to cause" rather than "what should we hold you accountable for." (Though that raises the question of how much the DoDE was a big change when first introduced - was this a humanist plea against a strict rule-based ethics, or perhaps an attempt at a proto-utilitarianism? I'll need to look into the history of these things...)
    That being said, the DoDE really doesn't make intuitive sense to me - most of the cases I've heard illustrating DoDE seem to be justified on other grounds, such as the fact that one can be justified in killing humans only if it would be worse not to (say, one can be justified in killing serial killers). Where DoDE seems like it actually is necessary is in the case of saying that a reluctant hero is justified in taking out a known terrorist, but Dexter isn't justified in cleaning up Miami (as he primarily intends/cares about killing people, not saving the most lives). Which is weird if DoDE is supposed to be codifying an ethical intuition, because I think most people (intuitively) root for Dexter, and in general for sublimating negative desires into net positives.
    Looking forward to a discussion of the paper next week!

  2. Hi Riley,

    You make a number of interesting points! I'll try to briefly share some of my thoughts on your comments. (And if you want to push me further on any of my thoughts, feel free to "push away.")

    1. We often assign guilt/responsibility for some outcome x in the absence of the particular intention to x. An obvious case would be a drunk driver who accidentally kills someone. Actually, we assign guilt/responsibility for a whole range of outcomes in which the actor was being reckless or negligent (though you are right to notice that our assignments of guilt/blame/responsibility are usually greater for those actions that are intentional).

    2. When scientists claim to be talking about such things as 'intentions' they (in most cases) at least posture like they are talking about the same thing everyday people are talking about when everyday people talk about 'intentions'. With this in mind, if the scientists are talking about something else when they use the term 'intention' they are either being sloppy or deceitful (or perhaps both). Of course, there are a lot of subtleties to the everyday use of 'intention' including how 'intention' interacts with moral judgments. So, I wouldn't expect any scientists to get their usage perfect, but their usage shouldn't be obviously or radically different. (Actually, some of my other research explores some of the more controversial subtleties concerning how our everyday use of the term 'intention' interacts with our moral judgments.)

    3. DoDE is a deontological doctrine that gives "rules" for when it might be OK to bring about a bad outcome. (Aquinas was famous for using it to justify self-defense, Augustine for just war.)

    4. There is a proportionality requirement built into DoDE, but this proportionality requirement does not make the Doctrine utilitarian.

    5. I am not sure if I understand the Dexter example.

    6. Not all of our seemingly relevant ethical intuitions need to conform with DoDE for DoDE to be right (or superior to utilitarianism). There is a "reflective equilibrium" that needs to be struck between our various intuitions across cases and our moral theorizing.

  3. On your note about the scientific definition of "intention" - Dr. Wolpe brought up a very good point during the Journal club that discussed Levy's paper a few weeks ago: that participants who were asked if the CEO "intended" to help or hurt the environment were somewhat misleadingly constrained by the question. Specifically (and you'll have to forgive me, I forget if this was a speculation of Dr. Wolpe's or a follow-up study he was referencing; either way I think it's a valid point), when pressed, participants would say that while the CEO didn't really intend for environmental damage to happen, he should still be held responsible. I think that this gets at the fact that ideas of intention and responsibility are closely intertwined in our culture (and I'm still wondering if Aquinas was partly responsible for that), to the point that sometimes we will substitute one for the other.

    How would you define 'intention' as it is being used here?

    My Dexter example (this is from the TV show of the same name) was to speculate that if you could perfectly predict the outcomes of an action (which is an underlying assumption I think for a lot of this debate, I'm thinking that 'intention' becomes a much different beast when failed plans start coming up), then intention boils down to "what part of this are you looking forward to?" In this example, the action is to kill serial killers. There are two outcomes to this action- a 'good' one, that innocent folk are safe, and a 'bad' one, that a minority of the population must be killed against their will. Again, the assumption here is that you can't have one result without the other. If this is the case, then the DoDE seems to say this action can only be performed in an ethical manner if it is done by someone who primarily cares about saving innocent lives, whereas Dexter Morgan (who is in it as way to justify his own need to kill) will always be in the wrong.

