Pain in a Vat

Previously on this blog I've discussed the case of cultures of living rat neurons, removed from their natural environment (the inside of a rat's skull) and grown on top of an electrical interface that allows the neurons to communicate with robotic systems - effectively, we remove part of the rat's brain, and then give this reprocessed bit of brain a new, robotic body.  One of the stranger issues that pops up with this system is that it is extraordinarily easy to 'switch' between bodies. [1] For instance, I could easily write a computer program that creates a brief, pleasant sound reminiscent of raindrops every time the culture increases its electrical activity.  Alternatively, the same burst of activity could be used to trigger an emotionless, electronic voice to say "Please help me. I am in pain."
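To make concrete just how thin the difference between those two "bodies" is, here is a minimal sketch of the idea in Python.  None of it is the actual lab software; names like read_spike_count and BURST_THRESHOLD are invented for illustration, and a real setup would read from multi-electrode array hardware rather than a random number generator.

    # Hypothetical sketch: the same detected burst of culture activity drives
    # whichever "body" we choose; only the output mapping differs.
    import random

    BURST_THRESHOLD = 50  # spikes per window; an arbitrary illustrative value

    def read_spike_count():
        """Stand-in for reading a spike count from the recording hardware."""
        return random.randint(0, 100)

    def respond_to_burst(output_mode):
        count = read_spike_count()
        if count < BURST_THRESHOLD:
            return None  # no burst detected, so no output
        if output_mode == "raindrops":
            return "*soft raindrop sound*"  # pleasant, neutral interpretation
        return "Please help me. I am in pain."  # alarming interpretation of the same event

    # The identical neural event can be given either voice.
    for mode in ("raindrops", "distress_voice"):
        print(mode, "->", respond_to_burst(mode))

The point is not the code, but that the emotional register of the system's "behavior" is decided entirely by a line or two in the output mapping, not by anything in the culture itself.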






While nociception (the low-level transmission of pain information) and unconscious reactions to pain both occur in the spine and peripheral nervous system, the brain seems to hold the neurons that are responsible for the conscious sensation of pain.  This leads to the interesting suggestion that factory-farmed chickens should be grown without their brains to prevent all that unnecessary suffering from occurring.  (And if you remove the feet, the chickens are stackable!)

Is it possible for a neural culture to feel pain?  This is admittedly an absurd suggestion.  Starting from common sense and our normal range of experience, if we see a small, motionless, barely visible sliver of brain tissue sitting in a Petri dish, we have no reason to believe that we should feel sorry for it.  Even if we see that this sliver of brain tissue is actually quite active, generating a variety of patterns of electrical activity, such activity might seem so alien to us as to not be worth our attention, much less our sympathy.  But as absurd as pain in a Petri dish might sound, we do in some sense have a duty to explore the idea.  Pain and suffering are key moral issues in the treatment of biological systems.  Animal liberation ethicist Richard Ryder goes as far as to say that being able to experience pain is the only requirement for having rights [2], as pain is the only true evil.  If we are to consistently value the absence of pain, no matter who or what experiences it, do we need more comprehensive regulations that cover tissues as well as "full" animals? [3]



To begin with, let's be careful about what we mean when we use the word "pain." [4]  Neuroscientists have long broken pain itself down into a "sensory" component (the location of the pain) and an "affective" component (the emotional, "unpleasant" side of pain).  The "affective" component is commonly held to be the "morally relevant" one. [6]  It is interesting to note that these two aspects of pain seem to be somewhat distinct at a neural level - for instance, morphine and endorphins both selectively inhibit the activity of structures that seem to underlie the affective component [7], and humans with damage to different brain regions can report either a loss of the sensory component (lesions of the somatosensory cortex [8]) or a loss of the affective component (lesions of the anterior cingulate cortex [9]).  So in principle, it might be possible to find the "affective pain circuit" in the brain, perhaps the anterior cingulate cortex (ACC), and remove it from an animal.  Now we have an animal that can't suffer - but what about the tissue we removed (assuming it wasn't destroyed in the process)?  Is it still suffering? [10] Or does it need to be connected to the rest of the brain for that "suffering" to mean anything?






Electrodes placed in the anterior cingulate cortex (ACC) of a patient suffering from chronic pain.  The electrodes were used to selectively lesion the ACC.  If the ACC had instead somehow been carefully removed, would we have had a moral obligation to prevent it from feeling pain?  From [9].

So if some sort of "affective pain circuit" were isolated [11], how would we determine whether it was in pain or not?  Neural culture poses an interesting problem here.  The methods that have been used to suggest that other living systems feel pain, such as humans in a vegetative state or non-human animals, don't work for neural culture.  The prime method for determining whether a living creature is suffering from pain is to examine its behavior, looking for things like avoidance, emotional displays, or learning to associate neutral stimuli with pain. [12]  However, I've already pointed out how problematic the notion of "behavior" is for neural culture.  A second strategy might be to examine neural activity directly and correlate it with the activity of "full" animals - however, there is disagreement over whether ACC activity means the same thing in different animals [15], so using that method to evaluate the significance of an ACC that was completely removed is even more problematic.



Without behavior or a known set of subjective correlates to use to determine whether a neural culture is suffering, we are left with mathematical and philosophical tools.  One such tool is Giulio Tononi's Integrated Information Theory (IIT) of consciousness. [16] For our purposes, the important parts of this theory are that the pattern of connections in a network (not just its size, or the number of connections) plays a big part in determining how conscious it is (quantified by its "Phi" value), and that a subjective state is defined by its relationship to all other possible subjective states.  So, pain would be defined by not being pleasure, and by all of the thoughts and actions and desires that pain can cause, or be caused by. [17] From this perspective, even a cultured slice of ACC wouldn't necessarily be experiencing affective pain when removed from an animal, as its electrical activity wouldn't behave in the same way.  Additionally, as the network would be much smaller than the brain it was previously part of, its "Phi" value would be lower (and it could be argued that any affect it did have was "less rich" or "less meaningful").
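Computing Phi proper requires probing the system's causal structure and searching over ways of partitioning the network, which is far beyond what can be done with real recordings.  The sketch below is only a rough illustration of the flavor of an "integration" measure: it computes total correlation (the sum of per-unit entropies minus the joint entropy) for simulated binary activity, a much cruder quantity than Tononi's Phi, and all of its names and numbers are invented for the example.

    # Toy "integration" measure: total correlation over binary unit states.
    # This is NOT Tononi's Phi; it only illustrates that statistical structure,
    # not just the number of units, is what an integration-style measure picks up.
    import numpy as np

    def entropy(states):
        """Shannon entropy (bits) of the rows of `states`, treated as discrete patterns."""
        _, counts = np.unique(states, axis=0, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))

    def total_correlation(states):
        """Sum of per-unit entropies minus joint entropy, for a (samples x units) array."""
        marginal = sum(entropy(states[:, [i]]) for i in range(states.shape[1]))
        return marginal - entropy(states)

    rng = np.random.default_rng(0)
    n_samples, n_units = 5000, 4

    # Case 1: four units firing independently - no shared structure.
    independent = rng.integers(0, 2, size=(n_samples, n_units))

    # Case 2: four units strongly coupled to a common driver, with a little noise.
    driver = rng.integers(0, 2, size=(n_samples, 1))
    noise = rng.random((n_samples, n_units)) < 0.05
    coupled = np.logical_xor(np.repeat(driver, n_units, axis=1), noise).astype(int)

    print("independent units:", round(total_correlation(independent), 3), "bits")
    print("coupled units:   ", round(total_correlation(coupled), 3), "bits")

The independent units come out near zero, while the coupled units share roughly two bits of structure despite having the same number of units.  Phi itself goes much further - it asks how much the system's causal structure is irreducible to that of its parts, across the partition where it is weakest - which is part of why, as noted below, estimating it for real cultures remains out of reach.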



However, note that even neural cultures are currently far too complex and difficult to measure for us to accurately estimate their Phi values, much less to understand the structure of their "qualia space."  Despite this, the promise of theories like IIT (or future developments within that theory) points to a role that neural culture might play in the development of tools to scientifically evaluate consciousness.  With neural culture, we are forced to use theories that tie network structure directly to conscious experience, rather than relying on "surface level" features like behavior.  And despite current limitations, it is significantly easier to access the sorts of data required to use structural theories in culture than it is in "full" organisms.  This accessibility means that as both neurotechnologies (such as electrophysiology tools) and philosophical/mathematical tools (such as IIT) develop, neural culture will likely be one of the first places where theory will be able to meet up with experiment to provide a rich understanding of what generates subjective experience.







Want to cite this post?

Zeller-Townson, RT. (2013). Pain in a Vat. The Neuroethics Blog. Retrieved on
from http://www.theneuroethicsblog.com/2013/01/pain-in-vat.html








References:

[1] Note that over time the culture could potentially learn the differences between these two systems, but in the moment immediately after the switch it would be very difficult to tell the difference.

[2] Note, however, that Ryder's views are controversial even within the animal liberation community.

[3] God, I hope not.  Dealing with IACUC is bad enough as it is.

[4] It is also important to note that pain and suffering are usually not equated.  David DeGrazia and Andrew Rowan defined [5] both pain and suffering as 'inherently unpleasant sensations' - that is, they are both feelings (rather than physical events), and are in part defined by their unpleasantness.  DeGrazia and Rowan differentiate pain from suffering by specifying that pain is sensed to be local to a specific body part, whereas suffering is not.  By specifying pain as an experience, DeGrazia and Rowan distance themselves from some of the neuroscience literature, which at times speaks interchangeably of pain and nociception, the physical process by which painful stimuli are relayed to the brain.

[5] DeGrazia, David, and Andrew Rowan. "Pain, suffering, and anxiety in animals and humans." Theoretical Medicine and Bioethics 12.3 (1991): 193-211.

[6] Shriver, Adam. "Knocking out pain in livestock: Can technology succeed where morality has stalled?" Neuroethics 2.3 (2009): 115-124.

[7] Jones, Anthony K., Karl Friston, and Richard S. Frackowiak. "Localization of responses to pain in human cerebral cortex." (1992).

[8] Ploner, M., H-J. Freund, and A. Schnitzler. "Pain affect without pain sensation in a patient with a postcentral lesion." Pain 81.1 (1999): 211-214.

[9] Foltz, E. L., and White, L. E. "Pain 'relief' by frontal cingulumotomy." Journal of Neurosurgery 19 (1962): 89-100.

[10] As I'm implying that the anterior cingulate cortex would be the region removed, I should be clear that I don't mean to say that this is all the ACC does.  The ACC is a pretty complicated beast, and has been implicated as playing a role in decision making, the evaluation of errors, and tasks that require effort, as well as the processing of empathy and emotion.

[11] Currently, the closest experimental preparation to this would be to culture a thin slice of tissue taken from the ACC.  This is often done to investigate how the ACC differs from other regions of the cerebral cortex, including how these differences could lead to new drugs that decrease the affective component of pain.  While the ACC is the neural tissue that we might most suspect a priori of suffering, in theory it could be possible to grow (whether by accident or design) neural circuits from scratch (by first breaking down the connections between the neurons, a process called dissociation, and then allowing them to re-grow) that in some way replicate the "suffering" experience of the in vivo ACC.  The following discussion applies equally to ACC slices and dissociated cultures.

[12] While it is easy to imagine all of these behaviors being performed by an unfeeling robot that was attempting to trick us into feeling sorry for it, it is interesting to note how much that last item (learning) is suggestive of a subjective negative experience.  The ACC, as mentioned, is used for several things beyond just feeling bad - it also appears to be used for turning that bad feeling into a learning experience, where the ACC-equipped neural system learns to avoid whatever caused that painful experience in the first place. [13]  Thus, animals that can learn from pain (a category which was recently found to include crabs [14]) might be equipped with other ACC-associated properties, like suffering.  I'm curious how tricky it would be to argue that the subjective experience of suffering is the mental correlate of high-level avoidance learning - implying that if one learns to avoid abstract entities through association with nociception, one is suffering.  This view would further imply that temporary suffering is natural and even necessary for life, and that morally relevant suffering is effectively attempting to avoid something that cannot be avoided.

[13] Johansen, Joshua P., Howard L. Fields, and Barton H. Manning. "The affective component of pain in rodents: direct evidence for a contribution of the anterior cingulate cortex." Proceedings of the National Academy of Sciences 98.14 (2001): 8077-8082.

[14] Magee, Barry, and Robert W. Elwood. "Shock avoidance by discrimination learning in the shore crab (Carcinus maenas) is consistent with a key criterion for pain." The Journal of Experimental Biology 216.3 (2013): 353-358.

[15] Farah, Martha J. "Neuroethics and the problem of other minds: implications of neuroscience for the moral status of brain-damaged patients and nonhuman animals." Neuroethics 1.1 (2008): 9-18.

[16] Tononi, Giulio. "An information integration theory of consciousness." BMC Neuroscience 5.1 (2004): 42.

[17] IIT also gives us a framework to tackle the question of why neural tissue should be so privileged when it comes to consciousness - what about other biological networks, like the immune system?  What about non-biological networks, like simulated neural networks or even the internet?  IIT says that the extent of consciousness is determined by the variety of possible states the system can be in, as well as how much the sub-compartments of the network communicate with each other.  Thus, in principle any well-connected network could be conscious, but some neural systems seem to be optimized for high levels of consciousness.


Comments

  1. I think this one is fairly easy. I never had any trouble working on Dr. Potter's cultures. :)

    I agree with modern psychology's differentiation between sensory pain and affective pain. This is a fundamental, critical moral issue.

    You seem to label what makes affective pain possible "consciousness." Then you ask what privileges neural networks for consciousness over other complex biological networks like the immune system. I haven't read Tononi's ideas about consciousness, but it sounds like a useful framework, and I don't know if he's missing these pieces I'm about to examine.

    Consciousness ("mind") is, essentially, an I/O machine with storage, and processing on internal states. It gains moral status when it reaches "sufficient" complexity. That is, when it has enough storage and feedback between different inputs and actions to build context for them, and to evaluate positive and negative sensations relative to these contexts.

    I haven't looked at it in depth, but it's possible that a mind has to have external actuators to develop this level of complexity.

  2. You raise some excellent points, Rich! First, yes there is a bit of a jump in my argument. I kinda sneak in the implication that the moral part of 'affective' pain is the 'subjective' part, and thus tie the moral aspect of pain to consciousness. (Though the DeGrazia article I reference goes into more detail on why that might be necessary- DeGrazia goes as far as to define pain as subjective. Which is one of the reasons I think pain is so interesting in the first place!) I think it would be interesting to look into how the non-subjective components of 'affective' pain tie into morality, as well- perhaps even just the behavior associated with 'affective' pain, such as recuperation and distress signalling, could be examined as having moral relevance in and of themselves, as opposed to merely acting as the correlates of suffering. Is pain bad because it prevents me from doing what I want/should, or because it hurts?

    I think your idea of consciousness matches up quite well with Tononi's, though I haven't read any of Tononi's work that explicitly brings value into the equation. I'd be curious to see how necessary "value" is in having conscious experience, or if it is sort of tacked on to our minds for evolutionary (or even bootstrapping) reasons. Alternatively, maybe "value" just naturally falls out of certain types of "conscious" systems.

    External actuators- I agree that they would certainly change the (conscious) nature of any plastic system (including the neural cultures I talk about here), but as you say the question is how much (one of the reasons why mathematical theories of consciousness are so alluring). Additionally, it might be possible to engineer systems to an arbitrary level of "complexity" (in the intuitive, non-mathematical sense) without the use of external actuators- for instance, by programming in C rather than in vitro.

  3. What would you mean by non-subjective components of 'affective' pain?

    It's possible we could use intelligence and planning to substitute for some of the chemical and practical introspection of the evolutionary and developmental processes...C might be a good starting point, although I think I would head immediately over to some language with more natural object orientation and introspection.

    This gets a bit more into the philosophical side of pain, which is tempting to avoid because it is difficult to work with experimentally. The idea is that when you cut your finger, you perform all sorts of actions that everyone else can see (the objective part)- you immediately withdraw your finger, make emotional displays of your suffering, and favor the finger to prevent it from further injury. Is pain "bad" just because of those behaviors? Am I not allowed to poke someone with a needle just because it will inconvenience them for the rest of the day, as they are distracted (and perhaps distract others) from their daily tasks? Or is pain "bad" because of the way it feels (the subjective part), because it "hurts" and not because of the way you react to that "hurt?" This is an important distinction, as if the "badness" is purely behavioral (and not about feelings), then distracting any system from its goal would be "bad"- even if it was just a rock that reaaaaally wanted to roll downhill. Alternatively, perhaps that view on pain needs to be qualified with a Tononi-like measurement of complexity- it is "bad" to prevent a complex system from trying to achieve a complex goal (we might say this system is an "agent"). That might imply that it is "worse" to hurt intelligent, capable folks than it is to hurt normal folks- which doesn't mesh very well with universal human rights.


