Neuroeconomics and Reinforcement Learning: The Concept of Value in the Neuroscience of Morals


By Julia Haas




Julia Haas is an Assistant Professor in the Department of Philosophy at Rhodes College. Her research focuses on theories of valuation and choice.





Imagine a shopper named Barbara in the pasta aisle of her local market.  Just as she reaches for her favorite brand of pasta, she remembers that one of the company's senior executives made a homophobic statement. What should she do? She likes the brand's affordability and flavor but prefers to buy from companies that support LGBTQ communities. Barbara then notices that a typically more expensive brand of pasta is on sale and buys a package of that instead. Notably, she doesn't decide what brand of pasta she will buy in the future.






Barbara’s deliberation reflects a common form of human choice. It also raises a number of questions for moral psychological theories of normative cognition. How do human beings make choices involving normative dimensions? Why do normative principles affect individuals differently at different times? And where does the feeling that so often accompanies normative choices, namely that something is just right or just wrong, come from? In this post, I canvass two novel neuroethical approaches to these questions and highlight their competing notions of value. I argue that one of the most pressing questions theoretical neuroethicists will face in the coming decade concerns how to reconcile the reinforcement learning-based and neuroeconomics-based conceptions of value.





One popular approach to the problem of normative cognition has come from a growing interest in morally oriented computational neuroscience. In particular, philosophers and cognitive neuroscientists have turned to an area of research known as reinforcement learning (RL), which studies how agents learn through interactions with their environments, to understand how moral agents interact in social situations and learn to respond to them accordingly. RL research suggests that human choice depends on several distinct decision systems, each of which relies on a different computational algorithm to calculate 'value.' Roughly, value is calculated in terms of how much reward is associated with certain actions over time. Learned value assignments then underwrite choice and, where applicable, action selection.
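To make this idea concrete, here is a minimal sketch of incremental value learning. It is my own illustration rather than a model drawn from the RL literature on moral cognition: the actions, the reward function, and the learning rate are placeholder assumptions.

```python
import random

# Minimal sketch of incremental value learning (illustrative placeholders only).
actions = ["buy_usual_brand", "buy_alternative_brand"]
value = {a: 0.0 for a in actions}   # learned value estimate per action
alpha = 0.1                         # learning rate

def reward(action):
    # Stand-in for the feedback the agent experiences after acting.
    return 1.0 if action == "buy_alternative_brand" else 0.2

for trial in range(200):
    # Epsilon-greedy choice: usually pick the currently highest-valued action.
    if random.random() < 0.1:
        a = random.choice(actions)
    else:
        a = max(actions, key=value.get)
    # Incremental update: nudge the estimate toward the observed reward.
    value[a] += alpha * (reward(a) - value[a])

print(value)  # learned value assignments that then underwrite choice
```

Over repeated trials, the estimates settle toward the rewards each action has yielded, which is the sense in which value is "learned over time" rather than computed afresh at each choice.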








The trolley problem, image courtesy of Wikimedia Commons.

Perhaps the most prominent RL theory of normative choice, presented by psychologist Fiery Cushman (2015), proposes that moral behaviors depend on one of the three systems typically identified in RL: the habit-based system. For example, Cushman suggests, American tourists frequently continue to tip in restaurants abroad, even when there is no local custom for doing so (2015, 59). Beyond such everyday cases, one advantage of Cushman's view is that it may explain why participants give surprisingly inconsistent responses to what is known as the trolley problem.





Typically, in switch versions of the trolley problem, people support the killing of a single individual in order to save five others, but find it difficult to endorse the harm of one agent in footbridge versions of the problem, where the harm is more ‘hands on.’ Since a purely numerical assessment favors the saving of five people rather than one in both cases, Cushman reasons, people’s tendency to resist harming the single agent in the footbridge version is “the consequence of negative value assigned intrinsically to an action: direct, physical harm” (2015, 59). That is, Cushman suggests, participants’ responses to the footbridge version of the dilemma may be underwritten by the model-free decision system: since directly harming others has reliably elicited punishment in the past, this option represents a bad state-action pair, and leads people to reject it as an appropriate course of action.
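A toy contrast may help bring out the point. This is my own illustration, not Cushman's model: the numbers, and the learning history they presuppose, are assumptions. A model-based assessment tallies expected outcomes, while a model-free system consults a cached value attached to the state-action pair itself, shaped by past punishment for direct harm.

```python
# Toy contrast (my own illustration, not Cushman's model).

# Model-based evaluation: simulate outcomes and count net lives saved.
def model_based_value(lives_saved, lives_lost):
    return lives_saved - lives_lost

# Model-free evaluation: a cached value for the state-action pair, assumed to
# have been learned from past punishment for direct physical harm.
cached_value = {
    ("footbridge", "push_man"): -1.0,
    ("switch", "pull_lever"): 0.0,
}

print(model_based_value(5, 1))                    # 4: the numbers favor acting
print(cached_value[("footbridge", "push_man")])   # -1.0: the action itself carries negative value
```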





A second approach to Barbara’s example comes from a branch of behavioral economics known as neuroeconomics. Like their RL-research counterparts, neuroeconomists employ the concept of ‘value’ to help explain how choices between multi-faceted alternatives are possible. In the context of neuroeconomics, however, value refers specifically to the ‘worth’ of a given commodity or action as computed by the agent; that is, it refers to subjective value. Correspondingly, within the framework of neuroeconomic research, understanding what takes place in choice amounts to uncovering how humans and other animals compute subjective value.





Extending this approach to the problem of normative choice, Shenhav and Greene (2010) asked participants undergoing fMRI to imagine scenarios in which they could save a group of individuals at the expense of leaving a single individual to die. For example, they invited participants to evaluate the moral acceptability of saving a group of skydivers with faulty parachutes at the expense of letting a single skydiver with a faulty parachute die. The number of skydivers in the group and the probability of the group’s survival varied from trial to trial (see Figure 1; Shenhav and Greene 2010, supplemental materials). Consistent with traditional economic and utilitarian models, they found that many of the study’s participants judged it morally acceptable to sacrifice the life of one individual in order to prevent a greater loss of life. Interestingly, Shenhav and Greene also found that participants’ ratings of moral acceptability were correlated with degrees of activation in the posterior cingulate cortex and the ventromedial prefrontal and medial orbitofrontal cortices, i.e., with brain activations relatively similar to those seen when people value physical goods (2010, 671, Table 1 (expected value)).








Figure 1: Shenhav and Greene argue that "Average moral acceptability ratings across trial value space reveal a graded behavioral sensitivity to 'expected moral value'" (669).
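To illustrate what varies across trials, here is a rough sketch of how an "expected moral value" could be computed from group size and survival probability. The formula and the parameter values below are my assumptions for illustration; the exact parameterization in Shenhav and Greene's design may differ.

```python
# Illustrative only: the parameter values and the exact formula are assumptions.
def net_expected_lives_saved(n_in_group, p_group_survives):
    # Saving the group succeeds with some probability, at the certain cost of one life.
    return n_in_group * p_group_survives - 1

for n, p in [(5, 0.9), (5, 0.4), (20, 0.2), (2, 0.5)]:
    print(f"group of {n}, survival probability {p}: "
          f"net expected lives saved = {net_expected_lives_saved(n, p):.1f}")
```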



The RL and neuroeconomic approaches thus seem to overlap in several important ways. Both theories take value as a fundamental unit of choice. Both traditions also recognize that neurons in the orbitofrontal cortex (OFC) are responsible for encoding value in the brain (Padoa-Schioppa and Schoenbaum 2015). But the views diverge when it comes to characterizing when and how value is computed. In RL, value is typically learned gradually over time; by contrast, in neuroeconomics, subjective value is thought to be computed online, i.e., at the time of choice. Consequently, it is not clear whether and how RL's algorithms can be used to model subjective valuation in neuroeconomic choice. This is a shame, because neuroeconomics could benefit from RL's strong computational foundations, and RL could benefit from the many incisive behavioral and neuroscientific experimental paradigms on offer in neuroeconomic research.
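One way to see the divergence is to contrast, schematically, a cached value learned over past trials with a subjective value computed from an option's attributes at the moment of choice. The options, attributes, and weights below are invented for illustration and belong to neither literature's actual models.

```python
# Schematic contrast between the two conceptions of value (illustrative only).

# RL-style: look up a value learned gradually over past experience.
cached = {"usual_brand": 0.8, "sale_brand": 0.3}
def rl_value(option):
    return cached[option]

# Neuroeconomics-style: integrate the option's current attributes online,
# weighted by the agent's current goals, at the time of choice.
weights = {"price": -1.0, "flavor": 1.5, "company_ethics": 2.0}
def subjective_value(attributes):
    return sum(weights[k] * v for k, v in attributes.items())

print(rl_value("usual_brand"))                                                   # cached, experience-based
print(subjective_value({"price": 0.6, "flavor": 0.9, "company_ethics": -0.5}))   # computed online
```

On the first picture, Barbara's choice draws on values stamped in by her history of reward; on the second, it is assembled on the spot from whatever attributes, including the recalled homophobic statement, are salient at the moment of choice.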





Increasingly, researchers in the two independent fields recognize the need to collaborate and find common conceptual and empirical ground. These kinds of conversations would benefit the field of neuroethics, too. Both of these intersecting disciplines will help us make sense of Barbara’s case in the years to come. In other words, they would help us gain a better understanding of the brain’s role in our moral experiences: How do my past learning experiences and present choice environment influence my future moral choices? What is the difference between something that just ‘feels wrong’ and something there are good reasons to regard as immoral? And, perhaps most importantly, how can I shape my own neural moral ‘values,’ as well as the neural moral values of those around me, to try to make more consistent decisions overall? The concept of value may turn out to be the basic unit of the neuroscience of morality.




Want to cite this post?



Haas, J. (2017). Neuroeconomics and Reinforcement Learning: The Concept of Value in the Neuroscience of Morals. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2017/03/neuroeconomics-and-reinforcement_7.html


Comments

  1. Thanks for the post. The issues presented are far more complicated than this; a few points are worth noting:

    First, Barbara's decision is based not on the primary, relevant factors, namely the qualities of the pasta itself (its taste, consistency, price, nutritional ingredients, and so forth), but on a side consideration: a homophobic remark that she suddenly recalls and that has nothing to do with the pasta.

    This is an important observation, because endless personal and contextual factors of this kind may intervene: whether the very existence of the dilemma affects the buyer, whether the buyer is in a rush and must decide hastily, whether advertising influences the choice, and so on.

    Moreover, such side considerations may arise from deep structures in the brain (like the amygdala or the hippocampus) that are typically very hard to monitor with imaging methods such as fMRI.

    Activity in the cortex (such as the prefrontal cortex) indicates little more than a site of activity; it cannot always capture the whole picture of the process. It is like an alien observing a human being, monitoring his body, and concluding that fear has to do with the heart, since the heart beats faster in such situations.

    So the whole picture is far more complicated, especially since intervention, or awareness of it, may affect participants and lead to self-deception.

    The models presented in this post are therefore hardly the classic models, let alone the basic ones, and caution and reservations are strongly recommended here.

    Thanks


