Saturday, July 23, 2016

Daniel Kahneman, Molly Crockett: Deontology Or Trustworthiness?

From Edge:

Molly Crockett, Daniel Kahneman [6.16.16] 
Molly Crockett
Associate Professor of Experimental Psychology, University of Oxford
Daniel Kahneman
Recipient, Nobel Prize in Economics, 2002; Eugene Higgins Professor of Psychology Emeritus, Princeton; Author, Thinking, Fast and Slow
DANIEL KAHNEMAN:  The benefit that people get from taking a deontological position is that they look more trustworthy. Let's look at the other side of this. If I take a consequentialist position, it means that you can't trust me because, under some circumstances, I might decide to break the rule in my interaction with you. I was puzzled when I was looking at this. What is the essence of what is going on here? Is it deontology or trustworthiness? It doesn't seem to be the same to say we are wired to like people who take a deontological position, or we are wired to like people who are trustworthy. Which of these two is it?
  MOLLY CROCKETT:  What the work suggests is that we infer how trustworthy someone is going to be by observing the kinds of judgments and decisions that they make. If I'm interacting with you, I can't get inside your head. I don't know what your utility function looks like. But I can infer what that utility function is by the things that you say and do.
This is one of the most important things that we do as humans. I've become increasingly interested in how we build mental models of other people's preferences and beliefs and how we make inferences about what those are, based on observables. We infer how trustworthy someone is going to be based on their condemnation of wrongdoing and their advocating a hard-and-fast morality over one that's more flexible.
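
Crockett's point about inferring a utility function from observables can be caricatured as Bayesian belief updating. Here is a minimal sketch in Python, with invented likelihoods and a crude binary agent type (deontologist vs. consequentialist); nothing below comes from the research itself, it only illustrates the shape of the inference.

    # A minimal sketch (not from the conversation) of the inference Crockett
    # describes: an observer watches an agent's moral judgments and updates a
    # belief about the agent's underlying "utility function," here reduced to
    # a binary type: deontologist vs. consequentialist. All numbers invented.

    prior_deont = 0.5  # prior belief that a stranger is a deontologist

    # Assumed likelihoods (illustrative, not measured): how often each type
    # condemns a rule violation that happened to produce a good outcome.
    P_CONDEMN_DEONT = 0.9
    P_CONDEMN_CONSEQ = 0.3

    def update(prior: float, condemned: bool) -> float:
        """One Bayesian update on P(agent is a deontologist)."""
        like_d = P_CONDEMN_DEONT if condemned else 1 - P_CONDEMN_DEONT
        like_c = P_CONDEMN_CONSEQ if condemned else 1 - P_CONDEMN_CONSEQ
        return like_d * prior / (like_d * prior + like_c * (1 - prior))

    belief = prior_deont
    for condemned in [True, True, False, True]:  # observed judgments
        belief = update(belief, condemned)
        print(f"condemned={condemned}: P(deontologist) = {belief:.2f}")

On Kahneman's framing, the posterior here doubles as a trustworthiness estimate: each observed condemnation of wrongdoing pushes the observer's belief toward "this person follows rules, so I can predict what they'll do to me."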

DEONTOLOGY OR TRUSTWORTHINESS?

DANIEL KAHNEMAN:  Molly, you started your career as a neuroscientist, and you still are. Yet, much of the work that you do now is about moral judgment. What journey got you there?
      
MOLLY CROCKETT:  I've always been interested in how we make decisions. In particular, why is it that the same person will sometimes make a decision that follows one set of principles or rules, and other times make a wildly different decision? These intra-individual variations in decision making have always fascinated me, specifically in the moral domain, but also in other kinds of decision making, more broadly.

I got interested in brain chemistry because this seemed to be a neural implementation or solution for how a person could be so different in their disposition across time, because we know brain chemistry is sensitive to aspects of the environment. I picked that methodology as a tool with which to study why our decisions can shift so much, even within the same person; morality is one clear demonstration of how this happens.
    
KAHNEMAN:  Are you already doing that research, connecting moral judgment to chemistry?        

CROCKETT:  Yes. One of the first entry points into the moral psychology literature during my PhD was a study where we gave people different kinds of psychoactive drugs. We gave people an antidepressant drug that affected their serotonin, or an ADHD drug that affected their noradrenaline, and then we looked at how these drugs affected the way people made moral judgments. In that literature, you can compare two different schools of moral thought for how people ought to make moral decisions.     
                  
On one hand, you have consequentialist theories, which hold that the morally right action is the one that produces the greatest good for the greatest number of people. On the other hand, you have deontological theories, which argue that there's a set of absolute moral rules that you cannot break, even if there are cases in which adhering to those rules results in worse outcomes for people. These are two traditions that are at odds with one another—a very longstanding debate in philosophy.
          
What we found was that if you enhance people's serotonin function, it makes them more deontological in their judgments. We had some ideas for why this might be the case, to do with serotonin and aversive processing—the way we deal with losses. That was the starting point for using both neurobiological and psychological tools for probing why our moral judgments and behaviors can vary so much, even within the same person.
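
The two schools Crockett contrasts can be turned into toy decision rules. The sketch below illustrates the standard textbook distinction, not a model from her study: the consequentialist policy maximizes aggregate welfare, while the deontological policy first filters out any rule-breaking action; the dilemma, names, and numbers are all invented.

    # Toy contrast between the two decision rules on a stylized sacrificial
    # dilemma. Everything here is an invented illustration.
    from typing import NamedTuple

    class Action(NamedTuple):
        name: str
        total_welfare: int  # summed outcome across everyone affected
        breaks_rule: bool   # violates an absolute prohibition, e.g. "do no harm"

    ACTIONS = [
        Action("divert the harm onto one person", total_welfare=4, breaks_rule=True),
        Action("do nothing; five people are harmed", total_welfare=0, breaks_rule=False),
    ]

    def consequentialist_choice(options):
        # Maximize aggregate welfare, rules notwithstanding.
        return max(options, key=lambda a: a.total_welfare)

    def deontological_choice(options):
        # First rule out any forbidden action, whatever its outcome.
        permitted = [a for a in options if not a.breaks_rule]
        return max(permitted, key=lambda a: a.total_welfare)

    print("consequentialist picks:", consequentialist_choice(ACTIONS).name)
    print("deontologist picks:    ", deontological_choice(ACTIONS).name)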
    
KAHNEMAN:  When you use the word deontological, do you refer to how people behave, or how it's expressed in their reaction to others, or do you ask them how they think about it? Those can be very different.   
 
CROCKETT:  Absolutely. One thing that has long fascinated me is why there is so often a distinction between what we think is right or wrong, how we'll judge another person's action, and what we do ourselves. Deontology and consequentialism are normative theories; they're prescriptions for what we ought to do. There's a long tradition in moral psychology of trying to understand how human judgments about what is right or wrong map onto these ethical theories that philosophers have painstakingly worked out.
     
KAHNEMAN:  What is the psychological reality of these philosophical dimensions? I understand the idea of deontology, but can you classify people? Would the classification apply equally to what they say, what they do, and what they feel, or is there a dissociation? I might have the idea that I'm quite tolerant of certain actions, and at the same time, if you checked me, I'd be disgusted by them. Is it how people feel or what they say that counts?
      
CROCKETT:  That is the crux of all of this research. When people are making judgments, much of the time they're doing this reasoned-out calculation or evaluation of consequences. We can think of them as using System 2 thinking. It's more likely that judgments are going to reflect a set of ideals or principles that people feel they ought to or, in an ideal world, would like to conform to. Of course, when people are making actual decisions that have real consequences, and there are strong incentives to behave in an unethical way, we get overwhelmed by these different sources of value and can often behave in a way that's inconsistent with our principles.   
 
KAHNEMAN:  I was asking about something that's neither of those. I was asking about indignation as an emotional response. I can think of many behaviors that I condone in the sense that I don't have the grounds to oppose them, and yet I don't like them. Does this fit into your system?  
  
CROCKETT:  Yeah. Indignation, or a retaliatory desire to punish wrongdoing, is the product of a much less deliberative system. We have some data from a study where we gave people the opportunity to punish, by inflicting a financial cost, someone who had treated them fairly or unfairly, and we varied whether the person being punished would find out they'd been punished. We were able to do this by making the size of the overall pie ambiguous.
If people are punishing others in order to enforce a social norm and teach a lesson—I'm punishing you because I think you've done something wrong and I don't want you to do this again—then people should only invest in punishment if the person who's being punished finds out they've been punished. If punishment is rational and forward-looking in this way, it's not worth it to punish when the person isn't going to find out they've been punished....
...MORE
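
The forward-looking logic behind that experimental design can be written out as a back-of-the-envelope expected-value calculation. In the sketch below, the cost, deterrence, and spite parameters are hypothetical stand-ins; only the qualitative prediction is taken from the conversation: a punisher motivated purely by norm enforcement should decline to pay for punishment the target will never learn about, whereas a retaliatory punisher pays either way.

    # Hypothetical payoffs; only the qualitative contrast comes from the
    # transcript. A norm-enforcing punisher should punish only when the
    # target will find out; a purely retaliatory one punishes regardless.
    PUNISH_COST = 1.0       # what the punisher gives up to punish
    DETERRENCE_VALUE = 3.0  # assumed future benefit if the target learns a lesson
    SPITE_VALUE = 1.5       # assumed hedonic payoff of pure retaliation

    def value_of_punishing(target_finds_out: bool, motive: str) -> float:
        if motive == "norm_enforcement":
            # Teaching a lesson only works if the target knows they were punished.
            benefit = DETERRENCE_VALUE if target_finds_out else 0.0
        elif motive == "retaliation":
            # Spite pays off for the punisher whether or not the target finds out.
            benefit = SPITE_VALUE
        else:
            raise ValueError(f"unknown motive: {motive}")
        return benefit - PUNISH_COST

    for motive in ("norm_enforcement", "retaliation"):
        for visible in (True, False):
            worth_it = value_of_punishing(visible, motive) > 0
            print(f"{motive:16} target finds out={visible}: punish? {worth_it}")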