Predictably Irrational

Dan Ariely's Predictably Irrational should be required reading in all English-speaking high schools. Though he is an academic of impeccable credentials (including a 2008 Ig Nobel prize), he is also an entertaining writer—a rare combination. The book details some of his important and cutting-edge research in the emerging field of behavioral economics, but its writing is accessible, clear, funny, and effective.

His examples resonate with the ordinary choices we all make in life: buying magazines, dating, vacationing. He shows us the mistakes we all make, but not in a way that is condescending or cynical. Indeed, his intent is clearly to show us how we can avoid making those mistakes even while he shows us how universal they are. His advice is not that of a college professor or a parent, but more like a best friend telling you “Wow, I just did something really stupid—don't do that.”

Indeed, its very lucidity might be a risk: you might be tempted to think “Well, of course, how obvious” after he explains some aspect of human behavior, and not realize that his discoveries were not obvious, and that they are backed up by solid experimental evidence, not just platitudes.

You can get a taste of his style on YouTube, but the details in the book are worth the time spent. While academics will likely continue to cite the groundbreaking 1974 Kahneman and Tversky paper as the founding work of the field, Ariely's book is likely to be the one most discussed by the rest of us, and it will serve you well to be familiar with it when related subjects come up in conversation.

Show me the mouse

I once attended a scientific conference where several of the speakers were doing research into longevity. Each had a promising area of research. We have learned a lot about aging in recent years and know many of the biochemical changes that take place. There are drugs and other interventions (like calorie restriction) that show promise in slowing, stopping, or even reversing some of those changes. The speakers explained their work and why it had promise, then invited questions (as is standard practice in scientific conferences).

The first question for every speaker was usually the same: Where's your 5-year-old mouse? Mice are commonly used in medical research for many reasons. They are easy to breed and keep, their biochemistry is reasonably similar to that of humans (and most other mammals), and their life cycle is short and fast. Testing longevity drugs on humans would take decades; mice only live a year or two. So if someone discovered a drug that could significantly extend human lifetimes, it would likely be tested on mice first. If a drug really were the breakthrough we hope for, pictures of 5-year-old mice would be on news shows and websites everywhere.

A laboratory mouse.

None of the researchers was able to show a 5-year-old mouse. Some did have good results with lower animals like flies, some had mice that were measurably healthier in their later months than controls, but no one had the holy grail. But this is not a story of failure: research continues, new things are being tried, and new things are being learned and shared at conferences. My point is that science is successful precisely because everyone knows what the hard questions are and can't duck them.

Contrast science with, say, advertising. A commercial for vitamins during this year's Super Bowl touted their benefits by saying “Centrum Silver was part of the recently published landmark study evaluating the long-term benefits of multivitamins.” This statement still appears on their website, verbatim. They can say these things safely, knowing that no one will ask the obvious question—so what were the results of the study? Even the website doesn't link to the study, for good reason. The study showed no long-term benefits from multivitamins. But advertisers aren't scientists. They can give their audience carefully crafted, misleading—but totally true—statements while ducking the obvious questions.

Many people think science is about studying lots of facts discovered by people many years ago. That's certainly part of it, but far more important than yesterday's answers is learning what the right questions are.

Advocates for a product or a cause can make a very eloquent case, even if they're wrong. This is because they don't have to face hard questions. A book, a movie, or a TV documentary can make you believe nonsense because you can't talk back. And if an idea supports our ideology, or benefits us, we are more likely to believe it without questioning, even when we can ask questions. Good scientists know this, and are trained to be most suspicious of things they would like to believe, like the idea that they could live longer.

This habit of being overly credulous or optimistic about things we would like to be true is called confirmation bias. It's another habit of bad poker players that we can take advantage of. They want to call, so they convince themselves that their opponent is bluffing. They want to fold, so they convince themselves that their opponent has the nuts.

Be skeptical. Especially of yourself, and what you want to be true. Don't ever forget to ask yourself the tough questions. Even if you're telling me what I want to hear, I'm going to tell you to shut up and show me the 5-year-old mouse.

In praise of Roe v. Wade

The Supreme Court's decision in Roe v. Wade has been much maligned by both sides of the abortion debate throughout the 40 years since it was issued on January 22, 1973. It has been called a “non-decision”, a “cop-out”, “political weaseling” and much worse. It is commonly noted by judges that a good decision is one that both parties complain about, but if that were the only reason for the decision, it would indeed be a cynical and political one. But I think the decision is a good one, not because its middle ground is politically balanced, but because the middle ground accurately reflects the complex realities of the situation much better than the ideological extremes of either side.

Harry Blackmun, author of the majority opinion.

The legal objections are easiest to dismiss. It has been described as “nine men in robes making a decision for the rest of us”. But this is 180 degrees from the truth. The court did not “make the decision” for everyone; they did exactly the opposite: they said that the Texas legislature, despite being a group even larger than the court and directly elected by the people of that state, cannot make the decision for every woman in Texas. They ruled that Norma McCorvey, and you and I (at least those of us capable of becoming pregnant), had the right to decide for ourselves.

Some legal critics claim that the court usurped the power to create a right that wasn't granted by the Constitution. They're also wrong. It's right there in amendment nine of the Bill of Rights: “The enumeration in the Constitution, of certain rights, shall not be construed to deny or disparage others retained by the people.” The folks who wrote the Bill of Rights knew full well that if they started writing down what they thought basic human rights were, some power-hungry idiot would later come along and say “Look, they didn't write down X, so X must not be a basic right.” So they wrote amendment nine just to make it clear: just because the framers didn't think of it while writing them down the first time, that doesn't mean it doesn't exist. They didn't write down the right to travel and live wherever you choose, the right to educate your children, the right to marry whomever you choose, and dozens of others we would all agree are basic, fundamental human rights. But it's the court's job to enforce all of those nonetheless. The only power usurped here was by the Texas legislature, who thought they had the power to force Norma McCorvey to carry her pregnancy to term; the court told them that her rights mattered, even if they weren't among the ones first written down in 1791.

The decision itself boiled down to this: in the first trimester of pregnancy, the mother's rights are absolute and no legislature or court can limit her right to decide for herself. In the third trimester, the state may grant rights to the unborn, up to and including an outright ban on abortions. In the middle, the state can regulate to a lesser degree. As wishy-washy as that sounds, it turns out to be quite prescient in light of what we have learned in the years since.

The issue is a classic conflict of rights. When do we as a society recognize the rights of a child as distinct from the rights of the mother? There is no question in our society that once a child has been born, its umbilical cord cut and its lungs filled with air, killing that child is seen as a despicable crime. There have been societies where that was not the case: infanticide was tolerated and even common in some cultures, but not here. Our culture, unlike many, even assigns gender to infants: we call them little boys and little girls from the moment of their birth, while most cultures only separate men and women after puberty, treating all children as just “children”. Our attitudes are more in line with the biology. The qualities that we admire, even revere, in people, the qualities that make us revile harming them, are as evident in an infant as in an adult: people think, and feel, and dream, and laugh, and love, and want; infants do as well. Infant boys and infant girls have different skills and different personalities that can be observed and measured right from birth. Infants do not have quite the developed minds of adults (indeed, some parts of the brain are not fully developed until the late teens), nor are they able to express themselves as well as adults, but they clearly are selves, thinking and feeling people. In the past people have asserted that infants didn't feel pain, or never remembered events from their infancy, but we now know these beliefs are false. Infants do feel pain and joy much as adults do, and their adult lives are affected by events in infancy and by experiences before their birth, even if they don't have conscious memories of them.

The third-trimester rules of Roe acknowledge this simple biological fact: there is little fundamental difference in kind between a child just before and just after birth. A child born in week 35 of pregnancy has better than a 90% chance of survival. Some have survived birth as early as 24 weeks with considerable assistance from modern medical technology. We clearly think of these children as people in every important legal and ethical sense. They have thoughts and feelings and experiences, and the thought of ending their lives bothers us. There is a person there, and it is reasonable for a state to step in and protect that person from harm. Whether that person is in a womb or an incubator doesn't much affect that evaluation.

The earlier stages of pregnancy are very different. Let's first of all dispose of the idea of “conception”, which is a religious idea that doesn't have any real biological meaning. The biology of human development begins with fertilization, which can't really be called an “event” because it is a complicated process in many stages that can be accomplished in different ways. But for the moment I'll concede the point and say that a zygote exists after fertilization. Once the DNA from the egg and sperm have combined, the newly-formed zygote then begins to divide into two cells, four, eight, and so on. At this point there aren't yet any specialized cells: they're all stem cells, and will only take on specialized roles as organs, nerves, and so on much later in the process of development (if they continue at all—many of them will just die off).

The process of development itself can take many turns. The majority of the time, in fact, the process results in nothing at all. Most fertilizations are simply flushed out with the mother's next menstruation and never develop. The woman never knows that any fertilization occurred at all. In those fewer cases where the zygote does make it through the tubes to implant in the uterus, its fate is still undetermined. It might develop into a person, or two people, or three, or half. Identical twins, for example, result when the multi-celled zygote splits at some point, and both portions go on to implant and develop into fully formed unique people (albeit with identical DNA). Identical triplets are quite rare, but also possible. Another even rarer possibility is that two different zygotes will merge at some point in their development and develop into a single fully-formed person with two sets of DNA. These are called tetragametic chimeras, and are often born with defects, but can also be born as perfectly normal infants who may not even know that they were the product of two different fertilizations.

This is where the “life begins at conception” argument falls down. Yes, a zygote after (and arguably before) fertilization is living, in the same sense that any of our skin cells or liver cells is living. They can divide and grow and contain a full set of genetic material. We can now grow skin and muscle tissue in a dish from a single cell. But the relevant question is not whether the thing is living or not, or even whether it is human. The question is “is it a person?”, in the sense of laws that make harming people an act of violence we detest, or is it merely a collection of cells like the skin cells we flush down the drain when we wash our hands, or blood cells that we donate to the Red Cross? What is it about people that makes them specially deserving of protection? A good way to answer that is to think of the twin case: why do we consider twins to be two people, not one? It's simple: each twin thinks and feels and dreams independently. Each has its own personality and its own desires and fears. It is untenable to argue that the single zygote that would later develop into these two people had any of those qualities. It had no brain, no nerves, no eyes, no ears. It had only the potential for developing those things, and at the zygote stage we could not have known exactly what it would develop into. It might have become a person, or two, or half, or it might not.

Dolly, the first cloned mammal.

Likewise, there is nothing special about the type of cell that is the zygote. We retain stem cells even into adulthood, and they too have the potential to develop into many other kinds of cells. The day is not far off—if it hasn't happened already—when a single cell from an adult human will be able to produce a cloned person, just as Dolly the sheep was created from a mammary cell of her mother. Clearly, it would be morally repugnant not to grant that person the same legal rights as other people, because she will have the same thoughts and feelings as any other infant, despite the fact that she was not the product of fertilization at all. Whether or not you approve of cloning, it vividly demonstrates that the concept of one-fertilization-one-person doesn't hold water.

So the first trimester rules of Roe also reflect what we know about biology: a few cells don't make a person, and it doesn't make sense to arbitrarily grant them the rights of a person when we don't even yet know what they may develop into. The mother, on the other hand, is quite clearly a person, and her rights can and should be protected. Laws against birth control, “morning after” pills, and yes, even first-trimester abortion clearly do victimize real women, and we simply can't legally or ethically justify that to protect what is clearly not a person.

Lastly, there's the middle ground: the second trimester. The justices here make another wise statement: we don't know. At this point in the development of what is now a fetus, it begins to resemble a person. Twins have already split, chimeras have already joined, and it begins to develop eyes and ears. Sex differences start to develop. We all begin as female. At some point in the growth process, those with Y chromosomes will produce hormones that cause male characteristics to develop, unless the fetus has androgen insensitivity syndrome (AIS). At 18 weeks it responds to loud noises. At 22 weeks it has a normal sleeping and waking rhythm. It might have the beginnings of something like thoughts and feelings, or it might not—we just don't know. And because we don't know, the justices leave the issue for further debate by the people's representatives.

In short, the justices in Roe weighed the legal and biological facts before them, and reached the right decision, despite the fact that they had to decide decades before some of the biology I mention above was known, and despite the fact that they knew their decision would be disliked by both sides of the political debate on the issue. I for one find that remarkable and worthy of praise.

The value of prejudice

This is not an uncommon situation: I'm in a casino I haven't played in for a while and sit down in a game with players I don't know. In my first few minutes, before I've had much chance to observe them, I get into a hand with another player. After the final card is dealt, my hand is what's called a “bluff catcher”, a weak hand that will only win if my opponent is on a pure bluff. I have to estimate the odds that my opponent is bluffing to decide on a call. The important factors are the size of the pot, the size of the bet, the previous bets my opponent has made, and my “read”. Let's say that I have no good read, and all else being equal, my call would be mathematically a toss-up. What should I do? One good option is to make a truly random choice. Another option is something I might not consider in other life choices: prejudice. If my opponent is an Asian male, I call; a white woman, I fold.

These are common stereotypes at the poker table. Asian men bluff too much, and white women don't bluff enough. Of course I know many players who don't fit the stereotype. If my opponent is Bill Chen or Jennifer Harman, I will assume that they have properly randomized their bluffs, and so I will have to properly randomize my calls. But if I don't know the players at all, and have no other information to go on (such as how they dress, speak, or shuffle their chips), I might well go with the stereotype.
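To make the arithmetic behind that decision concrete, here is a minimal sketch in Python, using made-up pot and bet sizes rather than any particular hand. It shows the two calculations involved: the break-even bluffing frequency that makes calling with a bluff catcher a toss-up, and the properly randomized call I would need against an opponent who has properly randomized their bluffs.

```python
import random

def breakeven_bluff_frequency(pot, bet):
    # Calling risks `bet` to win `pot + bet`, so the call breaks even when
    # the opponent bluffs with probability bet / (pot + 2 * bet).
    return bet / (pot + 2 * bet)

def randomized_call(pot, bet):
    # A bluff risks `bet` to win `pot`, so calling at random with probability
    # pot / (pot + bet) makes a pure bluff exactly break even, leaving the
    # bluffer nothing to exploit.
    return random.random() < pot / (pot + bet)

# Hypothetical numbers: a $100 bet into a $100 pot.
pot, bet = 100, 100
print(f"Calling breaks even if the opponent bluffs more than "
      f"{breakeven_bluff_frequency(pot, bet):.0%} of the time")
print("This time I", "call" if randomized_call(pot, bet) else "fold")
```

With a pot-sized bet, the call breaks even if the opponent bluffs more than a third of the time, which is exactly why a read, or failing that a stereotype, is what tips an otherwise mathematical toss-up one way or the other.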

Should I feel guilty? I don't think so. The human instinct for prejudice is an example of what computer programmers call a heuristic: a quick and dirty shortcut to solving a problem that you can't solve properly with the resources available. A good example might be Google's search algorithm. They can't possibly hire people to read every page on the net and decide which are good and relevant to every possible query, so they use a collection of shortcuts to rank pages. For example, if other pages link to a given page, it's probably better than one that people don't link to, so it will rank higher.
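As a toy illustration of that kind of shortcut, here is a sketch (in Python, over a handful of made-up page names, and emphatically not Google's actual algorithm) that ranks pages by nothing more than a count of inbound links:

```python
from collections import Counter

# Toy link graph: each page maps to the pages it links to (made-up names).
links = {
    "home.example":  ["guide.example", "blog.example"],
    "blog.example":  ["guide.example"],
    "forum.example": ["guide.example", "blog.example"],
    "guide.example": [],
}

# The shortcut: count inbound links and rank pages by that count alone.
inbound = Counter(target for targets in links.values() for target in targets)
ranking = sorted(links, key=lambda page: inbound[page], reverse=True)

for page in ranking:
    print(f"{page}: {inbound[page]} inbound links")
```

The count says nothing about what is actually on the pages; it is a cheap stand-in for quality and relevance, which is precisely what makes it a heuristic.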

Where prejudice and other heuristics go wrong is when we use them as actual rules rather than shortcuts. If we do have sufficient information to solve a problem, we should use that instead of the shortcut. If you persist in treating a person by their stereotype even after you get to know them, then you have a problem (such as losing all your money to Bill Chen).

There are many other such heuristics our brains use: for example, trust in authority. We can't all study every subject in depth, so it makes sense to rely on expert opinion for many things. But sometimes the experts are wrong—they are human too. Another is called “availability bias”: when estimating the odds of an event, most people judge it more likely if they can easily bring examples to mind. For example, many people think the odds of suffering a violent crime or a plane crash are much higher than they are, because they can easily recall news reports of these rare events. This can be very troublesome in our world of sensational news reporting.

The trick, then, is not to feel guilty about our biases or even to suppress them, but to recognize and understand them and know when to overrule them. There are good methods for discovering our biases and finding the truth in spite of them. We call these methods science. Many people confuse science with learning lots of details about chemistry and biology and physics. Those things are handy too, but far more important is to learn the techniques we used to discover those things, because those techniques apply to life as well. And learn to use them properly: some people use their intelligence and education to become more adept at justifying their biases rather than overcoming them.

Human brains are imperfect things. But we can learn to work around those imperfections.

Moral realism

In a recent Neurologica blog post, Dr. Steven Novella expresses a belief that is not uncommon among philosophers (at least since Hume): that all those who champion an objective morality must posit some “lawgiver” as its source. There have been a few notable exceptions: Ayn Rand attempted to derive a morality from “human nature”. More recently, Sam Harris's book The Moral Landscape makes a case for applying science to moral values. I agree with Novella and others that these attempts are not persuasive. But I remain both a moral realist and an atheist, and I have a few ideas to add to the mess.

First let me clarify what I mean by moral realism. I reject the idea that morality is a matter of opinion, emotion, custom, culture, or agreement. I believe, for example, that slavery is wrong, for everyone, at all times. It was wrong when America did it, it was wrong when the Bible approved of it, it's wrong now. It didn't become wrong when we outlawed it; we outlawed it because we realized that it was already wrong. I feel no obligation whatsoever to “respect” cultural traditions like Islamic misogyny or Hindu castes. Such things are wrong even if the slaves, women, and untouchables themselves tacitly or explicitly agree with them.

There are finer grades of meta-ethical positions that go by names like universalism, absolutism, nihilism, relativism, and so on. But they aren't my subject today. Moral realism is only the claim that statements about what is right and wrong are statements about reality, not subjective opinion.

This position does indeed require that moral values come from some standard other than culture or law. When I say that one culture is better than another, or that our culture is better than it was in the past, or that laws against free speech like blasphemy and political dissent are bad laws, I imply that there exists something other than current culture and law against which I measure them. I don't concede that this must be an intelligent lawgiver, but it must be something. Otherwise, what does “moral progress” even mean?

Novella says that he can't imagine how any scientific investigation of nature can uncover objective values, and that values are therefore inherently subjective. To be honest, I can't see how to do that either. But I reject his defeatism. I'm unwilling to elevate my ignorance to the level of natural law. Scientists over the centuries have made similar claims: when it was first known that the stars were trillions of miles away, scientists lamented that we would never be able to know what they were made of...and then we discovered spectroscopy. Science has a remarkable history of discovering the unknowable and doing the impossible. It might be unwise to count on such accomplishments in the future, but it would be equally unwise to bet against them.

I also think that ethical philosophers spend too much time talking about actions and motivations, and don't sufficiently emphasize the relationship between actions (what we do), values (what we want), and knowledge (what we believe). If we want to determine if an action is ethical, we cannot ignore the fact that actions are not made in isolation: we act to accomplish goals that serve our values, and we act in ways that we believe will do that. Yes, it's important to choose values well, and to have some means to judge competing values. But if our knowledge—something we believe about reality—is incorrect, we may do things that we mistakenly think support our values but that actually thwart them. Such actions may hurt other people. If we sincerely believe that attacking the symbol of American decadence will secure our place in heaven, we may pilot a jetliner into the World Trade Center. If we sincerely believe that witches are possessed by demons, we may subject them to exorcism or execution to save their souls. Good people, with what we (or their peers) would see as good values, will do these things with good intentions. Such actions are no less evil.

This has some important implications. It means that having accurate knowledge about reality is important regardless of your values. Whether you value liberty or obedience, wealth or austerity, altruism or selfishness, conformity or individuality, you can serve your goal only if your actions are informed by real knowledge of the world. Regardless of where you want to go, you can't get there unless your map matches the territory. And since we have to acquire knowledge, it is critical that the methods we use to acquire it work. Methods of acquiring knowledge are what philosophers call epistemology.

Here is my proposed candidate for an absolute, objective, moral law:

“It is morally wrong to base your working knowledge on a demonstrably inferior epistemology.”

This is not a value; it is independent of one's values. It is not a matter of personal choice or culture. It is not a commandment from on high. It is an unavoidable logical consequence of four facts of nature:

  1. Our actions affect the world, including other people.
  2. We base our actions upon our knowledge.
  3. We acquire knowledge during our lifetimes.
  4. Some methods of acquiring knowledge are better than others.

Of course, I am also assuming that such a thing as a moral law exists. If such a thing exists, I can't imagine this not being one. And there is nothing subjective or relative about it.

Unfortunately, this law can't be applied to every moral conundrum. I still can't say slavery is wrong without bringing in some values. But I think a surprisingly large number of contentious moral and political issues of our time are not disagreements about values—they are disagreements about how to achieve those values. Conservatives and liberals alike want less gun violence in the world. They disagree about how to achieve that. This is a hard question, but it is not a question of values. It is a question about the nature of the world (including the nature of human psychology, rights, culture, violence, technology, and many other things). Science can be brought to bear on it, and real answers can be found.

Another objection to the idea of objective morality is that it makes moral progress impossible. This is certainly true of certain kinds of moral codes. If a single book, for example, is held to be the one source of moral wisdom, then moral progress does indeed become impossible. Books can't change. I am a big fan of moral progress: anyone who longs for “simpler” times of the past just has a bad understanding of the past. There is more peace, prosperity, health, beauty, wisdom, and every other good thing in the world today than there has ever been at any time in the past.

But this objection does not apply to my idea of objective morality. First, since I don't claim to know a priori what the objective moral values are, moral progress becomes the act of discovering them. Also, knowledge of the world increases with time. We, as a society, learn things and invent things. Therefore, our actions become better informed with time. Morals can change not only when values change (or are discovered), but also when knowledge changes. A world with safe and effective birth control and extensive knowledge of disease and psychology is very different from the world without those things that we had 1000 years ago. It follows that the moral import of an action such as pre-marital sex is very different now, even among those with values identical to those of our ancestors.

I admire Dr. Novella as a scientist, as a writer, and as an activist. I hope I can persuade him as a philosopher that morality is too important to give up the search for an objective, rational basis.

Update

A recent conversation brought to mind an interesting argument. Let me concede, for the time being, the following: the contention that all ethical statements are subjective (the general consensus among ethical philosophers) is logically coherent, consistent with the known facts, reasonably simple, and easy to defend. Moral realism is also logically coherent and consistent with the known facts (although no one has convincingly shown an objective moral rule, no one has convincingly ruled out the possibility either), but it is a bit more complex and harder to defend. On these grounds, moral realism should fall by Occam's razor.

So let's go ahead and assume subjectivity. I personally think that moral realism is a good idea, and would like it to be true. I think it is worth my time to pursue better arguments for it. In other words, I value it. If all moral values are subjective, then it is difficult to argue against my desire to pursue it. I can rationally assume subjectivity as my working hypothesis while still pursuing realism without any contradiction. So I plan to keep working on it.