Back in the day, I did a degree in Philosophy, Politics, and Economics. All three subjects make certain claims or predictions about how rational individuals will behave, for instance when making choices. It’s long been known, however, that experiments with actual human beings show significant departures from these models. In other words, human beings act irrationally and (according to Kahneman) this is not simply because we are emotional but also because our cognitive processes are subject to pervasive biases and distortions.
I was already vaguely familiar with some of Kahneman’s work (mostly co-authored with Amos Tversky) before I read this book, but what I knew I had gleaned second-hand from other sources. This was the first time that I’d read his actual writing.
This book is clearly intended for a popular audience. References are included, unobtrusively, at the end, but the text isn’t besmirched by so much as a superscript numeral to indicate their presence. (Personally, I found this somewhat annoying, as I never knew when to turn to the references.) Further, the style is positively conversational at times, with Kahneman covering not only a range of psychological findings but, at times, the biographical story that led him to make these breakthroughs. (Again, I sometimes found this grating, if only because it sometimes seemed to amount to gratuitous name-dropping regarding various other distinguished academics he knows.)
The opening chapters introduce the ‘two-system’ theory of mind. Though Kahneman is at pains to stress that these two systems are not literally two different agents, he often speaks as if they were. System 1 is intuitive and impulsive; System 2 is the rational calculator, with which many of us identify, but which for the most part is content to let System 1 do the running (Kahneman describes it as the ‘lazy controller’). It is System 1 that is particularly prone to various biases in its immediate judgements but, while System 2 often has the power to correct for these, it takes effort to do so.
With the stage set, most of the book is devoted to describing various common biases, such as our tendency to over- or under-estimate small probabilities, usually according to the ease and vividness with which we can imagine their occurring. This explains why people are more afraid, for instance, of dying in a plane crash or terrorist attack than in a domestic accident, even though more people die in domestic accidents (which don’t get the same sensationalist media coverage).
Sometimes our biases lead us to make bad decisions. Suppose that you were offered the choice between receiving £850 for certain or having a 90% chance to win £1,000. Which would you pick? Most people take the sure thing; the difference between £1,000 and £850 isn’t that great, whereas the 10% risk of leaving with nothing is a very real prospect. This is contrary to traditional ‘rational’ predictions, since the expected payoff of the gamble is £900 (i.e. 90% × £1,000). This can be explained away, however, by observing that the marginal value of money diminishes as we have more of it. The first £150 is worth more than the last £150, and perhaps that’s why most of us would prefer the certain £850.
Interestingly, however, although most of us prefer certainty when it comes to gains, we are typically risk-seekers when it comes to loss. Suppose that you have a choice between losing £850 for certain and a 90% chance of losing £1,000. In this case, assuming that they can afford it, most people actually opt for the gamble, since there’s a chance (10%) that they may get away without losing anything.
There’s nothing necessarily irrational about either preference in isolation, but if (like most people) you prefer certainty in gains and risk in losses then it could spell trouble. Each time you come across decisions of the first kind (involving gains), you prefer the certain gain, even though the expected gain is less. But each time you come across decisions of the second kind (involving losses), you prefer the gamble, even though the expected loss is greater. Someone who goes through life always choosing in this fashion is bound to come out worse than someone who acts as rationality dictates.
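The arithmetic behind this can be made concrete. Here is a minimal sketch (my own illustration, not from the book) using the review’s example figures, comparing someone who follows the typical pattern (certainty in gains, gambling in losses) with someone who always maximises expected value:

```python
def expected_value(outcomes):
    """Expected value of a list of (probability, payoff) pairs."""
    return sum(p * x for p, x in outcomes)

# Gains: certain £850 vs a 90% chance of £1,000
sure_gain = 850
gamble_gain_ev = expected_value([(0.9, 1000), (0.1, 0)])   # 900

# Losses: certain -£850 vs a 90% chance of -£1,000
sure_loss = -850
gamble_loss_ev = expected_value([(0.9, -1000), (0.1, 0)])  # -900

# Typical pattern: take the certain gain, gamble on the loss
typical = sure_gain + gamble_loss_ev       # 850 - 900 = -50
# EV-maximising pattern: gamble on the gain, take the certain loss
rational = gamble_gain_ev + sure_loss      # 900 - 850 = +50

print(typical, rational)
```

Facing one decision of each kind, the typical chooser expects to come out £50 down, the expected-value maximiser £50 up; over many such decisions the gap compounds.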
More problematic still is that our decisions often depend on how the resulting outcomes are presented (or ‘framed’). Consider these two scenarios (from p. 368):
Scenario 1: A disease epidemic threatens to kill 600 people. If we use Drug A, 200 people will be saved. If we use Drug B, there is a one-third probability that all 600 will be saved, but a two-thirds probability that no one will be saved.
Which option would you pick? Most people, presented with this scenario, choose A. But now consider scenario 2:
Scenario 2: A disease epidemic threatens to kill 600 people. If we use Drug A, 400 people will die. If we use Drug B, there is a one-third probability that nobody will die, but a two-thirds probability that 600 people will die.
Which option would you pick here? In this second scenario, most people prefer Drug B. Did you agree? If so, there’s a problem. If you look again at the two scenarios, a moment’s reflection will reveal that they describe exactly the same case: saying that 200 out of 600 will be saved is just another way of saying that 400 will die. This is easily seen when pointed out, but people’s choices really do seem to differ depending on whether the outcome is described in terms of how many lives are saved or how many lives are lost!
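A quick bit of arithmetic (my own sketch, not from the book) confirms that the two framings describe identical options:

```python
total = 600

# Scenario 1, Drug A: "200 people will be saved"
saved_a = 200
# Scenario 2, Drug A: "400 people will die" is the same outcome restated
deaths_a = total - saved_a  # 400

# Drug B is likewise identical across framings, and its *expected*
# number of lives saved equals Drug A's certain outcome:
expected_saved_b = (1/3) * total + (2/3) * 0  # 200

print(deaths_a, expected_saved_b)
```

Both drugs offer the same expected outcome in both scenarios; only the wording changes, yet preferences flip.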
I can’t cover everything in this review, but these two examples will give you a taste of the kind of issues that Kahneman covers. In each case, a puzzle is presented and, usually, some explanation given for people’s behaviour, often linked back to the earlier discussion of Systems 1 and 2. I have to say, though, that I did start to tire of this, even reading the book in quite small chunks over a period of time. Each of the examples Kahneman presents is interesting in isolation, but for me there were too many; it started to get repetitive, especially since many were simply different ways of illustrating pretty similar points about cognitive biases. Perhaps it’s better simply to ‘dip in’ to this section of the book than to read it in one sitting, though I should add that it’s still better to read in order, since later discussions sometimes assume familiarity with earlier material.
Even though I was vaguely aware of some issues, such as loss aversion and availability bias, I certainly felt that I learned a lot from reading this book. As fascinating as some of these findings are, however, I’m not quite sure what to make of them. As a psychologist, Kahneman is rightly concerned with describing and explaining how real people actually do behave. Sometimes, however, he suggests that these findings falsify basic tenets of rational choice theory. But decision theorists need not be concerned with predicting how actual people will behave – their aim may simply be to say how people should behave. That people don’t always do X no more threatens the claim that X is rationally required than the fact that people sometimes lie threatens the claim that lying is morally wrong.
Towards the end, in an interesting discussion of well-being, Kahneman says, in an off-hand manner “Philosophers could struggle with these questions for a long time” (p. 410), seemingly oblivious to the fact that philosophers have indeed been considering the questions he has in mind (if not exactly in his terms) for hundreds, if not thousands, of years. Kahneman seems content simply to say that we have rational choice on the one hand and how people actually choose on the other and that we need to give them both weight. But that there are two sides to an issue doesn’t necessarily mean that we must find some compromise between them (consider debates over evolution vs. intelligent design, or abortion); we should be open to the possibility that one side is simply mistaken. In more practical terms, if our choices are systematically ‘irrational’, then we may want to consider whether our notion of rational action is as adequate as we think. If it is, then perhaps we should be prepared to adopt rational policies, even if they are not what people (actually but irrationally) want.
These concerns, however, take us away from the book itself and towards its larger import. My complaints about the presentation are minor, and the fact that the book prompted such reflections serves to illustrate just how stimulating a read I found it. It’s certainly something I can see myself pondering over and dipping back into, and I imagine it would be enjoyed by those interested in other popular works of behavioural economics (such as Freakonomics) or psychology.
At the time of writing (September 2012), the original hardcover edition is no longer available from Amazon.co.uk, so it may be out of print, but you may be able to track one down or buy it second-hand, if you’d prefer it. They are offering the paperback for £5.39 (RRP £8.99) or a Kindle version for £5.49. There’s also an unabridged audio download for £11.24, apparently, but I think this is a book I’d prefer to read rather than listen to.