Monday 6 August 2012

Thinking, Fast and Slow - Book Review

Over the past few weeks I have been reading a book called Thinking, Fast and Slow by Daniel Kahneman.

This book was recommended to me by a good friend of mine who works as a software tester for another company.  He told me that it would change the way I think about how our minds work, and indeed it has.

As the title suggests, Daniel Kahneman describes how our minds are split into two main systems which we use when we think and make decisions.  He refers to these as System 1 and System 2.  System 1 is described as automatic and subconscious.  So when we feel we are acting on our instincts, gut reactions or hunches, we are said to be using our (fast) System 1 to make these decisions.  When we act on instinct we do not take a step back to analyse the situation before coming to a decision; we just feel that it's right.  Often, this is exactly how we want our minds to work.  For example, if someone throws a ball in your direction you need to make a quick decision as to whether you're going to dodge or catch it.  Not a lot of conscious thought goes into that decision because there simply isn't time to weigh up what action to take.

There are other situations where taking your time before acting is much more appropriate.  For example, if you're looking to buy a new car you won't make a quick choice based on looks alone; you will want to think about many aspects such as performance, economy, mileage and so on.  Therefore, before making your choice you will have to use your (slow) conscious System 2.

It is on the occasions when we don't feel a decision requires a great amount of thought that errors of judgement can creep in.  Something may seem simple and obvious on the face of it, but only when you apply conscious time and thought do you see things more clearly.  This is a good point to bear in mind, especially when testing software.

One of the mistakes we are prone to make is to let our own personal experiences of events bias our view of the probability that those events will happen in the future.  For example, if you have a family history of heart attacks and you are asked what percentage of deaths nationally are caused by heart attack, chances are you will overestimate the likelihood compared to someone who has no personal experience of heart attacks.  This is known as the availability heuristic, since instances which come to mind (are available) lead us to think that events are more common than they really are.
When we have personal experience of a subject we must not let that influence our view of the facts; however, this is easier said than done.  A related idea is summed up by the acronym WYSIATI, which stands for 'What you see is all there is'.  We each have our own view of the world and everyone's view is different.  We often don't look or investigate any further than what we've seen personally because we don't always believe there is more to see.
We need to train ourselves to think about the bigger picture, as there is often a lot more going on than we realise simply because we haven't paid attention to it.

Below I've briefly described a few of the many ideas Daniel talks about in the book which are useful to bear in mind, especially when applied to software testing.

Sample size
One way in which it can be very easy to arrive at an incorrect conclusion is when judgements are made based on a small sample size.  The sample should be large enough that natural fluctuations in the results do not skew the overall conclusion.  For example, if you toss a coin 1000 times, as well as being very bored and worn out, the percentage of times you see heads should not be far from 50%.  However, if you only toss the coin 10 times, you could quite easily see 7 heads out of 10.  We all know that the probability of seeing a head is 50%, but when we don't know the probability we need to choose a sample large enough to smooth out the influence of natural fluctuations in the results.
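To make this concrete, here is a minimal Python sketch of my own (not from the book) that estimates how often small and large samples of fair coin tosses produce a 'surprising' split of 70/30 or worse:

import random

def extreme_split_rate(num_flips, trials=10000):
    # Estimate how often num_flips tosses of a fair coin give a
    # proportion of heads at least as lopsided as 70/30.
    extreme = 0
    for _ in range(trials):
        heads = sum(random.random() < 0.5 for _ in range(num_flips))
        if abs(heads / num_flips - 0.5) >= 0.2:
            extreme += 1
    return extreme / trials

# Small samples give lopsided splits surprisingly often; large samples almost never do.
print("10 tosses  :", extreme_split_rate(10))    # roughly a third of trials
print("1000 tosses:", extreme_split_rate(1000))  # effectively zero

Running it shows that a 70/30 split (or worse) turns up in roughly a third of the 10-toss samples, but essentially never in the 1000-toss samples, even though the coin is perfectly fair in both cases.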

Answering an easier question
When you are trying to answer a difficult question, or one that you do not know much about, it can be natural to give an answer based on what you do know about something linked to the question.  For example, if you are asked 'How happy are you with your life?' you are likely to give an answer based on how you feel about things right now rather than thinking more objectively about your life as a whole.

Regression to the mean
Sometimes people misinterpret a correlation between two events as causal just because they occurred at the same time.  Daniel explains that repeated measurements of a particular quantity tend to form a bell-shaped curve over time, so there will be few results at the extremes and the majority will fall somewhere in between.  Because of this, extreme results tend to be followed by more average ones, a general regression to the mean.
This can explain why, more often than not, punishment of a bad result or low score is followed by an improvement, and rewarding success is followed by a worsening in performance.  This phenomenon can result in the mistaken belief that it was the punishment that caused the improvement or the reward that led to the deterioration.  This is a great shame, as many employers are not aware of regression to the mean and believe that their punishment of poor performance is always responsible for the subsequent improvement, so they have no reason to change this behaviour.
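A small simulation helps make this visible.  The sketch below is my own illustration rather than anything from the book: it assumes each performance score is a fixed skill level plus random luck, and shows that the worst scorers in one round improve in the next, and the best get worse, with no intervention at all:

import random

random.seed(1)

def score(skill):
    # A single performance is underlying skill plus random luck.
    return skill + random.gauss(0, 10)

skills = [random.gauss(50, 5) for _ in range(1000)]

round1 = sorted((score(s), s) for s in skills)
worst = round1[:100]    # the 100 lowest first-round scores
best = round1[-100:]    # the 100 highest first-round scores

def mean(values):
    return sum(values) / len(values)

# Second round: same skills, fresh luck - the extremes drift back
# towards the average without any punishment or reward.
print("worst 100: %.1f -> %.1f" % (mean([r for r, _ in worst]),
                                   mean([score(s) for _, s in worst])))
print("best 100 : %.1f -> %.1f" % (mean([r for r, _ in best]),
                                   mean([score(s) for _, s in best])))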



Since reading Thinking, Fast and Slow I've found myself thinking about real life situations where I can apply the principles described.  Even once you have read the book and know how our minds work, it still seems almost unavoidable that you will fall into many of the traps our brains appear hardwired to make us susceptible to.  But having that knowledge of how our minds work, and of the flaws that exist, can be empowering and should lead to more comprehensive testing.