Polishing a Turd

When I was growing up, I remember my dad (a physicist) telling me that it was pointless, and in fact meaningless, to be overly precise with an estimate: if you’ve worked out a calculation using a number of assumptions, there’s no point in writing the answer to 3 decimal places! He would say that my ‘precise’ answer would be wrong because it is misleading. The reader needs to know about the possible range of answers – i.e. about the uncertainty – so that they don’t run off thinking that it is exact.

And so, with that introduction (and flashback to my school days) this post is about the regular comedy surrounding business cases, and detailed up-front planning…and what to do instead.

A seriously important concept to start with:

The planning fallacy

Human beings who desire something to be ‘a success’ (e.g. many an Executive/Senior Manager) tend to:

“make decisions based on delusional optimism rather than on a rational weighting of gains, losses, and probabilities. They overestimate benefits and underestimate costs. They spin scenarios of success while overlooking the potential for mistakes and miscalculations. As a result, they pursue initiatives that are unlikely to come in on budget or on time or deliver the expected returns – or even to be completed.” (Daniel Kahneman)

This isn’t calling such individuals ‘bad people’, or even to suggest that their actions are in some way deliberate – it is simply to call out a well-known human irrationality: the planning fallacy.

We all ‘suffer from’ it, and would be wise to understand and guard against it.

I’ve worked (or is that ‘wasted time’?) on many a ‘detailed business case’ over the years. There is an all-too-common pattern…

“Can you just tweak that figure till it looks good…”

Let’s say that someone in senior management (we’ll call her Theresa) wants to carry out a major organisational change that (the salesman said) will change the world as we know it!

Theresa needs permission (e.g. from the board) to make a rather large investment decision. The board want certainty as to what will happen if they sign the cheque – there’s the first problem [1].

Theresa looks around for someone who can write a great story, including convincing calculations…and finds YOU.

Yep, you are now the lucky ‘spreadsheet jockey’ on this proposed (ahem) ‘transformation programme’.

You gather all sorts of data, but mainly around the following:

  • a ‘base case’ (i.e. where we are now, and what might happen if we took the ‘do nothing’ option);
  • a list of ‘improvements’ that will (supposedly) occur if the board says ‘Yes’;
  • assumptions relating to the potential costs and benefits (including their size and how/when the cash will flow); and
  • some ‘financial extras’ used to wrap up the above (interest rates, currency rates, taxes, the cost of capital…and so on).

You create an initial broad-brush model and then, after gaining feedback from ‘key’ people, you work through a number of drafts – adding in new features and much detail that they insist is essential.

And voilà! We have a beautifully crafted financial model that has a box at the end with ‘the answer’ in it [2].
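(For the curious, here’s a minimal sketch, in Python, of what that ‘answer’ box typically boils down to – a net present value built from assumed cash flows and an assumed cost of capital, as footnote [2] describes. Every name and number below is invented for illustration.)

```python
# A minimal sketch of a business case's 'answer' cell: a net present
# value (NPV) built from assumed cash flows, discounted at an assumed
# cost of capital (WACC). All numbers are invented for illustration.

def npv(rate, cashflows):
    """Discount yearly cash flows (year 0 first) back to today's dollars."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

wacc = 0.10                       # assumed cost of capital from 'finance'
upfront_cost = -2_000_000         # assumed year-0 investment
yearly_benefits = [600_000] * 5   # benefits assumed for years 1 to 5

answer = npv(wacc, [upfront_cost] + yearly_benefits)
print(f"'The answer': ${answer:,.0f}")   # about $274,000 on these assumptions
```

Notice that every one of the ‘tweaks’ coming up is just a nudge to one of those inputs…and the headline number obligingly moves.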

You show the model to Theresa.

Wow, she’s impressed with the work you’ve put in (over many weeks) and how sophisticated the model is…but she doesn’t like this initial answer. She’s disappointed – it’s not what she was looking for.

You go through all of the assumptions together. Theresa has some suggestions:

  • “I reckon the ‘base case’ comparison will be worse than that…let’s tweak it a bit”
  • “Our turnover should go up by more than that…let’s tweak it a bit”
  • “Nah, there won’t be such a negative productivity hit during implementation – the ‘learning curve’ will be much steeper!…let’s tweak it a bit”
  • “We’ll save more money than that…and avoid paying that…let’s tweak it a bit”
  • “Those savings should kick in much earlier than that…let’s tweak it a bit”
  • “We’ll be able to delay those costs a bit more than that…let’s tweak it a bit”

…and, one of my favourites:

“Mmm, the ‘time value of money’ [3] makes those upfront costs large compared to the benefits coming later…why don’t we extend the model out for another 5 years?”

And, because you designed a nice flexible model, all of the above ‘suggestions’ are relatively easy to make and flow through to the magic ‘answer’ cell.

“now THAT looks more healthy! The board is going to LOVE this. Gosh, this is going to be such a success”.

Some reflections

Some (and perhaps all) of the tweaks might have logic to them…but for every assumption (supposedly) being tightened:

  • one, or many, of the basic assumptions might be spectacularly wrong;
  • plenty of the assumptions are being (conveniently [4]) ignored for tweaking…and could equally be ‘tightened’ in the other direction (i.e. making the business case look far worse); and
  • there are many assumptions that are completely missing…because you simply don’t know about them….yet…or don’t want to know about them.

With any and every tweak made, nothing has actually changed: Nothing has been learned about what can and will actually occur. You have been ‘polishing a turd’…but, sadly, that’s not how those around you see it. Your model presents a highly convincing and desirable story.

Going back, your first high-level draft model was probably more useful! It left many ‘as-yet-unknowns’, it contained ranges of outcomes, it provided food-for-thought rather than delusional certainty.
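To make ‘ranges of outcomes’ concrete, here’s a minimal sketch (reusing the hypothetical model above, with the assumptions expressed as ranges rather than point values – the ranges themselves are, of course, invented):

```python
# A sketch of reporting a range of outcomes instead of a single answer:
# sample the uncertain assumptions many times and present the spread.
# All ranges below are invented for illustration.
import random

def npv(rate, cashflows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def one_scenario():
    cost = -random.uniform(1_800_000, 3_000_000)   # uncertain upfront cost
    benefit = random.uniform(300_000, 800_000)     # uncertain yearly benefit
    years = random.randint(3, 6)                   # uncertain benefit duration
    return npv(0.10, [cost] + [benefit] * years)

results = sorted(one_scenario() for _ in range(10_000))
low, median, high = results[500], results[5_000], results[9_500]
print(f"NPV, 5th to 95th percentile: ${low:,.0f} to ${high:,.0f}")
print(f"Median: ${median:,.0f}")
```

A board shown a wide range straddling zero asks very different (and far better) questions than one shown a single healthy-looking number.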

We should reflect that “adding more upfront planning…tends to make the eventual outcome worse, not better” (Lean Enterprise). The more detailed you get, the more reliant you become on your assumptions.

The repercussions

Theresa gains approval from the board for her grand plan and now cascades the (ahem) ‘realisation of benefits’ down to her direct reports…who protest that the desired outcomes are optimistic at best, and sheer madness at worst (though they hold their tongues on this last bit).

Some of the assumptions have already proven to be incorrect – as should be expected – but it’s too late: the board approved it.

The plan is baked into cascaded KPIs…and everyone retreats into their silos, to force their part through regardless of the harm being caused.

But here’s the thing:

“Whether the project ‘succeeds’ according to [the original plan] is irrelevant and insignificant when compared to whether we actually created value for customers and for our organisations.” (Lean Enterprise)

The wider point…and what to do instead

It’s not just financial models within business cases – it is ‘detailed up-front’ planning in general: the idea that we should create a highly detailed plan before making a decision (usually by hierarchical committee) as to whether to proceed on a major investment.

The Lean Startup movement, led by Eric Ries, makes a great case for a totally different way of thinking:

  • assumptions aren’t facts! (it seems daft to have to write that…but the existence of the planning fallacy requires me to do so);
  • we should test big assumptions as quickly as possible;
  • such testing can be done through small scale experimentation (which doesn’t require huge investment) and subsequent (open-minded) reflection;
  • we will learn important things…which we did not (and probably could not) predict through detailed up-front planning. This is a seriously good thing – we can save much time, money and pain, and create real customer value;
  • we may (and often will) find a huge flaw in our original thinking…which will enable us to ‘pivot’ [5] to some new hypothesis, and re-orientate us towards our customer purpose.

The big idea to get across is what has been termed ‘validated learning’.

Learning comes from actually trying things out on, and gaining direct feedback from, the end customers (or patients, citizens, employees etc.), rather than relying on our opinions about them.

Validated is about demonstrating what the customer (or patient, citizen, employee etc.) actually does (or doesn’t do), not what they say they would do when asked (i.e. from external market research or internal survey). It is to observe and measure real behaviours, rather than analyse responses to hypothetical questions.

…and to do the above rapidly by experimenting with ‘minimum viable products’ (MVPs).

Delay (whilst writing a beautiful document, getting it approved, and then building a seemingly golden ‘solution’) prevents the necessary feedback from getting through.

Caveat: Many an organisation has read ‘The Lean Startup’ book (or employed a consultant who has) and is using the above logic merely at the start of their legacy ‘investment decision’ process…but, through grafting new labels (such as Lean) onto old methods and retaining central hierarchical approval committees, their process remains essentially the same.

You don’t do validated learning merely at the start of an investment process – you re-imagine what ‘making investments’ means!

“It’s moving leaders from playing Caesar with their thumbs up and down on every idea to – instead – putting in the culture and the systems so that teams can move and innovate at the speed of the experimentation system.”

“The focus of each team is iterating with customers as rapidly as possible, running experiments, and then using validated learning to make real-time investment decisions about what to work on.” (Eric Ries)

Notice that it is the team that is making the investment decisions as they go along. They are not deferring to some higher body for ‘permission’. This is made possible when:

  • the purpose of the team is clear and meaningful (i.e. based around a service or value stream);
  • they have meaningful capability measures to work with (i.e. truly knowing how they are doing against their purpose); and
  • all extrinsic motivators have been removed…so that they can focus, collaborate and gain a real sense of worth in their collective work.

Nothing new here

You might read the above and shout out:

  • “but this is just the scientific method”; or
  • “it’s yet another re-writing of the ‘Plan-Do-Study-Act’ [6] way of working”

…and you’d be right.

Eric Ries’ thinking came directly from his study of Deming, Toyota and the like, and from applying that learning to his world of entrepreneurship – to become effective when investing time and money.

His book, ‘The Lean Startup’, and the ‘validated learning’ concept are an excellent addition to the existing body of work on experimentation towards purpose.

Footnotes

1. We should never present a seemingly certain picture to a board (or merely hide the caveats in footnotes)…and we should coach them to be suspicious if they see one.

2. For the financially aware: this will likely be a net present value (NPV) figure using a cost of capital (WACC) provided by the finance department, or some financial governance body.

3. The ‘time value of money’ reflects the fact that $1 now is worth more to you than $1 in a year’s time.
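To put a number on it: at, say, a 10% discount rate, $1 due in a year is worth about $0.91 today ($1/1.1), and $1 due in ten years’ time only about $0.39 – yet extending the model by another 5 years of assumed benefits still pumps up the headline answer.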

4. Conveniently doesn’t mean intentionally or maliciously – it can just be that lovely planning fallacy at work.

5. Pivot: This word has become trendy in many a management conversation but I think that its original (i.e. intended) meaning is excellent (as used by Eric Ries, and his mentor Steve Blank).

Eric Ries defines a pivot as “a structured course correction designed to test a new fundamental hypothesis….”

6. PDSA: Popularised by Deming, who learned it from his mentor, Walter Shewhart. A method of iterative experimentation towards your purpose, where the path is discovered as you go, rather than planned in detail at the start. Note that, whilst the first step is ‘Plan’, this DOESN’T mean detailed up-front planning of an answer – it simply means properly planning the next experiment (e.g. what you are going to do, how you are going to conduct it, and how you are going to meaningfully measure it).

 


I knew it wasn’t that simple!

I’ve recently had that experience: the one where you’ve always thought something was peculiarly wrong about conventional wisdom but couldn’t quite put your finger on why…then you happen to read a book and “aha! That explains it.”

Now, rather than pointing you to the research…or (worse) attempting to rewrite (by which I mean ‘butcher’) it here, I’m going to try and provide a concise post around my ‘aha’ moment:

The topic in question:

I’ve written before about the harm that contingent rewards do, and this has mainly been from the point of view that they distract us and distort our actions (and these remain massive criticisms) …but I’ve always thought that there is more to be said about the emotions people experience when it comes to the annual bonus.

If you’ve worked in an organisation that uses financial incentives, cast your mind back to the joys of bonus time. How much of the following rings true?

  • You’ve got a bonus but you aren’t exactly ecstatic about it…there are plenty of negative emotions going on
  • You are aware of what you didn’t get
  • You compare with what you got in previous years
  • You were really only ‘arguing over’ a couple of hundred dollars in your performance review…but you still fought hard for it (or, if you are introverted, perhaps you did so in your head).
  • …no doubt you can add your own thoughts for the reactions and emotions you experience (or, if you manage people, have to deal with).

This sounds crazy – you should be happy, shouldn’t you? You got a bonus! So what’s going on?

Utility Theory [1]:

Classical economists have based their thinking around the ‘rational person’ for hundreds of years. In particular they assume that, given a set of options, we make logical comparisons and then make rational choices.

Given that two options may not be directly comparable (say, a carrot with a banana), economists talk about utility as a measure of worth. At its simplest, the measure of utility is your willingness to pay different amounts for different goods (and services).

Further, when there is uncertainty about an outcome, utility theory assumes that we rationally use probabilities in our thinking, i.e. if there’s a 75% chance of earning $1,000 then the (expected) utility of this option is $750.

…but it turns out that we aren’t that rational!

Example 1: You are offered the choice between:

  1. a sure gain of $750; or
  2. a 75% chance of winning $1,000 and a 25% chance of winning nothing

What would you do? … please consider and take a few seconds to decide.

Utility Theory expects these choices to be identical (both valued at $750) but, to the vast majority of us, they are not.

It turns out that we overwhelmingly prefer the certainty of the sure gain of $750 (we are risk-averse). This probably feels right to you but what’s going on?

‘Behavioural Economists’ to the rescue…

Psychologists consider what people really do (and why), as opposed to what a rational (well reasoned) analysis says they should do.

Some of these psychologists started looking at ‘real’ decisions made by people that contradict what a rational economist would expect. This gave birth to the rather trendy (and I would say fascinating) school of ‘behavioural economics’.

Prospect Theory [2]:

Two of the early giants of behavioural economics, Amos Tversky and Daniel Kahneman, spent years together considering scenarios where utility theory broke down (i.e. where real people took irrational decisions) and how this can be explained.

They arrived at a new ‘Prospect Theory’ that far better explains [3] why we behave as we do. So, now that you have this background knowledge, let’s consider the annual bonus:

That wonderful (?) annual bonus:

So, when ‘Management’ believe that they are offering you a healthy bonus, they likely think you are conforming to Utility Theory. Indeed, much of their (verbal and written) communications around such a bonus suggest this is so.

But such thinking is far too simplistic, and flawed. Tversky and Kahneman identified a number of cognitive features at play. I’ll explain two key features below:

Feature 1 – Reference point:

Do you think that a financial trader should be ecstatic with her mega $2 million bonus?

What now if I told you that this is only half of what she has become accustomed to?

I expect that you would be ‘over the moon’ with such a bonus but you also understand that the trader may not be (however much this might annoy or even disgust you)…you have different reference points.

Why the difference? This is because we don’t just consider the quantity of an outcome; we use a reference point as a comparator, though we may not be conscious of doing so.

Here’s another way of thinking about the importance of reference points: if I told you that two people had been at the (horse) races today and they both came out with $50, you don’t have enough information to predict how they feel…you need a reference point: how much did they start with?

Our reference points have a huge effect on our thinking and this cognitive feature is relevant to the annual bonus scheme:

Scenario: Let’s take Bob who is on a $70,000 salary and is being ‘motivated’ (!) by the chance to earn a 10% bonus.

‘Management’ would like to believe that Bob is thinking about his chance to gain up to $7,000; that he will consider this to be a sizeable amount; and that this is a positive experience for him (translation: it is acting as an ‘effective bribe’ to get more out of him):

i) If this is the first year of the annual bonus scheme, then Bob’s reference point is $70,000 and he is happy when he is awarded, say, a 75% ‘performance’ score equating to a bonus of $5,250 (i.e. 75% of $7,000);

ii) If this is, say, the 3rd year of the scheme and Bob has averaged a 75% performance score in both of his previous years, then his reference point has become $75,250. This is what he has come to expect, and he will compare any bonus he receives this year against this reference point…which, you can see, means that the bonus has lost a great deal of value to him. Indeed, it is very likely to be a disappointment.

Annual bonus schemes look great at the start, but cause problems after only a few iterations. There is nothing procedurally that can be done to resolve this – the reference point has moved away from a person’s base salary and to what they now expect.
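Here’s a toy sketch of that drift. The adaptation rule (each year’s total pay becomes the reference point against which the next year is judged) is my simplifying assumption for illustration, not a precise psychological model:

```python
# A toy model of reference-point drift under an annual bonus scheme.
# Assumption (for illustration only): each year's total pay becomes the
# reference point against which the next year's pay is judged.

salary, max_bonus = 70_000, 7_000
reference = salary                    # year 1: the reference is base salary

for year, score in enumerate([0.75, 0.75, 0.75], start=1):
    total = salary + score * max_bonus
    felt = total - reference          # gain (+) or loss (-) vs expectation
    print(f"Year {year}: paid ${total:,.0f}, felt like {felt:+,.0f}")
    reference = total                 # expectation adapts for next year

# Year 1 feels like +$5,250; the identical award in years 2 and 3 feels
# like nothing at all, and anything below it will register as a loss.
```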

Kahneman notes that “For financial outcomes, the usual reference point is the status quo, but it can also be the outcome that you expect, or perhaps the outcome to which you feel entitled, for example, the raise or bonus that your colleagues receive. Outcomes that are better than the reference points are gains. Below the reference point they are losses.” …which leads nicely on to:

Feature 2 – Loss Aversion:

So far we have only looked at potentially gaining $$$, but what happens with losses?

Example 2: You are offered the choice between:

  1. a sure loss of $750; or
  2. a 75% chance of losing $1,000 and a 25% chance of losing nothing

What would you do? …please consider and take a few seconds to decide.

This looks very similar to example 1 – it’s the same except that we are looking at a loss rather than a gain…but it turns out that people treat gains and losses differently – we are loss averse.

People really don’t like losing and, as a result, are willing to take a gamble (we become risk seeking). The majority of people would go for the 2nd option in example 2 and take the chance. They might get away with losing nothing and they find this attractive as compared to the unpleasant certainty of losing $750.

Note: this is the logic captured by the Prospect Theory value curve – we feel far more pain from a loss than joy from an equally sized gain.

Tversky and Kahneman’s experimental research on people’s real choices identified that our loss-aversion coefficient* is around 2 in many contexts, and much higher in others (e.g. it can be as high as 50 when it comes to decisions about our health).

* This means that I feel as much emotional pain from a $375 loss as I do joy from a $750 gain. They are asymmetric.
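For those who like a formula: here’s a sketch of the Prospect Theory value function that produces this asymmetry. Tversky and Kahneman’s fitted parameters are roughly a loss-aversion coefficient of 2.25 with a diminishing-sensitivity exponent of about 0.88; below I use a coefficient of 2 and a linear version, purely to mirror the $375/$750 example:

```python
# A sketch of the Prospect Theory value function: losses are weighted
# more heavily than gains (loss aversion, lam), and sensitivity can
# diminish with size (alpha). Here lam = 2 and alpha = 1 (linear),
# purely to mirror the $375 loss vs $750 gain example in the text.

def value(x, lam=2.0, alpha=1.0):
    """Subjective value of a gain (x > 0) or loss (x < 0) vs the reference point."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** alpha)

print(value(750))    #  750.0 -> the joy of a $750 gain
print(value(-375))   # -750.0 -> equal-and-opposite pain from a $375 loss
```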

Putting the two together:

So let’s assume that it’s year 4 of the bonus scheme’s operation and Bob has become used to the $75,250 ‘salary plus bonus’ reference point. Let’s now compare and contrast two different outcomes for this performance year:

Outcome 1: Bob’s score delivers an annualised $76,000 pay packet – what does he think?

Feeling: “Yeah, big deal, I got another $750…nothing much to that.”

Note: He isn’t thinking too much about the $6,000, i.e. the absolute size of the bonus.

Outcome 2: Bob’s score delivers an annualised $74,500 pay packet – what does he think?

Feelings: “That’s terrible, I’ve lost $750….I’m really not happy about that!!!”

Note: Worse (from management’s perspective) than not thinking about the $4,500 bonus, Bob actually thinks of it in terms of a loss, and that loss really hurts him (far more than its apparent size would suggest).

What’s the point?

Our emotions about our annual bonus award are dealing with far more than the simple bonus number printed on the payslip: There is sooo much more going on in our minds:

  • Reference points: can reduce a seemingly large bonus figure down to a virtual irrelevance; and
  • Loss aversion: can turn us to thinking in terms of losses, and to feeling emotional hurt from them far beyond what would appear rational.

I always knew it wasn’t as simple as looking at that bonus figure on my payslip – there are lots more emotions going on!

I am reminded of Alfie Kohn’s insight that “within every carrot, there is a hidden stick”.

Annual bonus schemes appear great in their first year (if you subscribe to Theory X thinking)…and then become a major burden.

Some organisations think the answer is to ‘reboot’ them every now and then. I don’t.

Notes

  1. This introduction to Utility Theory is necessarily ‘overly simplistic’ but it makes the necessary point about rationality vs. reality.
  2. You can read an account of this in Kahneman’s mind-bending book ‘Thinking, Fast and Slow’. Their paper on the theory was first published in 1979 and is included as an appendix to the book.
  3. Kahneman recognised that Prospect Theory isn’t perfect. It has since been revised as ‘cumulative prospect theory’.

“I hear what you say…but I don’t want to change my world”

So, for this post, I’m going to use a ‘true story’ as explained by Daniel Kahneman in his mind-bending book ‘Thinking, Fast and Slow’.

(Kahneman is a Nobel prize-winning giant in the field of human psychology and I will be adding him to my group of giants soon.)

Some years ago, Kahneman was invited to speak at an investment firm whose advisors provide financial advice to wealthy clients. I can almost hear them shouting “buy, sell…buy” across the trading floor.

Pre-meeting preparation

Kahneman asked the firm’s executives for some data so that he could prepare for the talk he was due to give.

He was provided with a spreadsheet containing the investment outcomes of 25 of the firm’s advisors, for each of 8 consecutive years. No names, just anonymous identifiers.

The firm used the investment outcome success of each advisor as the main determinant of their (potentially large) year-end bonus.

…so what was Kahneman interested in understanding about this data set? And what did he do to interrogate it?

His thinking: That investment outcomes will be a combination of skill (on the part of the advisor) and luck [1].

His question: How much of the outcome in this ‘providing expert investment advice’ work was down to skill and how much to luck?

How to determine the answer: Kahneman was interested in understanding whether any apparent differences in skill were persistent i.e. did the same adviser consistently achieve better (or worse) returns year on year?

To work this out he calculated the correlation coefficients [2] between the advisor rankings in each pair of years: year 1 with year 2, year 1 with year 3…all the way along to year 7 with year 8. This gave him 28 correlation coefficients from which to calculate the average (a sketch of this calculation follows the list below).

  • An average score close to 1 would mean that it was a very highly skilled job and the best (and worst) advisors were easy to identify – in this scenario, luck plays virtually no part;
  • a score midway between 0 and 1 would mean that skill mattered a bit but that luck also had a huge part to play; and
  • anything nearing 0 would mean that it was really just about luck.
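Here is that sketch, with random numbers standing in for the firm’s spreadsheet (which, tellingly, is roughly how the real data behaved):

```python
# A sketch of Kahneman's analysis: rank the advisors within each year,
# correlate the rankings between every pair of years (28 pairs for
# 8 years), and average. Random data stands in for the firm's spreadsheet.
from itertools import combinations
import numpy as np

rng = np.random.default_rng(0)
outcomes = rng.normal(size=(25, 8))               # 25 advisors x 8 years
ranks = outcomes.argsort(axis=0).argsort(axis=0)  # advisor ranking per year

pairs = list(combinations(range(8), 2))           # (year 0, year 1)...28 in all
coefficients = [
    np.corrcoef(ranks[:, a], ranks[:, b])[0, 1] for a, b in pairs
]
print(f"{len(coefficients)} coefficients, average = {np.mean(coefficients):.3f}")
# With outcomes that are pure luck, the average hovers around zero,
# which is essentially what Kahneman found in the real data (0.01).
```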

So what were his findings and what does this mean?

Drum roll…he was surprised to find that the score was…0.01 – or, put more simply, ‘zero’.

In Kahneman’s words “The consistent correlations that would indicate differences in skill were not found. The result resembled what you would expect from a dice-rolling contest, not a game of skill.”

Clarification: Just in case you are thinking “hey, that’s just one set of data. He got lucky!”…Kahneman knew roughly what he was going to uncover because this ‘person or system’ type analysis has been done many times by many people. He knew the theory and the evidence….he expected it to be low but he didn’t expect it to be soooo close to zero!

So what happened next?

Well, he ended up having dinner with the investment firm’s executives the night before he was due to give his talk.

He explained the question he had asked of the data they had provided to him and asked them to guess the year-to-year correlation in the rankings of their advisers.

The executives (being intelligent and self-protecting people) thought they knew what was coming and calmly accepted that performance certainly fluctuates and, yes, there was an element of luck…however, none of them expected the average correlation to be zero.

Kahneman gave them the clear message that “the firm was rewarding luck as if it were skill”.

This should have been a major shock, but it created no great stir…they calmly went on with dinner as if nothing of note had been said.

Kahneman goes on to write about ‘the illusion of skill’: “Facts that challenge such basic assumptions – and thereby threaten people’s livelihood and self-esteem – are simply not absorbed….people consistently ignore statistical studies of performance when it clashes with their personal impressions from experience.”

Why write this post?

There are two key points within the case above:

The first is that Kahneman’s story is an (extreme) example of the system vs. the individual. Yes, some people may be outstanding but a great deal of ‘performance’ can only be ascribed to the system in which they operate. (You might perhaps take note that investment advice is little more than a game of chance.)

But perhaps the second (and main) point is clearly expressed in the phrase “I don’t want to change my world”. The executives may very well accept ‘the maths’ and the conclusion…but that doesn’t mean they are about to change anything.

Consider that executives are probably also on a (larger) bonus structure which will have a similarly dubious rationale. We can expect little change unless and until those ‘at the top’ of an organisation understand, agree and want it.

People (such as me) can bang on about performance reviews and contingent rewards, providing ever increasing evidence and logic…yet (and this is an open question) what will cause a change?

1 This is another way of stating Deming’s equation ‘x + (yx) = result’, i.e. the result is partly down to the person (x) and a large part down to the system in which they operate (which is simply luck from the person’s perspective).

2 A correlation coefficient (usually denoted with the letter R) is a statistical measure of the strength of the relationship between two sets of data.


R = 1 means that the two sets of data are a perfect positive fit.

R = -1 indicates a perfect negative fit.

R = 0 indicates that there is no relationship, i.e. any relationship is purely random.

A correlation greater than 0.8 is generally described as strong, whereas a correlation less than 0.5 is generally described as weak.