Operational post-mortems on projects or decisions are probably the most valuable way to learn and grow, but they are also horribly difficult. Once you’ve learned something, it’s hard to re-imagine yourself not knowing it. Still, if you’re serious about succeeding, the starting point of any new project or critical decision is: what happened last time we did this?
Typically, as you go through the exercise, you’ll come across three ways in which reality resists and the best-laid plans of mice and men go awry:
- I wish I’d known that: You don’t know what you don’t know, and a critical piece of information that would have made you choose completely differently was missing at the time of the decision.
- Tough luck: Sometimes the decision was absolutely the right one, but a grain of sand in the works at the wrong moment made the whole thing derail and take a completely different and unpredictable path.
- Diverging opinions: Not everyone understood the same thing, agreed on the same goals or plan, or had the same intentions to start with.
Operational Post-Mortems: Learning & Moving Forward
On the whole, once you’ve conducted enough post-mortem reviews, it’s easy to conclude that although the world keeps surprising us every day, the ways in which it does so are not that surprising. Sure, there are black-swan freak events, but there are also many banal, run-of-the-mill white swans we simply didn’t see.
The aim of conducting a post-mortem review is, of course, not to berate ourselves for having missed the obvious (self-compassion is the key to moving forward with confidence) but to learn to conduct forward-looking decision reviews in which we plan more sensibly. Without a crystal ball, and in a volatile, complex, and ambiguous world, how on earth can we do that?
What Leaders Get, Don’t Get, & Get Wrong
Looking at enough projects or choices that succeed or fail, one starts to discern that in multi-faceted situations there are three things to consider:
- What leaders get: These are the aspects that leaders understand in terms of what usually happens, what kind of special cases they might be confronted with, and how they should handle both the run-of-the-mill scenario and the rare one-offs.
- What leaders don’t get: Aspects of the situation that are simply outside the leader’s scope: issues to which leaders are completely tone-deaf because they don’t realize there is an issue there in the first place. Not taking digital transformation seriously because “it won’t touch us.” Not hiring or promoting more women, or reviewing the gender pay gap. Ignoring the stock market, and so on.
- What leaders get wrong: Mental models are very sticky. In some cases the information is there, shared amongst many in the organization, but the leadership team keeps operating on a theory that is now known to be wrong.
What It Means to “Get It”
“Getting it” is actually quite a complex bundle of mental operations, with four basic aspects. The first is having a correct theory about what is going on. We now know that any object thrown in the air follows a parabola, but for centuries people believed the object carried impetus in a straight line and then fell straight to the ground when the impetus was exhausted. The theories we hold in our minds about how things work matter deeply because they are the starting point of every line of reasoning, and they are often neither clear nor explicit; as such, they are the source of much misunderstanding and confusion.
Secondly, no theory is fully correct, so getting it also involves having a good feel for both the base rate (what usually, statistically, happens in this situation) and the special cases: the known oddities that can occur when the planets align unusually. In the Obama/Romney face-off, Texas, a conservative state, voted 56% for Romney, yet Obama won 86.4% of the vote in Starr County. Getting it means having some notion, when presented with specifics, of whether they represent the average or the bizarre.
Thirdly, getting it means knowing what to look at to understand how the mechanism behaves: what are the specific control points that reveal how the underlying mechanism is playing out? Economists are endlessly looking for the right indicators to reveal whether trends are up or down. A doctor’s first question is whether the patient has a fever. There are specific things to look at to orient a diagnosis towards the normal case or a special one.
Fourthly, getting it also involves knowing who else gets it and having continuing conversations to explore our understanding of this specific mechanism. Are the inputs right? Have the outputs changed? Do we understand how things come together and what explains what? Who can teach us what?
When You Don’t “Get It”
By contrast, not getting it means missing a hugely important factor, or framing the factors wrong, which is easy to do in conditions of uncertainty. Getting it wrong means repeatedly making a calculation that turns out to be wrong and clinging to what you think you know in the face of conflicting facts.
The problem, of course, is that the human mind is built for motivated thinking. We start with the conclusion we like or want, then line up the arguments to prove our case. Most of our reasoning is driven by our desired outcomes. Learning really involves thinking outside of yourself: ideas that are hard to process feel doubtful or outright wrong, and that is precisely when you’re learning.
In this view, a person’s performance is explained by what they get (where they master the topic), what they don’t get (the gaps in their knowledge and understanding), and what they get wrong (the repeated reasoning errors they make). Learning from our mistakes is therefore not about revisiting very specific decisions and berating ourselves with “I should have known better,” but about taking a wider view and asking: what am I not getting? What’s the bigger picture I’m missing here, and what would be the specific tell-tale signs?
Learning, in this sense, takes a different meaning from what we usually understand. It involves:
- Familiarizing yourself with a situation rather than replacing “wrong knowledge” with “right knowledge.” By plotting similar cases on a curve, we can get our minds to look at a situation in all sorts of different ways and understand it in greater depth.
- Challenging our causal models, which first means expressing them and debating them: do we understand the scope? Is there a factor we’re missing or misinterpreting? Are strange occurrences the sign of something new? In practice this means spending far more time on explicating our theories than on dissecting what we’ve done wrong.
- Learning to handle heated discussions with others when the unique things they know finally come to the surface. Deep thinking makes us tired, and true discussions are usually emotional – and that’s okay. The skill lies in learning to listen beyond the emotion.
The obvious starting point, of course, is where you get things wrong. If you get something wrong, chances are you’re missing something, and asking “why?” again and again will eventually lead you to discover what was hitherto unseen, though in plain sight.
As a matter of fact, we tend to distinguish discovery (figuring things out) from delivery (executing decisions), but the truth is that the information for discovery appears during delivery: reality resists as we do the work, and it is talking to us. The trick to learning is therefore keeping the debate about “what do I get, what don’t I get, what do I get wrong?” open while we’re in the delivery phase – precisely when our minds are saying, “just let me get this done and then I’ll think about it.”
To learn to think for yourself, first you have to learn to think against yourself.
Michael Ballé, PhD, is a best-selling author, speaker, and co-founder of the Institut Lean France. He holds a doctorate from the Sorbonne in Social Sciences and Knowledge Sciences and is a renowned lean executive coach. His latest book is The Lean Strategy (with Dan Jones, Jacques Chaize and Orry Fiume).