Certainty, uncertainty, and what we choose to believe

It seems that people have been inventing stories to make sense of events for as long as there have been people.  Nassim Taleb, in Fooled by Randomness, comments on people’s tendency to create narratives to explain events.  In his view, people tend to invent deterministic narratives to explain events that were actually random, or outcomes that resulted from uncertainty.


Taleb believes that “You attribute your successes to skills, but your failures to randomness.”  Millennia ago, Julius Caesar said, “Men are nearly always willing to believe what they wish.”  In my opinion, these two quotes, separated by over 2,000 years, are two sides of the same proverbial coin.  Uncertainty, randomness, and chance are difficult concepts.  How much of my personal success came from the choices I made or the long hours I put into a project, and how much was chance?  Similar to Taleb, Jim Collins, in How the Mighty Fall, comments on people discounting the role of luck in their past successes.

In all of my jobs, dealing with uncertainty has been a key facet of developing solutions: tolerance stackups, uncertain operating environments, part-to-part variation, and so on.  Taleb is particularly disdainful of the tendency of MBAs to invent scenarios to explain outcomes and to discount the role of chance or uncertainty.  Personally, I struggle with how to navigate future decisions if we don’t understand, really understand, how things got to where they are.  Understanding that, IMO, requires at least a passing nod to the role that uncertainty, chance, or luck may have played in the sequence of events that led us here.
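To make the tolerance-stackup case concrete, here is a minimal Monte Carlo sketch of quantifying part-to-part variation in an assembly. Everything in it is hypothetical and invented for illustration: the three part dimensions, the tolerances, and the ±0.10 mm acceptance band are not from any project described here.

```python
import numpy as np

rng = np.random.default_rng(seed=42)
N = 100_000  # number of simulated assemblies

# Hypothetical three-part stack: nominal lengths (mm) and the
# standard deviation implied by each part's tolerance
# (assuming the tolerances represent +/-3 sigma).
nominals = np.array([25.0, 40.0, 10.0])
sigmas = np.array([0.02, 0.03, 0.01])

# Draw part dimensions and sum them into an assembly length.
parts = rng.normal(nominals, sigmas, size=(N, len(nominals)))
stack = parts.sum(axis=1)

# Compare the statistical stackup against the worst-case arithmetic sum.
worst_case = 3 * sigmas.sum()
print(f"nominal stack:               {nominals.sum():.3f} mm")
print(f"3-sigma statistical stackup: +/-{3 * stack.std():.3f} mm")
print(f"worst-case stackup:          +/-{worst_case:.3f} mm")
print(f"fraction beyond +/-0.10 mm:  {np.mean(np.abs(stack - nominals.sum()) > 0.10):.4%}")
```

The point is the comparison at the end: the statistical view of the stack is tighter than the worst-case arithmetic sum, but it comes with an explicit, quantified probability of landing outside whatever band you choose, rather than a story about why the parts will all fit.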

What does that mean?  How can we do that?  In engineering problems, I think one tool is pre-test predictions (or a similar pre-event analog).  Predicting what will happen in advance, and why, is very helpful for understanding what actually happened after the test or the event.  Invariably, something in the actual test or event differs from the prediction.  Why was it different?  Why did we model it the way we did before the test?  What does that mean?  Was the difference we saw in the event what we should expect “normal” to look like?  Or was the difference between the event and our prediction a measure of the noise, randomness, or uncertainty that we should expect from this system in the real world?  If the latter, how can we quantify that uncertainty and ensure that the system will behave the way we want it to across the range of conditions and configurations it will experience in the real world?  Another Taleb quote captures this approach: “To understand how something works, figure out how to break it.”  But humans being humans, pre-test predictions generally only work well if the prediction is made before the test or the event.  Otherwise, information from the actual event often leaks into the prediction.
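As a sketch of what comparing the miss to the noise we promised can look like in code, here is a hedged, self-contained example. The damped-sinusoid model, the claimed uncertainty, and the stand-in “measured” data are all hypothetical; in a real test the measured array would come from instrumentation, and the prediction and its claimed sigma would be committed before the test is run.

```python
import numpy as np

# Hypothetical pre-test prediction: a damped sinusoidal response,
# recorded (and its uncertainty claimed) BEFORE the test is run.
t = np.linspace(0.0, 2.0, 200)
predicted = np.exp(-1.5 * t) * np.cos(2 * np.pi * 4.0 * t)
claimed_sigma = 0.05  # uncertainty we claimed for the model, pre-test

# Stand-in for measured data; in practice this comes from the test.
# Here it has slightly different damping and frequency, plus noise.
rng = np.random.default_rng(seed=7)
measured = np.exp(-1.6 * t) * np.cos(2 * np.pi * 4.1 * t) \
    + rng.normal(0.0, 0.04, t.size)

# Compare the miss against the uncertainty we promised in advance.
residual = measured - predicted
rms = np.sqrt(np.mean(residual**2))
print(f"RMS prediction error:   {rms:.3f}")
print(f"claimed pre-test sigma: {claimed_sigma:.3f}")
if rms <= claimed_sigma:
    print("Miss is within the noise we said to expect.")
else:
    print("Miss exceeds our claimed uncertainty: revisit the model.")
```

The mechanics are trivial; the discipline is not. The value comes entirely from the prediction and its claimed uncertainty being locked in before any data exists to leak into them.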

From a personal perspective, I learned years ago how useful this lesson can be to other people.  I was working on a project where we needed to predict how a system would behave.  The behavior of interest was a dynamic response that was inherently difficult to predict.  The goal of the project was to predict the behavior and then “turn on” a system to cancel it.  At the time, I had a customer who insisted that we generate pre-test predictions and explain to him what we expected to happen and why.  I was curious about his mindset, so I asked, “Why do you want these predictions?”  His response: “I haven’t decided yet if I think you are good at this.”  That short exchange taught me a lot about the value of calling your shot and then making it.

In my experience, the same technique is useful in sales forecasting, machine learning, AI, and just about every human endeavor.  Have a plan.  Predict how things will unfold.  Compare the results you got to the results you expected.  Learn from the difference.