Similarity and Synergy, but Not the Same

One of my mentors often encouraged me and the others he was guiding to "broaden your beam."  In various ways, he was always encouraging us to learn more, widen our aperture, understand how our efforts fit in the bigger picture, and better understand the other parts of that picture.  At some point, this led me to look into Danny Meyer’s idea of “Always Be Connecting Dots.”  While that hasn’t made me a famous restaurateur or even a better cook at home, it has encouraged me to look for connections across thought models and issues.

Recently, I have been looking at potential connections between the Seven Elements of Good Flight Rationale, Donella Meadows’ leverage points, and Nassim Taleb’s concepts of fragility, robustness, and antifragility.  In the “Always Be Connecting Dots” category, looking for how these ideas connect or reinforce each other has been a useful thought experiment for me, and a continuing one as I use these different models to organize my thoughts when I look at different problems.  While these frameworks originate in different domains, they intersect meaningfully around how organizations understand risk, make decisions under uncertainty, and shape system behavior.

At their core, I believe that the Seven Elements of Good Flight Rationale are a tool for shifting from “selling readiness” to “seeking to share risk.”  Rather than presenting confidence narratives, the elements can be used to communicate uncertainty, assumptions, margins, and limits in a way that enables decision-makers to understand where risk truly resides.  From this perspective, the seven elements are a way to characterize and communicate risk from multiple dimensions and multiple perspectives.  When used with this intention in mind, I believe they are a helpful tool for enabling risk to be “shared” in a meaningful way.  The seven elements of flight rationale are:

  1. Solid technical understanding,
  2. Condition relative to experience base,
  3. Bounding case established,
  4. Self-limiting aspects,
  5. Margins understood,
  6. Assessment based on data, testing, and analysis, and
  7. Interactions with other elements/conditions addressed

For folks who are interested, I think the presentation at this link (https://sma.nasa.gov/docs/default-source/safety-messages/safetymessage-sevenelements-2015-03-05.pdf?sfvrsn=5a621bf8_6) provides a lot of good context on the seven elements.  For folks who are REALLY interested in the topic, I personally think Diane Vaughan’s book on Challenger, The Challenger Launch Decision, is a particularly good read.

In my opinion, looking at an issue through the lens of the seven elements hinges on the first element.  Understanding how a part works, what the issue is, and how the part and its behavior fit in the bigger picture is crucial to understanding the situation, characterizing risk, and being in a position to communicate in a way that enables good decisions.  I believe that solid technical understanding enables a good assessment along the other dimensions.


Donella Meadows’ leverage points (https://donellameadows.org/archives/leverage-points-places-to-intervene-in-a-system/) provide a complementary lens. Meadows emphasized that the most powerful interventions in complex systems are not parameter changes, but changes to information flows, rules, and, most importantly, paradigms.  In her framework, the leverage points, in order of increasing leverage or effectiveness, are:

  • Constants, parameters, numbers (subsidies, taxes, standards).
  • Regulating negative feedback loops.
  • Driving positive feedback loops.
  • Material flows and nodes of material intersection.
  • Information flows.
  • The rules of the system (incentives, punishments, constraints).
  • The distribution of power over the rules of the system.
  • The goals of the system.
  • The mindset or paradigm out of which the system — its goals, power structure, rules, its culture — arises.

When looked at from this perspective, I believe the Seven Elements channel some of the highest leverage points in Meadows’ hierarchy.  The elements seek to change the mindset and paradigm of risk decisions.  They came about from a desire to change the culture, the rules, and the information flows for critical risk decisions.  Instead of optimizing reports, metrics, or schedules, the seven elements encourage framing risk decisions in a way that enables better communication across an enterprise and shifts the culture of those discussions from “selling” to “sharing.”

From a very different origin, Nassim Taleb introduced the ideas of fragility, robustness, and antifragility in terms of systems and risks. In his view, fragile systems are harmed by variability and uncertainty, robust systems resist harm, and antifragile systems benefit from variability. Importantly, Taleb’s ideas apply differently to systems and to decisions. A system may be structurally fragile even if individual decisions are made conservatively, and conversely, decision processes may be fragile even when the underlying system is robust.

Several of the Seven Elements map naturally onto these concepts. The emphasis on solid technical understanding is foundational. This understanding enables robust decisions by ensuring that the uncertainty is understood, that the margins that protect against bad outcomes can be defined in a meaningful way, and that the mitigating behaviors and traits of the system, if they exist, can be identified.  However, technical understanding alone does not make a system antifragile. Knowing how a system behaves nominally, or even in defined but uncertain scenarios, is not the same as the system behaving how you want it to behave.  In my opinion, antifragility involves a different type of behavior than is typical for engineered systems.  In Taleb’s view, antifragile systems benefit from variability: they get stronger when there is variability or uncertainty.  In typical mechanical and electrical systems, that isn’t the usual paradigm.  The bedrock of many production systems is the idea of interchangeable parts, and the assembly process does not typically benefit from more variability in the parts.  But could there be opportunities to implement the idea of antifragility here too?

In some cases, antifragility could come from deliberate organizational design choices, such as decentralization, optionality, and bounded experimentation.  While these ideas introduce their own costs and risks, they do provide a way to explore introducing antifragility in areas that would typically seem to abhor variability.  In fact, I believe one element of the Toyota Production System inherently relies on bounded experimentation and antifragility.  Each build in the Toyota Production System is a controlled experiment, using the scientific method, aimed at continuous improvement.  If that experiment results in a better way to build a part, that way becomes the new baseline around which new improvements and new experiments are made.  This constant bounded experimentation uses small variability, bounded by the scientific process, to improve the system.
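That build-by-build loop can be sketched as a simple accept-if-better experiment. This is only a toy model, not Toyota's actual process; the function names, step size, and the stand-in "cycle time" objective are all invented for illustration:

```python
import random

def bounded_experiment(baseline, trial, better_than, n_builds):
    """Toy model of bounded experimentation: each 'build' tries a small,
    controlled variation on the current standard; a change is adopted only
    if it measurably improves on the baseline, which then becomes the new
    standard for subsequent experiments."""
    current = baseline
    for _ in range(n_builds):
        candidate = trial(current)           # small, bounded variation
        if better_than(candidate, current):  # the controlled experiment succeeds
            current = candidate              # adopt as the new standard work
    return current

# Hypothetical use: drive a stand-in "cycle time" metric toward its minimum
# through many small, bounded tweaks.
random.seed(7)
cycle_time = lambda setting: (setting - 3.0) ** 2 + 10.0
best = bounded_experiment(
    baseline=0.0,
    trial=lambda s: s + random.uniform(-0.25, 0.25),
    better_than=lambda a, b: cycle_time(a) < cycle_time(b),
    n_builds=500,
)
print(round(best, 2))  # settles near the optimum setting of 3.0
```

The key property is that variability is both bounded (small steps) and filtered (changes are kept only when the experiment shows improvement), so the system benefits from variation instead of being degraded by it.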

The element addressing margins and uncertainty directly relates to robustness in Taleb’s model and provides an opportunity to assess when robustness and antifragility may not both be goals to pursue. In contexts where the seven elements framework is likely to be used, understanding margins is fundamentally about understanding how uncertainty propagates through a system, how that uncertainty is potentially amplified by uncertainties in system response, and where the defined margins provide a buffer that prevents a bad day.  In a simple example, there is uncertainty in how many cars might be on a bridge at a given time.  Depending on where those cars are on the bridge, the structure is more or less sensitive to that uncertainty.  When the uncertainty and sensitivity are combined, the resulting load can be compared to the structural capability of the system.  The excess capability is margin that provides a buffer if the material properties aren’t as strong as expected, if there are more cars on the bridge than expected, or if the sensitivities were not as well understood as hoped.

Poorly understood margins can result in fragility.  In these cases, small errors or changes in assumptions can lead to surprises and bad outcomes.  In many of his books, Henry Petroski illustrates how failure to understand the interaction of margins, sensitivities, and uncertainties can lead to catastrophic failure.  Many of his examples show how errors in understanding how critical behaviors change as a design approach is scaled can lead to unexpected outcomes.  In terms of the seven elements, understanding how behaviors change with scaling can be viewed through the lens of solid technical understanding.  In the examples Petroski describes, many of the failures stem from weaknesses in this element, in how well the margins were understood, or in the condition relative to the experience base.
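The bridge example can be made concrete with a small Monte Carlo sketch that propagates uncertain demand (number of cars), uncertain sensitivity (load per car), and uncertain capability into a margin distribution. Every number below is invented purely for illustration:

```python
import random

def margin_distribution(n_samples=100_000, seed=1):
    """Monte Carlo sketch: sample an uncertain load and an uncertain
    capability, and look at the remaining buffer (margin = capability - load).
    The fraction of samples with negative margin is a crude fragility signal."""
    random.seed(seed)
    margins = []
    for _ in range(n_samples):
        n_cars = random.randint(10, 50)          # uncertain demand
        kn_per_car = random.gauss(20.0, 2.0)     # uncertain per-car load (kN),
                                                 # treated as common-cause
        load = n_cars * kn_per_car
        capability = random.gauss(1100.0, 60.0)  # uncertain structural capability (kN)
        margins.append(capability - load)
    frac_negative = sum(m < 0 for m in margins) / n_samples
    return margins, frac_negative

margins, p_negative = margin_distribution()
mean_margin = sum(margins) / len(margins)
print(f"mean margin ~ {mean_margin:.0f} kN; fraction with negative margin ~ {p_negative:.3f}")
```

Even with a healthy mean margin, a small tail of samples can end up with negative margin, which is exactly the point about fragility: shift one assumption (a weaker material, a few more cars) and probability mass moves across zero without the average looking alarming.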

The self-limiting element of the Seven Elements has a closer connection to antifragility. Asking whether a system self-limits under stress or improves under stress is, in effect, asking whether failure modes are bounded and whether the system has characteristics that limit how a behavior can evolve. Often, this means that the system degrades gracefully rather than catastrophically.  In truly antifragile systems, the system actually improves when it is stressed. This is not often the case for mechanical systems, but it is a key element of Toyota’s approach to continuously improving its production systems.  In their model, they “lower the water to see the rocks”: they reduce the buffer stocks and inventory that can make it hard to see bottlenecks and trouble areas in their production system.  By stressing the production management system, they make problems visible that their team can then resolve.  This approach of increasing stress to improve the system is, in my opinion, an example of applying the idea of antifragility.  While it is not obviously self-limiting, it is an example of potential antifragility.  By contrast, adaptive control systems that learn during operations can incorporate both self-limiting and antifragile behaviors.  These systems typically need some variance from expected behavior in order to learn and adapt, and their implementations can also reduce, or even limit, how variation is experienced by the integrated system.
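That last point can be illustrated with a hypothetical adaptive corrector (a sketch I made up, not any particular control library): it needs error to learn, since it adapts from the variation it observes, while a clamp bounds how far the learned correction can drift, which is the self-limiting part.

```python
def adaptive_limiter(signal, rate=0.1, clamp=1.0):
    """Toy adaptive corrector: learns a bias estimate from the variation it
    observes (it needs some error to adapt), while the clamp bounds how far
    the learned correction can move, keeping the adaptation self-limiting."""
    estimate = 0.0
    corrected = []
    for x in signal:
        error = x - estimate
        estimate += rate * error                      # learn from variability
        estimate = max(-clamp, min(clamp, estimate))  # bounded (self-limiting) adaptation
        corrected.append(x - estimate)                # downstream sees less variation
    return corrected

# Hypothetical use: a sensor with a constant 0.8 bias; the corrector learns
# the bias, and the residual passed downstream shrinks toward zero.
out = adaptive_limiter([0.8] * 100)
print(abs(out[-1]) < 0.01)
```

The variance the system experiences is what teaches it (a mild form of benefiting from variability), while the clamp guarantees the adaptation can never run away, bounding the failure mode.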

An important distinction that emerges when integrating these frameworks is the difference between system fragility and decision fragility. The Seven Elements include questions that touch both. Some elements are aimed at understanding the inherent fragility or robustness of the technical system itself. Others are aimed at improving the robustness of the decision-making process, ensuring that uncertainty, assumptions, and risks are visible and shared rather than hidden or normalized.

Viewed together, I believe these frameworks reinforce a common insight: the highest-leverage improvements in complex, safety-critical systems often come not from better optimization, but from better understanding, better information flow, and better alignment between technical reality and decision-making paradigms. The Seven Elements provide a practical mechanism for operating at these high leverage points, while Meadows and Taleb offer conceptual tools for understanding why those mechanisms matter and where their limits lie.  Returning to Danny Meyer’s idea of connecting dots, the real value of thinking about the connections between these ideas is that the process suggests new avenues for applying the good ideas in each framework, and that helps me find solutions and enable better decisions.  My bottom line is that these frameworks from different origins have similarities and synergies, but they are not the same.
