A Look at the Human Side of Defence Forecasting
By Paul Salmon FCILT, FSCM
Introduction: The Model vs. the Human
In Defence logistics, models promise order in a world of uncertainty. They take data on usage, reliability, and failure rates, then generate forecasts of what spares will be needed, where, and when. In theory, this should be the end of the story. But in practice, many Defence logisticians remain sceptical. They hedge, over-order, or override the outputs of models designed to optimise cost and availability.
This tension between models and humans is not new. During the Second World War, analysts studying bullet holes on returning aircraft concluded armour should be added to the places where planes were being hit most frequently. But mathematician Abraham Wald pointed out a flaw: they were only analysing planes that survived. The ones shot down — and therefore absent from the dataset — had been hit in other places. Armour was needed where the bullet holes were not. It was a lesson in “survivorship bias” and a classic case of humans trusting instinct over analysis.
The same dynamic plays out in Defence spares forecasting today. However robust the mathematics, however elegant the optimisation, humans are reluctant to put blind trust in models — and often, for understandable reasons.
So why don’t we trust the spares model? The answer lies less in the equations and more in the people who must use them.
What Spares Models Are Meant to Do
Spares models have long been part of Defence supply chains. From the early days of reliability-centred maintenance to modern probabilistic optimisation, their goal is simple: balance readiness with efficiency. They aim to answer questions like:
- How many parts do we need in stock to achieve a target availability rate?
- Where should spares be positioned for fastest response?
- What is the most cost-effective way to provision for uncertainty?
Mathematically, many of these models are robust. They incorporate Mean Time Between Failure (MTBF), usage cycles, demand history, lead times, and failure distributions. With large enough datasets, they produce forecasts that minimise the risk of shortage while reducing overstocking and waste.
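To give a flavour of that arithmetic, here is a minimal sketch in Python, assuming Poisson-distributed failures and purely illustrative figures (the function, its parameters, and the numbers are my own illustration, not any in-service tool). It finds the smallest stock level that covers demand over a resupply lead time with a target probability:

```python
import math

def poisson_cdf(k: int, mean: float) -> float:
    """P(demand <= k) when demand is Poisson-distributed with the given mean."""
    return sum(math.exp(-mean) * mean**i / math.factorial(i) for i in range(k + 1))

def stock_for_availability(mtbf_hours: float, annual_flying_hours: float,
                           lead_time_years: float, target: float) -> int:
    """Smallest stock level whose chance of covering lead-time demand meets the target."""
    # Expected failures (demand) during one resupply lead time.
    mean_demand = (annual_flying_hours / mtbf_hours) * lead_time_years
    stock = 0
    while poisson_cdf(stock, mean_demand) < target:
        stock += 1
    return stock

# Illustrative figures only: 500 h MTBF, 10,000 fleet flying hours a year,
# a 90-day (0.25-year) resupply lead time, 95% target availability.
print(stock_for_availability(500, 10_000, 0.25, 0.95))  # -> 9
```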
Yet Defence logisticians often push back. The model says hold 20; the user orders 40. The model predicts a low risk of failure, but commanders still airlift a surge of parts “just in case.” Why?
Six Human Reasons We Distrust Spares Models
1. The Black Box Effect
Many spares models are opaque. They use stochastic techniques, Monte Carlo simulations, or AI-driven optimisation. To the average user, inputs disappear into a “black box” and outputs emerge without visibility of the logic inside. Humans tend not to trust what they cannot explain — particularly when they will be held accountable for the outcome. “Because the model said so” is not a defence when a platform is grounded on operations.
2. Experience vs. Algorithms
Logisticians and maintainers often have decades of experience. They remember missions where certain parts failed more often than the records suggested, or where harsh operational conditions drove unexpected demand. When model outputs conflict with personal or collective memory, people default to lived experience. Anecdote may trump statistics, even when the data is more representative.
3. Data Quality Anxiety
Even the best model is only as good as its inputs. Defence personnel know all too well that spares codification is incomplete, maintenance records are inconsistent, and demand history is patchy. “Garbage in, garbage out” is a familiar refrain. If the foundation is shaky, why trust the castle built on top?
4. Perception of Risk
Behavioural science shows humans are “loss averse.” The pain of being blamed for a shortage outweighs the gain of saving cost or space. A model may say the chance of a shortage is only 5%, but the individual making the decision may not want to risk being the one who runs out in theatre. So they overstock, overriding the model’s “optimisation.”
5. Ownership and Involvement
Spares models are often developed by external contractors or data scientists. If the logisticians who must use the model were not involved in its design, they don’t feel ownership. Without ownership, there is little buy-in, and little trust.
6. Communication of Uncertainty
Most models express outputs in ranges and probabilities. “There’s a 75% chance we’ll meet demand with this stock.” Humans prefer certainty. What the planner hears is “25% chance of failure” — and they will act conservatively. The nuance of probability is lost in translation.
Historical Lessons: When Humans and Models Collide
WWII Survivorship Bias
The aircraft bullet hole story is more than a cautionary tale. It shows that humans tend to see the data they want to see. Without Wald’s statistical correction, resources would have been wasted armouring the wrong parts of aircraft. In Defence spares forecasting, the same principle applies: we often overweight what we can see (recent shortages, vivid anecdotes) and underweight what we cannot (hidden failure modes, silent data gaps).
The Falklands War
During the 1982 Falklands campaign, spares shortages were a recurring theme. Forecasts had been made using models, but the realities of extended supply lines, harsh climate, and untested platforms made demand deviate dramatically from predictions. Commanders trusted their gut, demanding more parts than the models suggested. This fuelled the perception that models are disconnected from the realities of war.
NATO Cold War Spares Pooling
Attempts to create centralised NATO spares pools often ran into trust barriers. Nations feared that relying on a model-driven pooled stockpile would leave them exposed. Despite mathematical efficiency, countries hoarded national reserves. The logic of the model clashed with the politics of sovereignty and the psychology of risk.
COVID-19 PPE
In healthcare, stockpile models for PPE were in place before COVID-19. But when the crisis hit, governments across the world ignored modelled stock levels and scrambled to procure more. The perception of risk — and the visibility of shortages — overwhelmed faith in the models. The same dynamics apply in Defence.
The Cost of Distrust
Distrust in models is not harmless. It leads to:
- Overstocking: Expensive surpluses tie up funds and storage capacity.
- Wastage: Items expire unused, particularly medical spares.
- Inefficiency: Resources are allocated based on hunch rather than data.
- Missed opportunities: Decision-makers spend time arguing with models instead of refining them.
But blind faith in models is equally dangerous. Over-reliance can obscure data gaps, ignore operational realities, and lull decision-makers into a false sense of certainty.
The challenge is not to eliminate human judgment or model logic, but to integrate them.
Bridging the Gap: Building Trust in Spares Models
1. Transparency
Models must be explainable. Showing assumptions, sensitivity analyses, and trade-offs makes outputs credible. “Open box” modelling builds confidence.
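As one illustration of “open box” modelling (a sketch under the same illustrative assumptions as the earlier example, not any fielded tool), even a Monte Carlo estimate can expose its workings: a fixed seed, a visible demand assumption, and a shortage risk anyone can rerun and challenge.

```python
import random

def draw_lead_time_demand(rng: random.Random, mean: float) -> int:
    """One Poisson(mean) draw: count unit-rate exponential arrivals in [0, mean]."""
    count, t = 0, rng.expovariate(1.0)
    while t <= mean:
        count += 1
        t += rng.expovariate(1.0)
    return count

def shortage_probability(stock: int, mean_demand: float,
                         trials: int = 100_000, seed: int = 1) -> float:
    """Estimate P(lead-time demand > stock) by plain simulation."""
    rng = random.Random(seed)  # fixed seed: the same inputs always give the same answer
    shortages = sum(draw_lead_time_demand(rng, mean_demand) > stock
                    for _ in range(trials))
    return shortages / trials

# Same illustrative figures as before: expected lead-time demand of 5, holding 9.
# The analytic shortage risk is about 3.2%; the simulation should agree closely.
print(f"Estimated shortage risk: {shortage_probability(stock=9, mean_demand=5.0):.1%}")
```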
2. Blended Judgment
Position models as aids, not replacements. Encourage logisticians to compare outputs with experience and adjust. When human intuition and model logic are aligned, trust grows.
3. Data Stewardship
Investing in data quality is not just a technical fix — it is a cultural signal. When users see data improving, they are more willing to trust outputs.
4. Scenario Testing
Give users the ability to run “what if” scenarios themselves. If they can see the model adapt to different conditions, they are more likely to believe in its flexibility.
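A what-if harness need not be elaborate. The sketch below (hypothetical scenarios and figures, reusing the illustrative Poisson model from earlier, here via SciPy) lets a planner edit one line and watch the recommendation move:

```python
from scipy.stats import poisson

def recommended_stock(mtbf_hours, flying_hours, lead_time_years, target=0.95):
    """Smallest stock level that covers lead-time demand with the target probability."""
    mean_demand = (flying_hours / mtbf_hours) * lead_time_years
    return int(poisson.ppf(target, mean_demand))

# Hypothetical scenarios a planner might want to test for themselves.
scenarios = {
    "baseline":                  dict(mtbf_hours=500, flying_hours=10_000, lead_time_years=0.25),
    "surge flying (+50%)":       dict(mtbf_hours=500, flying_hours=15_000, lead_time_years=0.25),
    "lead time doubled":         dict(mtbf_hours=500, flying_hours=10_000, lead_time_years=0.50),
    "harsh climate (MTBF -30%)": dict(mtbf_hours=350, flying_hours=10_000, lead_time_years=0.25),
}

for name, params in scenarios.items():
    print(f"{name}: hold {recommended_stock(**params)}")
```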
5. Education and CPD
Professional development matters. Training logisticians to understand modelling basics demystifies the process. It makes them co-owners of the output, not passive recipients.
6. Communicating Uncertainty
Frame probabilities in operational language. Instead of “75% confidence,” say “three times out of four, this level of stock will succeed.” Use graphics and scenarios to illustrate what risk really means.
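The rephrasing itself can even be mechanised. A trivial helper (my own illustration, not any doctrine or standard) turns a probability into the natural-frequency wording above:

```python
from fractions import Fraction

def in_operational_terms(p_success: float, max_denominator: int = 20) -> str:
    """Rephrase a success probability as a natural frequency, e.g. 0.75 -> '3 times out of 4'."""
    frac = Fraction(p_success).limit_denominator(max_denominator)
    return (f"about {frac.numerator} times out of {frac.denominator}, "
            f"this stock level will meet demand")

print(in_operational_terms(0.75))  # about 3 times out of 4, this stock level will meet demand
print(in_operational_terms(0.95))  # about 19 times out of 20, this stock level will meet demand
```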
7. Culture and Leadership
Senior leaders must champion models while respecting human judgment. Trust is cultural as much as technical. Leaders set the tone: is the model a partner, or a burden?
Conclusion: Models, Humans, and Readiness
In Defence logistics, we often hear: “We don’t trust the spares model.” But what we really mean is that we don’t trust the combination of opaque algorithms, patchy data, and our own accountability being at stake. Models are not inherently untrustworthy. Humans are not inherently irrational. The gap is in the interface.
History shows that ignoring sound analysis can be disastrous (the survivorship-bias lesson). Blindly following models can be equally dangerous (pre-pandemic PPE stockpile plans proved inadequate when COVID-19 hit). The real solution is integration: models providing structured, data-driven guidance, and humans applying context, judgment, and accountability.
Ultimately, Defence does not need to choose between model and man. It needs to design systems where both reinforce each other. Because if one aircraft is grounded, the mission can wait. But if the whole theatre is grounded, the mission fails.