FREQUENTLY ASKED QUESTIONS

When the Society of Construction Law included System Dynamics (SD) in the 2nd edition of its “Delay and Disruption Protocol” in 2017, many in the construction industry finally began to pay attention to the methodology.
However, as with any innovative approach, some questions and concerns are still being raised. Here we address the most common ones – our answers are grounded in academic insight and in over two decades of professional experience.

Are SD models incomprehensible ‘black boxes’?

Absolutely not. Model variable names are spelled out in ‘plain English’, and they mostly represent real-world elements (e.g. “Direct Labour”, “Progress Achieved”). The graphical nature of the interface shows causal connections as simple arrows, the sources for all data variables are fully documented, and each group of equations carries a brief description of what aspect of reality the formulation attempts to reproduce, whether alternative formulations were considered, and so on.

The mathematics of an SD simulation model is complex only in volume (a model contains many equations) and remains quite accessible: most equations use simple algebra (+, -, * and /), and the intuitive graphical interface of the simulation software allows even people unfamiliar with SD to grasp the causal structure of the models.
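
To illustrate, here is a deliberately minimal, hypothetical stock-and-flow fragment (real models are far richer, but the arithmetic is representative): a stock such as “Progress Achieved” simply accumulates a flow computed with elementary operations.

```python
# Minimal, hypothetical stock-and-flow fragment, for illustration only.
# Stocks accumulate flows; flows are computed with simple algebra.

dt = 0.25  # simulation time step (months)

# Stocks
work_remaining = 1000.0    # units of work still to do
progress_achieved = 0.0    # units of work completed

# Parameters (illustrative values)
direct_labour = 50.0       # people on site
productivity = 0.4         # units of work / person / month

t = 0.0
while work_remaining > 0:
    # Flow: nothing fancier than + - * /
    progress_rate = direct_labour * productivity           # units / month
    progress_rate = min(progress_rate, work_remaining / dt)

    # Integrate the flow into the stocks over one time step
    work_remaining -= progress_rate * dt
    progress_achieved += progress_rate * dt
    t += dt

print(f"project finishes after {t:.1f} months")  # 50.0 months here
```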


Is SD compatible with other analysis methods?

Yes. SD often uses the outputs of other analyses as inputs to its models. For example, statistical sampling is used to characterize the direct impacts of disruptive events, Earned Value analysis is used in construction to estimate scope growth and to produce monthly progress data, and the literature on the typical impact that certain factors have on productivity is taken into account during the calibration process.

Regarding delay, SD is also complementary to more traditional analyses (like Time Impact Analysis): SD estimates the full actual impact that disruptive and/or delaying events had on the completion of the Project, including all ripple effects.


Will an SD analysis be consistent with the project records and with witness statements?

Absolutely. One of the main requirements of any SD model is that the ‘as built’ simulation be consistent with all available project information – so a properly built SD model will never contradict the project data… or any statements made by factual witnesses.


What safeguards ensure that an SD analysis delivers reliable results?

First, claim values resulting from an SD analysis are never accepted per se: every analysis also delivers a detailed causal narrative, explaining step by step how the claimed losses arose.

Second, both the inputs to a model and its scenario outputs undergo a thorough review process, ensuring that neither contradicts anything known about the project.

Third, models also undergo a battery of tests, aimed at validating their structure and the plausibility of their simulations.

And finally, ‘Constrained Fit Monte Carlo’ (CFMC) analysis now allows us to determine confidence ranges surrounding model inputs and claim results. CFMC involves simulating thousands of scenarios for each project, randomly varying the values of the calibration parameters in each one. Typically only a few hundred (at most a few thousand) of these simulations still match all of the project data – a statistical analysis of these ‘valid’ scenarios then allows us to estimate 90% confidence ranges for both model parameters and simulation (dispute) outputs.
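
In schematic form, the CFMC procedure looks like the sketch below. The one-line ‘model’, its two parameters and the 5% match tolerance are placeholders standing in for the real simulation model and acceptance checks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the calibrated SD model and the project data.
def simulate(productivity, rework_fraction, months=24, labour=50.0):
    """Placeholder 'model': cumulative progress net of rework losses."""
    monthly = labour * productivity * (1.0 - rework_fraction)
    return np.cumsum(np.full(months, monthly))

observed = simulate(0.40, 0.15)  # pretend this is the monthly progress data

# 1. Randomly vary the calibration parameters thousands of times.
n_trials = 10_000
productivity = rng.uniform(0.2, 0.6, n_trials)
rework = rng.uniform(0.0, 0.3, n_trials)

# 2. Keep only the scenarios whose output still matches the data
#    (here: within 5% of every observed data point).
valid = []
for p, r in zip(productivity, rework):
    sim = simulate(p, r)
    if np.all(np.abs(sim - observed) <= 0.05 * observed):
        valid.append((p, r, sim[-1]))
valid = np.array(valid)

# 3. The 5th and 95th percentiles of the surviving scenarios give
#    90% confidence ranges for parameters and outputs alike.
for name, column in zip(("productivity", "rework fraction", "final progress"),
                        valid.T):
    lo, hi = np.percentile(column, (5, 95))
    print(f"{name}: 90% range [{lo:.3f}, {hi:.3f}]")
```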


Isn’t an SD model just the modeller’s subjective view of what happened?

Not at all. The simulation model begins as a hypothesis of how the project performed and why, and through the calibration process this hypothesis is refined until it can no longer be faulted (until it is consistent with all known data about the project). The parameter values estimated through calibration are the result of strictly adhering to the scientific process.


Don’t automated tools make calibration quick and easy?

No. After almost two decades spent calibrating project simulation models in dispute situations (both manually and with the help of automated tools), we can unequivocally state that the calibration process is never easy. Automated calibration tools only work well on small portions of a model at a time – and, properly used, they do not eliminate the need for any of the checks that make the calibration process so lengthy.


Isn’t calibration just ‘curve fitting’?

Not in the least: calibration is the embodiment of the scientific process.

Calibration involves adjusting the values of model parameters so that the ‘as built’ simulation matches the project data – but the only parameters thus adjusted are those for which no direct project data is available. Indeed, one of the major purposes of calibration is to find the most likely values of parameters for which there is no direct data.

It is important to note, however, that the parameter adjustments made during calibration are anything but random.

When first set up, a model uses ‘a priori’ expected values for those parameters for which there is no project data, and the initial simulations are usually a poor fit against the historical performance data of the project. Calibration is the highly iterative process that closes the gaps between data and simulation by continuously questioning everything: the structure of the model, the values of its parameters, and even the accuracy and completeness of the project data.
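
The mechanical core of a single calibration pass – adjusting the free parameters to shrink the gap between simulation and data – can be sketched as below; the questioning of model structure and data quality described above wraps around this loop, and every name and number here is hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical monthly cumulative-progress data from the project records.
project_data = np.array([8.0, 21.0, 38.0, 58.0, 80.0, 104.0])
months = np.arange(1, len(project_data) + 1)

def simulate(params):
    """Stand-in for the SD model: progress with a learning-curve ramp-up."""
    productivity, ramp_up = params
    return np.cumsum(productivity * (1.0 - np.exp(-months / ramp_up)) * 25.0)

def gap(params):
    """Size of the gap between simulation and data (sum of squared errors)."""
    return float(np.sum((simulate(params) - project_data) ** 2))

# Start from 'a priori' expected values; the fit is poor at first...
a_priori = np.array([0.5, 3.0])
print("initial gap:", gap(a_priori))

# ...then iterate until the gap is closed as far as the data allows.
result = minimize(gap, a_priori, method="Nelder-Mead")
print("calibrated parameters:", result.x, "remaining gap:", result.fun)
```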

For example: in one major disruption dispute in which we were involved, the model consistently simulated construction progress happening 6-10 months before the data showed it actually happening. Rather than pushing model parameters to extreme values in an attempt to fit the data, the modeling team undertook a thorough review and, jointly with project experts, was able to unearth a new claim item: the discrepancy in the simulation eventually reminded project managers that imported materials had suffered long customs-clearance delays – an event that until then had not been considered ‘disruptive’, and thus had never been communicated to the modeling team! Once the delays in the procurement process were introduced into the simulation model, simulated construction progress quickly matched the timing of the project data.

Another example: in another major dispute, the simulation model seemed unable to match the pattern of the direct manpower data during a specific time window. Once our review had eliminated all other potential sources of error, we were able to convince the contractor to review the relevant data – and sure enough, it eventually emerged that, during the period in question, some indirect labour groups had wrongly been reported as direct manpower.


What happens if liability for an event is later decided differently?

In disputes, it often happens that a party’s initial assumptions about who is responsible for a given event are later not upheld by the courts or by an arbitration tribunal. Would this invalidate an SD analysis? No, because the analysis can quickly and easily be amended to reflect the new position on liability.

To explain, let us imagine a disruption claim in which the contractor argued that the actions of the employer prevented its crews from learning (improving their productivity) at a sufficiently fast rate.

First, if such a slowing down of the learning curve happened, then it would need to be included in an SD simulation model of the project – irrespective of any liability issues.

The question of liability would only arise when simulating the ‘but for’ scenario of the project:

  • If the employer were responsible for the event, in the ‘but for’ simulation the parameter estimating the magnitude of the slowdown in learning would be set to zero, causing simulated ‘but for’ crews to learn faster than in the ‘as built’ scenario.
  • If the responsibility were the contractor’s, the parameter for the slowdown in learning would be left at its ‘as built’ value, and simulated ‘but for’ crews would learn at the same speed as they did in the ‘as built’ scenario.

So, adapting the SD analysis to new legal opinions (or rulings) on liability would simply entail resetting a few parameter values and re-running the ‘but for’ scenario – something that could literally be accomplished within a few seconds.
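
As a sketch (hypothetical parameter names and values), the liability switch amounts to zeroing out the impacts attributed to the employer before re-running the ‘but for’ scenario:

```python
# Hypothetical sketch: a liability ruling maps directly onto the
# 'but for' parameter set; the rest of the model stays untouched.

as_built = {
    "learning_slowdown": 0.35,    # calibrated magnitude of the slowdown
    "design_review_delay": 2.0,   # months of undue review delay
}

# Which impacts the tribunal held the employer responsible for (illustrative).
employer_responsible = {
    "learning_slowdown": True,
    "design_review_delay": False,
}

def but_for(params, liability):
    """Zero out only the impacts for which the employer is liable."""
    return {name: (0.0 if liability[name] else value)
            for name, value in params.items()}

print(but_for(as_built, employer_responsible))
# -> {'learning_slowdown': 0.0, 'design_review_delay': 2.0}
```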


Does an SD analysis have to simulate the entire project?

Yes. SD models are based on a causal framework that describes how project conditions, disruptive events and management decisions interact, together determining project performance. Since all these issues are highly interconnected, it is not possible to simulate some aspects of a project while ignoring others: SD models need to simulate the full project, not just the claim – and this includes simulating all disruptive events, irrespective of the party that may have been responsible for them.


Is SD really a better way to analyze delay than traditional methods?

Yes. Traditional delay analysis methods (such as Time Impact Analysis) feel reassuring because they give apparently unequivocal results – but they are based on significant simplifications of a much more complex reality… whereas SD accounts for that complexity.

First: in spite of appearances to the contrary, nobody actually knows which activities are on the true critical path of a project, no matter what the plans say. In reality, as soon as a plan has been established deviations start to occur: many small design changes are introduced, many small (obviously unplanned) mistakes are made, chance events happen… No detailed plan can hope to stay abreast of all these changes, and thus no-one can truly know which activities lie on the actual critical path.

→ SD recognizes the volatile nature of the critical path and, instead of focusing on individual activities, looks at ‘work phases’: larger groups of activities whose characteristics remain much more stable.

Second, traditional methods only deal with events that can be characterized as additional activities in the project plan – like, for example, variations. However, there is a much broader range of delaying events: ‘unofficial’ changes introduced via comments or RFIs, undue delays to the design review process, cash-flow restrictions, etc.

→ SD does account for all types of delaying events.

Third, on any real project a significant amount of delay is caused by disruption: lower levels of productivity clearly slow down progress, and excessive rework can be a nightmare when trying to wrap up a project. Attempting to estimate the causes of delay on a project without accounting for disruption is unrealistic.

→ SD fully accounts for the delaying impact of disruption and rework.


How does a ‘but for’ simulation differ from the ‘as built’ simulation?

The only difference introduced into the model to run a ‘but for’ scenario is the removal of the direct impacts of some of the unplanned events and conditions included in the ‘as built’ simulation.

The implication is that both simulations represent the same project happening under exactly the same conditions – except for the occurrence (or not) of a given number of unplanned disruptive events.


Can SD analyses be used by employers as well as by contractors?

Yes. Our analyses deliver unbiased estimates of the impact caused by all the disruptive and delaying events that happened on a project, and can be used equally by contractors and employers.


Why is disruption so often underestimated?

Because disruption is highly non-linear. Human minds tend to underestimate it because evolution has left us poorly equipped to deal with non-linear behavior, but SD computer simulation models do not suffer from this bias and are able to fully capture this complexity (the sketch after the list below illustrates the mechanism).

Disruption:

  • Can continue to wreak havoc months (sometimes even years) after the occurrence of the event that started it;
  • Behaves exponentially, often leading to corrective measures that disrupt a project even further… thus starting a snowballing effect that can completely destroy project performance;
  • Causes ripple effects, so that even limited events will end up impacting the whole project. 
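
In code form – a toy rework cycle with entirely hypothetical numbers – a single month with an elevated error fraction keeps feeding flawed work back into the backlog long after the triggering event has passed:

```python
# Toy rework cycle (hypothetical numbers): flawed work goes undiscovered
# for a while, then re-enters the backlog and breeds further flawed work.

def run(disruption_month=None, months=36, capacity=40.0):
    backlog, good_work, undiscovered = 1000.0, 0.0, 0.0
    for month in range(months):
        # One disrupted month: 30% of work is flawed instead of 5%.
        error_fraction = 0.30 if month == disruption_month else 0.05
        work_done = min(capacity, backlog)
        backlog -= work_done
        good_work += work_done * (1.0 - error_fraction)
        undiscovered += work_done * error_fraction
        # 20% of the latent rework is discovered each month and redone.
        discovered = 0.20 * undiscovered
        undiscovered -= discovered
        backlog += discovered
    return good_work, backlog + undiscovered

print("no disruption :", run())
print("one bad month :", run(disruption_month=6))
```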

Disruption is almost impossible to quantify contemporaneously (when it happens), and even after the fact it is difficult to determine how much of it was caused by whom.

However, even if they are difficult to estimate, disruption costs are real and usually significant. Not claiming for disruption leaves a lot of the contractor’s money on the table.


© Construction Dynamics Solutions L.L.C. 2018