- 1 Introduction
- 2 Best/Worst Case Analysis
- 3 Break-even Analysis
- 4 Spider Graph Analysis
- 5 Monte Carlo Simulation
- 6 References
Introduction

Sensitivity analysis is a common tool used to determine the risk of a model while identifying its critical input parameters. More often than not, it is used to evaluate the stability of a project's economics by determining the impact of key input parameters on the output. The first step in conducting a sensitivity analysis is defining the objective of the problem, followed by identifying which parameters will be considered.
The two main types of sensitivity analysis are one-way and two-way. Within these models, either a deterministic or a probabilistic approach can be taken. A deterministic analysis is a manual one that takes a known input and varies it over multiple point estimates (Assakkaf, 2003), while a probabilistic analysis varies the input parameters over a specified distribution. The basic one-way sensitivity analysis considers only one parameter at a time and observes its impact on the model; it is often conducted to determine which parameters have the largest impact. A two-way sensitivity analysis is more challenging because it varies multiple input parameters, resulting in a combined effect on the model. Among the many types of sensitivity analysis, the best/worst case, break-even point, spider graph and Monte Carlo simulation methods are analyzed here. Due to the limitations of some methods, it is often recommended to complete multiple types of sensitivity analysis to confirm the results.
Best/Worst Case Analysis
Best case/worst case is a multi-way sensitivity analysis often referred to as extreme case analysis. The relationships between two or more parameters are examined simultaneously to determine which parameters have the strongest impact on the model. As the number of parameters increases, so does the difficulty of presenting and interpreting the analysis (Taylor, 2009). Specific confidence intervals are assigned to the parameters, which ultimately determine the range between the best and worst case. Upper and lower bounds are developed for each parameter; typically the most optimistic and most pessimistic estimates are used (Taylor, 2009). Because the extreme bounds are the least likely to occur, it is often recommended to also conduct a partial sensitivity analysis once the analysis is complete to verify the importance of each parameter (Steiguer, n.d.).
1. Define parameters
2. State all assumptions/uncertainties
3. Define a confidence interval for each parameter
4. Specify the worst case for each parameter
5. Specify the best case for each parameter
6. Compare the results
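The steps above can be sketched in code. The following is a minimal illustration, where the project figures (CAPEX, OPEX, metal price, production rate, discount rate and mine life) are all hypothetical assumptions chosen for the example, not values from the text:

```python
# Hypothetical best/worst case NPV check for a mining project.
# All figures below are illustrative assumptions.

def npv(cash_flows, rate):
    """Discount a list of yearly cash flows (year 0 first) at a fixed rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def yearly_cash_flows(capex, opex, price, tonnes, years):
    """Year 0 carries the capital cost; each later year earns revenue less opex."""
    return [-capex] + [(price * tonnes) - opex] * years

cases = {
    # (CAPEX $M, OPEX $M/yr, metal price $/t, production t/yr)
    "worst": (120.0, 40.0, 55.0, 1_000_000),
    "base":  (100.0, 35.0, 65.0, 1_000_000),
    "best":  (85.0,  30.0, 75.0, 1_000_000),
}

for name, (capex, opex, price, tonnes) in cases.items():
    # convert CAPEX/OPEX from $M to $ so all terms share the same units
    flows = yearly_cash_flows(capex * 1e6, opex * 1e6, price, tonnes, years=10)
    print(f"{name:>5}: NPV = ${npv(flows, 0.08) / 1e6:,.1f} M")
```

With these assumed bounds the worst case NPV is negative while the base and best cases are positive, which is exactly the situation where the method's "quick check" interpretation calls for closer analysis rather than an immediate go/no-go decision.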
Sensitivity analysis can be applied to many scenarios within multiple industries, but it is typically used to assess a project's feasibility by analyzing the Net Present Value (NPV).
There are multiple uses of the best/worst case analysis within the mining industry alone. It is most frequently used to complete a cost sensitivity analysis and to determine the parameters for a final open pit. When analyzing a project's feasibility, as in the example in the table above, the ranges of capital and operating costs, market price and discount rate can be varied to determine the potential NPV. If a project is expected to be successful even in the worst case, with the highest CAPEX and OPEX and the lowest market price, then the project is very likely to succeed. On the other hand, if in the best case, with the lowest CAPEX and OPEX and the highest market price, the NPV is negative or not worth the effort, then there is no point in proceeding. The best/worst case method is often used as a quick check to determine the potential bounds of a project's NPV and as an initial attempt at a risk assessment (Discount Rates, 2012). The use of this type of analysis also extends to determining the economic viability of projects throughout various other industries.
Mine and geologic modelling software packages, such as Whittle and Lynx, use extreme case analysis in their algorithms to output the best and worst cases based on certain inputs. Exchange rates, metal price forecasts, slope angles, and capital and operating costs are all varied to output pit limits, production rates and overall profitability (Smith, 1999). Whittle varies the parameters to determine the optimal pit shells, project NPV and schedules, while Lynx specializes in modelling underground operations.
Advantages and Limitations
The simplicity of best/worst case analysis makes it appealing to conduct, and it is most often used when there are multiple uncertainties (Newcomer, 2015). The best case alone is rarely used, as the probability of every parameter operating at its most optimistic level is low. The analysis is most useful when it can improve the worst case scenario by identifying which parameters can be adjusted to increase the overall accuracy (Karavirta & Shaffer, 2016). Depending on the confidence interval set, the best case can be valuable when there is a high probability of that scenario occurring.
The worst case is usually used by itself to ultimately determine whether a project is viable; if it is feasible at the most conservative estimate, then the project has a high chance of being successful. To incorporate a reasonable estimate, an additional case is often developed, known as the average or base case: the middle values of the parameters are chosen and the analysis is conducted. However, creating the average case is not always possible, as the uncertainties need to be limited and the distributions of, and relationships between, the parameters need to be well understood.
Break-even Analysis

Break-even analysis is a visual sensitivity analysis that determines the specific point where revenue equals the associated costs. Any revenue above the break-even point results in a profitable scenario, as shown by the grey area in the graph below. As long as the sum of the total costs is below the total revenue and both are past the break-even point, the company or project is considered economically feasible. This type of sensitivity analysis is used by analysts to determine the minimum production required while varying the price and costs, and vice versa.
For example, if a product costs $1 to produce, and there are fixed costs of $10, the break-even point for selling the products would be:
• If the price is $2, the break-even point is 10 products ($10 / ($2 - $1) = 10)
• If the price is $3, the break-even point is 5 products ($10 / ($3 - $1) = 5)
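The calculation above is simply fixed costs divided by the margin per unit (price minus variable cost). A small sketch reproducing the two cases:

```python
# Break-even units = fixed costs / (price - variable cost per unit).
# Reproduces the two cases from the example above.

def break_even_units(fixed_costs, price, unit_cost):
    margin = price - unit_cost
    if margin <= 0:
        # with no positive margin, no sales volume can cover fixed costs
        raise ValueError("price must exceed the unit cost to break even")
    return fixed_costs / margin

print(break_even_units(10, 2, 1))  # 10.0 products at a $2 price
print(break_even_units(10, 3, 1))  # 5.0 products at a $3 price
```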
Since this analysis does not consider market price or demand, a supplemental method to confirm the results is often recommended. Break-even analysis assumes that products are completely consumed by the market or that market conditions stay constant, which is never accurate. Therefore, break-even analysis is often criticized for being too optimistic for any long-term estimate (Riley, 2015).
Short-term projections are often made as an initial step in the pre-feasibility stage of developing a mining project. Once assumptions for the metal values and the associated costs have been made, break-even analysis can be helpful in estimating the production rates for mining and milling processes. In this simple analysis, revenue can be increased by increasing production, but issues arise when the mining and milling capacities are considered. The challenges include the necessary expansion of the plant, an increased quantity or size of equipment, and the associated increase in total costs. As a result, the break-even point shifts to a higher position on the graph, potentially reducing the benefits of increasing production.
Spider Graph Analysis
Within the economic analysis and feasibility stages of the mine design process, there is often uncertainty in the input parameters of an economic model. Spider graph sensitivity analysis measures the impact on project economics of changing one or more of the parameters within a given range (Marshall, 1999). Analyzing these impacts can help a mining company determine which model inputs result in the greatest change in the project economics (Law, 2014). This technique is possibly the most widely used analysis for determining the economic stability of a project (Pannell, 1997).
To perform a spider graph analysis, an economic model must first be in place. Outlined below are the steps involved in performing a spider graph analysis:
1. Determine which economic performance characteristics are to be evaluated.
• (Typically: NPV and/or IRR)
2. Determine which model input parameters have the greatest impact on the economic model.
• (Typically: Metal Price, CAPEX, OPEX, Grade, Recovery, Discount Rate)
3. Determine the possible range of deviation/error from the assumed value for each input parameter. This is expressed in percentage form because the inputs have different units.
• (Typically: +/-20%)
4. Determine the new economic performance characteristics that result from changes to a given input parameter. It is important for this analysis that all other input parameters be held at their original assumed values.
5. Plot the determined values of the performance characteristics for each input parameter across its range.
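The steps above can be sketched as a short script that produces the data for each line of a spider graph. The toy cash-flow model and every base-case figure (gold price per gram, grade, recovery, CAPEX, OPEX, tonnage, mine life, discount rate) are illustrative assumptions:

```python
# Minimal sketch of steps 4-5: vary one input at a time over +/-20%
# while holding the others at base case, and record the NPV for each point.
# All figures are illustrative assumptions.

def project_npv(price, grade, recovery, capex, opex,
                tonnes=2_000_000, years=10, rate=0.08):
    """Toy NPV: yearly revenue = tonnes * grade (g/t) * recovery * price ($/g)."""
    revenue = tonnes * grade * recovery * price
    flows = [-capex] + [revenue - opex] * years
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(flows))

base = {"price": 58.0, "grade": 5.0, "recovery": 0.90,
        "capex": 500e6, "opex": 350e6}

deviations = [-0.20, -0.10, 0.0, 0.10, 0.20]
spider = {}
for name in base:
    series = []
    for d in deviations:
        inputs = dict(base)                  # all other inputs stay at base
        inputs[name] = base[name] * (1 + d)
        series.append(project_npv(**inputs))
    spider[name] = series                    # one line on the spider graph

for name, series in spider.items():
    print(name, [round(v / 1e6) for v in series])
```

Plotting each series against the deviations gives one line per parameter; the steepest line identifies the input the project is most sensitive to.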
The simplicity of interpreting and analyzing a spider graph makes it a preferred method. The graph below is an arbitrary example that illustrates the variation of the parameters around the base case. The parameter with the largest positive slope beyond the base case, in this case Gold Price, contributes the most to the project economics. The same parameter is also the most detrimental to the project, having the largest negative slope on the negative extent of the range. Likewise, the parameter with the greatest negative slope will have the largest negative impact on project economics when increasing, and will be the most beneficial in the negative range past the base case.
As illustrated in the example, Gold Price, Silver Price and Recovery all contribute positively to the base case project NPV over the positive range, while an increase in OPEX or CAPEX negatively affects the project economics. More important, however, is the rate at which these inputs change the project's NPV. The price of gold has the largest overall impact on project economics and the largest positive effect on NPV when it increases. Conversely, CAPEX has the largest negative impact on project NPV as it increases. Recovery is found to have the smallest effect on project economics, as illustrated by its slope being the smallest.
As illustrated in the graph above, the primary application of a spider graph analysis is to determine the economic impact of the most critical model parameters (Marshall, 1999). Spider graph analysis also provides insight into how profitable or unprofitable a project may be as a result of a changing economic landscape following an economic analysis (Marshall, 1999). Without performing this evaluation, a mining company would be less certain of its economic position following a change in one or more key parameters without re-running the model (Pannell, 1997). Finally, spider graph analysis can be a useful tool for evaluating or troubleshooting an economic model and is often the most common method chosen.
The primary limitation of spider graph analysis is that only one financial input can be modelled at a time, unlike simulations such as the Monte Carlo method (discussed below). In reality, changes in the market landscape result in variations of multiple input parameters simultaneously. The second limitation is that the probability of an input changing is not accounted for (Marshall, 1999); as such, the risk exposure of each input parameter is not considered. The most important input parameter might therefore show the smallest slope on a spider diagram, yet be extremely volatile.
Monte Carlo Simulation
The Monte Carlo method was developed by John von Neumann, Stanislaw Ulam and Nicholas Metropolis while they were working on the Manhattan Project, the nuclear research program of the 1940s (Thomopoulos, 2013). The premise behind the Monte Carlo method is that it can solve problems that are too intensive to solve analytically. Since the 1940s, the approach has been widely used in business, science and engineering. With increases in computer technology and computational power, Monte Carlo simulation has become ever more practical for tackling problems that require a large number of random inputs (Thomopoulos, 2013).
The Monte Carlo method is an estimation method that uses relationships between random variables with defined or estimated probabilities. The power of this type of simulation lies in its ability to handle complex problems with known relationships. Selection is done through a random number generator that considers previously defined probability distributions (Rubinstein & Kroese, 2008). Once the parameters and relationships have been defined, a computer performs the random selection process to produce a result. The process is repeated until a desired confidence limit has been reached or the returns become marginal. The following are the generic steps for conducting a Monte Carlo simulation:
1. Determine the problem and a desired output
2. Determine the variables and the relationships between them
3. Define the random variable parameters
• Continuous or discrete
• Univariate or multivariate
4. Assign probabilities to the random variables if necessary
5. Run the simulation
6. Terminate at a certain number of runs, elapsed time, or confidence bounds
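The steps above can be illustrated with a very small simulation. The example below estimates the probability that two fair dice sum to 10 or more, a quantity whose exact value (6/36) lets the result be checked; the run count and seed are arbitrary choices:

```python
import random

# Minimal Monte Carlo run following the steps above: the problem is defined
# (P(sum of two dice >= 10)), the random variables are discrete and uniform,
# and the run terminates after a fixed number of trials.

random.seed(42)  # fixed seed so the run is repeatable

def estimate(n_runs=100_000):
    hits = sum(random.randint(1, 6) + random.randint(1, 6) >= 10
               for _ in range(n_runs))
    return hits / n_runs

print(estimate())  # close to the exact value 6/36 ~ 0.1667
```

Increasing `n_runs` narrows the spread of the estimate, which is the practical meaning of terminating at a desired confidence bound.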
A random variable can be either continuous or discrete: continuous values lie within a range, while discrete values come from a given set. The generation of these random variables and the probabilities assigned to them can greatly affect the results, and this is one of the cautions that must be considered when using Monte Carlo. An incorrect estimation of probabilities can drastically affect the end result, making the simulation invalid. The table below shows common random variable types and different distributions (Thomopoulos, 2013; Rubinstein & Kroese, 2008).
Monte Carlo simulation can be used to solve a wide variety of problems and can produce multiple types of outputs. It can be used to predict a specific number or to provide a statistical distribution. For example, the simulation could be used to determine the probability of a specific number occurring, such as the number produced on a roulette table or from a poker hand (Thomopoulos, 2013). Likewise, if the end result is unknown, it can be used to solve a complex algorithm and output a distribution. Monte Carlo is often used in engineering and other industries to quantify the risk of complex scenarios (Kwak & Ingall, 2007).
Applications in Mining
Monte Carlo simulation has made its way into mining with applications in the areas of slope stability, risk management, scheduling and sensitivity analysis (Kwak & Ingall, 2007).
El-Ramly et al. (2002) used Monte Carlo simulation to determine slope stability based on probabilistic parameters determined at the James Bay Hydroelectric Project. Spatial and probabilistic distributions were created for the varying random variables and a complete Monte Carlo simulation was conducted (El-Ramly, Morgenstern & Cruden, 2002).
Monte Carlo simulation is not widely used in project scheduling, although it does have applications there (Kwak & Ingall, 2007). A Monte Carlo simulation can be used to determine an expected project completion date with a confidence limit on the estimate. This is done by assigning probabilities to each required task, such as a best, worst and most-likely case, which are then applied to a probability distribution used to perform the simulation. The limitation in this application is that the simulation does not account for management intervention, which can help keep a project on schedule when it is severely behind (Kwak & Ingall, 2007).
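A minimal sketch of this scheduling use, with the best/most-likely/worst durations per task mapped onto triangular distributions; the task names and all durations are hypothetical:

```python
import random

# Each task gets (best, most likely, worst) durations in days, sampled from
# a triangular distribution; summing sequential tasks over many runs yields
# a distribution of project completion times. All figures are illustrative.

tasks = {
    "permitting":    (30, 45, 90),
    "construction":  (120, 150, 240),
    "commissioning": (20, 30, 60),
}

random.seed(1)

def simulate_duration():
    """One run: sample every task and sum the sequential durations."""
    return sum(random.triangular(lo, hi, mode)   # note the (low, high, mode) order
               for lo, mode, hi in tasks.values())

runs = sorted(simulate_duration() for _ in range(10_000))
p50, p90 = runs[len(runs) // 2], runs[int(len(runs) * 0.9)]
print(f"median completion ~{p50:.0f} days, 90th percentile ~{p90:.0f} days")
```

Reading off the 90th percentile rather than the median is how a confidence limit is attached to the completion estimate.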
Cost Sensitivity and NPV Analysis
This is an important case to consider in mining because it can provide a range of estimates for the expected project value, or NPV. Typically a single NPV is calculated for a project, whereas a Monte Carlo simulation produces a distribution of possible values. The NPV process follows the same method as other Monte Carlo simulations: the cash flows and relationships are first determined, and then a probability distribution is assigned to each. The result can be used to assess the probabilities associated with the NPV outcomes, determining the level of risk of a project and its sensitivity to each variable (Kwak & Ingall, 2007).
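A small sketch of this idea, drawing metal price and OPEX from assumed triangular distributions on each run to build an NPV distribution; every figure (CAPEX, production rate, distribution bounds, discount rate, mine life) is an illustrative assumption:

```python
import random
import statistics

# Each run samples uncertain inputs and computes one NPV; many runs give a
# distribution of NPVs and a probability of the project losing money.
# All figures below are illustrative assumptions.

random.seed(7)
RATE, YEARS, CAPEX = 0.08, 10, 400e6

def one_npv():
    price = random.triangular(50, 80, 65)        # metal price $/t: (low, high, mode)
    opex = random.triangular(30e6, 45e6, 35e6)   # operating cost $/yr
    cash = 1_500_000 * price - opex              # tonnes/yr * price - opex
    return -CAPEX + sum(cash / (1 + RATE) ** t for t in range(1, YEARS + 1))

npvs = [one_npv() for _ in range(20_000)]
prob_negative = sum(v < 0 for v in npvs) / len(npvs)
print(f"mean NPV ${statistics.mean(npvs) / 1e6:,.0f} M, "
      f"P(NPV < 0) = {prob_negative:.1%}")
```

The probability of a negative NPV is the kind of risk figure a single deterministic NPV cannot provide.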
Blasting and Fragmentation
Morin et al. conducted research to develop a Monte Carlo simulation that could be used to predict rock fragmentation after a blast. The goal of a well-fragmented rock mass is to reduce the overall cost of mining by reducing loading, hauling and crushing costs. The relationships and variables were defined through the Kuz-Ram fragmentation model. Morin et al. then applied triangular and exponential distributions to the rock parameters to produce a resulting distribution. The model was shown to have practical application; however, the challenge lies in the data collection required for the triangular distribution estimates, which may be costly and difficult to obtain (Morin & Ficarazzo, 2006).
References

Buxey, G. M. (1979). The Vehicle Scheduling Problem and Monte Carlo Simulation. The Journal of the Operational Research Society, 563-573.
Discount Rates. (2012, March 13). Retrieved from Centre for Excellence in Mining Innovation.
El-Ramly, H., Morgenstern, N. R., & Cruden, D. M. (2002). Probabilistic slope stability analysis for practice. Canadian Geotechnical Journal, 665-683.
Gordon, R., Pickering, M., & Bisson, D. (n.d.). Uncertainty Analysis by the "Worst Case" Method. Journal of Chemical Education, 780-781.
Henrichson, C., Matthies, C., & Rinaldi, J. (n.d.). Sensitivity Analysis. Retrieved from Cost-benefit knowledge bank for criminal justice (CBKB).
Karavirta, V., & Shaffer, C. (2016, January 15). Best, Worst, and Average Cases. Retrieved from OpenDSA: http://algoviz.org/OpenDSA/Books/Everything/html/AnalCases.html#
Kwak, Y. H., & Ingall, L. (2007, February). Exploring Monte Carlo Simulation Applications for Project Management. Risk Management, pp. 44-57.
Law, J. (2014). A Dictionary of Finance and Banking (5th ed.). Oxford University Press.
Marshall, H. E. (1999). Technology Management Handbook. Boca Raton: CRC Press LLC.
Mazzchi, T. A., & van Dorp, J. R. (2006). Chapter 5 - Sensitivity Analysis. In R. T. Clemen, & T. Reilly, Making Hard Decisions (pp. 1-29). GWU.
Morin, M. A., & Ficarazzo, F. (2006). Monte Carlo simulation as a tool to predict blasting fragmentation based on the Kuz-Ram model. Computer & Geosciences, 352-359.
Newcomer, K. E., Hatry, P. H., & Wholey, J. S. (2015). Handbook of Practical Program Evaluation. New Jersey: John Wiley & Sons Inc.
Pannell, D. J. (1997). Sensitivity analysis of normative economic models: Theoretical framework and practical strategies. Agricultural Economics 16, 139-152.
Riley, J. (2015). Breakeven Analysis - Strengths and Limitations. Retrieved from tutor2u: http://www.tutor2u.net/business/reference/breakeven-analysis-strengths-and-limitations
Rubinstein, R. Y., & Kroese, D. P. (2008). Simulation and the Monte Carlo Method. New Jersey: John Wiley & Sons Inc.
Smith, M. (1999). Geologic and Mine Modelling Using Techbase and Lynx. Netherlands: A. A. Balkema, Rotterdam.
Steiguer, J. d. (n.d.). Lesson 10 - Conducting Sensitivity Analysis. In A Student's Guide to Cost-Benefit Analysis for Natural Resources. Arizona: The University of Arizona.
Taylor, M. (2009, April). What is sensitivity analysis? What is...? series, pp. 1-8.
Thomopoulos, N. T. (2013). Essentials of Monte Carlo Simulation - Statistical Methods for Building Simulation Models. New York: Springer.