Geostatistics and resource estimation techniques

From QueensMineDesignWiki

This article explores the topic of "Geostatistics and resource estimation techniques" in association with the Queen's University Mine Design Wiki


Background [1]

Estimating the physical characteristics (tonnage, grade, size, shape and location) of a mineral deposit is an extremely important part of the exploration phase of the overall mining operation. This process involves a team of geologists and geostatisticians who determine the information for the mineral Resource Block Model. Mineral resources are classified as measured, indicated or inferred depending on the level of confidence associated with the geological model. This geological information must be reported following a strict set of standards and regulations that depend on the country where the ore zone is found.

What is Geostatistics?

Geostatistics uses univariate and bivariate statistics to analyze and describe the spatial and temporal variation within geological data sets. The use of geostatistics helps define a geologic model of the ore deposit and its characteristics. When analyzing data sets, the accuracy, precision and bias are measured and controlled for optimum results. [2]

Univariate statistics are simple statistics used to summarize and explain the dataset to the audience.

  • Measures of centre: mean, median and mode
  • Measures of location: minimum, maximum and quartiles
  • Measures of spread: variance, standard deviation and IQR
  • Measures of shape: skewness and coefficient of variation
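As a sketch, the univariate measures listed above can be computed with NumPy; the gold assay values below are purely illustrative, not data from the article.

```python
import numpy as np

# Hypothetical gold assays in g/t (illustrative values only).
assays = np.array([0.8, 1.2, 1.5, 0.9, 2.1, 3.4, 1.1, 0.7, 5.2, 1.3])

mean = assays.mean()                          # measure of centre
median = np.median(assays)
variance = assays.var(ddof=1)                 # sample variance (spread)
std_dev = assays.std(ddof=1)
q1, q3 = np.percentile(assays, [25, 75])      # measures of location
iqr = q3 - q1                                 # interquartile range
skewness = np.mean(((assays - mean) / assays.std()) ** 3)   # shape
cv = std_dev / mean                           # coefficient of variation
print(f"mean={mean:.2f} median={median:.2f} CV={cv:.2f} skew={skewness:.2f}")
```

A positive skewness and a coefficient of variation near or above 1, as here, are typical of precious-metal assay distributions with a few very high values.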

Bivariate statistics are statistics computed on pairs of variables, used to show the relationship between two data sets.

Spatial Continuity

Figure 1: h-scatter plots for four lags in the northerly direction[3]

Spatial continuity measures the smoothness of the transition between closely spaced samples in a particular direction[4]. Two assay values close to each other are more likely to be similar than those far apart. When analyzing a contour map of assay data, the values are not usually randomly distributed; rather, lower values tend to be around low values and higher values tend to be around high values. A single very low value within a cluster of significantly high values raises suspicion. The h-scatter plot is a vital tool used to describe spatial continuity. It describes the relationship between the value of one variable and the value of the same variable at a nearby location. The distance between the pairs of values is known as the lag. The correlation coefficient of the pairs at each lag is then calculated. In Figure 1 the data are paired in the northerly direction, and the distance between adjacent points on the grid is 1 unit north and east.


In these plots, V(t) on the x-axis is the initial sample and V(t+h) on the y-axis is the paired sample at a distance h. It is clear that as the lag h increases, the correlation coefficient of the h-scatter plots decreases. Plot d) in Figure 1, with a lag of h = (0,4), shows the lowest correlation, as the cloud of data is the fattest. Plot a), with a lag of h = (0,1), shows the highest correlation, indicated by the narrow cloud of data points. This emphasizes the notion that two assay values close to each other are more likely to be similar than those far apart.
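The falling lag correlation summarized by h-scatter plots can be sketched numerically. The transect below is a synthetic construction (a moving average of white noise, chosen only to have built-in spatial continuity), not data from the article:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic smooth transect: averaging white noise over a window
# introduces spatial continuity between nearby values.
noise = rng.normal(size=200)
values = np.convolve(noise, np.ones(10) / 10, mode="valid")

def lag_correlation(v, h):
    """Correlation coefficient between V(t) and V(t+h) -- the
    statistic an h-scatter plot displays at lag h."""
    return np.corrcoef(v[:-h], v[h:])[0, 1]

for h in (1, 2, 4, 8):
    print(f"lag h={h}: r = {lag_correlation(values, h):.3f}")
```

As in Figure 1, the correlation is highest at lag 1 and drops off as the separation distance grows.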


Variography

The variogram is an additional tool used to describe the spatial continuity of a set of variables in space. It performs a function closely related to the h-scatter plot, but makes use of the moment of inertia about the x = y line on the graph for each lag.

γ(h) = [1 / 2N(h)] Σ [v(t) - v(t+h)]², where N(h) is the number of pairs separated by lag h
The moment of inertia is essentially a natural measure of the fatness of the cloud of data. An h-scatter plot with high correlation will generally have a low moment of inertia. The greater the deviation from the x = y line, the greater the moment of inertia. On a variogram, the separation distance, also known as the lag, is plotted on the x-axis and the moment of inertia on the y-axis.
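As an illustrative sketch, the moment of inertia at each lag can be computed directly from a 1-D series of values; the synthetic transect below is an assumption for demonstration only:

```python
import numpy as np

def experimental_variogram(values, max_lag):
    """gamma(h) = (1 / 2N(h)) * sum over pairs of [V(t) - V(t+h)]^2,
    i.e. the moment of inertia of the h-scatter cloud about x = y."""
    gammas = []
    for h in range(1, max_lag + 1):
        diffs = values[h:] - values[:-h]
        gammas.append(0.5 * np.mean(diffs ** 2))
    return np.array(gammas)

# Synthetic smooth transect: for spatially continuous data the
# semivariance should rise as the lag increases.
rng = np.random.default_rng(1)
values = np.convolve(rng.normal(size=200), np.ones(10) / 10, mode="valid")
gamma = experimental_variogram(values, 8)
print(gamma.round(4))
```

The rising values of gamma with lag mirror the widening h-scatter clouds: more scatter about x = y means a larger moment of inertia.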

Figure 3: Variogram model used to measure the spatial continuity of grade[2].


Figure 3 illustrates an experimental variogram model. All variogram models have three main components that provide a description of the spatial continuity of assay data.

Sill (S) and Range (R)

The sill is the y-axis value at which the variogram model levels out. The distance at which the model starts to flatten out is known as the range. All samples with a separation distance smaller than the range are autocorrelated, and those with a separation distance greater than the range are not. The larger the range, the farther apart samples can be while remaining correlated.

Nugget

This is where the variogram model intercepts the y axis. In theory the nugget is expected to be at zero because there should not be any variation at zero separation (lag =0) between samples. The nugget effect can be attributed to measurement errors or spatial sources of variation at distances smaller than the sampling interval or both.
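One widely used way to combine the sill, range and nugget is the spherical variogram model. The sketch below uses assumed, illustrative parameter values:

```python
import numpy as np

def spherical_variogram(h, nugget, sill, range_):
    """Spherical model: starts at the nugget, rises, and levels out
    at the sill once the separation distance h reaches the range."""
    h = np.asarray(h, dtype=float)
    r = h / range_
    gamma = np.where(h < range_,
                     nugget + (sill - nugget) * (1.5 * r - 0.5 * r ** 3),
                     sill)
    return np.where(h == 0, 0.0, gamma)  # in theory, no variation at lag 0

# Illustrative parameters: nugget 0.1, sill 1.0, range 100 m.
gamma = spherical_variogram([0, 50, 100, 150], nugget=0.1, sill=1.0, range_=100.0)
print(gamma)
```

Note the jump from gamma(0) = 0 to the nugget for any non-zero lag, and the constant sill value beyond the range.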

Resource Estimation

Resource estimation is used to determine and define the ore tonnage and grade of a geological deposit, from the developed block model. There are different estimation methods (see below) used for different scenarios dependent upon the ore boundaries, geological deposit geometry, grade variability and the amount of time and money available. A table displaying resource estimation method selection can be found below.

Nearest Neighbour Method

Nearest neighbor method of resource estimation[5]

The nearest neighbor method assigns grade values to blocks from the nearest sample point to the block. Closest sample gets a weight of one; all others get a weight of zero.
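A minimal sketch of this rule, using hypothetical sample locations and grades:

```python
import numpy as np

# Hypothetical sample locations (x, y) and grades in g/t.
samples = np.array([[10.0, 10.0], [40.0, 15.0], [25.0, 40.0]])
grades = np.array([1.2, 3.5, 0.8])

def nearest_neighbour(point, samples, grades):
    """Weight of one for the closest sample, zero for all others."""
    distances = np.linalg.norm(samples - point, axis=1)
    return grades[np.argmin(distances)]

# A block centred at (12, 12) takes the grade of the sample at (10, 10).
block_grade = nearest_neighbour(np.array([12.0, 12.0]), samples, grades)
print(block_grade)
```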

Advantages

  • Easy to understand
  • Easy to calculate manually
  • Easy to use as a repeatable standard
  • When automated, reasonably fast in 2D

Disadvantages

  • Local discontinuities are unrealistic
  • Produces biased estimates of grade and tonnage above an ore/waste cut-off, a consequence of the volume-variance relationship: the variability of the grade distribution depends on the volume of the samples. Large-volume samples have small variability, whereas small-volume samples have large variability.

Inverse Distance Method

Inverse distance method of resource estimation[5]

The simplest weighting function in common usage is based upon the inverse of the distance of the sample from the point to be estimated, usually raised to the second power, although higher or lower powers may be useful. [6]

w_i = (1 / d_i^p) / Σ_j (1 / d_j^p), where d_i is the distance from sample i to the point being estimated and p is the chosen power

Samples closer to the point of interest get a higher weighting than samples far away, since nearby samples are more likely to be similar in grade. Such inverse distance techniques introduce issues such as sample search and declustering decisions, and cater for the estimation of blocks of a defined size in addition to point estimates.
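A minimal sketch of inverse distance weighting, with hypothetical samples and an assumed power of 2:

```python
import numpy as np

def idw_estimate(point, samples, grades, power=2.0):
    """Weight each sample by the inverse of its distance to the
    estimation point, raised to the chosen power, then normalise."""
    distances = np.linalg.norm(samples - point, axis=1)
    if np.any(distances == 0):            # a sample coincides with the point
        return grades[np.argmin(distances)]
    weights = 1.0 / distances ** power
    return np.sum(weights * grades) / np.sum(weights)

# Hypothetical sample locations (x, y) and grades in g/t.
samples = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])
grades = np.array([1.0, 3.0, 2.0])
estimate = idw_estimate(np.array([5.0, 0.0]), samples, grades)
print(estimate)
```

Raising the power makes the estimate approach the nearest neighbour result; lowering it toward zero makes it approach a simple moving average, which is the flexibility noted in the advantages below.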

Advantages

  • Computationally simple
  • Exponent gives flexibility. The same estimation procedure can be used to create very smooth estimates (like a moving average) or very variable estimates (like nearest neighbor)

Disadvantages

  • Preferential sampling makes estimates unreliable
  • Requires decision on which sample to use
  • Extreme values create large halos of high estimates
  • Choice of exponent introduces arbitrariness

Kriging [2]

Input data in specific locations of geostatistical grid points.[2]

Kriging is the geostatistical estimation method developed to provide optimal linear and unbiased estimates. It depends on expressing the spatial variation of the property in terms of the variogram (or correlogram), and it minimizes the prediction errors, which are themselves estimated. In an estimated block model, kriging considers the covariances among the samples, which reduces the weights of a cluster of samples and minimizes the effect of variable sample spacing. The effectiveness of kriging depends on the correct input of the parameters that describe the variogram (or semivariogram) model. Kriging is robust, however: even a naive selection of parameters will provide an estimate comparable to many other grid estimation procedures.

How it works

Ordinary Kriging is a highly reliable method and is usually recommended for most data sets. It assumes that the data set has a stationary variance but a non-stationary mean value within the search radius. The method uses geostatistical grids that are normally grid-centered, with output results located at points. The input data used in kriging (e.g. drillhole samples) should be presented with specific locations that correspond to the x and y locations of the grid points (northings and eastings, as in the figure shown on the right).

Kriged estimates should not be systematically higher or lower than the true value used, which is why kriging weights are calculated by solving a set of equations shown below that minimize the variance of the estimation error.

For each sample i = 1, …, n: Σ_j λ_j γ(x_i, x_j) + μ = γ(x_i, x_0), subject to Σ_j λ_j = 1, where λ_j are the kriging weights, μ is a Lagrange multiplier, γ is the variogram, and x_0 is the point being estimated.

Kriging uses a set of simultaneous linear equations for each point on the output grid, such that all of the input data are optimally weighted according to distance using the semivariogram. These equations are often written using matrix notation: the correlation matrix on the left-hand side records all of the redundancies between the samples and ensures that the kriging weights account for sample clustering.

[Γ 1; 1ᵀ 0] [λ; μ] = [γ_0; 1], where Γ contains the variogram values between all pairs of samples and γ_0 the variogram values from each sample to the estimation point.

Kriged estimates can be represented graphically on geostatistical grids to provide geological knowledge. The key to kriging is not just interpretation but optimized interpretation; modifications can be made to account for anisotropy and to include trends, with the graphical estimates following the continuity directions defined in the variograms.
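The kriging equations described above can be sketched numerically. The example below assumes a spherical variogram with illustrative nugget, sill and range values, and hypothetical samples not taken from the article:

```python
import numpy as np

def spherical_gamma(h, nugget=0.1, sill=1.0, range_=100.0):
    """Assumed spherical variogram model with illustrative parameters."""
    h = np.asarray(h, dtype=float)
    r = h / range_
    g = np.where(h < range_,
                 nugget + (sill - nugget) * (1.5 * r - 0.5 * r ** 3),
                 sill)
    return np.where(h == 0, 0.0, g)

def ordinary_kriging(point, samples, grades):
    """Build and solve the ordinary kriging system: sample-to-sample
    variogram values on the left-hand side (which penalise clustered
    samples), a row and column of ones with a Lagrange multiplier to
    force the weights to sum to 1, and sample-to-point variogram
    values on the right-hand side."""
    n = len(samples)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = spherical_gamma(
        np.linalg.norm(samples[:, None] - samples[None, :], axis=2))
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = spherical_gamma(np.linalg.norm(samples - point, axis=1))
    solution = np.linalg.solve(A, b)
    weights = solution[:n]            # solution[n] is the Lagrange multiplier
    return weights @ grades, weights

samples = np.array([[10.0, 10.0], [40.0, 15.0], [25.0, 40.0]])  # hypothetical
grades = np.array([1.2, 3.5, 0.8])
estimate, weights = ordinary_kriging(np.array([20.0, 20.0]), samples, grades)
print(estimate, weights)
```

Because the weights are forced to sum to one, the estimator is unbiased regardless of the (unknown) local mean; the variogram values in the matrix are what down-weight clustered samples.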



Advantages of Kriging

  • Provides very good local and global estimates.
  • Geological knowledge is captured in variogram.
  • Statistical approach allows uncertainty to be quantified.

Disadvantages of kriging

  • Not easy to comprehend.
  • Computationally intensive: hardware, software.
  • Flexibility and power created by many parameters also create arbitrariness and more possibilities for error.

Other types of Kriging

Simple Kriging This method assumes that the data set has a stationary variance and a stationary mean value and requires the user to enter the mean value.

Universal Kriging This method incorporates trends into kriging. Universal Kriging involves a two-stage process in which a surface representing the drift of the data is built in the first stage and the residuals from this surface are calculated in the second stage. With this method, the user can set the polynomial expression used to represent the drift surface.

Indicator/Probability Kriging These methods are applied to improve estimations when ore zones are erratic and grade distributions are highly variable and complex. This is because the method is robust in handling non-standard grade distributions and produces less smoothed estimates compared to Ordinary Kriging.

Gaussian Simulation

This is a process of replicating reality using a kriging model; it involves generating realizations (equiprobable solutions) of a random function that has the same statistical features as the sample data used to generate it. Simulation produces better representations of the local variability because it restores the variability that is smoothed out by kriging. Unlike kriging, simulation is not based on the local average of the data, which is what produces kriging's smoothed output.

The method is based on the principle that an appropriate simulation of a point is a value drawn from its conditional distribution given the values at some nearby points. The sequential nature rests on the fact that subsequently simulated points make use not only of the nearby original conditioning data, but also of the nearby previously simulated values[7].

Points To Note About Simulation

  • Point-scale modelling.
  • Block averaging is required to get block results.
  • Reproduces histogram and variogram.
  • Permits uncertainty assessment and risk quantification.
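A highly simplified 1-D sketch of sequential simulation, assuming a zero global mean, an exponential covariance model and simple kriging at each step (all of these choices are illustrative, not prescribed by the article):

```python
import numpy as np

rng = np.random.default_rng(42)

def cov(h, sill=1.0, corr_range=10.0):
    """Exponential covariance model (an illustrative choice)."""
    return sill * np.exp(-np.abs(h) / corr_range)

def sequential_gaussian_sim(grid, cond_x, cond_v):
    """Visit the grid nodes in random order. At each node, simple-krige
    a conditional mean and variance from the original data plus all
    previously simulated nodes, then draw the node's value from that
    conditional normal distribution."""
    known_x, known_v = list(cond_x), list(cond_v)
    result = {}
    for x in rng.permutation(grid):
        xs, vs = np.array(known_x), np.array(known_v)
        C = cov(xs[:, None] - xs[None, :])    # covariances among knowns
        c0 = cov(xs - x)                      # covariances to the node
        w = np.linalg.solve(C, c0)            # simple kriging weights
        mean = w @ vs                         # assumes zero global mean
        var = max(cov(0.0) - w @ c0, 0.0)     # simple kriging variance
        result[x] = rng.normal(mean, np.sqrt(var))
        known_x.append(x)
        known_v.append(result[x])
    return np.array([result[x] for x in grid])

grid = np.arange(1.0, 19.0)                   # nodes between the two data points
realization = sequential_gaussian_sim(grid, cond_x=[0.0, 19.0], cond_v=[0.5, -0.5])
print(realization.round(2))
```

Re-running with different seeds yields different but equiprobable realizations that all honour the conditioning data and the covariance model, which is what permits the uncertainty assessment noted above.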

Summary of Resource Estimation Method Selection

Resource estimation method selection[1]

Resource Block Model

Example of a Surpac block model.[8]

The block model is created using geostatistics and the geological data gathered through drilling of the prospective ore zone. The block model is essentially a set of specifically sized "blocks" in the shape of the mineralized orebody. Although the blocks all have the same size, the characteristics of each block differ. The grade, density, rock type and confidence are all unique to each block within the entire block model. An example of a block model is shown on the right. Once the block model has been developed and analyzed, it is used to determine the ore resources and reserves (with project economics considerations) of the mineralized orebody. Mineral resources and reserves can be further classified depending on their geological confidence.

Mineral Resources[9]

A mineral resource can be explained as a concentration or occurrence of diamonds, natural solid inorganic material, or natural solid fossilized organic material including base and precious metals, coal, and industrial minerals in or on the Earth’s crust in such form and quantity and of such a grade or quality that it has reasonable prospects for economic extraction. The location, quantity, grade, geological characteristics and continuity of a mineral resource are known, estimated or interpreted from specific geological evidence and knowledge.

Inferred Mineral Resource

An inferred mineral resource is that part of a Mineral Resource for which quantity and grade or quality can be estimated on the basis of geological evidence and limited sampling and reasonably assumed, but not verified, geological and grade continuity. The estimate is based on limited information and sampling gathered through appropriate techniques from locations such as outcrops, trenches, pits, workings and drill holes.

Indicated Mineral Resource

An indicated mineral resource is that part of a Mineral Resource for which quantity, grade or quality, densities, shape and physical characteristics, can be estimated with a level of confidence sufficient to allow the appropriate application of technical and economic parameters, to support mine planning and evaluation of the economic viability of the deposit. The estimate is based on detailed and reliable exploration and testing information gathered through appropriate techniques from locations such as outcrops, trenches, pits, workings and drill holes that are spaced closely enough for geological and grade continuity to be reasonably assumed.

Measured Mineral Resource

A measured mineral resource is that part of a Mineral Resource for which quantity, grade or quality, densities, shape, and physical characteristics are so well established that they can be estimated with confidence sufficient to allow the appropriate application of technical and economic parameters, to support production planning and evaluation of the economic viability of the deposit. The estimate is based on detailed and reliable exploration, sampling and testing information gathered through appropriate techniques from locations such as outcrops, trenches, pits, workings and drill holes that are spaced closely enough to confirm both geological and grade continuity.

Mineral Reserves[9]

A Mineral Reserve is the economically mineable part of a Measured or Indicated Mineral Resource demonstrated by at least a Preliminary Feasibility Study. This Study must include adequate information on mining, processing, metallurgical, economic and other relevant factors that demonstrate, at the time of reporting, that economic extraction can be justified. A Mineral Reserve includes diluting materials and allowances for losses that may occur when the material is mined.

Probable Mineral Reserve

A probable mineral reserve is the economically mineable part of an Indicated and, in some circumstances, a Measured Mineral Resource demonstrated by at least a Preliminary Feasibility Study. This Study must include adequate information on mining, processing, metallurgical, economic, and other relevant factors that demonstrate, at the time of reporting, that economic extraction can be justified.

Proven Mineral Reserve

A proven mineral reserve is the economically mineable part of a Measured Mineral Resource demonstrated by at least a Preliminary Feasibility Study. This Study must include adequate information on mining, processing, metallurgical, economic, and other relevant factors that demonstrate, at the time of reporting, that economic extraction is justified.

Case History

When the Bre-X Minerals Ltd. scandal was revealed in the spring of 1997, it was one of the largest core salting scams in history and galvanised the development of the NI 43-101 reporting standards. While not the first (Tapin Copper salted samples in the 1970s), it is one of the most notorious and was the catalyst for reporting reform.

The Bre-X Story

Formed in 1988 by a former stockbroker, Bre-X Minerals commenced gold exploration in 1993 near the Busang River in Indonesia. After three years of exploration work, the Busang deposit had reportedly revealed 47 million ounces of gold, one of the largest deposits in the world.[10] The price of Bre-X stock skyrocketed, raising the market capitalization to a peak of USD$4.4 billion in 1997. At this point, at the request of the Indonesian government, Bre-X partnered with Freeport-McMoRan Copper & Gold to help operate the mine. As part of internal due diligence, Freeport began exploration work to confirm the Bre-X findings. Five weeks after the announcement of the partnership, Freeport reported that their core samples “show insignificant amounts of gold."[11]

The scandal was perpetrated by carefully salting samples in a controlled and measured procedure. After the core samples were brought to surface, someone added gold filings to the samples. The procedure likely involved the Pupos brothers, with flakes being added at night after everyone had left. The salting was done in such a way as to “concoct a gigantic underground ore body that didn’t exist. [De Guzman] knew which sample bags came from which holes at what level, and he would salt according to how he wanted this imaginary ore body to appear."[12] The key was to carefully measure and target the salting to ensure a reasonable spatial variogram could be generated, as discussed previously. The resultant fallout from the scandal bankrupted Bre-X and cost investors billions of dollars. Additionally, shortly before the release of the findings, exploration manager Michael de Guzman fell to his death from a helicopter en route to the site. Stockbroker and founder David Walsh died of a brain aneurysm in 1998. Only geologist John Felderhof is alive today, after being acquitted in 2007 of insider trading charges brought against him by the Ontario Securities Commission. Felderhof now lives with his second wife in the Philippines. To date, no one has been convicted in connection with the Bre-X scandal.

Emergence of Reporting Standards[13]

The Bre-X scandal, as well as others like it, resulted in the issuance of national standards to protect investors from fraudulent mining claims. The applicable regulatory document depends on the national jurisdiction in which the company is filed. In Canada, National Instrument 43-101 (NI 43-101) details the requirements for reporting mineralized findings; Australia uses the Joint Ore Reserves Committee Code (JORC Code); and South Africa mandates the South African Code for the Reporting of Mineral Resources and Mineral Reserves (SAMREC). All three codes are similar, but not identical, in requirements, definitions and terminology. Regardless of the technicalities of each document, all exist to:

  • Set criteria for approval of assay labs and methods
  • Regulate ways of making sure samples are not tampered with
  • Ensure periodic, independent reporting of reserves and review and approval of reserve reports
  • Standardize for the disclosure of assay and drill results and procedures so that all data is clear to investors
  • Standardize definitions of reserve types and reserve calculations
  • Assign accountability to an individual deemed to be a competent person/ professional in the industry

The establishment and subsequent revisions of the NI 43-101 document by the Canadian Securities Administrators provide a framework to adhere to when writing the report. By establishing these standards, investors are able to get a more reliable and honest review of potential mineralized zones.

References

  1. 1.0 1.1 Darling, P., & Noble, A. (2011). Mineral Resource Estimation. SME mining engineering handbook (3rd ed., p. 203). Englewood, Colo.: Society for Mining, Metallurgy, and Exploration.
  2. 2.0 2.1 2.2 2.3 Srivastava, M. R. (2013). Geostatistics and Orebody Modelling. Toronto: FSS Canada Consultants Inc.
  3. Isaaks, E. H. (1989). Applied Geostatistics. New York: Oxford University Press Inc.
  4. Myers, J. C. (1997). Geostatistical Error Management. Thompson: An International Thomson Publishing Company.
  5. 5.0 5.1 Mining Associates: Minerals & Energy Consultants. (2013). Resource Estimation and Surpac. Brisbane: Mining Associates.
  6. Glacken, I. M., & Snowden, D. V. (2001). Mineral Resource Estimation. In A. C. Edwards, Mineral Resource and Ore Reserve Estimation - The AusIMM Guide to Good Practice (pp. 189-198). Melbourne: The Australasian Institute of Mining and Metallurgy.
  7. Blackwell, A. J. (2002). Applied Mineral Inventory Estimation. United Kingdom: Cambridge University Press.
  8. Dassault Systèmes - GEOVIA. (2012). Geological Drillhole Database.
  9. 9.0 9.1 CIM Standing Committee on Reserve Definitions. (2010). CIM Definition Standards - For Mineral Resources and Mineral Reserves.
  10. Investopedia. (2010). Bre-X Minerals Ltd.
  11. Francis, Diane. (1997). "Chapter 13." Bre-X: The inside Story. Toronto: Key Porter. Print.
  12. Marotte, B. (1997). Shares: OSC, TSE to consider tougher mining rules in wake of bre-X woes. The Windsor Star.
  13. Den Tandt, M., & Howlett, K. (1997). Bre-X debacle may spawn new regulations OSC, TSE to examine juniors' reporting rules. The Globe and Mail.