Sampling QAQC


The public reporting of Mineral Resources and Ore Reserves (in Canada, Mineral Reserves) must comply with various International Codes. These Codes always require that the data be appropriate for use in the estimation of tonnes and grades. This means that the collection of data should accord with sampling theory and that the samples must be systematically reduced in particle size and mass to provide a representative sample. Ensuring that the data is appropriate is typically managed using Quality Assurance (QA) and Quality Control (QC) procedures.


Sampling consists of extracting a representative parcel of material from the deposit and then determining the grade of that parcel through a process of subsampling and assaying. Each stage of this process introduces error. Such errors are best minimised by maintaining good QA procedures and by consistent monitoring of QC data.


In reality there may be additional “errors” involved in this process which may not be identified with a sampling study. These errors include ignoring the spatial relationships inherent in geological data, as well as the non-quantifiable procedural errors that may be built into any data collection system. Such errors are best addressed by constant vigilance and audits.

International codes for resource and reserve reporting

There are a number of international standards in use for public reporting of resource and reserve estimates. These have been developed by the Canadian Institute of Mining, Metallurgy and Petroleum—the CIM (2004), the Australasian Joint Ore Reserves Committee—the JORC (2004), the South African Mineral Asset Valuation Committee—the SAMVAL Code (2008) and the United States Securities and Exchange Commission—the SEC (undated). All these standards specify the requirements for data quality as the basis for carrying out geostatistical studies. An example extracted from the JORC Code is shown in Table 1, indicating the nature of the requirements in Codes that practitioners regard as non-prescriptive.


Table 1: Extracts from JORC Code checklist of assessment and reporting criteria

Sampling Techniques and Data

Criteria: Sampling techniques
Explanation:
• Nature and quality of sampling (e.g. cut channels, random chips etc.) and measures taken to ensure sample representivity.

Criteria: Drilling techniques
Explanation:
• Drill type (e.g. core, reverse circulation, open-hole hammer, rotary air blast, auger, Bangka etc.) and details (e.g. core diameter, triple or standard tube, depth of diamond tails, face-sampling bit or other type, whether core is oriented and if so, by what method, etc.).

Criteria: Sub-sampling techniques and sample preparation
Explanation:
• If core, whether cut or sawn and whether quarter, half or all core taken.
• If non-core, whether riffled, tube sampled, rotary split etc. and whether sampled wet or dry.
• For all sample types, the nature, quality and appropriateness of the sample preparation technique.
• Quality control procedures adopted for all sub-sampling stages to maximise representivity of samples.
• Measures taken to ensure that the sampling is representative of the in situ material collected.
• Whether sample sizes are appropriate to the grain size of the material being sampled.

Criteria: Quality of assay data and laboratory tests
Explanation:
• The nature, quality and appropriateness of the assaying and laboratory procedures used and whether the technique is considered partial or total.
• Nature of quality control procedures adopted (e.g. standards, blanks, duplicates, external laboratory checks) and whether acceptable levels of accuracy (i.e. lack of bias) and precision have been established.



In Canada, the requirements under National Instrument NI 43-101 (2005) are clearly specified. Data verification is defined as the process of confirming that data has been generated with proper procedures, has been accurately transcribed from the original source and is suitable to be used. NI 43-101 includes the following specific requirements:

... the issuer must include in the written disclosure (a) a statement whether a qualified person has verified the data disclosed, including sampling, analytical and test data underlying the information or opinions contained in the written disclosure; (b) a description of how the data was verified and any limitations on the verification process; and (c) an explanation of any failure to verify the data.

... the issuer must include in the written disclosure ...(d) any drilling, sampling, recovery or other factors that could materially affect the accuracy or reliability of the data referred to in this subsection; (e) a summary description of the type of analytical or testing procedures utilized, sample size, the name and location of each analytical or testing laboratory used, and any relationship of the laboratory to the issuer;...

 

Sampling theory

The theory of particulate sampling was first defined by Gy (1979) and has been expounded by Pitard (1993), Shaw (1997) and others. Improvements on the basic theory have been provided by Bongarcon (1998) and Bongarcon and Gy (1999), bringing theoretical and experimental results much closer together.

The Fundamental Sampling Error (FE) of Gy (1979) is a formal description of the observed relationship between the particle size of a component of interest (e.g. gold) within a sample and the nominal particle size (defined by convention as the 95% passing size) of that sample. The FE is measured as a relative variance, which is additive for the various stages of a sampling protocol. Thus, the total experimental error is the sampling error, plus sample preparation error, plus assaying error. At any stage of the sampling and assaying process, the differential relative variance is due solely to a specified part of the process and can be determined using suitably designed experiments.
In summary and in its simplest form, the Fundamental Sampling Error is represented by the following equation:

\sigma^2_{FE} = K \, d^{\alpha} \left( \frac{1}{M_S} - \frac{1}{M_L} \right)
This relationship may also frequently be seen, when the mass of the lot is much greater than the mass of the sample, as:

\sigma^2_{FE} = \frac{K \, d^{\alpha}}{M_S}

Where:

\sigma^2_{FE} is the Fundamental Sampling Error expressed as a relative variance

K is a sampling constant

d is the nominal particle size (95% passing) in cm

M_S is the mass of the sample in grams

M_L is the mass of the lot in grams

\alpha is an exponent characterising the deposit of interest.
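
As a simple illustration, the relationship above can be evaluated directly once K and α are known. The following Python sketch uses purely illustrative values for K and α; the real values must be determined experimentally for each deposit.

```python
def fundamental_error_variance(K, alpha, d_cm, mass_sample_g, mass_lot_g):
    """Relative variance of the Fundamental Sampling Error (Gy's formula).

    K            sampling constant (deposit specific)
    alpha        exponent characterising the deposit
    d_cm         nominal particle size (95% passing), in cm
    mass_sample_g, mass_lot_g  sample and lot masses, in grams
    """
    return K * d_cm ** alpha * (1.0 / mass_sample_g - 1.0 / mass_lot_g)

# Illustrative values only: a 2.5 kg split taken from a 5 kg lot crushed to 2 mm (0.2 cm).
sigma2 = fundamental_error_variance(K=100.0, alpha=1.5, d_cm=0.2,
                                    mass_sample_g=2500.0, mass_lot_g=5000.0)
print(f"relative variance = {sigma2:.5f}, relative standard deviation = {sigma2 ** 0.5:.1%}")
```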

A parametric approach to defining the sampling constant requires definition of further factors:

K = c \, f \, g \, \ell

Where:

c is the composition factor

f is the shape factor of particles or fragments

g is the size distribution (grouping) factor

\ell is the liberation factor, a function of the sample particle size and the liberation size


The parametric method has been shown to provide only a first-pass approximation and is only useful when there is no data available from a deposit. Empirical sampling tests are preferable for quantifying the relationship between mass and particle size for a specific orebody. Test work is usually designed using sampling tree experiments or paired samples. Solving for the parameters K and \alpha enables a sampling nomogram to be defined for the deposit of interest. The nomogram enables prediction of the total sampling error that would be obtained using alternative sampling protocols.
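
As a hedged sketch of what solving for the parameters can look like in practice, the Python below fits K and α to hypothetical paired-sample results (relative variance measured at several crush sizes with known subsample masses), using the simplification that the lot is much larger than the subsample. All numbers are invented for illustration.

```python
import math

# Hypothetical sampling-tree results: (nominal size d in cm, subsample mass in g,
# measured relative variance of paired splits). Values are illustrative only.
stages = [
    (0.6,    2500.0, 0.012),
    (0.2,    2500.0, 0.004),
    (0.0075,  400.0, 0.0008),
]

# With the lot much larger than the sample, sigma^2 ~ K * d^alpha / M_S, so
# log(sigma^2 * M_S) = log(K) + alpha * log(d): a straight line in log space.
xs = [math.log(d) for d, m, v in stages]
ys = [math.log(v * m) for d, m, v in stages]

# Least-squares fit of that line: slope = alpha, intercept = log(K).
n = len(xs)
x_bar, y_bar = sum(xs) / n, sum(ys) / n
alpha = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / \
        sum((x - x_bar) ** 2 for x in xs)
K = math.exp(y_bar - alpha * x_bar)

print(f"fitted alpha = {alpha:.2f}, K = {K:.1f}")
```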


Gy's sampling theory also refers to non-quantifiable errors that arise due to mistakes in sampling, sample preparation and assaying. Problems in data recording, data management and grade interpolation cause additional errors. All these errors may be reduced by quality control practices, vigilance and audits, reducing their potential impact on profitability.


Accuracy and precision


The difference between the estimated value and the true value for any determination (or prediction) is termed the error. We are interested in two aspects of this error. The consistent components of this error are termed bias and reflect the accuracy of the determination method. The random components of this error are termed precision and reflect the repeatability of the method.


The classic analogy proposed to understand the difference between these two components of error is shooting at a fixed target. Figures 1 and 2 illustrate the relationships between bias and precision. If the arrows fall evenly around the bull’s-eye, as shown on both targets in Figure 1, they show good accuracy (low bias). If the arrows are also tightly clustered, as in the target on the right, then they show high precision.

 
Figure 1: Low Bias

The objective is, of course, to have both high accuracy and high precision. It is not appropriate to discuss the average accuracy without qualifying it by discussing the repeatability of the individual results.

Figure 2: High Bias

The differences between bias and precision are reflected in the way these aspects of the total error are presented and used:

Bias is frequently discussed in terms of the differences in the central tendencies (e.g. mean, median, etc.) of two sets of data, or between a set of data and the true value.

Precision is frequently discussed in terms of the variability of sets of data by comparing the distribution of the differences. Common measures for this are the second order moments such as the standard deviation and variance or their standardised equivalents, the coefficient of variation (CV) and the relative variance (the square of the CV).
The Limit of Detection (LOD) is defined as the lower limit of assaying where the precision approaches ±100%.
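
As an illustration of these precision measures, the short Python sketch below computes the coefficient of variation and relative variance for a set of hypothetical repeat assays; quoting precision as roughly twice the CV (about 95% confidence) is a common, but not universal, convention.

```python
from statistics import mean, pstdev

# Hypothetical repeat assays of the same pulp (g/t Au).
repeats = [2.41, 2.55, 2.48, 2.39, 2.52, 2.60, 2.44]

m = mean(repeats)
sd = pstdev(repeats)
cv = sd / m          # coefficient of variation
rel_var = cv ** 2    # relative variance (additive across sampling stages)

print(f"mean = {m:.2f} g/t, CV = {cv:.1%}, relative variance = {rel_var:.4f}")
print(f"precision at ~95% confidence ~ +/-{2 * cv:.1%}")
```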


Representative samples

In describing any form of sampling, whether from diamond drilling, reverse circulation drilling, trenching or any other source, it is important to conform to the following conventional terminology of Gy (after Pitard, 1993):

Lot - the total collection of material for which an estimate of some component is required.
Component - the property that is to be estimated using a sample: e.g. grade, density, thickness, etc.
Sample - part of a lot on which a determination of a component will be carried out, where each part of the lot had an equal probability of being selected into the sample.
Specimen - part of a lot on which a determination of a component will be carried out, but for which the rule of equal probability of selection has not been respected.
Increment - part or whole of a sample, selected from the lot with a single cut of the sampling device. A number of increments may be combined to form one sample.

The objective of representative sampling is to obtain samples rather than specimens.  Determination of the grade within a drilled interval is an example of the use of sampling “to estimate the component of the lot.”  It is a difficult enough process to ensure that the sampling is correctly carried out when the lot is regarded as all of the material from within a single drilled interval.  Once this problem is appreciated, the greater difficulty in obtaining representative samples of the deposit becomes clear.

Sampling protocols

The process of sampling a drilled interval can be defined by developing and testing a sampling protocol.  Such a protocol can be characterised by description of the nominal particle size and mass at each stage of subsampling.  The following example not only illustrates the minimum information required but can also be regarded as a minimum safe initial sampling protocol, until experimental work is carried out as an early part of resource definition sampling:

“Each 5 kg sample was dried and reduced to less than 6 mm in a jaw crusher. The whole sample was then cone crushed to 2 mm. The crushed sample was riffle split in half, to approximately 2.5 kg, then pulverised to 90% passing 75 microns in a Labtechnics LM5 mill using a single large puck. The entire pulp was roll-mixed on a plastic sheet and four 100 g cuts were taken with a small trowel, to provide 400 g for storage in a kraft envelope. The residue was rebagged and stored for six months. From the 400 g subsample 50 g was weighed out for fire assay.”

Particle sizing tests and experimental repeatability sampling should be carried out at each stage of comminution, i.e. in the above example, after the jaw crusher, cone crusher and pulveriser. 
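
As a hedged illustration, the cumulative random error of a protocol like the one quoted above can be checked stage by stage using Gy's relationship. The stage sizes and masses below follow the example protocol; the sampling constant and exponent are assumed values only and would normally come from the deposit's own test work.

```python
def stage_variance(K, alpha, d_cm, mass_after_g, mass_before_g):
    # Relative variance added when mass_before_g is split down to mass_after_g
    # at a nominal particle size of d_cm (Gy's relationship).
    return K * d_cm ** alpha * (1.0 / mass_after_g - 1.0 / mass_before_g)

K, alpha = 60.0, 1.5  # assumed, deposit-specific values

protocol = [
    # (nominal size in cm at the split, mass retained in g, mass before the split in g)
    (0.2,    2500.0, 5000.0),  # riffle split in half after crushing to 2 mm
    (0.0075,  400.0, 2500.0),  # four 100 g cuts after pulverising to 75 um
    (0.0075,   50.0,  400.0),  # 50 g charge weighed out for fire assay
]

stage_errors = [stage_variance(K, alpha, *s) for s in protocol]
total = sum(stage_errors)  # relative variances are additive across stages

for (d, m_after, m_before), v in zip(protocol, stage_errors):
    print(f"{m_before:>6.0f} g -> {m_after:>6.0f} g at {d * 10:.3f} mm: variance {v:.5f}")
print(f"total relative standard deviation ~ {total ** 0.5:.1%}")
```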

Sampling tree experiment

The use of 100 pairs of samples at each stage constitutes one example of a sampling tree experiment, by means of which the total relative variance, and consequently the differential relative variance, can be defined for each stage of the sampling protocol. This procedure should be used to optimise the sample preparation protocol, so as to cost-effectively minimise the total random error of the sample assays. Frequently, a sampling protocol is summarised using a sampling nomogram that indicates the errors at each stage of the process. An example of a sampling nomogram is shown in Figure 3. Note that the diagonal line defines a constant error and that the calibration of such plots (i.e. determination of the actual error value along any diagonal line) is done using a sampling tree experiment.
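
A minimal sketch of how the paired data from such an experiment can be reduced to total and differential relative variances is shown below. The pairing and the duplicate assays are hypothetical; the key point is that the relative variances are additive, so subtracting the downstream total isolates the contribution of a single stage.

```python
from statistics import mean

def paired_relative_variance(pairs):
    """Estimate total relative variance from duplicate pairs.

    For each pair, half the squared difference estimates the variance at that
    grade; dividing by the squared pair mean makes it relative, and averaging
    over pairs gives the total relative variance downstream of the split.
    """
    return mean((x - y) ** 2 / (2.0 * ((x + y) / 2.0) ** 2) for x, y in pairs)

# Hypothetical duplicate assays (g/t Au) split at two different stages.
pairs_after_crush = [(1.10, 0.92), (2.40, 2.05), (0.55, 0.70), (3.10, 2.60)]
pairs_after_pulverising = [(1.02, 0.98), (2.21, 2.30), (0.61, 0.63), (2.85, 2.95)]

total_crush = paired_relative_variance(pairs_after_crush)        # split + prep + assay
total_pulp = paired_relative_variance(pairs_after_pulverising)   # prep + assay only

print(f"after crush split: {total_crush:.4f}")
print(f"after pulverising: {total_pulp:.4f}")
print(f"crush stage alone: {total_crush - total_pulp:.4f}")
```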

 

Figure 3: An example of a sampling nomogram

Quality Assurance and Quality Control


A written Quality Assurance (QA) program will describe the steps being taken to minimise sampling errors. Written protocols usually define (1) the sampling program, (2) the preparation of subsamples, (3) the assaying procedures, and (4) the procedures and criteria for Quality Control (QC). Many mining companies now use such written procedures and laboratories routinely make available their written procedures to clients (sometimes under confidentiality agreements).

Quality Control (QC) refers to the results for standards, blanks, duplicates of samples and repeats of previously prepared pulps that are all submitted to the laboratory with the samples. For such QC data to be accepted by an independent auditor it is usually a requirement that they be submitted "blind" to the laboratory in such a way that the laboratory cannot identify them.


Following a number of significant fraud cases in the mining industry, QC now includes attending to the Chain of Custody, to ensure that the integrity of samples collected in the field is not compromised.


Analysis of paired data


Pairs of samples that are prepared and assayed in the same manner provide a measure of the random error of sampling. The total error is the sum of the errors due to splitting the initial duplicate, preparing the sample and assaying the sample. Pulps are produced by crushing and pulverising drill core or Reverse Circulation (RC) drilling chips to a nominal particle size (e.g. 90% passing 75 μm), before a small subsample (say 200 g) is retained for assay in a pulp packet. Residue samples are additional pulps collected after sample preparation but before assaying.

A systematic difference between pairs of assays (bias) is best demonstrated by deviations of the trend of paired data from a 45° line on a scatterplot. Comparisons of the error between the two estimates can be made by examining the Half Absolute Relative Difference (HARD) for the paired data, produced by dividing half the absolute difference between the two values by the mean of the two values, as discussed in Shaw (1997). Measures of precision derived using this relative measure are directly comparable from one deposit to another. Scatterplots of the HARD against the mean of the samples provide a good visual indication of the quality of the assaying at various grades. A HARD value of 33.3% indicates that one assay is exactly twice the other (e.g. 10.0 g/t Au and 20.0 g/t Au).
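
A short Python sketch of the HARD calculation is given below; the duplicate pairs and the 10% tolerance are illustrative assumptions only.

```python
def hard(x, y):
    """Half Absolute Relative Difference: half of |x - y| divided by the pair
    mean, which simplifies to |x - y| / (x + y)."""
    return abs(x - y) / (x + y)

# The example from the text: one assay exactly twice the other gives 33.3%.
print(f"HARD(10.0, 20.0) = {hard(10.0, 20.0):.1%}")

# For a set of duplicate pairs, HARD is typically plotted against the pair mean;
# a simple summary is the proportion of pairs inside a chosen tolerance.
pairs = [(1.10, 0.92), (2.40, 2.05), (0.55, 0.70), (3.10, 2.60)]
within = sum(hard(x, y) <= 0.10 for x, y in pairs) / len(pairs)
print(f"{within:.0%} of pairs within 10% HARD")
```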

Standards

Standards are samples of known (usually certified) grade that are submitted to monitor the accuracy of a laboratory, i.e. the ability of the laboratory to obtain the correct or known result. A laboratory showing a systematic difference from the expected result is said to exhibit a bias. Standard samples confirm that the laboratory quality control procedures are effective and that significant bias is not evident within or between assay batches. These standard samples may be variously referred to as Certified Reference Materials (CRMs), Standard Reference Samples (SRSs), etc. Standard samples can be purchased commercially from a number of suppliers. Preparation on site may be done using reject drill cuttings and can provide standards with the same matrix as the assay samples from the deposit, an advantage in validating the assaying methods. Significant effort is required to ensure that the pulp is sufficiently ground and homogenised so that multiple, effectively identical aliquots of pulp can be taken.

Verification of the average grade and inherent variability of a CRM, done by assaying at a number of laboratories using a variety of techniques, is referred to as a "round-robin." Even if the site prepares some standards, the acquisition of some independently certified standards is still recommended. Standards should be submitted with all pulp repeats. The grades of the standards must be similar to those being validated.

Blanks

Blanks are barren samples with an expected very low grade. These are submitted to ensure that there is no contamination between samples during sample preparation or assaying. If blank samples that follow high grade samples return elevated grades, then contamination has occurred.
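
A minimal sketch of such a contamination check is shown below; the grade threshold, the high-grade cut-off and the data are all illustrative assumptions, not prescribed limits.

```python
def flag_blanks(batch, blank_ids, blank_limit=0.05, high_grade=5.0):
    """Flag blanks above a grade limit and note whether the preceding sample
    in the batch was high grade (suggesting carry-over contamination).

    batch: ordered list of (sample_id, grade); blank_ids: ids of inserted blanks.
    """
    flags = []
    for i, (sample_id, grade) in enumerate(batch):
        if sample_id in blank_ids and grade > blank_limit:
            follows_high = i > 0 and batch[i - 1][1] >= high_grade
            flags.append((sample_id, grade,
                          "follows high grade" if follows_high else "isolated"))
    return flags

# Hypothetical batch of assays (g/t Au) with one inserted blank.
batch = [("S1", 0.8), ("S2", 12.4), ("B1", 0.21), ("S3", 1.6)]
print(flag_blanks(batch, blank_ids={"B1"}))
```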

Duplicates

Duplicates are samples collected, prepared and assayed in an identical manner to the original sample, to provide a measure of the total error of sampling. When this error is derived in relative terms, the total error is the sum of the errors due to splitting the initial duplicate, preparing the sample and assaying the sample. There is no point in submitting waste samples as Duplicates. Field Duplicates should be collected at the drill rig or trench. Laboratory Duplicates may be produced by taking a second split after crushing diamond drill core, before the pulverising stage. Submitting the second half of sawn diamond core is actually a way of measuring the difference in grade between very closely spaced samples in the deposit. While it does not provide a measure of the sample preparation or assaying errors, it may provide a measure of the nugget effect in a deposit as a part of defining a variogram for a geostatistical study.

Crushing and pulverising reduces the particle size of drill core and RC chips to a nominal size (e.g. 90% passing 75 μm) and then a small subsample (say 200 g) of this pulp is retained for assay in a pulp packet. Residues of samples may be collected at all stages of the sampling protocol.

Repeats

Repeats are samples that have been previously prepared and assayed (so they are already finely comminuted pulps) and that then have been re-submitted for another identical analysis. Comparison of the results provides a measure of the precision of a laboratory, i.e. the ability of the laboratory to get the same assay result under the same conditions.

Pairs of samples assayed at different laboratories may help to define the inter-laboratory precision and may also identify a bias between the two laboratories.

Sources of error in assaying


Any assay method has an associated error. Measures of precision quantify the expectation that a result can be reproduced. There are a number of factors that cumulatively affect the precision of any assay result, including:

  • the mass of material being assayed
  • the homogeneity of the material being assayed
  • the concentration of the component of interest
  • matrix effects due to other elements in solution
  • instrument calibration and drift
  • volumetric or mass determinations, e.g. using non-calibrated glassware or poorly calibrated balances.
  • the integrity of the chemicals used to dissolve the component of interest, etc.

An example of a QC program


Check samples (Standards, Duplicates and Repeats) should be inserted so as to provide reasonable control of the sample batches being analysed. In a fire assaying procedure, a batch of 40 samples would be expected to contain:

  • 1 Blank
  • 2 Standards of different grades
  • 1 Field duplicate and
  • 1 Pulp repeat

This provides a significant and satisfactory level of control over the assaying and reasonable monitoring of the sample preparation (there is 5% control using Standards and 5% control using re-assaying of Duplicates and Repeats). The positions of the Standards within the batch should be randomised; they should look the same as the pulp Repeats (recovered from the laboratory and renumbered), and all checks should be submitted blind.
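
A simple way to randomise the positions of the check samples within each batch is sketched below; the batch size and check counts follow the example above, but the function itself is only an illustrative sketch, not a prescribed procedure.

```python
import random

def qc_positions(batch_size=40, n_blanks=1, n_standards=2,
                 n_field_duplicates=1, n_pulp_repeats=1, seed=None):
    """Randomly assign batch positions (1-based) to the blind check samples.

    Returns a dict mapping position to check type; all other positions
    carry ordinary samples.
    """
    rng = random.Random(seed)
    labels = (["blank"] * n_blanks + ["standard"] * n_standards +
              ["field duplicate"] * n_field_duplicates +
              ["pulp repeat"] * n_pulp_repeats)
    positions = rng.sample(range(1, batch_size + 1), len(labels))
    return dict(sorted(zip(positions, labels)))

print(qc_positions(seed=1))
```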

The results must be reviewed and approved by site personnel before the reported grades are incorporated into the database. If anomalies are detected, the whole batch may need to be re-analysed (or else handled according to procedures defined in the written QA protocol). The laboratory will certainly also run its own standards and checks, which are not blind, but for which results should also be reported to the client.

Laboratories should provide the results in electronic format, followed up with a signed certificate (which may also be digital). All the assay results should be stored in a relational database that must separately identify all check samples and their known values. In the case of Standards, these are the Expected Mean and Expected Variance. In the case of field Duplicates and pulp Repeats this is the matching sample number for the original analysis.


Various graphical plots are used to monitor the precision of field duplicates or pulp repeats (such as scatterplots, Figure 4) and the accuracy of Standards (such as X-bar plots, Figure 5).

Figure 4: Field duplicates of Escondida blast hole cone sampling, from which an estimate of the relative precision was obtained

Where:

UAL = upper action limit
UWL = upper watch limit
CL = expected value centre line
LWL = lower watch limit
LAL = lower action limit.


Figure 5: Sampling control charts showing unsatisfactory features (from Khosrowshahi et al., 2004)
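
A minimal sketch of how standard (CRM) results might be screened against control limits of this kind is given below, assuming the watch and action limits are set at two and three standard deviations either side of the certified mean (a common, but not universal, convention that should be fixed in the written QA protocol).

```python
def classify_standard(result, expected_mean, expected_sd, watch=2.0, action=3.0):
    """Classify a standard assay against X-bar control limits.

    Watch and action limits are assumed here to sit at 2 and 3 standard
    deviations either side of the certified mean.
    """
    z = (result - expected_mean) / expected_sd
    if abs(z) > action:
        return "fail: outside action limit"
    if abs(z) > watch:
        return "watch: outside watch limit"
    return "pass"

# Hypothetical CRM certified at 2.50 g/t Au with a standard deviation of 0.08 g/t.
for assay in (2.47, 2.68, 2.20):
    print(assay, classify_standard(assay, expected_mean=2.50, expected_sd=0.08))
```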

References


Bongarcon, D. (1998). Extensions to the demonstration of Gy's formula. Exploration and Mining Geology, v. 7, 1&2. p. 149-154.


Bongarcon, D. and Gy, P. (1999). The most common error in applying Gy's formula in the theory of mineral sampling. Australian Journal of Mining, August.


CIM. (2004). CIM Definition Standards On Mineral Resources and Mineral Reserves. Prepared by the CIM Standing Committee on Reserve Definitions, Canadian Institute of Mining, Metallurgy and Petroleum, November 14.

Gy, P. (1979). The Sampling Of Particulate Materials - Theory And Practice. Elsevier, Amsterdam, 431 pp.


JORC. (2004). Australasian Code for Reporting of Exploration Results, Mineral Resources and Ore Reserves (The JORC Code). Prepared by the Joint Ore Reserves Committee of the Australasian Institute of Mining and Metallurgy, Australian Institute of Geoscientists and Minerals Council of Australia (JORC), effective 17 December 2004.


Khosrowshahi, S., Shaw, W.J. and Yeates, G. (2004). Quantification of risk using simulation of the Chain of Mining - a case study on Escondida Copper. Orebody Modelling and Strategic Mine Planning, Perth, WA, 22-24 November, AusIMM, p. 381-389.


NI 43-101. (2005). Standards of Disclosure for Mineral Projects. Canadian National Instrument 43-101, Ontario Securities Commission, 28 OSCB pp 10355-10367.

Pitard, F.F. (1993). Pierre Gy's Sampling Theory and Sampling Practice, Second Edition. CRC Press, Florida, 488 p.


SAMVAL. (2008). The South African Code for the Reporting of Mineral Asset Valuation. The South African Mineral Asset Valuation Committee (SAMVAL) Working Group, Johannesburg.


SEC. (undated). The Securities Act. Industry Guide 7 documents the requirements for reporting mineral resources or mineral reserves in the USA, pp 34-36. Accessed at http://www.sec.gov/about/forms/industryguides.pdf.


Shaw, W.J. (1997). Validation of sampling and assaying quality for bankable feasibility studies. The Resource Database Towards 2000, Wollongong, 16 May, Australasian Institute of Mining and Metallurgy, p. 69-79.