Forest Mensuration. Brack and Wood


Types of error
Describing error
Error calculations

Error

An error may be defined as the difference between the measured value and the true value.
It is important for anyone involved in measurement to have a general knowledge of likely error sources, so that a uniform standard of precision can be applied in all of the steps involved in arriving at an estimate. Such a standard reduces the chance of wasting resources by measuring some things with little error, and others with great error, when the final result uses both measurements.

Errors arise from many sources. It pays the natural resource manager or scientist to determine as early as possible what are likely to be the dominant sources of error in the measurement task and to devote sufficient time to devising ways of reducing these errors. This is best accomplished by a preliminary trial - in short, a rehearsal. As well as providing a provisional estimate of the size of the various errors, the rehearsal enables one to check that the procedures are appropriate and sound.

There are four kinds of error:

  1. mistake
  2. accidental error
  3. bias
  4. sampling error


Mistake

Mistakes are caused by human carelessness, casualness or fallibility, e.g. incorrect use or reading of an instrument, an error in recording, or an arithmetic error in calculations. There is no excuse for mistakes, but we all make them! In general, never be satisfied with a single reading, no matter what you are measuring. Repeat the measurement. This shows up careless mistakes and improves the precision of the final result.
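The value of repeating a measurement can be sketched as follows. The diameter readings and the 2 cm tolerance here are illustrative assumptions, not from the text; the point is simply that a repeat reading exposes a careless mistake that a single reading would not.

```python
# Illustrative sketch (assumed readings): repeating a measurement shows up
# a careless mistake, and averaging the accepted readings improves precision.
import statistics

readings = [32.4, 32.6, 23.5, 32.5]  # diameters (cm); 23.5 looks like a recording mistake

# Flag any reading far from the median of the repeats (2 cm tolerance assumed).
median = statistics.median(readings)
checked = [r for r in readings if abs(r - median) < 2.0]

estimate = statistics.mean(checked)
print(f"accepted readings: {checked}")
print(f"estimate: {estimate:.2f} cm")
```

With a single reading there is nothing to compare against, so the mistake would pass unnoticed into the final result.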

Accidental error

Accidental errors are unavoidable. They arise from inconstant environmental conditions, limitations or deficiencies of instruments, and the assumptions and methods used. Accidental error is usually not important, as the error tends to be compensating.

Accidental error can be reduced by using more accurate and precise equipment but this can be expensive. A competent scientist is expected to be able to assess in advance how good an instrument needs to be in order to give results of an accuracy sufficient for the task in hand. In other words, he / she is expected to make an appropriate choice from the equipment available (or to design a more appropriate instrument).


Bias

Bias is a systematic distortion in a measurement, i.e. it is a non-compensating error. Common sources of bias are:
  1. flaw in measurement instrument or tool, e.g. survey tape 50 cm short;
  2. flaw in the method of selecting a sample, e.g. stocking counts - some observers always count the boundary tree, others always exclude it;
  3. flaw in the technique of estimating a parameter, e.g. stand volume: using a volume function or model in a forest without a prior check of its suitability for application in that forest; inappropriate assumptions about formulae;
  4. subjectivity of operators.
The only practical way to minimise measurement bias is by:

  1. continual check of instruments and assumptions;
  2. meticulous training;
  3. care in the use of instruments and application of methods.
Complete elimination of bias may be costly. One may have to compromise, in which case one should recognise that bias is present and appreciate its effects.

To avoid bias being introduced via faulty instruments, it is essential to check all instruments before one commences any important measuring project and re-check periodically during the course of the project.
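The compensating/non-compensating distinction can be illustrated with a small simulation (the tape bias, noise level and number of readings are assumed for illustration): averaging many readings removes the accidental error, but leaves the bias of a short tape fully intact.

```python
# Illustrative simulation (assumed values): a survey tape 50 cm short makes
# every 50 m length read about 0.5 m too long, and no amount of averaging
# removes this systematic error.
import random

random.seed(1)  # fixed seed so the sketch is repeatable

TRUE_LENGTH = 50.0   # metres, the true distance
TAPE_BIAS = 0.5      # metres, systematic error from the faulty tape
NOISE_SD = 0.2       # metres, accidental (compensating) error per reading

readings = [TRUE_LENGTH + TAPE_BIAS + random.gauss(0.0, NOISE_SD)
            for _ in range(1000)]
mean = sum(readings) / len(readings)

# The accidental errors average out; the bias does not:
print(f"mean of 1000 readings: {mean:.2f} m (true length {TRUE_LENGTH} m)")
```

The mean settles near 50.5 m, not 50.0 m: only checking the instrument itself, not repetition, removes the bias.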

Sampling error

Sampling error is the error associated with an estimate purely due to measuring only a sample of the population rather than every individual in the population.
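A minimal sketch of how sampling error is quantified (the tree heights are assumed data for illustration): the standard error of a sample mean is the usual measure of the sampling error of that mean.

```python
# Sketch (assumed sample data): the standard error of the sample mean
# estimates the sampling error when only n trees out of the stand are measured.
import math
import statistics

heights = [24.1, 26.3, 22.8, 25.0, 27.2, 23.6, 25.8, 24.4]  # tree heights (m)

n = len(heights)
mean = statistics.mean(heights)
s = statistics.stdev(heights)       # sample standard deviation
standard_error = s / math.sqrt(n)   # sampling error of the mean

print(f"mean height: {mean:.2f} m, standard error: {standard_error:.2f} m")
```

Increasing the sample size n shrinks the standard error in proportion to 1/sqrt(n), which is why larger samples give more precise estimates.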

Describing errors

Two terms are closely related to error, viz. accuracy and precision. Accuracy refers to the closeness of a measurement or estimate to the true value. Precision refers to the closeness of repeated measurements to each other, i.e. their clustering about their own mean. A measurement can therefore be precise yet inaccurate, as when a biased instrument gives tightly clustered readings that all miss the true value.

Error calculations

There are three fundamental theorems in determining errors, each assuming the component errors are independent. Anyone involved in measurement should be familiar with them.
  1. When A = B + C and the errors on B and C are b and c, then the error (a) on A is given by:
    a = sqrt(b^2 + c^2)
  2. When A = B x C, then the error (a) on A is given by:
    a/A = sqrt([b/B]^2 + [c/C]^2)
  3. When A =K x C^n where K is a constant, then the error (a) on A is given by:
    a/A = n x c / C

    ^n denotes raising to the power of n,
    and sqrt(Y) denotes taking the square root of Y
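The three theorems can be sketched as functions (assuming, as above, that the component errors are independent; the basal-area example at the end is purely illustrative).

```python
# The three error theorems as functions, for independent component errors.
import math

def error_sum(b, c):
    """Theorem 1: A = B + C  ->  a = sqrt(b^2 + c^2)."""
    return math.sqrt(b**2 + c**2)

def error_product(B, b, C, c):
    """Theorem 2: A = B x C  ->  a/A = sqrt((b/B)^2 + (c/C)^2)."""
    A = B * C
    return A * math.sqrt((b / B)**2 + (c / C)**2)

def error_power(K, C, c, n):
    """Theorem 3: A = K x C^n  ->  a/A = n x c / C."""
    A = K * C**n
    return A * n * c / C

# e.g. an area computed from a squared diameter (A = K x C^2, K = pi/4):
# by theorem 3 the relative error on A is twice the relative error on C.
print(error_power(math.pi / 4, 30.0, 0.5, 2))
```

Note that theorem 3 follows from theorem 2 when B and C carry the same relative error: sqrt(2) drops out because the n factors of C are perfectly correlated, not independent, so the relative errors add directly.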
The measure of the random errors that affect precision is the statistical parameter standard error. The smaller the standard error of an estimate, the more precise is that estimate.

Note: there is little sense in taking measurements in the field to a precision greater than needed for their ultimate use. Conversely, the precision of field measurements should not be less than that required for later computations.

Sun, 11 May 1997