Using repeated measurements to improve the standard uncertainty

Technical notes | 2016 | Eurachem

Summary

Significance of the topic

The evaluation of uncertainty arising from random effects is fundamental for reliable analytical results, method validation and laboratory quality assurance. Understanding when and how repeated measurements reduce the standard uncertainty is essential for: ensuring correct reporting of measurement results, designing efficient sampling and QC regimes, and building defensible uncertainty budgets used in regulatory and accreditation contexts.

Objectives and overview of the guidance

This guidance clarifies how repeated measurements should be used to estimate the standard uncertainty due to random variation. It explains the simple reduction of uncertainty when reporting a mean value, the assumptions required for that reduction to be valid, and common situations where the simple formula does not apply. Practical examples illustrate appropriate alternative treatments when observations are not independent or conditions change over time.

Methodology and theoretical basis

The standard uncertainty arising from random effects is commonly estimated from the observed standard deviation s of repeated measurements. For a single measurement the standard uncertainty is s. For a mean of n independent measurements, the standard uncertainty of the mean decreases according to u_xbar = s / sqrt(n). This relationship relies on the assumptions that all observations are independent and that they are obtained under stable, consistent measurement conditions. Typical sets of conditions referenced are: repeatability (same procedure, operator and short timescale), intermediate precision (within-laboratory reproducibility), and reproducibility (across laboratories). The guidance stresses that the formula only quantifies uncertainty under the measurement conditions in which the data were collected.
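The relationship above can be sketched in a few lines of Python. The replicate values are invented for illustration and are not taken from the source guidance:

```python
import math
import statistics

# Hypothetical replicate results (e.g. mg/L) obtained under repeatability conditions
replicates = [10.12, 10.08, 10.15, 10.05, 10.11, 10.09]

n = len(replicates)
s = statistics.stdev(replicates)   # sample standard deviation (n - 1 denominator)

u_single = s                       # standard uncertainty of one measurement
u_mean = s / math.sqrt(n)          # standard uncertainty of the mean of n independent results

print(f"s = {s:.4f}, u(xbar) = {u_mean:.4f}")
```

The sqrt(n) reduction emerges only because the replicates are treated as independent draws under stable conditions; the later sections describe what to do when that assumption fails.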

Examples and applicability

  • When u_xbar = s / sqrt(n) applies: if inhomogeneity of a test item is the dominant source of variability, taking multiple random test portions and measuring each under repeatability conditions yields independent observations. In that case the standard deviation of the mean correctly reduces the uncertainty by a factor of sqrt(n).
  • When u_xbar = s / sqrt(n) does not apply: grouped data (for example, duplicate QC measurements made each day, where each day's pair shares a common calibration error) violate independence. The duplicates within each group are correlated, so the simple formula overstates the effective number of independent observations. A practical approach is to compute the mean of each group (e.g., the daily mean) and estimate the uncertainty of the central QC line as the standard deviation of these group means divided by the square root of the number of groups. Analysis of variance (ANOVA) or other hierarchical methods are also appropriate for grouped structures.
  • When data are time-dependent or autocorrelated: when instrument drift or a changing test item causes correlation between successive measurements, part of each observation's error is shared with neighboring observations. Independence is then violated, and dedicated statistical techniques that model the correlation (time-series methods, mixed models, or specialized uncertainty propagation approaches) are required.
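The grouped-data treatment from the second bullet can be sketched in Python. The five days of duplicate QC values below are invented for illustration; the point is the contrast between the naive and the group-wise estimate:

```python
import math
import statistics

# Hypothetical duplicate QC results for 5 days; the duplicates within a day
# share that day's calibration error, so they are not independent.
daily_duplicates = [
    (100.2, 100.4),
    (99.1, 99.3),
    (101.0, 100.8),
    (99.9, 100.1),
    (100.5, 100.7),
]

all_results = [x for pair in daily_duplicates for x in pair]

# Naive treatment: pretend all 10 results are independent
u_naive = statistics.stdev(all_results) / math.sqrt(len(all_results))

# Group-wise treatment: collapse each day to its mean, then use the standard
# deviation of the daily means over sqrt(number of days)
daily_means = [statistics.mean(pair) for pair in daily_duplicates]
u_grouped = statistics.stdev(daily_means) / math.sqrt(len(daily_means))

print(f"naive u = {u_naive:.3f}, grouped u = {u_grouped:.3f}")
```

With these numbers the naive estimate is the smaller of the two: dividing by sqrt(10) claims more independent information than the data actually contain, which is exactly the underestimation the guidance warns against.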

Instrumentation used

The source examples mention common laboratory equipment and procedures rather than a detailed instrument list: a volumetric pipette used for calibration studies, routine measurement systems subject to calibration, and instruments that may exhibit drift. No specific analytical instrument models were specified. The guidance refers generically to calibration actions, QC charts and the laboratory measurement system rather than to particular hardware.

Main results and discussion

  • The standard deviation of observations quantifies random variation affecting single measurements.
  • For the mean of n independent observations taken under stable, repeatable conditions, the standard uncertainty of the mean is reduced by a factor sqrt(n): u_xbar = s / sqrt(n).
  • The crucial requirement for this reduction is independence and stability; when observations are grouped or time-correlated, the independence assumption fails and the simple formula is invalid.
  • Appropriate alternatives include aggregating to group means, applying ANOVA or hierarchical models to partition variance components, and using statistical methods that explicitly account for autocorrelation.
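A minimal one-way ANOVA, sketched in Python with invented run data, shows how the within-run and between-run variance components mentioned above can be partitioned and then combined into an uncertainty for the grand mean. The formulas assume a balanced design (equal replicates per run):

```python
import statistics

# Hypothetical QC results: 4 runs of 3 replicates each (grouped structure)
runs = [
    [10.1, 10.2, 10.0],
    [10.5, 10.6, 10.4],
    [9.8, 9.9, 9.7],
    [10.3, 10.2, 10.4],
]

k = len(runs)       # number of groups (runs)
n = len(runs[0])    # replicates per run (balanced design)

# One-way ANOVA mean squares
ms_within = statistics.mean(statistics.variance(run) for run in runs)
ms_between = n * statistics.variance(statistics.mean(run) for run in runs)

# Variance components: within-run and between-run
var_within = ms_within
var_between = max(0.0, (ms_between - ms_within) / n)

# Standard uncertainty of the grand mean under this grouped structure
u_grand_mean = (var_between / k + var_within / (n * k)) ** 0.5
print(f"within = {var_within:.4f}, between = {var_between:.4f}, u = {u_grand_mean:.4f}")
```

Because the between-run component is divided only by the number of runs k, adding replicates within a run quickly stops improving the uncertainty once between-run variation dominates; only more runs help.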

Benefits and practical applications

  • Appropriately applying the reduction of uncertainty for means improves precision in reporting averaged results and can justify reduced uncertainty statements for certified values or method performance claims.
  • Recognizing when independence is violated prevents underestimation of uncertainty that could lead to overconfident decisions in QC, compliance testing, or reporting of reference values.
  • Using group-wise summaries or variance-component analysis provides more realistic uncertainty estimates in routine internal QC, calibration exercises, and inter-day studies.

Future trends and opportunities

  • Broader adoption of mixed-effects and hierarchical models to separate within-run, between-run and between-operator or between-instrument variance components, supported by user-friendly software, will improve routine uncertainty evaluation.
  • Time-series techniques and explicit autocorrelation modelling will become more common for long-running measurement systems showing drift or serial dependence.
  • Bayesian approaches to uncertainty evaluation can provide flexible frameworks to combine prior knowledge, small sample information and hierarchical structure in a coherent probabilistic way.
  • Automation of data capture, combined with standardized analysis pipelines (including ANOVA and residual diagnostics), will help laboratories detect non-independence early and apply appropriate corrections.
  • Improved experimental design (planned replication, randomized sampling, balanced grouping) will reduce ambiguity about independence and improve the efficiency of uncertainty estimation.

Conclusion

The simple reduction of standard uncertainty for a mean (u_xbar = s / sqrt(n)) is a powerful and widely applicable rule, but its validity depends on independence and stability of measurement conditions. When data are grouped or autocorrelated, direct application of the formula leads to misleadingly small uncertainties; instead, use group means, ANOVA, mixed models or time-series methods to obtain realistic uncertainty estimates. Awareness of these issues and selection of appropriate statistical tools are essential for robust uncertainty evaluation in analytical laboratories.

References

  • Eurolab Technical Report 1/2006: Guide to the Evaluation of Measurement Uncertainty for Quantitative Test Results, Appendix A.5.
  • Eurachem/CITAC Measurement Uncertainty and Traceability Working Group, Second English edition 2016.
