Analytical Performance

Describing and characterizing analytical performance is largely a matter of metrology. The general conventions and standards to consider include precision, trueness, and accuracy, along with a number of more technical characteristics that describe the performance of a test.

Analytical Precision and Imprecision

Precision is a measure of the random error, or variability, observed in measurement results; it is a product of both sample handling and the analytical process. Precision is typically expressed as the standard deviation of replicate sample results. Precision encompasses repeatability and reproducibility. Repeatability is the closeness of agreement among results of a series of measurements carried out in immediate sequence under identical conditions. Reproducibility, on the other hand, is the closeness of agreement among measurements of the same standard specimen carried out not in sequence but under conditions that differ in some specified way (common examples are day to day, or laboratory to laboratory). The departure from perfect agreement among these values is referred to as imprecision.
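As a minimal sketch of how these two flavors of precision might be estimated, the following Python fragment (using hypothetical replicate values, not data from the text) computes the standard deviation of within-run replicates as repeatability and of day-to-day results as reproducibility:

```python
import statistics

# Hypothetical replicate results for one control specimen.
# Within-run replicates (same run, same conditions) -> repeatability.
within_run = [4.9, 5.1, 5.0, 5.2, 4.8]

# One result per day over five days -> day-to-day reproducibility.
day_to_day = [5.0, 5.3, 4.7, 5.4, 4.6]

repeatability_sd = statistics.stdev(within_run)
reproducibility_sd = statistics.stdev(day_to_day)

print(f"Repeatability SD:   {repeatability_sd:.3f}")
print(f"Reproducibility SD: {reproducibility_sd:.3f}")
```

As expected, the day-to-day standard deviation typically exceeds the within-run one, since more sources of variation are in play.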

Analytical Trueness and Bias

Trueness describes the closeness of agreement of the average value from a large series of measurements with a "true value" or an accepted reference value. The numerical value that represents the difference between the two is generally referred to as bias. Bias refers to systematic differences between measurement results and the true value of the parameter being measured. Bias can be introduced in a number of ways, including through sampling, sample handling, sample preparation, matrix interference, cleanup, and determinative processes.

Analytical Accuracy

Accuracy is a composite assessment that comprises both random and systematic influences (i.e., both precision and trueness). Its numerical value is the total error of measurement.
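One widely used convention, assumed here rather than prescribed in the text above, estimates total error as the absolute bias plus 1.65 times the standard deviation (roughly 95% one-sided coverage). A minimal sketch with hypothetical values:

```python
import statistics

# Hypothetical replicate measurements of a reference material
# whose accepted ("true") value is 5.00 units.
results = [5.2, 5.1, 5.3, 5.0, 5.2, 5.1]
reference_value = 5.00

bias = statistics.mean(results) - reference_value  # trueness component
sd = statistics.stdev(results)                     # precision component

# Assumed convention: total error = |bias| + 1.65 * SD.
total_error = abs(bias) + 1.65 * sd

print(f"Bias:        {bias:.3f}")
print(f"SD:          {sd:.3f}")
print(f"Total error: {total_error:.3f}")
```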

Other Metrics Describing Analytical Performance

Aside from metrics for analytical accuracy, a number of other parameters are important in defining and determining the utility of a particular test:

1. The limit of detection (LOD) is the smallest amount of an analyte that can reliably be detected by an assay, with a stated confidence limit. The definition encompasses a number of different detection limits that describe different properties of the assay, including the lower limit of detection (LLOD), the instrument detection limit (IDL), the method detection limit (MDL), the (lower) limit of quantitation (LOQ or LLOQ), and the practical quantitation limit (PQL). Detection limits are estimated from the mean of a number of repeated measurements of the blank, the standard deviation of those blank measurements, and a defined confidence factor. The PQL is often defined simply as about five times the MDL.

In practical terms, the lower limit of detection is the lowest level of analyte that can be statistically distinguished from the blank (i.e., from background noise). The LLOD is a function of the variability of the blank and the sensitivity of the assay, and is usually taken to be the value 3 standard deviations above the mean of the blank. Using this formula, the chance of misclassification is 7%; if 2 standard deviations are used, the chance of misclassification is 16%. Values below the LLOD should be reported as "less than the LLOD value" rather than as a finite value.
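A minimal sketch of this statistical definition, using hypothetical blank readings:

```python
import statistics

# Hypothetical signal readings from 20 blank (analyte-free) samples.
blank_readings = [0.012, 0.015, 0.011, 0.014, 0.013, 0.016, 0.012, 0.010,
                  0.015, 0.013, 0.014, 0.011, 0.013, 0.012, 0.016, 0.014,
                  0.013, 0.015, 0.012, 0.014]

blank_mean = statistics.mean(blank_readings)
blank_sd = statistics.stdev(blank_readings)

# LLOD as the mean of the blank plus 3 standard deviations.
llod = blank_mean + 3 * blank_sd
print(f"LLOD (mean + 3 SD): {llod:.4f}")
```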

The LLOD should be distinguished from an additional variable that is not assay-specific but instrument-specific. Most analytical instruments produce a signal even when a blank (matrix without analyte) is analyzed; this signal is referred to as the instrument detection level or instrument noise level. The IDL is the analyte concentration required to produce a signal greater than three times the standard deviation of the noise level. Ideally, this would be equivalent to the assay's LLOD (as determined under optimal conditions), but it is usually somewhat higher.

2. The method detection limit (MDL) is a metric similar to the IDL, but it is based on samples that have gone through the entire sample preparation scheme prior to analysis (extractions, digestions, concentrations or dilutions, fractionations) and therefore reflects interference by other components present in a complex matrix. The recovery of an analyte in an assay is the detector response obtained from an amount of the analyte added to and extracted from the biological matrix, compared with the detector response obtained for the true concentration of the pure authentic standard.

Recovery pertains to the extraction efficiency of an analytical method within the limits of variability. Recovery of the analyte need not be 100%, but the extent of recovery of an analyte and of the internal standard should be consistent, precise, and reproducible. Recovery experiments should be performed by comparing the analytical results for extracted samples at three concentrations (low, medium, and high) with unextracted standards that represent 100% recovery.
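As an illustrative sketch with hypothetical detector responses, percent recovery at each level is the ratio of the extracted-sample response to the corresponding unextracted-standard response:

```python
# Hypothetical detector responses at low, medium, and high concentrations.
# Extracted samples went through the full sample preparation scheme;
# unextracted standards represent 100% recovery.
extracted = {"low": 92.0, "medium": 480.0, "high": 1900.0}
unextracted = {"low": 100.0, "medium": 500.0, "high": 2000.0}

for level in ("low", "medium", "high"):
    recovery_pct = 100.0 * extracted[level] / unextracted[level]
    print(f"{level:>6}: recovery = {recovery_pct:.1f}%")
```

Consistency of these three values across runs matters more than the absolute percentage, in keeping with the requirement above.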

3. The limit of quantitation (or quantification), also referred to as the lower limit of quantification (LLOQ), is set at a higher concentration than the LLOD; in the statistical method, it is generally defined as 10 standard deviations above the mean blank value, giving a greater probability that a value at the LLOQ is "real" and not just a random fluctuation of the blank reading. The lowest standard on the calibration curve should be accepted as the limit of quantification if the analyte response at the LLOQ is at least five times the blank response, and if the analyte peak (the response) is identifiable, discrete, and reproducible with a precision of 20% and an accuracy of 80 to 120%.

The LLOQ can differ drastically between laboratories, so another parameter for detection limit is commonly used: the practical quantitation limit (PQL). The PQL is commonly defined as 3 to 10 times the MDL.
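Continuing with hypothetical blank statistics in the style of the LLOD sketch above, and an assumed MDL value, the LLOQ and PQL conventions can be expressed as:

```python
import statistics

# Hypothetical blank readings (see the LLOD sketch above).
blank_readings = [0.012, 0.015, 0.011, 0.014, 0.013, 0.016, 0.012, 0.010,
                  0.015, 0.013, 0.014, 0.011, 0.013, 0.012, 0.016, 0.014]

blank_mean = statistics.mean(blank_readings)
blank_sd = statistics.stdev(blank_readings)

lloq = blank_mean + 10 * blank_sd   # statistical LLOQ: mean + 10 SD

# Hypothetical MDL for this method; PQL as a 3- to 10-fold multiple of it.
mdl = 0.020
pql_low, pql_high = 3 * mdl, 10 * mdl

print(f"LLOQ (mean + 10 SD): {lloq:.4f}")
print(f"PQL range: {pql_low:.3f} to {pql_high:.3f}")
```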

4. Selectivity is the ability of an analytical method to differentiate and quantify the analyte in the presence of other components in the sample. To assess selectivity, blank samples of the appropriate biological matrix (plasma, urine, or another matrix) should be obtained from a sufficiently large and representative number of sources. Each blank sample should be tested for interference, and selectivity should be ensured at the lower limit of quantification (LLOQ).
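A minimal sketch of such an interference screen across blank-matrix sources; the acceptance threshold of 20% of the LLOQ response used here is a common convention assumed for illustration, not stated in the text:

```python
# Hypothetical interference screen: responses measured in blank matrix
# from several independent sources, compared against the analyte
# response at the LLOQ. The 20% threshold is an assumed convention.
lloq_response = 50.0
blank_responses = {"source_1": 2.1, "source_2": 4.8, "source_3": 11.5,
                   "source_4": 3.0, "source_5": 1.7, "source_6": 6.2}

for source, response in blank_responses.items():
    interferes = response > 0.20 * lloq_response
    status = "INTERFERENCE" if interferes else "ok"
    print(f"{source}: {response:5.1f} -> {status}")
```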

5. The coefficient of variation (CV) is a normalized measure of the dispersion of a probability distribution, defined as the ratio of the standard deviation to the mean. It is often reported as a percentage (%) by multiplying that ratio by 100. The CV is useful because the standard deviation of data must always be understood in the context of the mean of the data. Because the CV is a dimensionless number, it, rather than the standard deviation, should be used when comparing data sets with different units or widely different means.
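A short sketch (hypothetical replicate data) showing why the CV, not the raw standard deviation, is the right basis for comparing assays whose means differ by orders of magnitude:

```python
import statistics

# Hypothetical replicate results from two assays with very different means.
assay_a = [5.1, 4.9, 5.0, 5.2, 4.8]            # mean ~5
assay_b = [498.0, 510.0, 492.0, 505.0, 495.0]  # mean ~500

for name, data in (("assay A", assay_a), ("assay B", assay_b)):
    cv_pct = 100.0 * statistics.stdev(data) / statistics.mean(data)
    print(f"{name}: SD = {statistics.stdev(data):.2f}, CV = {cv_pct:.1f}%")
```

Here assay B has a far larger standard deviation in absolute terms, yet its CV is smaller, correctly reflecting its tighter relative dispersion.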
