Laboratory QC
The primary tool laboratories use for routine QC is the periodic testing of QC specimens, which are manufactured to provide stable analyte concentrations with a long shelf life. A laboratory establishes the target concentrations and analytical imprecision for the analytes in the control specimens assayed on its instruments. Thereafter, the laboratory periodically tests the control specimens and applies QC rules to the results to decide whether the instrument is operating as intended or whether an out-of-control error condition has occurred.
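As a minimal sketch of what "applying QC rules" can look like in practice, the snippet below checks whether every control result falls within a chosen number of SDs of its established target. The rule, target, SD, and control values are illustrative assumptions, not taken from any particular instrument or method.

```python
# Minimal sketch of a simple "+/- k SD" limit check applied to control results.
# Target, SD, and measured values below are hypothetical examples.

def qc_event_accepted(control_results, target, sd, k=3.0):
    """Return True if every control result is within +/- k SDs of the target."""
    return all(abs(result - target) <= k * sd for result in control_results)

# Example: a control with an established target of 100 mg/dL and an
# analytical SD of 2 mg/dL, tested twice in one QC event.
if __name__ == "__main__":
    results = [101.5, 97.8]  # measured control values (hypothetical)
    ok = qc_event_accepted(results, target=100.0, sd=2.0)
    print("QC accepted" if ok else "QC rejected - investigate the instrument")
```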
Traditionally, laboratories determine how many control specimens to test and which QC rules to use based on two goals: a low probability of erroneously deciding that the instrument is out-of-control when it is, in fact, operating as intended (the false rejection rate) and a high probability of correctly deciding that the instrument is out-of-control when an out-of-control error condition has indeed occurred (the error detection rate).2 Clearly, the in-control or out-of-control status of a laboratory's instruments influences the reliability (quality) of the patient results the lab reports. Even so, laboratories tend to design QC strategies with a focus limited to controlling the state of the instrument rather than controlling the risk of producing and reporting unreliable patient results that could compromise patient care.
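Both probabilities can be estimated for a candidate QC strategy. The sketch below assumes normally distributed control measurement errors and the same simple single-limit rule used above; the rule limits, number of controls, and shift size are illustrative assumptions only.

```python
# Sketch: false-rejection and error-detection probabilities for a +/- k SD
# limit rule applied to n control results per QC event, assuming normally
# distributed control errors. All values are illustrative.
from math import erf, sqrt

def normal_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def p_reject(shift_sd, k=3.0, n=2):
    """Probability that at least one of n controls falls outside +/- k SD
    limits, given a systematic shift expressed in multiples of the SD."""
    p_within = normal_cdf(k - shift_sd) - normal_cdf(-k - shift_sd)
    return 1.0 - p_within ** n

print(f"False rejection rate (no shift): {p_reject(0.0):.4f}")
print(f"Error detection rate (2 SD shift): {p_reject(2.0):.4f}")
```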
Focusing on the Patient
One approach that more directly focuses on the quality of patient results (rather than the state of the instrument) is to design QC strategies that control the expected number of unreliable patient results reported because of an undetected out-of-control error condition in the lab.3 What do we mean by an "unreliable" patient result? The quality of a patient result depends on the difference between the patient specimen's true concentration and the value reported by the laboratory. We define an unreliable patient result as one for which that difference exceeds a specified total allowable error, TEa. If the error in a patient's result exceeds TEa, we assume it places the patient at increased risk of experiencing a medically inappropriate action.
Fig. 1
How does a laboratory decide what the TEa specification for an analyte should be? This is not a simple question to answer, but a number of different lists of TEa specifications that include hundreds of analytes have been produced and are available from a variety of sources (see, for example, the quality specifications found in the QC documents section at www.qcnet.com).
Once a TEa for an analyte has been specified, then for any possible out-of-control state, the probability of producing patient results with errors that exceed the TEa can be computed, as demonstrated in Fig. 1. The gray curve represents the frequency distribution of measurement errors for an instrument operating as intended. The distribution is centered on zero and the width of the distribution reflects the inherent analytical imprecision of the instrument. The black curve represents the frequency distribution of measurement errors after a hypothetical out-of-control error condition has occurred. The out-of-control error condition causes the instrument to produce results that are too high.
The area under the out-of-control measurement error distribution that is either greater than TEa or less than -TEa (shaded in red) reflects the probability of producing an unreliable patient result. In this case, 10% of the area under the curve is shaded red. If an out-of-control error condition of this magnitude occurred, then while the instrument was operating in this state we'd expect 10% of the patient results produced to be unreliable.
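This shaded area can be computed directly from a normal model of the measurement errors. The sketch below is illustrative only; the TEa, shift, and SD values are assumptions, not the parameters behind Fig. 1.

```python
# Sketch: probability that a reported result's error exceeds +/- TEa when the
# process has shifted by SE, with analytical SD sigma (normal error model).
from math import erf, sqrt

def normal_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def p_unreliable(tea, shift, sigma):
    """Area of the shifted error distribution beyond +TEa or below -TEa."""
    upper_tail = 1.0 - normal_cdf((tea - shift) / sigma)
    lower_tail = normal_cdf((-tea - shift) / sigma)
    return upper_tail + lower_tail

# Example: with TEa equal to 3 SDs and a shift of about 1.72 SDs, roughly 10%
# of results exceed TEa - comparable to the Fig. 1 example, although the
# actual parameters behind that figure are not specified in the article.
print(f"{p_unreliable(tea=3.0, shift=1.72, sigma=1.0):.3f}")
```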
The expected number of unreliable patient results reported while an undetected out-of-control error condition exists will not only depend on the likelihood of producing unreliable results in the presence of the error condition, but also on how many patient results are produced before the error condition is detected and corrected. This is illustrated in Fig. 2.
In this example, each vertical line represents a patient specimen being tested on the instrument. Each diamond represents a QC event where QC specimens are tested and QC rules are applied. A green diamond implies the QC results are accepted; a red diamond means the QC results are rejected. At a point between the second and third QC event, an out-of-control error condition occurs, causing a sustained shift in the testing process (such as the one shown in Fig. 1). Given the magnitude of this particular out-of-control condition and the power of the QC rules, the error condition isn't detected until the third QC event after it occurred. Each red asterisk denotes an unreliable patient result that was produced during the existence of the out-of-control error condition.
Fig. 2
Notice some of the important relationships between QC events, the number of patients tested between QC events, and the number of unreliable patient results illustrated in Fig. 2 (and explored in the simulation sketch after this list):
- Not all the results produced during an error condition are unreliable (the probability of producing an unreliable result during an error condition increases with the magnitude of the error).
- QC events do not always detect an error condition on the first try (the probability of a QC event detecting an error condition depends on the error detection rate of the QC rule, the number of control samples used, and the magnitude of the error).
- If the error condition in the example were smaller, we would expect proportionally fewer of the results tested during the undetected error condition to be unreliable (red asterisks in Fig. 2), but more QC events would be needed to detect it. Conversely, if the error condition were larger, we would expect proportionally more of the results tested during the undetected error condition to be unreliable, and fewer QC events would be needed to detect it. If the error condition were large enough, all of the patient results after its occurrence would be unreliable and it would almost certainly be detected at the first QC event (the third diamond in Fig. 2).
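These relationships can be explored with a small simulation in the spirit of Fig. 2. The sketch below is illustrative only: it assumes normally distributed errors, a fixed number of patient specimens between QC events, and the same simple limit rule used earlier; none of these settings come from the article's figures.

```python
# Sketch: simulate an out-of-control shift arising between QC events and count
# the unreliable patient results produced before a QC event detects it.
# All parameter values are illustrative assumptions, not taken from Fig. 2.
import random
from math import erf, sqrt

def normal_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def simulate_once(shift_sd, tea_sd=3.0, k=3.0, n_controls=2,
                  patients_per_run=50, rng=random):
    """Return the number of unreliable patient results reported from the
    onset of a shift (mid-run) until the first rejecting QC event."""
    p_bad = (1.0 - normal_cdf(tea_sd - shift_sd)) + normal_cdf(-tea_sd - shift_sd)
    p_within = normal_cdf(k - shift_sd) - normal_cdf(-k - shift_sd)
    p_detect = 1.0 - p_within ** n_controls

    unreliable = 0
    # Patients tested between shift onset and the next QC event (partial run).
    for _ in range(rng.randrange(patients_per_run)):
        unreliable += rng.random() < p_bad
    # Each missed QC event allows another full run of patients to be reported.
    while rng.random() >= p_detect:
        for _ in range(patients_per_run):
            unreliable += rng.random() < p_bad
    return unreliable

random.seed(1)
trials = [simulate_once(shift_sd=1.72) for _ in range(10_000)]
print(f"Mean unreliable results per error episode: {sum(trials)/len(trials):.1f}")
```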
QC thinking that focuses on the instrument is concerned with the likelihood that a QC rule will trigger an alert after an error condition has occurred (the probability of a red diamond in Fig. 2). Instrument-focused QC strategies are designed to control the number of QC events required to detect an error condition.
QC thinking that focuses on the patient, on the other hand, is concerned with how many unreliable patient results are produced while an undetected error condition exists (the number of red asterisks in Fig. 2). Patient-focused QC strategies should be designed to control the number of unreliable patient results produced before the error condition is detected.
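A back-of-the-envelope version of this idea (a simplification of the more careful treatment in reference 3, with illustrative input values): if a QC event detects a given error condition with probability p, roughly 1/p QC events are needed on average to catch it, and the expected number of unreliable results is approximately the probability of an unreliable result multiplied by the number of patient results produced while the error persists.

```python
# Rough expected-value sketch relating the two QC design perspectives.
# All inputs are illustrative assumptions, not values from the article.

def expected_qc_events_to_detect(p_detect):
    """Average number of QC events needed to flag the error (geometric)."""
    return 1.0 / p_detect

def expected_unreliable_results(p_unreliable, p_detect, patients_per_run):
    """Approximate expected unreliable results per error episode: about half a
    run before the first QC event, plus a full run for each missed event."""
    runs_at_risk = 0.5 + (expected_qc_events_to_detect(p_detect) - 1.0)
    return p_unreliable * patients_per_run * runs_at_risk

# Example: 10% unreliable-result probability, 19% per-event detection,
# 50 patient specimens between QC events.
print(f"{expected_qc_events_to_detect(0.19):.1f} QC events to detect on average")
print(f"{expected_unreliable_results(0.10, 0.19, 50):.1f} unreliable results expected")
```

With these assumed inputs the approximation gives a result close to the simulation sketch above, which is the point of the comparison: instrument-focused design controls the first quantity, while patient-focused design controls the second.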
Dr. Parvin is manager of Advanced Statistical Research; John Yundt-Pacheco is scientific fellow; and Max Williams is Global Scientific and Professional Affairs Manager, Bio-Rad.
References
1. International Organization for Standardization. Medical laboratories - particular requirements for quality and competence. ISO 15189. Geneva: ISO; 2007.
2. Westgard JO, Barry PL. Cost-effective quality control: managing the quality and productivity of analytical processes. Washington, DC: AACC Press; 1986.
3. Parvin CA. Assessing the impact of the frequency of quality control testing on the quality of reported patient results. Clin Chem 2008;54:2049-54.