Quality control plays an essential role in reducing error and minimizing risk
By David Plaut, Deena Davis and Nathalie Lepage
While quality control (QC) has been used for decades, there are several reasons it's still a relevant and timely topic today. For example, instrument precision has improved significantly in the decades since the QC rules in common use were proposed in the 1970s and '80s. Computers are also more sophisticated now and able to take on more quality-related tasks. There is now sufficient evidence supporting the use of the average of normals (AoN), or patient moving average, and the benefits of delta checks (DCs). Finally, one can re-evaluate the number of patient samples analyzed in each run or batch as a way to reduce reruns and risk.
Before we address these aspects, let's look at some data on the sources and frequency of errors usually attributed to the lab. These errors are typically divided into three areas: pre-analytical, analytical and post-analytical. In one study of 6,200,000 total results, 88,000 defects (1.4 percent) were found: 41 percent occurred in the pre-analytical phase of testing, 55 percent in the post-analytical phase and only 4 percent in the analytical phase (0.056 percent of the total).1
Standard Deviations (%CVs) Have Dropped Significantly
At the 2014 AACC meeting, a poster was presented showing decreases in SDs of 20 percent to nearly 80 percent (Table 1). These improvements indicate that we need to review the current QC rules for accepting and rejecting runs. A 50 percent reduction in SD translates a 1:2s rule into at least a 1:3s rule. Such a change would reduce the number of false rejects by a factor of eight or more, and even a change of "only" 1:2s to 1:2.5s would reduce false rejects by more than 50 percent. In either case, the true rejects would be little affected, if at all.
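To make the arithmetic concrete, here is a minimal sketch (ours, not the poster's) of how the false-reject rate falls as the rule limit widens. It assumes Gaussian control values and a single control per run:

```python
# Sketch of false-reject rates for 1:ks QC rules under a Gaussian assumption.
from scipy.stats import norm

def false_reject_rate(k: float, n_controls: int = 1) -> float:
    """Probability at least one in-control result falls outside +/- k SD."""
    p_single = 2 * norm.sf(k)               # two-tailed area beyond k SD
    return 1 - (1 - p_single) ** n_controls

for k in (2.0, 2.5, 3.0):
    print(f"1:{k:g}s rule -> false-reject rate {false_reject_rate(k):.4f}")
# Prints roughly 4.6% at 2s, 1.2% at 2.5s and 0.27% at 3s per control measured.
```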
Computers
Within the past few years, computers have been shown to be able to not only spot a change in means and SDs, but also assess the current QC system for each analyte, propose a rule (or set of rules) and suggest how best to reduce the total error (TE) of any analyte (Fig. 1 shows an example of such an adjunct to QC, available at no charge online). Such evaluation of the QC system will serve to validate a QC program, reduce false rejects and improve error detection (true rejects). Without the correct mean and SD being assigned to every analyte, the possibility of errors escaping grows.
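As a hedged illustration of the kind of per-analyte evaluation such software automates, the sketch below computes a sigma metric from bias, CV and a total allowable error (TEa), then maps it to a rule suggestion. The TEa value and the rule tiers here are illustrative assumptions, not the article's (or any vendor's) actual logic:

```python
# Hypothetical per-analyte QC evaluation; thresholds are rule-of-thumb tiers.
def sigma_metric(tea_pct: float, bias_pct: float, cv_pct: float) -> float:
    """Sigma = (TEa - |bias|) / CV, all expressed in percent."""
    return (tea_pct - abs(bias_pct)) / cv_pct

def suggest_rule(sigma: float) -> str:
    # Real software weighs error detection against false rejects per analyte.
    if sigma >= 6:
        return "1:3s with a single control may suffice"
    if sigma >= 4:
        return "1:2.5s or 1:3s plus a 2:2s multirule"
    return "full multirule with more controls per run"

sigma = sigma_metric(tea_pct=10.0, bias_pct=1.5, cv_pct=1.7)  # assumed inputs
print(f"sigma = {sigma:.1f}: {suggest_rule(sigma)}")
```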
The Possible Role of the AoN or Median of Normal Patient Data
The idea of truncating the patient samples in a run to only those values falling within the reference interval dates back about five decades. At the time it was not very practical, as the average had to be calculated on a desktop calculator, which was time consuming. Because the range of normal patient data is wider than that of a control sample, the AoN or median may be less sensitive to a change in the mean of the analytical system than the QC data are. The authors of this article agree that the AoN or median should be used as an adjunct tool in troubleshooting. For example, if the control(s) exceed their limits and the AoN also exceeds its limit, there is evidence that the change was not due to the control(s) themselves. With today's computers, this is a simple assistant to the QC protocol. Two caveats: AoN does not work well with all analytes (e.g., drugs), and it requires a run of 20 or more samples.
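A minimal sketch of the approach follows (an assumed implementation, not from the article): truncate a run's patient results to the reference interval, average what remains, and flag when that average drifts beyond a lab-set limit. The sodium-like values and limits are illustrative:

```python
# Average-of-normals (AoN) as a troubleshooting adjunct to control-based QC.
from statistics import mean

def average_of_normals(results, ref_low, ref_high, min_n=20):
    """Mean of in-run patient results inside the reference interval."""
    normals = [x for x in results if ref_low <= x <= ref_high]
    if len(normals) < min_n:          # AoN is unreliable on short runs
        return None
    return mean(normals)

def aon_flag(aon, target_mean, limit):
    """True when the AoN drifts beyond the lab-set limit around its target."""
    return aon is not None and abs(aon - target_mean) > limit

# Example run with an assumed 135-145 mmol/L reference interval
run = [138, 141, 136, 140, 152, 139, 143, 137, 142, 140,
       138, 144, 139, 141, 137, 143, 140, 138, 142, 139, 141]
aon = average_of_normals(run, 135, 145)
print(aon, aon_flag(aon, target_mean=140.0, limit=1.5))  # 139.9, not flagged
```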
Delta Checks
DCs compare the current patient sample with an earlier one purportedly from the same patient. The elapsed time between the two samples can be as short as hours or as long as months. DCs have a variety of uses in the laboratory in monitoring both patient results and QC values. Although DCs have been discussed for many years, the idea is not often employed. One reason is that when they were introduced, computers were not generally programmed to do the checking; today, that is not an impediment. By monitoring such "stable" values as hemoglobin, hematocrit and potassium on the same patient from one time to another, it is possible to find specimens that have been diluted or contaminated during collection (or even mislabeled with the wrong patient's information), as well as a critical change in something like hemoglobin or a tumor marker.
When a patient is admitted to the hospital with a hemoglobin of 10.2 g/dL and the CBC the following morning shows a hemoglobin of 7.5 g/dL, there is cause for concern. It is this difference that can be flagged with a DC, indicating the results warrant special attention. The first question to ask is, "Is there a reasonable explanation for this variation (e.g., did the patient go to surgery)?" If not, there is a problem and a need to question the integrity, if not the identity, of the sample.
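A hedged sketch of such a check appears below, using the hemoglobin example above. The delta limits and time window are illustrative assumptions, not the article's recommendations; real LIS rules typically weigh elapsed time and analyte stability more carefully:

```python
# Simple delta check: flag results that change too much from the prior value.
from datetime import timedelta

# Assumed per-analyte absolute delta limits (units as reported)
DELTA_LIMITS = {"hemoglobin_g_dl": 2.0, "potassium_mmol_l": 1.0}

def delta_check(analyte, current, prior, elapsed,
                max_window=timedelta(days=30)):
    """True when the change from the prior result exceeds the delta limit."""
    if elapsed > max_window:
        return False                  # prior result too old to compare
    return abs(current - prior) > DELTA_LIMITS[analyte]

# Admission Hgb 10.2 g/dL, next-morning Hgb 7.5 g/dL -> flagged for review
print(delta_check("hemoglobin_g_dl", 7.5, 10.2, timedelta(hours=18)))  # True
```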
It may not be necessary to use DCs on many analytes, as a few stable analytes should suffice to check for ID mix-ups.
Without DCs, these types of situations can easily be missed when a clinical laboratory scientist is busy trying to get out morning draws. Having the LIS look specifically for these shifts in values and flag them as DCs puts one more level of patient safety in place.
Run Length
It is common to run the controls at the beginning of a run and not analyze them again during that run. Even with the instrument checking temperature and a dozen or more other parameters during the run, it is still possible for an error to arise undetected. This is a reason to run controls at the end of the run as well, especially if a particular analyte is prone to errors or a 1:2s reject rule is used. The cost and time of reanalyzing an entire run may more than pay for running controls at both the beginning and end.
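A back-of-the-envelope sketch of that trade-off follows; the error probability and dollar figures are assumptions for illustration, not measured values:

```python
# When does bracketing QC (controls at start AND end) pay for itself?
def expected_rerun_cost(p_undetected_error, samples, cost_per_sample):
    """Expected cost of repeating the run when an error surfaces only later."""
    return p_undetected_error * samples * cost_per_sample

extra_qc_cost = 2 * 5.00      # two control levels at an assumed $5 each
rerun = expected_rerun_cost(p_undetected_error=0.02,
                            samples=100, cost_per_sample=8.00)
print(rerun, extra_qc_cost, rerun > extra_qc_cost)  # 16.0 > 10.0 -> bracket
```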
Conclusions
QC plays an essential role in the laboratory. With the assistance of the staff and wise use of computers, it is extremely effective and time saving. With appropriate use of the tools we have sketched here, the already small number of errors that leave the laboratory will be still smaller, thus reducing the risk to patients.
David Plaut is a chemist and statistician in Plano, Texas. Deena Davis is point-of-care coordinator at Bozeman-Deaconess Hospital in Bozeman, Mont. Nathalie Lepage is a clinical biochemist and a biochemical geneticist at the Children's Hospital of Eastern Ontario and an associate professor in the Department of Pathology and Laboratory Medicine at the University of Ottawa, Ontario, Canada.
References
1. Boone DJ, Steindel SD, Herron R, Howanitz PJ, Bachner P, Meier F, et al. Transfusion medicine monitoring practices. Arch Pathol Lab Med 1995;119:999-1006.
Table 1. Comparison of Mean %CV Over a 24-Year Period, as Observed in CAP Surveys

Analyte          | Method  | 1988 Mean %CV | 2012 Mean %CV | % Change
Hemoglobin       | Coulter | 2.9           | 1.7           | 41
WBC              | Coulter | 3.8           | 2.7           | 24
Cholesterol      | Abbott  | 2.9           | 1.0           | 66
AST              | Roche   | 8.4           | 5.2           | 33
Ca               | Baker   | 4.9           | 1.3           | 77
Prothrombin Time | *       | 6.4           | 2.8           | 56
APTT             | Stago   | 10.4          | 2.9           | 78