When multiple instruments measuring the same analyte share a single QC rule with a fixed mean and SD (provided that mean and SD are designed appropriately), the overall risk of reporting unreliable results is lower than with the traditional approach of basing QC rules on each instrument's running mean and SD. A fixed mean balances the risk of reporting incorrect results across the instruments, while a fixed SD allocates more error detection capability to the poorer performing instruments, resulting in a lower overall risk of reporting incorrect results. Setting QC rules with fixed means and SDs for multiple instruments performing the same assays can therefore provide good QC performance characteristics from the perspective of the reliability of patient results.
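As an illustration of the difference, the following sketch (in Python, with hypothetical instrument names, QC values, and a simple ±3 SD limit check) compares flagging against fixed limits with flagging against each instrument's own running mean and SD.

import statistics

# Hypothetical illustration: evaluate QC results from several instruments
# against a single fixed mean/SD rather than each instrument's running stats.
FIXED_MEAN = 5.0   # fixed QC target, same for every instrument (assumed value)
FIXED_SD = 0.20    # fixed QC SD, same for every instrument (assumed value)

def qc_exceeds_limit(qc_value, mean, sd, k=3.0):
    """Flag a QC result that falls outside mean +/- k*SD."""
    return abs(qc_value - mean) > k * sd

# Per-instrument QC measurements (hypothetical data)
qc_results = {
    "analyzer_A": [4.95, 5.02, 5.61],   # last point drifted high
    "analyzer_B": [5.01, 4.98, 5.05],
}

for instrument, values in qc_results.items():
    # Fixed-limit approach: every instrument judged against the same limits.
    fixed_flags = [qc_exceeds_limit(v, FIXED_MEAN, FIXED_SD) for v in values]
    # Traditional approach: limits from that instrument's own running mean/SD.
    run_mean, run_sd = statistics.mean(values), statistics.stdev(values)
    running_flags = [qc_exceeds_limit(v, run_mean, run_sd) for v in values]
    print(instrument, "fixed:", fixed_flags, "running:", running_flags)

In this toy data the drifted point on analyzer_A is caught by the fixed limits but not by its own running limits, because the drift inflates the running SD.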
Recover from a small out-of-control condition by spot-checking patient specimens near medical decision limits. Repeat all patient specimens with results at or near the medical decision limits back to the last successful QC event, and assess the magnitude of the measurement error as the difference between the new result and the original result. If the measurement error exceeds the allowable total error specification, the result should be corrected.
Recover from a large out-of-control condition by retesting patient samples in batches of 10, working back toward the last successful QC event, and assessing the magnitude of the measurement errors as the differences between the new results and the original results. If the measurement errors exceed the allowable total error specification, continue retesting in batches of 10 until the measurement error falls below the allowable total error specification or becomes insignificant. Results with measurement errors greater than the total error specification should be corrected.
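A minimal sketch of both recovery procedures is shown below; the TEa value, sample lists, and the retest callable are hypothetical placeholders for what the laboratory information system and analyzer would actually supply.

# Hypothetical sketch of both recovery procedures. TEa, sample lists and
# the retest callable are illustrative; an LIS query would supply real data.
TEA = 0.5  # allowable total error specification, in result units (assumed)

def needs_correction(original, retest_value, tea=TEA):
    """Measurement error is the retest minus the original result;
    results whose error exceeds TEa should be corrected."""
    return abs(retest_value - original) > tea

# Small out-of-control condition: spot-check specimens near decision limits,
# back to the last successful QC event.
def recover_small(results_near_decision_limits, retest):
    """results_near_decision_limits: list of (sample_id, original_result);
    retest: callable that re-measures a sample and returns the new result."""
    to_correct = []
    for sample_id, original in results_near_decision_limits:
        new = retest(sample_id)
        if needs_correction(original, new):
            to_correct.append((sample_id, original, new))
    return to_correct

# Large out-of-control condition: retest in batches of 10, working back
# toward the last successful QC event, stopping once a batch is within TEa.
def recover_large(results_since_last_qc, retest, batch_size=10):
    """results_since_last_qc: newest-first list of (sample_id, original_result)."""
    to_correct = []
    for start in range(0, len(results_since_last_qc), batch_size):
        batch = results_since_last_qc[start:start + batch_size]
        errors = [(sid, orig, retest(sid)) for sid, orig in batch]
        bad = [e for e in errors if needs_correction(e[1], e[2])]
        to_correct.extend(bad)
        if not bad:        # whole batch within TEa: error no longer significant
            break          # stop retesting further back
    return to_correct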
Review Limits
Set review limits for automated chemistry tests that do not have critical values or delta checks. Historical patient data are used to generate histograms for each analyte and to determine the central 95 percent non-parametric intervals. The histograms and intervals are visually inspected to establish review limits such that results outside these historical limits would occur no more than 5 percent of the time. The limits are widened for high-volume tests to minimize false rejections. Patient results outside these limits are subject to review. This approach can be readily implemented in a typical autoverification workflow.
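The calculation itself is straightforward. The sketch below, using hypothetical historical results and an assumed widening factor, derives the non-parametric 2.5th and 97.5th percentiles and exposes a simple check that an autoverification rule could call.

import numpy as np

# Hypothetical sketch: derive review limits from historical patient results
# as the central 95% non-parametric interval (2.5th and 97.5th percentiles).
# The historical values and the widening factor are illustrative only.
historical_results = np.random.default_rng(0).normal(140, 4, size=10_000)  # e.g. sodium-like values

low, high = np.percentile(historical_results, [2.5, 97.5])

# Optionally widen the limits for high-volume tests to reduce false rejections.
WIDEN = 1.05  # assumed widening factor
center = (low + high) / 2
half_width = (high - low) / 2 * WIDEN
review_low, review_high = center - half_width, center + half_width

def needs_review(result):
    """Autoverification hook: hold results outside the review limits."""
    return result < review_low or result > review_high

print(f"review limits: {review_low:.1f} - {review_high:.1f}")
print(needs_review(128.0), needs_review(139.5))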
A high sigma process can accommodate QC rules with lower false rejection rates because it is easy to QC. A QC rejection limit defined as a fraction (f) of the allowable total error (TEa) is a simple yet effective QC rule (±f*TEa) for high sigma-metric processes. A QC rule with rejection limits of ±0.6*TEa has at least 90 percent error detection capability with a false rejection rate of less than 1 percent for any process with a sigma-metric value greater than 5.3. The higher the sigma metric, the greater the error detection rate and the lower the false rejection rate for these rules.
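For illustration, the sketch below applies the commonly used sigma-metric formula, sigma = (TEa - |bias|)/CV, and builds ±0.6*TEa rejection limits around an assumed QC target; the bias, CV, TEa, and target values are illustrative only.

# Hypothetical sketch of a +/- f*TEa QC rule for a high sigma-metric process.
# Bias, CV, TEa and the QC target are illustrative values.

def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Sigma metric = (TEa - |bias|) / CV, all expressed in percent."""
    return (tea_pct - abs(bias_pct)) / cv_pct

def rejection_limits(target, tea_pct, f=0.6):
    """Rejection limits at target +/- f*TEa (TEa taken as a percent of target)."""
    delta = f * (tea_pct / 100.0) * target
    return target - delta, target + delta

tea, bias, cv = 10.0, 1.0, 1.5        # percent (assumed)
print(f"sigma metric: {sigma_metric(tea, bias, cv):.1f}")  # 6.0, above 5.3

target = 100.0                        # QC target concentration (assumed units)
low, high = rejection_limits(target, tea)
qc_result = 105.2
print(f"limits: {low:.1f} - {high:.1f}; reject: {not (low <= qc_result <= high)}")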
A low sigma process needs a QC rule with high error detection capability, as it is difficult to QC. The new Z-squared (Z2) and repeat Z-squared QC rules have been shown to be more powerful than traditional multi-rules.
Their rejection limits are set to achieve an acceptable false rejection rate and the required error detection. The family of Z2 QC rules provides superior error detection with fewer QC samples for low sigma processes.
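The exact formulation and limits of the published Z2 rules are not reproduced here, but one common way to build a Z-squared style statistic is to sum the squared z-scores of the QC measurements and compare the sum with a chi-square critical value chosen for the desired false rejection rate, as in the hedged sketch below (target, SD, and QC values are illustrative).

from scipy.stats import chi2

# Hedged sketch of a Z-squared style statistic: sum the squared z-scores of
# the QC measurements and compare against a chi-square critical value.
# The published Z2 / repeat Z2 rules may set their limits differently;
# the target, SD and QC values here are illustrative.

def z_squared(qc_values, target, sd):
    """Sum of squared z-scores for a set of QC measurements."""
    return sum(((x - target) / sd) ** 2 for x in qc_values)

def z2_reject(qc_values, target, sd, false_rejection_rate=0.01):
    """Reject when the Z-squared statistic exceeds the chi-square critical
    value chosen for the desired false rejection rate."""
    limit = chi2.ppf(1 - false_rejection_rate, df=len(qc_values))
    return z_squared(qc_values, target, sd) > limit

print(z2_reject([102.0, 104.5], target=100.0, sd=2.0))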
Performance Criteria
A recent study showed that analytical imprecision and bias have a considerable effect on the misclassification of patient results. Historical data from patients known to be diseased or normal were used, and decision limits were applied to classify each patient as diseased or normal. The bias and imprecision of the assay were then varied and the resulting misclassifications were recorded: the lower the imprecision and bias, the lower the misclassification rate. These misclassification rates were used to identify acceptable values of bias and imprecision for the assay. The study also showed that false classifications increased when the allowable total error limits were widened. Consequently, it was recommended that the analytical quality specification be set as the maximum allowable bias and imprecision.
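The general simulation idea can be sketched as follows, with entirely hypothetical patient distributions, decision limit, and bias/CV values: add bias and imprecision to "true" results and count how often the measured results fall on the wrong side of the decision limit.

import numpy as np

# Hedged sketch of the simulation idea: take "true" results from diseased and
# normal patients, add assay bias and imprecision, and count how often the
# measured results cross the decision limit and misclassify the patient.
# Distributions, decision limit, and bias/CV grids are all illustrative.
rng = np.random.default_rng(1)
DECISION_LIMIT = 6.5                                  # assumed cutoff

true_normal = rng.normal(5.6, 0.4, size=5_000)        # hypothetical normal patients
true_diseased = rng.normal(7.5, 0.7, size=5_000)      # hypothetical diseased patients

def misclassification_rate(bias, cv):
    """Fraction of patients whose measured result lands on the wrong side
    of the decision limit after adding bias and imprecision (CV, in %)."""
    def measure(true_values):
        sd = (cv / 100.0) * true_values
        return true_values + bias + rng.normal(0.0, sd)
    false_pos = np.mean(measure(true_normal) >= DECISION_LIMIT)
    false_neg = np.mean(measure(true_diseased) < DECISION_LIMIT)
    return (false_pos + false_neg) / 2

for bias in (0.0, 0.2, 0.4):
    for cv in (2.0, 4.0):
        print(f"bias={bias:.1f} cv={cv:.0f}% -> {misclassification_rate(bias, cv):.3f}")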
These new technologies, in conjunction with current recommended best practices, may help laboratories design better quality control strategies that identify failures that could otherwise compromise the integrity of patient results and lead to patient harm. Some of these technologies may be readily implemented by the lab today, while others may require additional work, such as software changes or internal studies to determine the approach that works best for the laboratory.
Lakshmi Kuchipudi is a senior scientist at Bio-Rad Labs and is currently working on her PhD in Statistics at Texas A&M.
Copyright 2015 Merion Matters. All rights reserved.