The building blocks for designing a good QC system, one that ensures reliable patient results are produced and reported, are:
- identify failure modes that create hazards
- evaluate the risk of each hazard
- devise a QC plan to mitigate hazards and recover from failures when they occur
- continually monitor the clinical diagnostic process to evaluate the failures that occur
- assess when it is necessary to re-evaluate hazards, risks or the QC plan.
Each failure provides a new opportunity to strengthen the quality of the clinical diagnostic process and ensure the reliability of future results. This monitoring aspect is referred to as post-implementation monitoring in the CLSI EP23-A guideline.5
Data Useful for Improvement
One of the most valuable sources of data for identifying opportunities to improve the quality of the total test system is the data generated by failures. Some failure data are fairly easy to obtain, such as the frequency of a specific failure mode or the cumulative frequency of all failure modes for a given testing process. How often are QC rejections generated? How often is calibration required? How often are patient specimens reevaluated? These data should be recorded in a fashion that facilitates analysis. Basic failure rates should be recorded for all analytes, then summarized and monitored on a regular basis.
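As a minimal sketch of the record-keeping this implies (the record layout, analyte names and failure modes here are hypothetical, not drawn from any particular LIS or QC software), failure events can be logged as simple tuples and tallied per analyte and per failure mode:

```python
from collections import Counter
from datetime import date

# Hypothetical failure-event log: (date, analyte, failure mode).
# Real records would come from the LIS or QC software's event log.
failure_log = [
    (date(2015, 3, 2), "glucose", "QC rejection"),
    (date(2015, 3, 9), "glucose", "recalibration"),
    (date(2015, 3, 11), "sodium", "QC rejection"),
    (date(2015, 3, 20), "glucose", "QC rejection"),
]

# Cumulative frequency of all failure modes, per analyte.
failures_by_analyte = Counter(analyte for _, analyte, _ in failure_log)

# Frequency of each specific failure mode.
failures_by_mode = Counter((analyte, mode) for _, analyte, mode in failure_log)

for analyte, n in sorted(failures_by_analyte.items()):
    print(f"{analyte}: {n} failures")
for (analyte, mode), n in sorted(failures_by_mode.items()):
    print(f"{analyte} / {mode}: {n}")
```

Even a tally this simple, reviewed on a regular schedule, makes trends in failure rates visible that individual QC rejections do not.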
Other critical data on failures are not so easily obtained, especially failures that may persist for an extended period. These types of failure modes have been referred to as persistent out-of-control conditions6 and large-scale testing errors.7
Questions for the laboratory to address in these cases are:
- How large was the failure (magnitude of the out-of-control condition)?
- When did the failure occur?
- How many patient results were affected by the failure?
An effort should be made to estimate the type, magnitude and duration of a detected out-of-control condition to facilitate a timely and effective recovery from the failure. As a laboratory accumulates data over time regarding the types and magnitudes of the out-of-control conditions that have occurred, it is in a better position to design QC strategies targeted toward those types of failures.
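One crude but concrete way to characterize a detected failure, sketched below with hypothetical in-control parameters and QC values, is to flag the retained QC results that appear to belong to the out-of-control period, express the mean shift in SD units, and count the QC events it spans:

```python
# Hypothetical in-control parameters for the analyte.
in_control_mean = 100.0
in_control_sd = 2.0

# Retained QC results, oldest first; the QC rejection occurred
# at the final value.
recent_qc = [100.5, 99.8, 104.1, 104.8, 105.9, 107.2]

# Flag QC results that look like they belong to the out-of-control
# period (here, simply more than 1.5 SD from the in-control mean).
suspect = [x for x in recent_qc if abs(x - in_control_mean) > 1.5 * in_control_sd]

# Magnitude: mean shift of the suspect period, in SD units.
shift_sd = (sum(suspect) / len(suspect) - in_control_mean) / in_control_sd

# Duration: number of QC events spanned by the condition.
print(f"Estimated shift of {shift_sd:.1f} SD spanning {len(suspect)} QC events")
```

The estimated duration, in turn, bounds the window of patient specimens that must be re-examined during recovery.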
By collecting and recording the details of each out-of-control condition, the laboratory builds the information infrastructure needed to trigger a reassessment of how it is managing the production of reliable patient results for a clinical process. If the initial risk assessment estimated that a given failure mode was likely to occur once every 2 years and it has occurred 3 times this year, a re-evaluation is in order. If an out-of-control condition is occurring more frequently than anticipated, the acceptability of the estimated number of unreliable results it produces will likely need to be questioned. The frequency of QC events may need to be increased to detect the out-of-control condition sooner and reduce the impact of the failure, or the QC rule may need to be replaced with one that has greater error detection power.
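To make that re-evaluation trigger concrete, here is a quick check, assuming only that failures arrive roughly as a Poisson process, of how surprising 3 failures in a year would be if the true rate really were once every 2 years:

```python
from math import exp, factorial

expected_per_year = 0.5   # risk assessment: once every 2 years
observed = 3              # failures actually seen this year

# P(X >= 3) under a Poisson model with mean 0.5 per year.
p_at_least = 1.0 - sum(
    exp(-expected_per_year) * expected_per_year**k / factorial(k)
    for k in range(observed)
)
print(f"P(>= {observed} failures in a year) = {p_at_least:.4f}")
# ~0.0144: the observed frequency is hard to reconcile with the
# original estimate, so the risk assessment should be revisited.
```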
Another benefit of analyzing the details of failures is the discovery of a hazardous situation that had not yet been identified in the laboratory's risk assessment and addressed in its current QC plan. The newly identified hazard may suggest an additional critical control point in the testing process. Consider this scenario: The QC results from the laboratory's last QC event are on the high side, but still acceptable, so patient specimen testing continues. A calibration is performed prior to the next scheduled QC event. QC results obtained immediately after the calibration are acceptable, and patient specimen testing continues. Unfortunately, several treating physicians subsequently question the integrity of some recent patient results. After investigation, it is determined that a number of the specimens evaluated just prior to the calibration had unacceptably high results. The most likely explanation is that an undetected out-of-control condition existed prior to the calibration that was corrected by the calibration and, therefore, never identified by the lab. To mitigate the likelihood of this hazardous situation recurring, the lab adds a new critical control point: a QC event scheduled immediately before each calibration. By doing so, the laboratory increases its chance of detecting any out-of-control condition affecting patient specimens examined since the last accepted QC event before the test system's state is changed by the new calibration.
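This new critical control point can be stated as a simple guard. The sketch below is only illustrative (the function and parameter names are hypothetical, not part of any instrument interface): a calibration is permitted only once an accepted QC event covers all patient results reported since the previous accepted QC event.

```python
from datetime import datetime

def calibration_permitted(last_accepted_qc: datetime,
                          last_patient_result: datetime) -> bool:
    """Critical control point: a calibration may proceed only if an
    accepted QC event has occurred since the last patient result, so
    no patient results are left uncovered when the calibration
    changes the state of the test system."""
    return last_accepted_qc >= last_patient_result

# Patient results were reported after the last accepted QC event,
# so a pre-calibration QC event is required first.
print(calibration_permitted(datetime(2015, 6, 1, 8, 0),
                            datetime(2015, 6, 1, 9, 30)))  # False
```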
In the prior scenario, patient risk was reduced by adding a new critical control point to the QC plan. Risk can also be reduced by decreasing the likelihood that an incorrectly reported patient result will lead to patient harm. Assume a hospital has an emergency department (ED) policy to discharge patients presenting with chest pain if they have a negative ECG and a cardiac troponin result below the cutoff. The lab is notified that several of these patients have re-presented and been found to have had myocardial infarction (MI) events. During the investigation, the lab discovers that an instrument failure led to false-negative troponin results. Given the patient impact, in addition to other corrective action, the multidisciplinary team decides to reduce the likelihood that a false-negative troponin result will cause patient harm by changing the discharge requirements to include serial negative troponin results and by modifying QC procedures to ensure that any out-of-control condition in troponin testing is detected sooner.
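The revised discharge rule amounts to requiring agreement across serial results. A sketch follows; the cutoff value, the number of required serial results and the names are all hypothetical, and no such sketch substitutes for clinical judgment:

```python
TROPONIN_CUTOFF = 0.04  # hypothetical assay cutoff, ng/mL

def discharge_criteria_met(ecg_negative: bool,
                           troponins: list[float],
                           required_negatives: int = 2) -> bool:
    """Revised policy: a negative ECG plus serial troponin results,
    all below the cutoff, before discharge; a single negative
    result no longer suffices."""
    return (ecg_negative
            and len(troponins) >= required_negatives
            and all(t < TROPONIN_CUTOFF for t in troponins))

# One negative troponin is no longer enough under the revised policy.
print(discharge_criteria_met(True, [0.02]))        # False
print(discharge_criteria_met(True, [0.02, 0.01]))  # True
```

Requiring two independent negative results means a single undetected out-of-control run is far less likely to trigger an inappropriate discharge.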
Dr. Parvin is manager of Advanced Statistical Research; John Yundt-Pacheco is Scientific Fellow; and Andy Quintenz is Global Scientific and Professional Affairs Manager, Bio-Rad.
References
1. Parvin CA, Yundt-Pacheco J, Williams M. Designing a quality control strategy: In the modern laboratory three questions must be answered. ADVANCE for Administrators of the Laboratory 2011;20(5):53-4.
2. Parvin CA, Yundt-Pacheco J, Williams M. The frequency of quality control testing. QC testing by time or number of patient specimens and the implications for patient risk are explored. ADVANCE for Administrators of the Laboratory 2011;20(7):66-9.
3. Parvin CA, Yundt-Pacheco J, Quintenz A. Statistical QC & risk management. The combination can improve the overall quality of patient results. ADVANCE for Administrators of the Laboratory 2012;21(8):35-7.
4. Parvin CA, Yundt-Pacheco J, Williams M. Recovering from an out-of-control condition. The laboratory must assess the impact and have a corrective action strategy. ADVANCE for Administrators of the Laboratory 2011;20(11):42-4.
5. CLSI. Laboratory Quality Control Based on Risk Management; Approved Guideline. CLSI document EP23-A. Wayne, PA: Clinical and Laboratory Standards Institute; 2011.
6. Parvin CA. Assessing the impact of the frequency of quality control testing on the quality of reported patient results. Clinical Chemistry 2008;54(12):2049-54.
7. Valenstein PN, Alpern GA, Keren DF. Responding to large-scale testing errors. American Journal of Clinical Pathology 2010;133:440-6.