The Lab and Risk Reduction, Part 6

Welcome. In this, our third installment on dealing with risks during the analytical phase, we discuss the role of quality control as another approach to reducing risks to patients. Once an analytical method has been validated with regard to bias and imprecision, it falls to QC to detect changes in the mean or the SD.

A 2×2 grid is a useful tool for understanding and discussing the frequency of errors (and hence risks):

                         Data Are
                     Good        Bad
   QC Label   "In"
              "Out"

A set of patient samples analyzed for a particular analyte at the same time is a 'run.' A run is either good (without error) or bad (with error). The QC rules assign a label ("In" or "Out") to the run. No QC rule can detect every error (and not every error affects the patients in the run), nor can any rule accept every good run. Thus some runs are rejected even though no error was present. These false rejects are a consequence of the normal (Gaussian) distribution of control results. But given the total error allowed (TEa), it is possible, most of the time, to select one or more QC rules that will detect an error larger than the TEa (that is, TE > TEa). To achieve this, it is occasionally necessary to use QC rules that will sometimes reject ("Out") a good run. We can label the four possible outcomes as follows:

                          Data
                      Good                          Bad
   QC Label   "In"    True accept                   False accept (risks to patients)
              "Out"   False reject                  True reject
                      (increased costs and time)

These four outcomes do not occur at the same rates. "Bad" runs are quite rare and are detected most of the time. Most runs (~99%) are good, but they can be falsely rejected as often as 10% of the time when the 1-2SD QC rule is used with two controls.
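The false-reject rate quoted above can be estimated directly. A minimal sketch, assuming each control result is an independent Gaussian value, so a run is falsely rejected when at least one control falls outside the rule's limit (the function names here are ours, for illustration):

```python
import math

def p_outside(z):
    """Two-tailed probability that a Gaussian control value falls beyond +/- z SD."""
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

def false_reject_rate(z, n_controls):
    """Chance that at least one of n independent controls exceeds +/- z SD
    on an error-free (good) run."""
    return 1 - (1 - p_outside(z)) ** n_controls

print(f"1-2SD, 2 controls: {false_reject_rate(2, 2):.1%}")  # ~8.9%, commonly rounded to ~10%
print(f"1-3SD, 2 controls: {false_reject_rate(3, 2):.2%}")  # ~0.54%, i.e. the ~0.6% cited later
```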

Comparing the TE and TEa for each analyte at each level of control will identify a highly efficient rule (or rules) for QC with a minimum of false rejects. Where the false rejects are not cost effective, it is possible to adjust the method – by reducing the SD and/or the bias – thereby reducing the false rejects without lowering the error detection.

Three examples will help illustrate this.

For the first example, let's imagine an analyte with a TE of 7% and a TEa of 12%. The error budget, or margin of error, is 5%. The bias is 2% and the %CV is 2.5% (TE = 2% + 2 × 2.5% = 7%). If the QC rule is set at 1-2SD for both Level I and Level II, there is a 10% chance that any given run will be rejected when no error is present – a 10% false reject rate. If the QC rule were changed to 1-3SD, there would be a nearly 100% chance of detecting a change in bias or SD that exceeds the TEa, with a reduction in false rejects from 10% to 0.6%!
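The arithmetic in this first example can be sketched as follows (a hypothetical helper, using the common TE = |bias| + 2 × CV convention from the example):

```python
def total_error(bias_pct, cv_pct, k=2):
    """Total error as |bias| + k * CV, with k = 2 covering ~95% of the imprecision."""
    return abs(bias_pct) + k * cv_pct

te = total_error(2.0, 2.5)   # 2% bias + 2 * 2.5% CV = 7%
tea = 12.0
print(f"TE = {te}%, error budget = {tea - te}%")  # TE = 7.0%, error budget = 5.0%
```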

In the second example, imagine that the 1-3SD rule is being used for both levels of control, but a new lot of calibrator shifts the mean of Level I from 52 to 54. The group mean is 50 and the CV remains 3%. The TE thus moves from 10% to 14%. Assume the TEa is 15%. Even with the 1-2SD QC rule for both levels of control, the error detection would not be sufficient. It may be necessary to use another lot of calibrator. It is also possible that the majority of the peer group will move up a bit; it might be wise to check with the control vendor to see whether other labs have reported a small but significant shift.
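The numbers in this example can be checked the same way – the % bias comes from the lab mean versus the group mean, and the TE follows from bias plus 2 × CV (function names are ours, for illustration):

```python
def pct_bias(lab_mean, group_mean):
    """Percent bias of the lab mean relative to the peer-group mean."""
    return 100 * (lab_mean - group_mean) / group_mean

def total_error(bias_pct, cv_pct, k=2):
    return abs(bias_pct) + k * cv_pct

group_mean, cv = 50.0, 3.0
for mean in (52.0, 54.0):
    b = pct_bias(mean, group_mean)
    print(f"mean {mean}: bias {b:.0f}%, TE {total_error(b, cv):.0f}%")
# mean 52.0: bias 4%, TE 10%
# mean 54.0: bias 8%, TE 14%  -- uncomfortably close to a TEa of 15%
```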

In our last example, let's imagine a method (e.g., prothrombin time) that uses the 1-3SD rule for both controls. Level I has a mean of 12.4 with an SD of 0.15 (%CV = 1.2%); Level II has a mean of 31 with an SD of 0.35 (%CV = 1.1%). The group means are 12.6 and 32. The TEa is 15%. New reagents are delivered and checked out. The mean of Level I moves to 12.6 – a change of more than 1 SD. Level II also moves up by 1 – nearly 3 SD! But these shifts are toward the group means – arguably a good thing. Our point? All things being equal, there is a 50% chance that a shift will be toward the group, just as there is a 50% chance of a shift away from the group.
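The size of each shift can be expressed in SD units directly from the numbers above (a hypothetical helper for illustration):

```python
def shift_in_sd(old_mean, new_mean, sd):
    """Shift of the control mean expressed in units of the method's SD."""
    return (new_mean - old_mean) / sd

print(f"Level I:  {shift_in_sd(12.4, 12.6, 0.15):+.1f} SD")  # +1.3 SD
print(f"Level II: {shift_in_sd(31.0, 32.0, 0.35):+.1f} SD")  # +2.9 SD
```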

It is important, if not imperative, that new lots of reagents and calibrators be tested before being put on line. Run replicates of the controls (3-4 each) and 6-8 patient samples with the new lot side by side with the current reagents or calibrators. Compute the difference between the means and compare the TE before and after the change. Another option is the Student t-test, which will detect a difference between the means; however, it does not give as much information about TE as the mean difference or % bias does. We suggest using just the means and the % bias.
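The side-by-side check described above amounts to two small averages and a % bias. A minimal sketch, using hypothetical replicate results (the data and function name here are invented for illustration):

```python
def lot_comparison(current, new, group_mean):
    """Mean difference between lots, and % bias of the new lot vs. the group mean."""
    mean_cur = sum(current) / len(current)
    mean_new = sum(new) / len(new)
    diff = mean_new - mean_cur
    bias_pct = 100 * (mean_new - group_mean) / group_mean
    return diff, bias_pct

# Hypothetical Level I control replicates, current vs. new reagent lot
current = [12.4, 12.5, 12.3, 12.4]
new_lot = [12.6, 12.7, 12.5, 12.6]
diff, bias = lot_comparison(current, new_lot, group_mean=12.6)
print(f"mean difference = {diff:.2f}, % bias vs group = {bias:.1f}%")
```

If the mean difference pushes the recomputed TE close to the TEa, the new lot deserves a second look before going on line.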

Keep in mind that QC is the best error detector once the method has been validated. It is worthwhile to compare TE and TEa and then select the best rule(s) for detecting errors (thus reducing risks).

Our next installment will discuss the risks associated with post-analytical measurements.

The authors thank William McLellan for his assistance in preparing this installment. Don’t hesitate to send comments and questions to davidsplaut@gmail.com

About The Author