Vol. 19 • Issue 7 • Page 57
This is the first of a 3-part series.
The activities needed to manage and implement the quality processes specific to the validation and verification of method performance in the clinical laboratory are crucial concepts. In this first of a three-part series, we focus on the role of performance standards for clinical laboratory tests.
Quality Management: Not Just QC
Most laboratorians started out tracking quality control (QC) functions, running the necessary controls and manually filling in hard-copy Levey-Jennings process control charts with big, black markers. In recent years, total quality management (TQM) has expanded to include quality assurance (QA) functions designed to define, refine and automate the entire process, from receipt of the requisition to the production and delivery of test results and beyond. Depending on the certifications needed to operate a specific lab, regulations intended to ensure the quality of test results include CLIA ’88, with additional “enhanced” requirements by deemed proficiency testing providers such as CAP, New York State and numerous state health organizations, as well as inspection agencies like The Joint Commission or COLA. Table 1 lists the QA activities required on a one-time basis or on a regular basis.
Setting performance standards for the acceptable performance of clinical laboratory tests on a day-to-day basis is a key component of the QA function. When discussing performance standards or performance goals, we are referring to the total error allowed for a single replicate measurement of a patient specimen as compared to a “true” value. The definition of total allowable error (TEa) as an indicator of medical usefulness dates back at least 30 years, is specified in many of the current regulations, and is now in widespread use. Because a single point is measured, TEa includes the elements of both imprecision and bias from the true value (i.e., accuracy); it is the total error allowed around the “true” value.
Total error includes two components. The first is allowable random error (REa), a measure of imprecision; the second is allowable systematic error (SEa), or bias, a measure of accuracy as the deviation from the true value. These two metrics define the two key values on which the quality of our primary product, patient results, is based. REa can assist in targeting the SD to be used in daily QC.
After the TEa is agreed upon, SEa and REa can be budgeted into the whole. Defining the budget for accuracy (SEa) as a percent of TEa turns out to be very dependent on being able to use standard materials that are traceable to a true value or reference value applicable to the method in question. If the bias is significant, the REa budget needs to shrink to maintain the same error rate.
A model that fits well for most clinical labs follows a three standard deviation (SD) model, equivalent to an error rate of 1,350 errors per million (epm) and aligning with the CLIA recommendation of ±3 SD for analytes without specific numerical guidelines. In this model, TEa = SEa + 3 x REa. TEa will be the first metric to be defined.
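The three-SD budget model can be sketched in a few lines of code. This is a minimal illustration of the relationship TEa = SEa + 3 × REa described above; the function name and the example figures (a 10% TEa with a 1% bias budget) are hypothetical, not values from the text.

```python
def rea_from_budget(tea_pct: float, sea_pct: float) -> float:
    """Allowable random error (as a %CV) under the model TEa = SEa + 3 * REa.

    Once the total allowable error (TEa) and the bias budget (SEa) are
    fixed, the remaining budget for imprecision (REa) follows directly.
    """
    if sea_pct >= tea_pct:
        raise ValueError("bias budget consumes the entire total error")
    return (tea_pct - sea_pct) / 3.0

# Hypothetical example: a 10% TEa with a 1% bias budget
# leaves a 3% CV target for daily imprecision.
print(rea_from_budget(10.0, 1.0))  # 3.0
```

Note how a larger observed bias shrinks the imprecision budget, which is the trade-off the text describes: if SEa grows, REa must tighten to hold the same overall error rate.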
How Are Values for Established Tests Defined?
There’s no single allowable error that is perfect for all methods. The object is to use values that are “just right.” Total error should be small enough to be defensible and clinically responsible, minimizing the probability of releasing inaccurate results. The appropriate TEa should also be large enough to be attainable and analytically achievable without accruing excessive costs to keep the process in control.
Where Do Performance Standards Come From?
Most of the time, TEa is either predetermined by the applicable regulations or by derivation using available data. The first thing to consider is whether an analyte has universally recognized medical requirements, which only a small number of analytes do. TEa for Total Cholesterol, HDL Cholesterol, Triglyceride, LDL Cholesterol, Creatinine and HbA1c has been standardized and should be used. Otherwise, the values from regulatory sources such as CLIA or CAP should be used. CLIA ’88 includes TEa criteria for about 75 analytes. Most are expressed in terms of ± concentration units, percentages or both. Some are expressed in terms of the achievable SD distribution in a proficiency survey (e.g., TSH is specified as ± 3 SD).
Values provided in the CLIA regulation are, by definition, the largest one would want to specify. Technology in 2010 is much improved since CLIA was first enacted; for many analytes on modern automated instruments, responsible labs can pare the value down to a smaller, still-achievable value without increasing the cost of keeping the process in control.
When there is no pre-defined or mandated TEa value for a particular test, it is possible to derive one using data that is readily available from a peer group proficiency survey. This is relatively straightforward for established tests or those already approved or cleared by the FDA; when using such performance data, review the results for your instrument’s specific peer group. A generalized recommendation for calculating an allowable error estimate using peer group data is outlined in Table 4.
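Table 4 is not reproduced here, so the exact recommended calculation is not shown; as an illustrative assumption only, one common way to turn peer-group survey statistics into a candidate TEa is to combine the group's observed bias from the target with three peer-group SDs. The function below is a hypothetical sketch of that approach, not the article's Table 4 procedure.

```python
def tea_from_peer_group(group_mean: float, target: float, group_sd: float) -> float:
    """Candidate TEa from peer-group survey data (illustrative assumption):
    observed group bias from the target plus three peer-group SDs.
    """
    bias = abs(group_mean - target)
    return bias + 3.0 * group_sd

# Hypothetical survey numbers: group mean 102, target 100, peer SD 2.0.
print(tea_from_peer_group(102.0, 100.0, 2.0))  # 8.0 concentration units
```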
Why a Concentration Component?
For many analytes, a TEa expressed as percent will not work at concentrations near the lower limit of the reportable range. The analytical sensitivity of the method may dictate that below a certain value, the SD will remain fairly constant. For example, in an experiment to verify reportable range and accuracy for LDH, the TEa is 20%. Suppose the assigned value of a low standard is 5 units, and the mean measured value is 7 units (40% above the defined value). While the difference is clinically insignificant, the test for accuracy fails. A concentration component should be defined for TEa in addition to the percentage to prevent setting unrealistic expectations at the low end.
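The arithmetic of the LDH example can be checked directly. This short sketch uses the values given in the text (assigned value 5 units, measured mean 7 units, 20% TEa) to show why a percent-only criterion fails at the low end:

```python
# LDH low-end example from the text: a clinically insignificant
# 2-unit difference still fails a percent-only accuracy check.
assigned, measured, tea_pct = 5.0, 7.0, 20.0

pct_error = 100.0 * abs(measured - assigned) / assigned
print(pct_error)             # 40.0 -> twice the 20% TEa
print(pct_error <= tea_pct)  # False: percent-only criterion fails
```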
There are several ways to obtain a usable value for the TEa at the low end. The manufacturer may offer a low-end precision SD, or the SD obtained from a low concentration sample in a peer group PT survey or monthly QC survey could be used. A good source would be the observed low-end total precision SD from the CLSI-EP5 complex precision experiment. In all cases, multiply the SD by 3.
Therefore, for most analytes, it is desirable to use concentration at the low end and percentage at the high end. This is expressed as “X units or Y%, whichever is greater.” There is a crossover point at which the concentration will equal the percent. Using a percentage target at these low levels often gives an unachievable value. The Figure shows a typical test’s error profile expressed in CV and SD.
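The “units or percent, whichever is greater” rule, and the crossover point at which the two limits coincide, can be sketched as follows. The limit values used in the example (6 units or 20%) are hypothetical:

```python
def tea_limit(conc: float, units: float, pct: float) -> float:
    """TEa at a given concentration, expressed as
    'units or pct%, whichever is greater'."""
    return max(units, conc * pct / 100.0)

def crossover(units: float, pct: float) -> float:
    """Concentration at which the fixed-unit and percent limits are equal."""
    return units / (pct / 100.0)

# Hypothetical limits: 6 units or 20%, whichever is greater.
print(tea_limit(5.0, 6.0, 20.0))    # 6.0  -> unit term governs at the low end
print(tea_limit(100.0, 6.0, 20.0))  # 20.0 -> percent term governs higher up
print(crossover(6.0, 20.0))         # 30.0 -> the two limits meet at 30 units
```

Below the crossover concentration the fixed-unit term keeps the goal realistic; above it, the percent term takes over, matching the error profile shown in the Figure.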
How Are Performance Standards Applied to the QA Process?
Depending on the specific experiment being run, pass/fail criteria may rely on one or more of the components of the total error. Some experiments, such as two-instrument or multiple-instrument comparisons, base the pass/fail criteria on whether observed data points lie within total error limits compared to the “true” value. Experiments that assess accuracy or calibration verification require comparison to the systematic error to pinpoint the method’s observed bias for acceptance. Modules assessing imprecision will use the allowable random error budget as the acceptance goal. Table 5 lists those modules in EP Evaluator™ that use performance standards as the primary indicator of the pass/fail criteria, and shows whether each uses systematic error and/or random error in addition to total allowable error.
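How the three error components drive different pass/fail decisions can be sketched with a few helper functions. These are hypothetical illustrations of the logic described above, not EP Evaluator's actual API or acceptance algorithms:

```python
def accuracy_passes(observed_bias: float, sea: float) -> bool:
    """Accuracy / calibration verification: observed bias within SEa."""
    return abs(observed_bias) <= sea

def precision_passes(observed_sd: float, rea: float) -> bool:
    """Imprecision experiments: observed SD within the REa budget."""
    return observed_sd <= rea

def comparison_passes(diffs, tea: float) -> bool:
    """Instrument comparisons: each point's difference from the
    'true' value must lie within the total error limits."""
    return all(abs(d) <= tea for d in diffs)

# Hypothetical results against hypothetical budgets:
print(accuracy_passes(0.8, 1.0))            # True  (bias within SEa)
print(precision_passes(2.5, 2.0))           # False (SD exceeds REa)
print(comparison_passes([0.5, -2.1], 2.0))  # False (one point outside TEa)
```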
The use of performance standards TEa, SEa and REa is fundamental to the method evaluation phase of laboratory quality assurance. The key is to obtain and use TEa that is both attainable (performance goals are achievable on the method in question) and defensible (the goals are clinically responsible with no greater error rate than is acceptable).
Carol Lee is a consultant for Data Innovations. Dr. Rhoads designed and developed EP Evaluator® and is the director of Rhoads, a brand of Data Innovations.