Differential Success

As the College of American Pathologists reported in March 2010, the absolute count is preferred for reporting the WBC differential.1 Moving away from proportional (percentage) differentials involves hurdles: physician and staff resistance, redesigned report formats, and education. But once you’ve made the switch, what happens when you need to perform a manual differential?

When to Do Manual Diffs?

Once the switch to absolute numbers alone is made, increasing reliance on the accuracy of the automated differential is inevitable; technologists and physicians will grow comfortable with absolute ranges and prefer them to percentages. It therefore makes sense to first review the criteria for when to perform a manual differential.

One article published in the American Journal of Clinical Pathology points out that reflexively performing a manual differential count based solely on instrument flags can be problematic for just this reason: if an instrument flags abnormalities that do not exist and a manual differential is done, the accurate automated differential is replaced with less accurate numbers. The authors recommend a slide review first.2

A slide review validates instrument performance and screens out false positives. For example, if your hematology instrument flags “Reactive Lymphocytes,” scan the smear for abnormal lymphocytes instead of performing a manual differential; only if the number seen exceeds your criteria (e.g., two or more in five fields) should a manual differential be performed.

Criteria may be somewhat arbitrary. As a CAP Q-Probes study found, practice in manual review of blood smears varies widely. In the study, the median laboratory reviewed approximately 25% of CBCs, and the most frequent prompts were hematology analyzer flags, including the WBC flag (36.7%), immature cell flag (25.5%), and red cell and platelet flags.3

In 2002, however, the International Consensus Group for Hematology Review developed 41 rules after testing over 13,000 samples in 15 laboratories.4 These rules are in the public domain and available at http://www.islh.org, providing criteria for slide review and/or manual differential based on delta checks, differential numbers, and instrument flags.5

Our Changing Paradigm

Reflex criteria are only part of the story. If a technologist ends up replacing automated absolute numbers with values calculated from a 100-cell manual differential, the purpose of making the switch in the first place is defeated. A new paradigm demands a new approach.

Manual differentials have traditionally been 100-cell counts as a matter of convenience. But many validation protocols call for 200- or 400-cell differentials to check automated counts, because the precision of a manual count varies with the WBC count and smear technique. This imprecision can be calculated as follows:

2 SD = 2 x SQRT((p x (100 – p)) / n)

where p is the percentage of the cell type and n is the total number of cells counted.

Thus, a reactive lymphocyte count of nine cells per 100 has a theoretical range of 3-15%. A total lymphocyte count of 35%, however, has a range of 25-46%. At a WBC count of 5,000, the calculated imprecision of the absolute lymphocyte count is almost 14%, much higher than that of automated instruments. A table in Laboratory Diagnosis and Management by Clinical Methods, drawn from a 1960 study, mirrors this variation: in it, our lymphocyte counts have ranges of 4-17% and 25-46%.6 And a physician tracking numbers or making diagnostic decisions is unlikely to be aware of this inherent imprecision.
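The theoretical ranges quoted above can be reproduced with a short calculation. This is a sketch of the 2 SD formula in Python; the cell percentages are those from the text:

```python
from math import sqrt

def two_sd_range(pct, total_counted):
    """Return the (low, high) 2 SD range, in percent, for a cell type
    observed at pct percent in a differential of total_counted cells."""
    two_sd = 2 * sqrt(pct * (100 - pct) / total_counted)
    return (pct - two_sd, pct + two_sd)

low, high = two_sd_range(9, 100)       # reactive lymphs: about 3.3 to 14.7
low35, high35 = two_sd_range(35, 100)  # total lymphs: about 25.5 to 44.5
```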

Perhaps reporting and relying on absolute numbers diminishes the importance of enumerating every percentage in a total differential. As Dr. Joan Etzell, a member of the CAP Hematology and Clinical Microscopy Resource Committee, advises in CAP Today, “immature granulocytes” that include metamyelocytes and myelocytes, as well as promyelocytes and blasts, should be reported separately.7

A Better Approach

One approach to the variation in precision is simply to count more cells: instead of performing a reflexive 100-cell differential, count 200 or even 400 cells to calculate more accurate absolute numbers. But as Table 1 suggests, this approach improves precision only marginally.

Table 1: The Effects of Different Cell Counts

Cells Counted    100 Total Counted    200 Total Counted    400 Total Counted
9%               3-15%                5-13%                6-12%
35%              25-45%               28-42%               30-40%

(Ranges calculated from the 2 SD formula above.)
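The marginal effect of counting more cells follows directly from the formula: the 2 SD spread shrinks only with the square root of the total counted. A quick sketch, using an illustrative 35% lymphocyte count:

```python
from math import sqrt

def two_sd(pct, n):
    """2 SD imprecision, in percentage points, for a cell type observed
    at pct percent in a manual differential of n total cells."""
    return 2 * sqrt(pct * (100 - pct) / n)

# Quadrupling the cells counted only halves the spread
spreads = [round(two_sd(35, n), 1) for n in (100, 200, 400)]
# spreads == [9.5, 6.7, 4.8]
```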
A second approach is to report a 100-cell manual differential along with the automated absolute numbers whenever a manual differential is needed. This “backslider” approach can give physicians confusing or misleading information, especially if the percentages and absolute numbers for major cell lines do not appear to match.

A third approach, hinted at by Dr. Etzell above, is to report percentage counts of only the abnormal cells alongside the automated absolute counts. This best-of-both-worlds approach keeps the more accurate automated absolute neutrophil count (ANC) intact while reporting raw percentages of metamyelocytes and myelocytes. A suggested format is shown in Table 2.

Table 2: Suggested Format for Reporting Abnormal Percentages

WBC                              x 10³/µL
RBC                              x 10⁶/µL
Absolute neutrophils (ANC)       x 10³/µL
Absolute lymphocytes             x 10³/µL
Absolute monocytes               x 10³/µL
Absolute eosinophils             x 10³/µL
Absolute basophils               x 10³/µL
Absolute immature granulocytes   x 10³/µL
Metamyelocytes                   %
Myelocytes                       %

Mind the “S”

Any quality improvement represents a cycle of constantly measuring, planning, and implementing change. This is commonly known as the PDSA (Plan-Do-Study-Act) cycle, a tool used to generate continuous improvement in many areas of your organization. It applies in this case, too, as shown in Table 3.

Table 3: PDSA Applied


CBC Reporting Improvement

P – Plan what you want to achieve: design your format changes, revise procedures, train staff, and educate physicians.

D – Do what you planned in the previous step: follow an implementation action plan with accountability and deadlines.

S – Study the outcome to see what happened: measure performance and analyze the data to see if the change worked.

A – Act based on an analysis of your study: if the new reporting works, build it into your system; if problems remain, return to the Plan step and repeat the cycle.

It’s easy to forget the “S” (Study) once other problems arise. This crucial step is needed to make sure the plan was implemented successfully, but it also checks for unintended consequences: physicians ordering manual differentials or insisting on percentage counts, technologists becoming less comfortable with manual differential technique over time, variation in how abnormal cell counts are reported, and so on.

Data should be collected to track how the change affects your process, and this information should be analyzed in open discussion with staff. Key questions to answer are:

  • What really happened?
  • Did we get the outcome we expected?
  • Do we need to do anything different?

While abnormal cell identification is inherently subjective, manual differential performance can be broadly compared to your instrument’s using a chi-square test. The Excel CHITEST function, for example, returns the probability that differences between two data sets are due to random variation. The observed values (the technologist’s manual differential counts) are compared to expected values (derived from the automated numbers); a probability of .05 or less indicates a significant difference and should trigger a competency review.
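As a sketch of this comparison in Python, with hypothetical counts: Excel’s CHITEST returns a p-value, while this version compares the Pearson chi-square statistic to the .05 critical value, which is equivalent for flagging purposes.

```python
# Hypothetical 100-cell manual differential (observed) vs. counts
# expected from the analyzer's automated percentages on the same sample.
observed = {"neut": 58, "lymph": 28, "mono": 8, "eos": 4, "baso": 2}
automated_pct = {"neut": 60.0, "lymph": 27.0, "mono": 7.5, "eos": 3.5, "baso": 2.0}

total = sum(observed.values())
expected = {cell: automated_pct[cell] / 100 * total for cell in observed}

# Pearson chi-square statistic: sum over cell types of (O - E)^2 / E
chi_sq = sum((observed[c] - expected[c]) ** 2 / expected[c] for c in observed)

# Critical value at alpha = 0.05 with df = 5 - 1 = 4; a statistic above
# it corresponds to p <= .05 and would prompt a competency review
CRITICAL_05_DF4 = 9.488
flag_for_review = chi_sq > CRITICAL_05_DF4
```

With these illustrative counts the statistic is small, so no review is flagged.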

Applying new technology to an existing process — in this case, more accurate absolute numbers to a manual differential — is a good opportunity to test your PDSA skills. It’s also a good chance to give the doctors just what they need to treat their patients, leading to better patient care.

Scott Warner is lab manager at Penobscot Valley Hospital, Lincoln, ME.

References


1. Etzell J. For WBC differentials, report in absolute numbers. Available at: http://www.cap.org/. Last accessed: 5/8/12.

2. Lantis K et al. Elimination of instrument-driven reflex manual differential leukocyte counts. Available at: http://ajcp.ascpjournals.org/content/119/5/656.full.pdf. Last accessed: 5/8/12.

3. CAP. 2008 CAP today Q&A. Available at: http://capstaging.cap.org/apps/docs/committees/hematology/2008_CAP_TODAY_Questions.pdf. Last accessed: 5/8/12.

4. Barnes PW et al. The international consensus group for hematology review: suggested criteria for action following automated CBC and WBC differential analysis. (Abstract) Available at: http://www.ncbi.nlm.nih.gov/pubmed/16024331. Last accessed: 5/8/12.

5. ISLH. Consensus guidelines: rules. Available at: http://www.islh.org/web/index.php?page=consensus_rules. Last accessed: 5/8/12.

6. Warner S. Using chi-square and a PC to assess competency. Available at: http://findarticles.com/p/articles/mi_m3230/is_7_33/ai_77107546/pg_2/?tag=content;col1. Last accessed: 5/10/12.

7. CAP Today. To and fro on band count reporting and clinical utility. Available at: http://www.cap.org/apps/cap.portal?_nfpb=true&cntvwrPtlt_actionOverride=%2Fportlets%2FcontentViewer%2Fshow&_windowLabel=cntvwrPtlt&cntvwrPtlt%7BactionForm.contentReference%7D=cap_today%2F1110%2F1110f_band.html&_state=maximized&_pageLabel=cntvwr. Last accessed: 5/10/12.
