Journal of Pathology Informatics
TECHNICAL NOTE
J Pathol Inform 2021,  12:19

Use of middleware data to dissect and optimize hematology autoverification


Department of Pathology, University of Iowa Hospitals and Clinics, Iowa City, IA, USA

Date of Submission: 07-Oct-2020
Date of Decision: 01-Nov-2020
Date of Acceptance: 20-Nov-2020
Date of Web Publication: 07-Apr-2021

Correspondence Address:
Dr. Matthew D Krasowski
Department of Pathology, University of Iowa Hospitals and Clinics, 200 Hawkins Drive, C-671 GH, Iowa City, IA 52242
USA

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/jpi.jpi_89_20

Abstract


Background: Hematology analysis comprises some of the highest-volume tests run in clinical laboratories. Autoverification of hematology results using computer-based rules reduces turnaround time for many specimens while strategically targeting specimen review by technologists or pathologists.

Methods: Autoverification rules had been developed over a decade at an 800-bed tertiary/quaternary care academic medical center central laboratory serving both adult and pediatric populations. In the process of migrating to newer hematology instruments, we analyzed the rates of the autoverification rules/flags most commonly associated with triggering manual review. We were particularly interested in rules that on their own often led to manual review in the absence of other flags. Prior to the study, autoverification rates were 87.8% (out of 16,073 orders) for the complete blood count (CBC) ordered as a panel and 85.8% (out of 1,940 orders) for CBC components ordered individually (not as the panel).

Results: Detailed analysis of the rules/flags that triggered most frequently indicated that the immature granulocyte (IG) flag (an instrument parameter) and the rules that reflexed platelet counts by impedance method (PLT-I) to the fluorescent method (PLT-F) represented the two biggest opportunities to increase autoverification. The IG flag threshold had previously been validated at 2%, a setting that resulted in this flag alone preventing autoverification in 6.0% of all samples. After detailed chart review, the IG flag threshold was raised to 5%; this was also the instrument vendor's default recommendation for the newer hematology analyzers. The analysis also supported switching to PLT-F for all platelet analysis. After the changes in rules and laboratory practice, autoverification rates increased to 93.5% (out of 91,692 orders) for the CBC panel and 89.8% (out of 11,982 orders) for individual components.

Conclusions: Detailed analysis of autoverification of hematology testing at an academic medical center clinical laboratory that had used a set of autoverification rules for over a decade revealed opportunities to optimize the parameters. The data analysis was challenging and time-consuming, highlighting opportunities for improvement in software tools that would allow more rapid and routine evaluation of autoverification parameters.

Keywords: Algorithms, clinical laboratory information system, hematology, informatics, middleware


How to cite this article:
Starks RD, Merrill AE, Davis SR, Voss DR, Goldsmith PJ, Brown BS, Kulhavy J, Krasowski MD. Use of middleware data to dissect and optimize hematology autoverification. J Pathol Inform 2021;12:19





   Introduction


Autoverification, the use of computer-based rules employed in the laboratory information system (LIS) and/or middleware software to determine release of laboratory test results, is now a routine practice in core clinical laboratories.[1],[2],[3],[4] The use of well-designed autoverification rules improves both quality and efficiency.[1],[2],[4] Autoverification rules have been described in detail for clinical chemistry, blood gas, and coagulation analysis, often achieving autoverification rates of >90%.[5],[6],[7],[8],[9],[10],[11],[12]

In contrast, published studies regarding the application of autoverification in hematopathology are more limited.[13],[14] Zhao et al. describe the implementation of autoverification rules in hematology analysis in a multicenter setting with 76%–85% autoverification rates.[14] The necessity of manual review of peripheral blood smears precludes achieving the high autoverification rates seen in clinical chemistry. On the other hand, high rates of manual review may place a strain on limited laboratory resources and delay turnaround time without adding clinical value. In 2005, the International Consensus Group for Hematology (ICGH) issued guidelines to establish a uniform set of criteria for manual review of automated hematology testing.[15],[16],[17],[18] The proposed criteria for manual review include quantitative and qualitative parameters. Pratumvinit et al. optimized the ICGH guidelines to significantly reduce their review rates and increase autoverification.[18] The basic qualitative criteria used for manual review are well-established; however, the specific quantitative cutoffs to trigger manual review are largely set by the individual laboratory, with some recommendations for individual parameters provided by instrument vendors or published literature.[7],[15],[16],[19],[20],[21] Individual laboratories ideally should optimize their own set of rules to maintain both quality and efficiency within their own context of instrumentation, staffing, and patient population. However, data analysis on specific flags and their clinical impact can be quite challenging.

In this study, we evaluated autoverification rules at an 800-bed tertiary/quaternary academic medical center core clinical laboratory for a complete blood count (CBC) with white blood cell (WBC) count differential (Diff) and the “a la carte” ordering of individual CBC components. The laboratory had developed and validated autoverification protocols over a decade. Feedback from laboratory staff suggested that some rules were resulting in manual review without clear clinical benefit. We therefore sought opportunities for improvement by assessing the flags that most frequently held specimens for manual review. Our analysis also illustrates some of the data analytical challenges associated with evaluating hematology autoverification.


   Methods


Institutional details

The present study was performed at an approximately 800-bed tertiary/quaternary care academic medical center. Medical center services include pediatric and adult inpatient units, multiple intensive care units (ICUs), a level I trauma-capable emergency treatment center, and outpatient services. Pediatric and adult hematology/oncology services include both inpatient and outpatient populations. For the purpose of this study, patients 18 years and older were classified as adults, and patients under 18 years as pediatric. The data in the study were collected as part of a retrospective study approved by the university Institutional Review Board (protocol #201801719) covering the time period from January 1, 2018, to July 31, 2018. This study was carried out in accordance with the Code of Ethics of the World Medical Association (Declaration of Helsinki).

Data extraction and analysis

The electronic health record (EHR) throughout the retrospective study period was Epic (Epic Systems, Inc., Madison, Wisconsin, USA), which has been in place since May 2009. The middleware software was Data Innovations (DI) Instrument Manager (Data Innovations, Burlington, Vermont, USA) version 8.14; autoverification rules reside predominantly within the DI middleware.[5],[22] The laboratory information system is Epic Beaker Clinical Pathology.[23] Data were extracted from DI using Microsoft Open Database Connectivity (ODBC; Microsoft Corporation, Redmond, Washington, USA) and analyzed using Microsoft Excel. Instrument flag data were retrieved from the analyzer and required extensive data cleanup and manual review to assure integrity. One major challenge is that the error messages concatenate onto one another in a variety of combinations. Additional File 1 shows an example of the data, de-identified to remove fields related to accession number, dates/times, and personnel performing the testing. The flag fields are not transmitted to the laboratory information system, nor are the operator identification numbers that specify who reviewed, released, and rejected results; these fields would be needed to calculate percent autoverification in the laboratory information system if that were a goal.
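As an illustration of this cleanup step, the following is a minimal sketch of splitting concatenated flag messages into discrete flags for tallying. The data source name, table, column names, and semicolon delimiter are all assumptions for illustration (the actual DI schema differs), and the pandas/pyodbc workflow stands in for the ODBC-to-Excel process described above.

```python
import pandas as pd
import pyodbc

# Connect to the middleware database (DSN name is hypothetical).
conn = pyodbc.connect("DSN=di_instrument_manager")
# Table and column names below are illustrative, not the actual DI schema.
df = pd.read_sql("SELECT specimen_id, flag_text FROM result_log", conn)

# Instrument flags concatenate onto one another in many combinations, e.g.
# "IG Present;Left Shift?;PLT Abn Distribution" (delimiter is an assumption),
# so split each message into one row per discrete flag before tallying.
flags = (
    df.dropna(subset=["flag_text"])
      .assign(flag=lambda d: d["flag_text"].str.split(";"))
      .explode("flag")
      .assign(flag=lambda d: d["flag"].str.strip())
      .loc[lambda d: d["flag"] != ""]
)
print(flags["flag"].value_counts().head(10))  # most frequently triggered flags
```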

Instrument flags

In our laboratory, instrument flags are generated either by the automated hematology instrument manufacturer (Sysmex America) or by our own laboratory-validated rules built in middleware (summarized in [Table 1], which indicates the origin of each rule). These flags are either global (i.e., applied to every sample) or patient-specific (e.g., a patient known to have previous samples that required special handling or analysis). When a sample triggers a flag, several outcomes are possible: (1) the CBC component results are automatically released but the WBC Diff is held for manual review, (2) both the CBC and WBC Diff are held for manual review, or (3) all results are released to the LIS/EHR without manual review (assuming no other flags intervene). For example, the flag for the presence of immature granulocytes (IG) above a set percentage holds only the WBC Diff and releases the CBC, while the thrombocytopenia flag holds both the CBC and WBC Diff for manual review. IGs on manual review include metamyelocytes, myelocytes, and promyelocytes. Critical value flags, in the absence of other flags, do not preclude autoverification; notification of the clinical services for critical values is by telephone per protocol.
Table 1: Flags for manual review of complete blood cell count and white blood cell count differential tests

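To make the three outcomes concrete, here is a simplified sketch of the dispositioning logic; the flag names and their mapping to outcomes are illustrative examples, not the laboratory's full rule set.

```python
# Which flags hold which results (example mapping only).
HOLD_DIFF_ONLY = {"IG Present", "Left Shift?"}           # release CBC, hold WBC Diff
HOLD_CBC_AND_DIFF = {"Thrombocytopenia", "PLT Clumps?"}  # hold both for review

def disposition(flags: set[str]) -> str:
    """Return the review outcome for a sample's set of triggered flags."""
    if flags & HOLD_CBC_AND_DIFF:
        return "hold CBC + WBC Diff for manual review"
    if flags & HOLD_DIFF_ONLY:
        return "release CBC, hold WBC Diff for manual review"
    return "autoverify all results"

print(disposition({"IG Present"}))  # release CBC, hold WBC Diff for manual review
```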


Automated analyzers

Automated hematology testing was performed by a Sysmex XN-9000 hematology analyzer with a fully automated hematology slide preparation and staining system (Sysmex America, Inc., Lincolnshire, Illinois, USA). This instrument performs platelet (PLT) enumeration either by electrical impedance (PLT-I) or by a flow cytometric method using a fluorescent oxazine dye (PLT-F). Briefly, in the PLT-F method, the dye binds to platelet organelles, the cells are irradiated by a laser beam, and the corresponding forward-scattered light and side-scattered fluorescence are plotted.[24] The PLT-F method better distinguishes between platelets and fragmented red blood cells (RBCs).[24],[25],[26] During the timeframe of the present study, PLT-F used higher-cost reagents than PLT-I (approximately 50% more at the onset of the project).


   Results


Volume of testing and frequency of flags

Over a 6-month period, a total of 132,432 specimens had CBC with or without WBC Diff or an a la carte order for individual CBC components (PLT, hemoglobin, and hematocrit). Manual review by a technologist was performed on 10,314 of those specimens (7.8%). During this period, a total of 53,396 instrument flags were triggered (note that an individual specimen may trigger up to 15 flags), with 80.3% of samples not associated with any flag. Overall, 9.7% of specimens triggered a single flag, 5.0% triggered two flags, and <1% of samples triggered 5 or more flags [Figure 1]a.
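The per-sample flag distribution in Figure 1a can be tabulated directly from the cleaned middleware extract. The sketch below reuses the df and flags frames (and assumed column names) from the extraction example in the Methods.

```python
# Count discrete flags per specimen, including specimens that triggered none.
per_sample = flags.groupby("specimen_id").size()   # flags per flagged specimen
n_total = df["specimen_id"].nunique()              # all specimens in the period
dist = per_sample.value_counts().sort_index()      # specimens with 1, 2, 3... flags
dist.loc[0] = n_total - len(per_sample)            # specimens with no flags at all
print((dist.sort_index() / n_total * 100).round(1))  # % of samples by flag count
```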
Figure 1: The number of samples during a 6-month period without an associated flag (80.3%) or with one to four flags are shown in (a). The distribution of samples by patient care area for adult and pediatric patients is shown in (b). Heme/Onc: Hematology/Oncology, ICU: Intensive care unit, ED: Emergency department, OR: Operating room



Pediatric ICUs (including both neonatal and pediatric units) had the highest percentage of flagged samples, with one or more flags on 52.5% of specimens [Figure 1]b. Adult and pediatric non-ICU inpatient units had at least one flag on 29.6% and 28.4% of samples, respectively. Adult hematology/oncology services, which include both an inpatient bone marrow transplant unit and outpatient clinics, had one or more flags on 28.8% of samples. The rate of flagged samples was much lower in outpatient (excluding hematology/oncology), emergency department, and operating room locations, at approximately 10% or less in both adult and pediatric populations.

Frequently triggered flags

To analyze the patterns of flags that frequently triggered manual review for both WBC and PLT parameters, we began by reviewing WBC parameters. This analysis was limited to a 30-day period due to the extensive data cleanup and manual review required for the middleware and instrument data. We looked at two outcomes: (1) flags that release the CBC while holding the WBC Diff for manual review and (2) flags that hold both the CBC and WBC Diff for manual review. In the first category, the IG present flag was the most frequently triggered, at 9.6% during the 30-day review period (1,980 flags in 20,576 samples) [Figure 2]a. The next most frequently triggered flag was the WBC abnormal scattergram at 5.3% (1,087 flags), followed by the abnormal lymphocytes or blasts flag at 4.7% (962 flags) [Figure 2]a. These three most frequently triggered flags are instrument flags, with the ≥2% IG cutoff specified by the laboratory (discussed in more detail below).
Figure 2: The most frequently triggered flags that resulted in manual review of the WBC differential while automatically releasing the CBC during a 30-day period are shown in (a), with IG Present as the only flag triggered in 9.6% of samples. In (b), the six most frequently triggered flags that hold both the CBC and WBC differential for manual review are shown, with the most frequently triggered being Thrombocytopenia, Rerun PLT-F (8.0%). IG: Immature granulocytes, Abn WBCs: Abnormal white blood cells, Abn Lymphs/Blasts: Abnormal lymphocytes or blasts, Lymphs: Lymphocytes, MCV: Mean corpuscular volume, PLT: Platelet, HGB: Hemoglobin



For platelets, the PLT-I method was the main methodology used to generate a platelet count, with PLT-F used in certain circumstances. Samples were reflexed to PLT-F based on the following flags: (1) PLT-I <70 k/mm3 (“thrombocytopenia”), (2) a 50% change in either direction within the last 7 days (“delta failure”), (3) pediatric inpatients and pediatric hematology/oncology clinic patients (due to a known higher rate of red blood cell fragmentation and other specimen challenges), and/or (4) the platelet abnormal distribution flag on the hematology analyzer. Among 20,576 samples and 1,637 flags during the review period, PLT-I <70 k/mm3 accounted for 8.0% of flags holding both the CBC and WBC Diff while the sample was re-run by PLT-F [Figure 2]b. The next most frequently triggered flags holding the CBC and WBC Diff for manual review were PLT clumps (2.2%, 460 flags) and PLT delta failure (1.7%, 349 flags) [Figure 2]b.
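A condensed sketch of these four reflex criteria follows; the function signature is an assumption for illustration, and the delta check is simplified to a single prior result from within the last 7 days.

```python
def needs_plt_f(plt_i: float, prior_plt: float | None,
                pediatric_inpatient_or_heme_onc: bool,
                plt_abn_distribution_flag: bool) -> bool:
    """Return True if a sample should be reflexed from PLT-I to PLT-F."""
    if plt_i < 70:                                   # (1) thrombocytopenia, k/mm3
        return True
    if prior_plt is not None and abs(plt_i - prior_plt) / prior_plt >= 0.50:
        return True                                  # (2) 50% delta in either direction
    if pediatric_inpatient_or_heme_onc:              # (3) RBC fragmentation risk
        return True
    return plt_abn_distribution_flag                 # (4) instrument distribution flag

print(needs_plt_f(65, None, False, False))  # True: PLT-I < 70 k/mm3
```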

Most frequently triggered single flag

Next, we examined the samples during a 6-month period that had only a single flag. By far, the IG flag (intended to detect metamyelocytes, myelocytes, and promyelocytes) was the most frequently triggered single flag, representing 6.0% of flags (3,200 samples) [Figure 3]a. The left shift and abnormal lymphocyte/blasts flags each represented 0.80% (425 flags each), while 0.37% of single flags (199 flags) were due to the WBC abnormal scattergram [Figure 3]a. All four flags are generated by instrument rules. The left shift flag primarily detects bands and metamyelocytes. In 1.1% of samples, the IG and left shift flags occurred together and were the only flags present (608 flags) [Figure 3]a.
Figure 3: The four most frequently triggered single flags for manual review are shown in (a), including the potential overlap of parameters between IG Present and Left Shift. Shown in (b) is the difference in manual review rates when the IG cutoff is changed from ≥2% (804 samples) to ≥5% (234 samples). IG: Immature granulocytes, Abn Lymphs or Blasts: Abnormal lymphocytes or blasts, WBC Abn Scatter: White blood cell abnormal scattergram



Optimization of immature granulocyte flag

The IG flag data prompted us to perform a more detailed review of the clinical utility of this flag. The IG flag had been set at ≥2% based on a validation study performed on an earlier generation of hematology analyzer used in the laboratory. The instrument vendor recommended a default trigger for the IG flag at 5%, and a range of 3–5% IG has been reported in the literature.[27],[28],[29] To assess the effect on our patient population of changing the IG parameter to ≥5%, we performed a detailed chart review of CBC samples that had only the IG rule triggered.
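In data terms, this re-analysis amounts to re-filtering the historical single-flag IG samples at candidate cutoffs. A minimal sketch, using stand-in data and an assumed column name, is:

```python
import pandas as pd

# Stand-in for the 804 reviewed single-flag IG samples; in practice this frame
# would come from the middleware extract described in the Methods.
ig_only = pd.DataFrame({"ig_pct": [2.1, 3.4, 6.2, 2.0, 5.0]})

for cutoff in (2.0, 3.0, 5.0):
    n = (ig_only["ig_pct"] >= cutoff).sum()
    print(f"IG >= {cutoff}%: {n} of {len(ig_only)} samples would still require review")
```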

In a 30-day period, 804 samples underwent manual review due solely to the IG flag with the rule set to trigger at ≥2%; of those reviewed, only 29.1% (234 samples) had an IG of ≥5% [Figure 3]b. Of the 570 samples with ≥2% but <5% IG, most came from inpatient units, with a breakdown of 412 inpatients (72.3%), 145 outpatients (25.4%), and 13 emergency department patients (2.3%). Within the 570 samples, manual chart review identified promyelocytes (0.9–2.0%) in 4.7% of samples, from 27 unique patients, and blasts (0.9%) in one sample. All of these samples were from patients on inpatient or adult hematology/oncology services and were follow-up specimens from patients already worked up and being followed for hematologic issues. Fourteen of the 27 patients with promyelocytes had a malignancy, six of whom were simultaneously receiving chemotherapy. Seventeen of the 27 patients with promyelocytes were receiving daily CBCs during an inpatient encounter. The data were then analyzed to see how the instrument IG estimate compared with the identification of metamyelocytes, myelocytes, and promyelocytes by a technologist. Manual review of the 570 samples yielded a lower %IG in 91.1% of samples and a higher %IG in only 8.6%. Thus, the IG flag appears to overestimate %IG relative to manual slide review.

Extrapolating from the 1 month of data, samples with ≥2% but <5% IG comprise an estimated 6,840 samples per year. Given that chart review of this subset did not identify any case in which manual review led to the identification of promyelocytes or blasts not already identified in previous laboratory studies, we made the decision to raise the IG threshold to 5% to match the manufacturer recommendation. Thereafter, the IG parameter, if present as the only flag, triggered manual review only at 5% or greater. The change in this threshold did not impact the other flags.

Decreased review and re-running of complete blood counts with PLT-F

Based on the data and support from the published literature, the laboratory made the decision to switch to the PLT-F method for all platelet counts instead of the PLT-I method. Similar to the change in IG threshold, the switch to the PLT-F method had the highest impact on inpatient samples, with a breakdown of 59.2% inpatient (15.1% of which were from ICUs), 31.0% outpatient, and 9.8% emergency department samples during the period of the study. The biggest impact on autoverification resulted from no longer needing to re-run samples with PLT-I <70 k/mm3 by PLT-F.

Overall impact of changes

In combination with the above-mentioned change in IG threshold, autoverification rates increased. [Figure 4] compares the autoverification rates before and after the changes in PLT-F and IG threshold. Autoverification increased by 5.7 percentage points for the CBC panel and 4.0 percentage points for individual CBC components. This translates to an estimated absolute reduction in manual review of 13,266 CBC panels and 1,248 individual CBC components per year. The reduction has a substantial impact on turnaround time for individual samples, since the average turnaround time for a manual differential is about 90 min depending on staffing levels and competing workload. The time actually needed to perform a manual differential depends on the complexity of pathologic findings and technologist experience but is typically 5–15 min. Using 10 min as an approximate average time for review, the reduction translates to nearly a full-time equivalent position (approximately 2,400 h/year, or nearly 300 8-h shifts).
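The worked arithmetic behind the staffing estimate, using the figures above and the stated 10-min approximation:

```python
# Annual manual reviews avoided, from the reductions reported in the text.
reviews_avoided = 13_266 + 1_248          # CBC panels + individual components per year
hours_saved = reviews_avoided * 10 / 60   # at ~10 min per review: ~2,419 h/year
shifts_saved = hours_saved / 8            # ~302 eight-hour shifts
print(round(hours_saved), round(shifts_saved))  # 2419 302
```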
Figure 4: Comparison of platelet-related flags before and after the switch to universal use of the fluorescent platelet method (PLT-F) is shown in (a), and autoverification rates for the complete blood count and individually ordered complete blood count components in (b)




   Discussion


There is a growing body of literature related to the development and optimization of autoverification rules in hematopathology.[13],[14] This complements investigations of autoverification for clinical chemistry, blood gas, and coagulation analysis.[5],[6],[7],[8],[9],[10],[11],[12] Hematopathology presents particular challenges for autoverification in that rules are intended for a range of purposes, including review of abnormal cells that might be misidentified or missed by instruments (e.g., blasts, Sézary cells), detection of phenomena that can distort analysis (e.g., RBC agglutination and platelet clumping), and detection of unusual changes in quantitative parameters (e.g., a dramatic decrease or increase in hemoglobin/hematocrit).[13],[14],[30] Some flags are associated with phenomena that might represent either a pre-analytical sample issue or a pathological process in the patient.[14],[18],[31],[32],[33],[34] A primary challenge for autoverification in hematopathology is to balance efficiency and turnaround time while reserving manual review for samples where the review is likely to provide clinical benefit.[4],[14],[31],[33],[35] This is especially challenging for laboratories that analyze a high percentage of samples from patients with hematologic abnormalities, especially those who undergo repeated laboratory analysis over time.

In the present study, we evaluated autoverification rules that had been developed over years in our core clinical laboratory. In this process, we were confronted with rules that had been adopted per manufacturer recommendation (especially instrument flags) and rules that had been developed and validated over years into an autoverification rule set. We were particularly looking for rules and thresholds that might represent “low-hanging fruit”: those generating a high frequency of flags but providing low clinical value.

A central challenge identified in our study is the difficulty in extracting and analyzing specific data for autoverification. Our laboratory uses middleware software for most of the autoverification rules. Data retrieval required running a third-party application every month to capture middleware data prior to off-site archival (where the extraction would be more difficult). As described in the methods, the data required extensive cleanup and formatting to be able to drill down to specific flags for patient specimens.

Our analysis facilitated operational improvements. The two main changes implemented based on the autoverification analysis were to increase the IG flag cutoff requiring manual review from 2% to 5% and to switch to the PLT-F method for all PLT counts. Ironically, the default manufacturer recommendation of 5% for the IG flag was the choice that minimized unnecessary manual intervention, as we did not identify any clear clinical advantage in the lower threshold that had been set based on experience with an earlier generation of hematology analyzer. The autoverification analysis related to platelets demonstrated the improved efficiency and lower rerun rates with the PLT-F method, which better distinguishes between platelets and fragmented RBCs.[24],[25],[36],[37],[38] Given that our laboratory receives many pediatric samples, including from hematology/oncology patients, use of PLT-F minimized repeat analysis of specimens that often contain low sample volumes. The rule changes reported in the present study remain in place, and we are not aware of any clinical issues arising from them.

A future direction would be the development of software that more easily enables analysis of autoverification rates and the impact of specific rules and flags, whether through commercial vendors and/or home-grown software development. A data warehouse is one possibility. In the present study, such a warehouse would need to access the DI database, or the DI database would need to be regularly duplicated to a different server. To allow reliable evaluation of autoverification, the data warehouse would ideally have discrete data for specimen comments/flags and operator identification (which can indicate manual versus automated verification). One practical challenge would be to avoid causing latency issues on the production server. Given limited resources and competing informatics projects, we have not yet pursued such a project. For laboratories seeking to further increase autoverification rates, even identifying one or two rules associated with a high rate of triggering manual review may allow a significant increase in autoverification while maintaining high-quality patient care.

Acknowledgments

The authors would like to thank staff within the University of Iowa Hospitals and Clinics Department of Pathology core laboratory and the University of Iowa Health Care Information Systems who helped provide the support for autoverification, middleware, and laboratory information system issues over the years.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.



 
   References

1. Crolla LJ, Westgard JO. Evaluation of rule-based autoverification protocols. Clin Leadersh Manag Rev 2003;17:268-72.
2. Jones JB. A strategic informatics approach to autoverification. Clin Lab Med 2013;33:161-81.
3. Pearlman ES, Bilello L, Stauffer J, Kamarinos A, Miele R, Wolfert MS. Implications of autoverification for the clinical laboratory. Clin Leadersh Manag Rev 2002;16:237-9.
4. Torke N, Boral L, Nguyen T, Perri A, Chakrin A. Process improvement and operational efficiency through test result autoverification. Clin Chem 2005;51:2406-8.
5. Krasowski MD, Davis SR, Drees D, Morris C, Kulhavy J, Crone C, et al. Autoverification in a core clinical chemistry laboratory at an academic medical center. J Pathol Inform 2014;5:13.
6. Sediq AM, Abdel-Azeez AG. Designing an autoverification system in Zagazig University hospitals laboratories: Preliminary evaluation on thyroid function profile. Ann Saudi Med 2014;34:427-32.
7. Onelöv L, Gustafsson E, Grönlund E, Andersson H, Hellberg G, Järnberg I, et al. Autoverification of routine coagulation assays in a multi-center laboratory. Scand J Clin Lab Invest 2016;76:500-2.
8. Randell EW, Short G, Lee N, Beresford A, Spencer M, Kennell M, et al. Strategy for 90% autoverification of clinical chemistry and immunoassay test results using six sigma process improvement. Data Brief 2018;18:1740-9.
9. Randell EW, Short G, Lee N, Beresford A, Spencer M, Kennell M, et al. Autoverification process improvement by Six Sigma approach: Clinical chemistry immunoassay. Clin Biochem 2018;55:42-8.
10. Wu J, Pan M, Ouyang H, Yang Z, Zhang Q, Cai Y. Establishing and evaluating autoverification rules with intelligent guidelines for arterial blood gas analysis in a clinical laboratory. SLAS Technol 2018;23:631-40.
11. Randell EW, Yenice S, Khine Wamono AA, Orth M. Autoverification of test results in the core clinical laboratory. Clin Biochem 2019;73:11-25.
12. Wang Z, Peng C, Kang H, Fan X, Mu R, Zhou L, et al. Design and evaluation of a LIS-based autoverification system for coagulation assays in a core clinical laboratory. BMC Med Inform Decis Mak 2019;19:123.
13. Fu Q, Ye C, Han B, Zhan X, Chen K, Huang F, et al. Designing and validating autoverification rules for hematology analysis in Sysmex XN-9000 hematology system. Clin Lab 2020;66:549-56.
14. Zhao X, Wang XF, Wang JB, Lu XJ, Zhao YW, Li CB, et al. Multicenter study of autoverification methods of hematology analysis. J Biol Regul Homeost Agents 2016;30:571-7.
15. Buoro S, Mecca T, Seghezzi M, Manenti B, Azzarà G, Ottomano C, et al. Validation rules for blood smear revision after automated hematological testing using Mindray CAL-8000. J Clin Lab Anal 2017;31:e22067.
16. Froom P, Havis R, Barak M. The rate of manual peripheral blood smear reviews in outpatients. Clin Chem Lab Med 2009;47:1401-5.
17. Palmer L, Briggs C, McFadden S, Zini G, Burthem J, Rozenberg G, et al. ICSH recommendations for the standardization of nomenclature and grading of peripheral blood cell morphological features. Int J Lab Hematol 2015;37:287-303.
18. Pratumvinit B, Wongkrajang P, Reesukumal K, Klinbua C, Niamjoy P. Validation and optimization of criteria for manual smear review following automated blood cell analysis in a large university hospital. Arch Pathol Lab Med 2013;137:408-14.
19. Barnes PW. Comparison of performance characteristics between first- and third-generation hematology systems. Lab Hematol 2005;11:298-301.
20. Barth D. Approach to peripheral blood film assessment for pathologists. Semin Diagn Pathol 2012;29:31-48.
21. Rabizadeh E, Pickholtz I, Barak M, Froom P. Historical data decrease complete blood count reflex blood smear review rates without missing patients with acute leukaemia. J Clin Pathol 2013;66:692-4.
22. Grieme CV, Voss DR, Davis SR, Krasowski MD. Impact of endogenous and exogenous interferences on clinical chemistry parameters measured on blood gas analyzers. Clin Lab 2017;63:561-8.
23. Krasowski MD, Wilford JD, Howard W, Dane SK, Davis SR, Karandikar NJ, et al. Implementation of Epic Beaker Clinical Pathology at an academic medical center. J Pathol Inform 2016;7:7.
24. Tanaka Y, Tanaka Y, Gondo K, Maruki Y, Kondo T, Asai S, et al. Performance evaluation of platelet counting by novel fluorescent dye staining in the XN-series automated hematology analyzers. J Clin Lab Anal 2014;28:341-8.
25. Schoorl M, Schoorl M, Oomes J, van Pelt J. New fluorescent method (PLT-F) on Sysmex XN2000 hematology analyzer achieved higher accuracy in low platelet counting. Am J Clin Pathol 2013;140:495-9.
26. Wada A, Takagi Y, Kono M, Morikawa T. Accuracy of a new platelet count system (PLT-F) depends on the staining property of its reagents. PLoS One 2015;10:e0141311.
27. Eilertsen H, Hagve TA. Do the flags related to immature granulocytes reported by the Sysmex XE-5000 warrant a microscopic slide review? Am J Clin Pathol 2014;142:553-60.
28. Fernandes B, Hamaguchi Y. Automated enumeration of immature granulocytes. Am J Clin Pathol 2007;128:454-63.
29. Maenhout TM, Marcelis L. Immature granulocyte count in peripheral blood by the Sysmex haematology XN series compared to microscopic differentiation. J Clin Pathol 2014;67:648-50.
30. Lantis KL, Harris RJ, Davis G, Renner N, Finn WG. Elimination of instrument-driven reflex manual differential leukocyte counts: Optimization of manual blood smear review criteria in a high-volume automated hematology laboratory. Am J Clin Pathol 2003;119:656-62.
31. Comar SR, Malvezzi M, Pasquini R. Evaluation of criteria of manual blood smear review following automated complete blood counts in a large university hospital. Rev Bras Hematol Hemoter 2017;39:306-17.
32. Ike SO, Nubila T, Ukaejiofo EO, Nubila IN, Shu EN, Ezema I. Comparison of haematological parameters determined by the Sysmex KX-2IN automated haematology analyzer and the manual counts. BMC Clin Pathol 2010;10:3.
33. Lou AH, Elnenaei MO, Sadek I, Thompson S, Crocker BD, Nassar BA. Multiple pre- and post-analytical lean approaches to the improvement of the laboratory turnaround time in a large core laboratory. Clin Biochem 2017;50:864-9.
34. Sandhaus LM, Wald DN, Sauder KJ, Steele EL, Meyerson HJ. Measuring the clinical impact of pathologist reviews of blood and body fluid smears. Arch Pathol Lab Med 2007;131:468-72.
35. Novis DA, Walsh M, Wilkinson D, St Louis M, Ben-Ezra J. Laboratory productivity and the rate of manual peripheral blood smear review: A College of American Pathologists Q-Probes study of 95,141 complete blood count determinations performed in 263 institutions. Arch Pathol Lab Med 2006;130:596-601.
36. Schapkaitz E, Raburabu S. Performance evaluation of the new measurement channels on the automated Sysmex XN-9000 hematology analyzer. Clin Biochem 2018;53:132-8.
37. Tantanate C, Khowawisetsut L, Pattanapanyasat K. Performance evaluation of automated impedance and optical fluorescence platelet counts compared with international reference method in patients with thalassemia. Arch Pathol Lab Med 2017;141:830-6.
38. Tantanate C, Khowawisetsut L, Sukapirom K, Pattanapanyasat K. Analytical performance of automated platelet counts and impact on platelet transfusion guidance in patients with acute leukemia. Scand J Clin Lab Invest 2019;79:160-6.

