Journal of Pathology Informatics

COMMENTARY
Year : 2017  |  Volume : 8  |  Issue : 1  |  Page : 14

Making pathology diagnoses with glass or digital slides: Which modality is inferior?


Jonhan Ho1, Liron Pantanowitz2
1 Department of Dermatology, Division of Dermatopathology, University of Pittsburgh, Pittsburgh, PA, USA
2 Department of Pathology, University of Pittsburgh, Pittsburgh, PA, USA

Correspondence Address:
Jonhan Ho
Medical Arts Building, 3708 Fifth Avenue, Suite 500.68, Pittsburgh, PA 15213
USA




How to cite this article:
Ho J, Pantanowitz L. Making pathology diagnoses with glass or digital slides: Which modality is inferior? J Pathol Inform 2017;8:14.


How to cite this URL:
Ho J, Pantanowitz L. Making pathology diagnoses with glass or digital slides: Which modality is inferior? J Pathol Inform [serial online] 2017;8:14.
Available from: https://www.jpathinformatics.org/text.asp?2017/8/1/14/204196


Full Text



Shah et al. recently published an article in the Journal of the American Academy of Dermatology entitled “Validation of diagnostic accuracy with whole slide imaging (WSI) compared with glass slide review in dermatopathology,” which used the College of American Pathologists (CAP) guidelines.[1] A pathologist's experience when making a diagnosis using glass slides is different from the experience of using digital pathology. Much like comparing apples and oranges, there are fundamental differences between these two modalities. One is fully analog, the other fully digital. One field of view (the light microscope) is round, the other (the monitor) rectangular. One uses fingertips to physically push a glass slide around a microscope stage; the other uses a mouse or another input device to navigate an image. Yet, both are tools that access the same foundational knowledge residing in a pathologist's memory and skill set for making a diagnosis. Since both tools can be used for the same purpose, we, as the tool users, must decide for ourselves whether the newer digital tool is at least as good as, or perhaps even better than, the venerable old microscope, a tool that has been refined for our use over the last several hundred years. Given the fundamental differences between rendering diagnoses with glass versus digital slides, however, how do we compare the two modalities in a quantifiable way? This topic has been the source of lively discussion for the last decade within the digital pathology industry, including at the U.S. Food and Drug Administration, which regulates medical devices. The vast majority of conclusions derived from validation studies in the literature indicate that whole slide images are at least equivalent to glass slides.

The article by Shah et al. specifically follows the CAP guidelines on validating digital pathology for making primary diagnoses, a consensus document authored by experts in the field who sought to guide pathologists in self-validating digital pathology systems in their own laboratories.[2] It is the third article so far to endorse the 2013 CAP guidelines, indicating that these guidelines are gaining traction in promoting the adoption of digital pathology.[3],[4] In their study, Shah et al. focused on a specific intended use, H&E-related dermatopathology, and measured intraobserver concordance. Three dermatopathologists examined 181 cases using both traditional light microscopy and whole slide images, and glass-WSI intraobserver concordance (86.9%, 95% confidence interval [CI] 83.7–89.6) did not differ in any statistically meaningful way from glass-glass intraobserver concordance (90.3%, 95% CI 86.7–93.1). This supports the premise that WSI for primary diagnosis is not inferior to traditional microscopy, corroborating the findings of other similar studies.[5],[6]
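As an aside for readers curious how such concordance estimates behave numerically, the short Python sketch below computes 95% Wilson score confidence intervals for two concordance proportions. The counts are illustrative assumptions only (three observers reading 181 cases, roughly 543 paired reads, with 472 and 490 concordant pairs chosen to approximate the reported percentages); they are not taken from the raw data of Shah et al., and this simple interval comparison is not the paired statistical analysis the authors performed.

from math import sqrt

def wilson_ci(concordant, total, z=1.96):
    """95% Wilson score confidence interval for a concordance proportion."""
    p = concordant / total
    denom = 1 + z ** 2 / total
    center = (p + z ** 2 / (2 * total)) / denom
    half_width = z * sqrt(p * (1 - p) / total + z ** 2 / (4 * total ** 2)) / denom
    return center - half_width, center + half_width

# Illustrative counts only: 3 observers x 181 cases gives roughly 543 paired reads.
readings = {
    "glass-WSI": (472, 543),    # assumed concordant pairs, ~86.9%
    "glass-glass": (490, 543),  # assumed concordant pairs, ~90.2%
}

for label, (concordant, total) in readings.items():
    low, high = wilson_ci(concordant, total)
    print(f"{label}: {concordant / total:.1%} (95% CI {low:.1%}-{high:.1%})")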

Interestingly, in this dermatopathology validation study, the authors report some notable differences in concordance between diagnostic subgroups. They examined concordance in melanocytic lesions, nonmelanocytic proliferations, and inflammatory cases. Prior studies have found that inflammatory cells, particularly eosinophils and neutrophils, can be difficult to identify using WSI. In contrast, Shah et al. demonstrated the highest glass-WSI intraobserver concordance in their inflammatory skin subgroup (96.1%, 95% CI 91.8–98.3). Conversely, their melanocytic lesions had a lower glass-WSI intraobserver concordance of 75.6% (95% CI 68.5–81.5). The lower concordance for melanocytic lesions could be explained by the fact that the dermatopathologists in this study relied only on H&E-stained slides and did not have any immunohistochemically stained slides to review, which are typically used for more difficult lesions. Three out of five major discordances in the melanocytic subgroup had WSI reads that were less malignant than their counterpart traditional microscopy diagnoses. The authors postulate that difficulty evaluating the cytology of melanocytes with WSI could have contributed to this discordance; however, two out of five of their cases showed the opposite. Nonmelanocytic neoplasms fell in between, with a glass-WSI intraobserver concordance of 89.1% (95% CI 83.4–93.0). Most importantly, there was no statistical difference between glass-WSI intraobserver concordance and glass-glass intraobserver concordance, either overall or in any of the diagnostic subgroups. Moreover, the authors included minor disagreements along with major disagreements in their calculations of discordance. When minor discordances were removed (i.e., a binary system), overall glass-WSI intraobserver concordance rose to 97.4% (CI 95.6–98.5), and glass-glass intraobserver concordance rose to 98.6% (CI 96.6–99.5).
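To make concrete how the definition of concordance shifts the headline figure, the minimal sketch below recomputes a concordance rate under a three-tier definition (minor and major disagreements both count as discordant) versus a binary definition (only major disagreements count). The per-pair tallies are invented for illustration, chosen so the two figures land near the reported glass-WSI values of 86.9% and 97.4%; they are not the study's raw data.

from collections import Counter

# Hypothetical per-pair classifications; the counts are invented for illustration
# and merely approximate the glass-WSI percentages reported by Shah et al.
pairs = ["concordant"] * 472 + ["minor"] * 57 + ["major"] * 14  # 543 paired reads

tallies = Counter(pairs)
total = sum(tallies.values())

# Three-tier definition: minor and major disagreements both count against concordance.
strict_concordance = tallies["concordant"] / total

# Binary definition: only major disagreements count as discordant.
binary_concordance = (tallies["concordant"] + tallies["minor"]) / total

print(f"three-tier concordance: {strict_concordance:.1%}")  # ~86.9%
print(f"binary concordance: {binary_concordance:.1%}")      # ~97.4%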

The Shah et al. study also highlighted two important exclusions from the CAP guidelines that have been the source of lively discussion: (1) a definition of concordance, and (2) acceptance criteria for equivalence as a percentage of concordance. The CAP gave individual pathology laboratories the freedom to define these items as applicable to their own environments and uses. In omitting these items from the guideline, the CAP recognized that one size may not fit all and that every laboratory will face its own set of demands, much like choosing between apples and oranges. Nonetheless, in this study, as in the overwhelming majority of the related literature, the conclusion was that digital pathology systems are equivalent to glass slides for rendering diagnoses.

References

1Shah KK, Lehman JS, Gibson LE, Lohse CM, Comfere NI, Wieland CN. Validation of diagnostic accuracy with whole-slide imaging compared with glass slide review in dermatopathology. J Am Acad Dermatol 2016;75:1229-37.
2Pantanowitz L, Sinard JH, Henricks WH, Fatheree LA, Carter AB, Contis L, et al. Validating whole slide imaging for diagnostic purposes in pathology: Guideline from the College of American Pathologists Pathology and Laboratory Quality Center. Arch Pathol Lab Med 2013;137:1710-22.
3Thrall MJ, Wimmer JL, Schwartz MR. Validation of multiple whole slide imaging scanners based on the guideline from the College of American Pathologists Pathology and Laboratory Quality Center. Arch Pathol Lab Med 2015;139:656-64.
4Arnold MA, Chenever E, Baker PB, Boué DR, Fung B, Hammond S, et al. The College of American Pathologists guidelines for whole slide imaging validation are feasible for pediatric pathology: A pediatric pathology practice experience. Pediatr Dev Pathol 2015;18:109-16.
5Bauer TW, Schoenfield L, Slaw RJ, Yerian L, Sun Z, Henricks WH. Validation of whole slide imaging for primary diagnosis in surgical pathology. Arch Pathol Lab Med 2013;137:518-24.
6Snead DR, Tsang YW, Meskiri A, Kimani PK, Crossman R, Rajpoot NM, et al. Validation of digital pathology imaging for primary histopathological diagnosis. Histopathology 2016;68:1063-72.