Journal of Pathology Informatics

ORIGINAL ARTICLE
J Pathol Inform 2022,  13:8

Measuring digital pathology throughput and tissue dropouts


1 Department of Pathology, Brigham and Women's Hospital; Department of Pathology, Harvard Medical School, Boston, MA, USA
2 Department of Pathology, Brigham and Women's Hospital, Boston, MA, USA

Date of Submission: 19-Jan-2021
Date of Decision: 05-Jun-2021
Date of Acceptance: 20-Jun-2021
Date of Web Publication: 08-Jan-2022

Correspondence Address:
Prof. George L Mutter
Department of Pathology, Brigham and Women's Hospital, 75 Francis Street, Boston, MA 02115
USA

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/jpi.jpi_5_21

Abstract


Background: Digital pathology operations that precede viewing by a pathologist have a substantial impact on costs and fidelity of the digital image. Scan time and file size determine throughput and storage costs, whereas tissue omission during digital capture (“dropouts”) compromises downstream interpretation. We compared how these variables differ across scanners.

Methods: A 212-slide set randomly selected from a gynecologic-gestational pathology practice was used to benchmark scan time, file size, and image completeness. Workflows included the Hamamatsu S210 scanner (operated under default and optimized profiles) and the Leica GT450. Digital tissue dropouts were detected by the aligned overlay of macroscopic glass slide camera images (reference) with the whole-slide images created by the slide scanners.

Results: File size and scan time were highly correlated within each platform. Differences in GT450, default S210, and optimized S210 performance were seen in average file size (1.4 vs. 2.5 vs. 3.4 GB) and scan time (93 vs. 376 vs. 721 s). Dropouts were seen in 29.5% (186/631) of successful scans overall: from a low of 13.7% (29/212) for the optimized S210 profile, followed by 34.6% (73/211) for the GT450 and 40.4% (84/208) for the default S210 profile. Small dislodged fragments, “shards,” were dropped in 22.2% (140/631) of slides, followed by tissue marginalized at the glass slide edges, 6.2% (39/631). “Unique dropouts,” those for which no equivalent appeared elsewhere in the scan, occurred in only three slides. Of these, 67% (2/3) were “floaters,” or contaminants from other cases.

Conclusions: Scanning speed and resultant file size vary greatly by scanner type, scanner operation settings, and clinical specimen mix (tissue type, tissue area). Digital image fidelity, as measured by tissue dropout frequency and dropout type, also varies according to tissue type and scanner. Dropped tissues very rarely (1/631) represent actual specimen tissues that are not represented elsewhere in the scan, so in most cases dropouts cannot alter the diagnosis. Digital pathology platforms vary in their output efficiency and image fidelity to the glass original and should be matched to the intended application.

Keywords: Digital pathology, dropouts, image analysis, operations, scanner, whole-slide imaging


How to cite this article:
Mutter GL, Milstone DS, Hwang DH, Siegmund S, Bruce A. Measuring digital pathology throughput and tissue dropouts. J Pathol Inform 2022;13:8





Introduction


Digital pathology imaging applications in research, education, and clinical diagnosis are fulfilled by a diversity of “whole slide” digital pathology scanners. In most cases, the “whole slide” designation is a misnomer, as scanners typically capture high-resolution images only in slide areas where tissue is detected using a vendor-specific algorithm. This is by design, as the omission of uncaptured areas reduces scanning time and file size, at the cost of unintended exclusion of some tissues, known as “dropouts.” Key parameters to be considered include capture conditions (lighting, focus, color space), image file characteristics, throughput, cost of file storage, and ability to integrate with a preexisting or future ecosystem of networked hardware and software from scanner to end viewer.

Electronic delivery of digital histology images is an enabling technology for remote diagnosis that has recently seen expanded use during the COVID-19 pandemic.[1],[2],[3],[4] Prepandemic clinical use has been reported in some centralized hospital systems,[5],[6] in subspecialty practices,[7] and in decentralized health-care systems lacking pathologists at all delivery sites.[8] Regulatory guidance for these activities in the United States is in its infancy, having for many years focused on Food and Drug Administration-mandated engineering controls of medical devices (scanners), and on vendor-designed and executed diagnostic impact studies embedded within regulatory, rather than public peer-reviewed, documents. Pathologist end users have directly evaluated the endpoints with which they are most vested and familiar: interobserver concordance of subjective diagnoses rendered by a human pathologist.[9],[10],[11],[12] Results comparing subjective interpretation of one platform (glass) with another (digital) are an important but imprecise endpoint, as the baseline extent of diagnostic variation using glass alone can be quite high.[13],[14],[15],[16],[17] Key aspects of digital file production, especially the manner in which tissue characteristics interact with scanner type and settings to determine image quality, are practice-contextual elements in the digital pathology pipeline. Vendor estimates of operational throughput (scan time), digital storage costs (file size), and diagnostic errors (over- or underdiagnosis) may not extrapolate to a particular user's specific case mix or diagnostic goals.

Scanner dropouts are defined as areas of tissue on the glass slide that are omitted in the digital image and are often replaced with background-matched or white/gray space. These are perhaps the most challenging element of digital imaging to measure, as source data created for regulatory certification are often not in the public domain. Characterization of the extent and character of dropouts is not part of most glass-digital diagnostic concordance studies, and when performed it is usually compromised by the limitations of a subjective human observer. For example, the replacement of skipped areas with background-matched space can lead the pathologist observer to underestimate the amount of omitted slide. Our department has begun to address some of these practical issues as due diligence preparation for a new digital pathology effort. We created representative test slide sets from actual patient material and benchmarked scanner operations and image quality using objective metrics such as image analysis. Pointedly, our effort was directed at those digital operations that take place before the pathologist ever looks at an image.

Here we examine elements of digital pathology operations in a specialized clinical practice that have the potential to impact throughput efficiency, storage costs, and fidelity of the resultant digital image. We passed a representative, randomly selected 212-slide test set of gynecologic and perinatal pathology slides through different whole-slide scanners and differing scan profile configurations to generate digital whole-slide images. Use of a single slide set across all conditions permitted controlled comparison between scanners. Scanning time and digital file size were collected as measures of scanning throughput and storage requirement (costs), respectively. We then used image analysis to overlay reference glass slides with their respective scanned file images. This allowed high-sensitivity identification of “dropouts,” areas of tissue present in the source glass slide that were overlooked in the scanning process and thus absent in the scanned whole-slide images. Stratification of scanned slides by tissue type and scale (small biopsies, large resection specimens) allowed us to determine that file characteristics and dropout rates are co-dependent on scanner type, scanning software settings, and tissue characteristics.


Methods


Series compilation and slide preparation

Histological sections of 212 surgical pathology specimens were randomly selected from the Women's and Perinatal Pathology service at Brigham and Women's Hospital (Boston, MA, USA) as follows. Pathology reports were retrieved for 4077 specimens (“cases”) received as wet tissue (excluding extramural slide consults) during a 3-month interval from February to April 2018. Each specimen was assigned a random number, and the number of component blocks was recorded. The case list was then sorted by random number, and the first 212 specimens in the random-sorted list with available blocks were selected for the study. For each case, a different random number assignment was used to select one component block for the series. These 212 blocks were retrieved, sectioned at 4 μm, routinely stained with hematoxylin and eosin, and mounted with glass coverslips in an automated coverslipping machine. All histology processes were performed using routine procedures in our clinical histology laboratory before finished slides were labeled with an anonymized barcode identifier. [Table 1] shows the distribution of specimens by anatomic site of origin (tissue type) and specimen size (big = trimmed resection specimen; bx = small biopsy or fragments).
Table 1: Case series, by tissue type and specimen size
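For illustration, the sampling design described above can be expressed compactly in code. The sketch below is our re-creation under stated assumptions (placeholder case list, arbitrary seed, hypothetical block counts and availability flags), not the authors' actual selection script:

```python
# Illustrative re-creation of the sampling design: assign each specimen a
# random key, sort by that key, keep the first 212 with available blocks,
# then randomly pick one component block per selected case.
import random

random.seed(0)  # arbitrary seed, for reproducibility of this sketch only

# Placeholder case list: 4077 specimens with hypothetical block counts/availability.
specimens = [{"id": i, "n_blocks": random.randint(1, 10), "available": random.random() > 0.05}
             for i in range(4077)]

for s in specimens:
    s["key"] = random.random()            # random number assigned to each specimen
ranked = sorted(specimens, key=lambda s: s["key"])

selected = [s for s in ranked if s["available"]][:212]
for s in selected:
    s["chosen_block"] = random.randint(1, s["n_blocks"])  # one random block per case

print(f"Selected {len(selected)} specimens, one block each")
```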



Macroscopic camera photography, “Reference”

Stained glass slides were photographed at high resolution (21 MP, 5616 × 3744 px) on a backlit light box using a Canon EOS 5D Mark II camera outfitted with a macro lens (Canon EF100 mm f/2.8L Macro IS USM). Image postprocessing in Adobe Lightroom Classic v9.1 included lens aberration correction, adjustment of contrast and brightness, and conversion to gray scale.

Whole-slide scanning: “wsi”

All slides were batch imaged in whole-slide scanners using supplied autoloader racks under three conditions [details in Supplemental Data]: (1) Hamamatsu S210 using default brightfield (“default”) profile settings; (2) Hamamatsu S210 using profile settings empirically optimized (“opt”) to minimize tissue dropouts; and (3) Leica GT450 standard settings. Scan time was estimated from file timestamps (S210) or the scanner log (GT450), and the resulting file size was read from the destination computer operating system (Windows 10). Measurements of file size were captured as actual bytes per file, and scaling of units was performed on a decimal scale (1 byte = 1000^0 bytes; 1 KB = 1000^1 bytes; 1 MB = 1000^2 bytes; 1 GB = 1000^3 bytes).
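For concreteness, decimal (SI) file-size scaling works as in this minimal sketch (the function name is ours, for illustration only):

```python
# Decimal (SI) unit scaling as used in this study: 1 GB = 1000**3 bytes,
# not the binary 1024**3 (GiB) convention some operating systems report.
def bytes_to_decimal_gb(n_bytes: int) -> float:
    return n_bytes / 1000**3

print(bytes_to_decimal_gb(2_500_000_000))  # a 2,500,000,000-byte file -> 2.5 GB
```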

Full-frame TIFF images of the entire field captured by the scanner were generated by opening each whole-slide image file (.ndpi for the S210, .svs for the GT450) in a multiformat viewer (NDP.View Plus v2.7.43+, Hamamatsu Photonics K.K., 2019) and exporting the screen contents, as recompiled from the source wsi, to a TIFF file that was cropped to the margins of the captured field. TIFF files were imported into Adobe Lightroom for adjustment of contrast and brightness, and conversion to gray scale.

Preparation of color-coded overlays

Paired grayscale reference and wsi images were rescaled, aligned, and prepared as translucent overlays using StereoPhoto Maker Pro (64 bit) v6.02 (2020, by Masuji Suto and David Sykes, available online at http://stereo.jpn.org/eng/index.html) [Figure 1]. This produced a red (reference)–cyan (wsi scanner image) overlay in which tissue dropped out in the scanner image was visible as a red profile contributed by the reference image. Fragment edges were highlighted in Adobe Photoshop Creative Cloud (release 21.2.0, 64 bit), using the “Find Edges” filter. The resultant overlay is shown in [Figure 1]c, along with examples of types of tissue dropped out. A rectangle corresponding to the computer screen boundaries and aspect ratio used at the time of scanner wsi screen export is seen in the overlays as a pale cyan rectangle. This is an artifact of the workflow, which was ignored during visual interpretation.
Figure 1: Overlay of aligned reference (camera) and scanner images to detect dropouts. High-contrast grayscale images from a lossless camera (A, reference) and “wsi” digital scanner whole-slide image (B, Hamamatsu S210, default brightfield settings) were rescaled, aligned, and color coded (C, overlay) so that superimposed overlapping areas are blue, and dropouts are red. Details of three dropout regions include translucent fat (c1, c3) and tissue present on the margins of the slide (c2, “edge”). Case BD2019-2222, tissue from aortic lymph node dissection
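The overlay logic itself is straightforward to reproduce. The following numpy/Pillow sketch is an assumed equivalent of the StereoPhoto Maker Pro step, not the authors' workflow; it presumes the two grayscale images are already rescaled and aligned, inverts them so tissue is bright on a dark background, and uses hypothetical filenames:

```python
# Assumed equivalent of the red (reference)-cyan (wsi) overlay step.
import numpy as np
from PIL import Image, ImageOps

# Invert so tissue is bright; the dark background then contributes no color.
ref = np.asarray(ImageOps.invert(Image.open("reference_gray.png").convert("L")))
wsi = np.asarray(ImageOps.invert(Image.open("wsi_gray.png").convert("L")))

# Reference drives red; the scanner image drives green + blue (cyan). Tissue
# captured by both renders blue-gray (exact hue depends on relative contrast);
# tissue present only in the reference has unopposed red signal: a dropout.
overlay = np.dstack([ref, wsi, wsi]).astype(np.uint8)
Image.fromarray(overlay, "RGB").save("overlay_red_cyan.png")
```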



Identification and classification of tissue dropouts in whole-slide scanner images

Color-coded overlays of reference and wsi images were screened for red signal indicating object areas present in the reference but missing in the wsi images [[Figure 1], c1-3]. These were compared with the original high-resolution color reference images and with the source glass slides viewed under a standard optical microscope to classify the nature of the putative dropout. The screening overlays were very sensitive, capable of detecting contaminating dust and floating cellular contaminants [Figure 2]. These artifacts were not counted as dropouts for purposes of this study. Enumeration as a dropout was reserved for regions of native tissue or sectioned mucus/blood/cell aggregates.
Figure 2: Artifacts excluded from tissue “dropout” tally. The dropout detection process [Figure 1] was capable of detecting very small, refractile, and folded contaminating structures that were artifacts not representative of the tissue section present on the slide. These included dust and fibers (a), compression boundaries caught by the microtome knife along the side edge of the paraffin block (b), individual disaggregated cells (c), and floating contaminants (d, clump of squamous cells in a placental section)
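Screening for red signal can also be partially automated. This hedged sketch is our illustration, not the authors' tool; the red-dominance threshold and minimum-area cutoff are assumptions chosen only to show the technique of flagging connected red regions for manual review:

```python
# Flag candidate dropout regions: pixels where red clearly dominates cyan,
# grouped into connected components large enough to merit manual review.
import numpy as np
from PIL import Image
from scipy import ndimage

rgb = np.asarray(Image.open("overlay_red_cyan.png").convert("RGB")).astype(int)
red_excess = rgb[..., 0] - rgb[..., 1:].max(axis=2)
mask = red_excess > 60                       # assumed red-dominance cutoff

labels, n = ndimage.label(mask)              # connected candidate regions
sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
candidates = [i + 1 for i, s in enumerate(sizes) if s >= 200]  # assumed min area (px)
print(f"{len(candidates)} candidate dropout regions flagged for review")
```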



We scored each wsi as containing a dropout whenever visual inspection revealed one or more fragment(s) dropped from the wsi relative to the reference image. For each slide with dropouts, the predominant category was recorded as pale fat (translucent), small shattered fragments (shards), or peripheral/edge location [Table 2].
Table 2: Dropout types, by dropout class and scanner



For those whole-slide images with dropouts, the extent of dropout was estimated visually as an approximate percentage of tissue missing in the wsi compared to all tissue present in the reference image.

Estimation of “Unique” dropout frequency [Figure 7]
Figure 7: Surrogate representation of non-unique dropout tissues within scanned area. Some tissues present in the original glass slide (reference camera image, top) were dropped during creation of the digital whole slide image (dropout, red frame). In this example the dropout tissue repertoire was non-unique, as histologically equivalent tissues were included within the final digital scan (green frame). Scanned image for endometrial biopsy case DB2019-2243 from Hamamatsu S210 (default profile). See Figure 4c for color overlay of reference image with digital scan.



We scored fragments missing from the digital image according to histologic similarity to those retained. Guided by color-coded overlay dropout maps for each scan, digital slide tissue dropouts were photographed from the original glass slide using a standard optical photomicroscope. The reference (glass original) static image of each dropout was opened on one computer monitor, and the corresponding zoomable digital scan image opened on the second of a dual monitor display. A subspecialty gynecologic pathologist (GLM) then searched the digital whole-slide image for diagnostically equivalent tissues matching those documented in the dropout photomicrograph. The scanned slide was then summarily scored once: either (1) unique dropout, if no equivalent of one or more dropout tissue area(s) was represented in the scan, or (2) nonunique dropouts, when tissues comparable to all noted dropouts (“surrogates”) were present within the digital whole-slide scan. Thus, each slide containing a dropout received one overall score as unique or nonunique.

Statistical analysis

All statistical analyses and graphical data display were performed using SYSTAT (v13.1, Systat Software, Inc., San Jose, CA).


Results


Complete scan failures

Complete scan failure, or inability of the scanner to generate a digital file, is distinct from tissue dropout, in which a created digital file lacks some tissue fragments present on the glass slide. Of the 212 slides scanned, scan failure frequency and the responsible tissue characteristics varied by scanner and scanning profile. Four (4/212, 1.9%) S210def scan failures were attributed to shattered tissues composed entirely of minute fragments without a focus point for the scanner. A single (1/212, 0.5%) GT450 scan failure was attributed to a large blood clot in which red cells scattered transmitted light, thereby denying the scanner a stable focus point.

File size and scan time

The file size was proportional to scan time for all scanner conditions, but the quantitative relationship differed systematically by scanner [Figure 3]. The fastest scan time and smallest file size were seen with the GT450 [[Figure 3], red circles], and the longest scan time and largest file sizes were seen with the S210 operated under a setting profile [opt, [Figure 3], green crosses] designed to minimize dropouts. The S210 default profile settings were intermediate for both of these parameters. The linear relationships between file size (x, GB) and scan time (y, seconds) can also be summarized as least-squares linear regression formulas by scanner type as follows: (1) GT450 (r = 0.828), y = 32.3 + 37 × (file size in GB); (2) S210def (r = 0.887), y = 122 + 102 × (file size in GB); and (3) S210opt (r = 0.930), y = 158 + 164 × (file size in GB).
Figure 3: File size and scan time, by scanner (Leica GT450, Hamamatsu S210 default settings, Hamamatsu S210 optimized settings). File size (x-axis, square-root scale) was proportional to scan time (y-axis, square-root scale) under all conditions. The GT450 had the shortest scan times and smallest file sizes, whereas the S210 using settings optimized to minimize dropouts had the longest scan times and produced the largest files. All 212 slides were scanned successfully with the S210 optimized profile, with one failure on the GT450 (211 scanned) and four failures on the S210 default (208 scanned)
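As a worked example, the fitted equations above can be applied directly to project scan time from file size. This is arithmetic on the published fit, not a re-derivation from the raw data:

```python
# Project scan time (seconds) from file size (GB) using the regression
# coefficients reported in the text for each scanner condition.
COEFFS = {                      # (intercept s, slope s/GB), from the text
    "GT450":   (32.3, 37.0),
    "S210def": (122.0, 102.0),
    "S210opt": (158.0, 164.0),
}

def predicted_scan_time(scanner: str, file_size_gb: float) -> float:
    intercept, slope = COEFFS[scanner]
    return intercept + slope * file_size_gb

# Feeding in each platform's average file size reproduces the reported average
# scan times to within regression error (e.g., S210def: ~377 s vs. 376 s).
for name, avg_gb in [("GT450", 1.4), ("S210def", 2.5), ("S210opt", 3.4)]:
    print(f"{name}: {predicted_scan_time(name, avg_gb):.0f} s for a {avg_gb} GB file")
```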



For all scanners, a greater amount of tissue on the slide [[Table 3], bigs vs. bx] significantly prolonged scan time and increased file size (Kruskal–Wallis P ≤ 0.001 for all comparisons). This matches expectations, as the scanning algorithms for all scanners involve high-resolution capture only of those areas of a slide in which tissue is detected. This trend held across the tissue sites (e.g., cervix, placenta) included in the study.
Table 3: Scan time and size of files, by scanner and tissue type
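A group comparison of this kind can be reproduced with standard tools. The sketch below uses scipy as a stand-in for SYSTAT (the package the authors actually used), and its arrays are placeholders rather than the study data:

```python
# Kruskal-Wallis test of scan time across specimen-size groups (sketch only;
# values below are placeholders, not measurements from this study).
from scipy.stats import kruskal

scan_time_big = [410, 523, 388, 465, 702]   # seconds, large resections (placeholder)
scan_time_bx = [140, 95, 122, 101, 88]      # seconds, small biopsies (placeholder)

stat, p = kruskal(scan_time_big, scan_time_bx)
print(f"Kruskal-Wallis H = {stat:.2f}, p = {p:.4f}")
```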



Dropout type, by scanner

[Table 2] tallies each slide once per scanner. When a dropout is present, it is noted by the dropout type best representing the whole slide. Excluding the five failed scans that did not produce any wsi file to evaluate, a total of 631 scans were performed, of which 70.5% (445/631) were dropout free. The most frequent dropout type was “shards,” seen in 22.2% (140/631) of slides, followed by edge misses at 6.2% (39/631). The frequency of dropout types varied greatly across scanners (Chi-square P < 0.001). The S210opt had the lowest dropout rate at 13.7% (29/212), followed by the GT450 at 34.6% (73/211) and the S210def at 40.4% (84/208). All were prone to dropout of small shards, and the GT450 had a tendency to miss edge domains. Typical examples of dropouts are shown in [Figure 4], [Figure 5], [Figure 6].
Figure 4: Hamamatsu S210 (default profile) scan dropout examples (a-d). Each row shows one slide overlay (left) and dropout detail (right). In the overlay, dropped tissues appear red and captured tissues cyan-blue (see [Figure 1]). Dashed rectangles in the colored overlay indicate framing of the detail photomicrograph captured from original H&E glass slides. Dropouts were classified according to [Table 2]

Figure 5: Hamamatsu S210 (optimized profile) scan dropout examples (a-d). Each row shows one slide overlay (left) and dropout detail (right). In the overlay, dropped tissues appear red and captured tissues cyan-blue (see [Figure 1]). Dashed rectangles in the colored overlay indicate framing of the detail photomicrograph captured from original H&E glass slides. Dropouts were classified according to [Table 2]

Figure 6: Leica GT450 scan dropout examples (a-d). Each row shows one slide overlay (left) and dropout detail (right). In the overlay, dropped tissues appear red and captured tissues cyan-blue (see [Figure 1]). Dashed rectangles in the colored overlay indicate framing of the detail photomicrograph captured from original H&E glass slides. Dropouts were classified according to [Table 2]



Estimated percentage of tissue dropped out

The majority of detected dropouts were very small in comparison to the total tissue present on the slide. This is illustrated in representative images [Figure 4], [Figure 5], [Figure 6] and tabulated as an estimated percentage of tissue lost, by scanner [Table 4]. Notably, 78.5% (146/186) of dropouts involved 2% or less of all tissue on the slide. Larger percentage losses were most common with the S210def, where fully 35.7% (30/84) of dropouts involved 3% or more of the tissue, and 3.6% (3/84) were missing more than a quarter of the tissue. [Figure 4]a shows a large area of translucent fat that failed to scan on the S210def but was successfully captured on the S210opt and GT450.
Table 4: Estimated dropout extent (tissue %), by scanner



Frequency of histologically unique tissue dropouts

[Table 5] shows the scanning conditions under which unique tissue dropouts occurred; these were most frequent with the Hamamatsu S210 default profile. S210 scanning operations using the optimized profile had no unique dropouts, indicating some user control over their frequency in this versatile instrument. Of the four scan instances with unique tissue dropout (involving three slides), one fragment was dropped by two scanners (S210 default, GT450). All unique dropouts are illustrated in [Figure 8], including the single fragment missed on two different scanners [[Figure 8], Panel A].
Table 5: Frequency of histologically unique scan dropouts

Figure 8: Unique digital dropout tissues not represented in scanned area. All unique tissue dropouts are illustrated here (left) alongside the source location within the whole-slide reference camera image (right). (a) S210default and GT450 unique dropout. Nondescript fragments of probable contaminating placental villi in endometrial curettings of a 71-year-old patient showing inactive endometrium. (b) S210 default unique dropout. Intact fragment of neoplastic endocervical glands in association with attached stroma, in a cervical biopsy diagnosed as scant fragments of neoplastic glandular epithelium. This dropout is considered unique because all neoplastic glandular epithelia retained in the digital image are detached and unassociated with their stromal context. (c) S210 default unique dropout. Simple epithelium lined fibrous tissue in vulvectomy with reactive stratified squamous epidermis. The epithelial lining of the dropped fragment is not represented in the digital image, and may represent either a contaminant from another case, or separate fragment of a dermal epithelial inclusion




Conclusions


Our analysis highlights the frequency of scanner dropouts that can be difficult to appreciate by casual visual comparison of glass with digital images. Transparent overlay of the reference glass and scanned images is a highly sensitive method capable of objectively detecting very small (<0.5 mm), optically translucent (mucus, fibrin entrapped cells, and fat), and peripheral (slide edge) tissue elements that fail detection and scanning across hardware platforms. In several respects, our findings challenge the definitional limits of what constitutes a “tissue dropout,” including minimum size, composition of missed material, and acceptable slide edge boundaries. Only a small subset of dropouts, namely those comprising large or polygonal empty image domains that interrupt the overall pattern of tissue capture within an image, would be suspected by an observer viewing the digital rendition.

When they occur, dropouts are overwhelmingly small shards or peripheral edges of tissue represented elsewhere on the slide. Some scanners permit remediation of dropout extent through optimization of user-addressable scanner settings, as in the Hamamatsu S210 optimization that reduced the default dropout rate from 40.4% to 13.7%. The Leica GT450 scanner, which has few user-addressable settings, had an intermediate dropout frequency of 34.6%. The nonuniqueness of most scanner dropouts (98%, 178/182) made them of low diagnostic impact. Highly fragmented specimens are most prone to distribute comparable diagnostic tissue in both scanned and unscanned regions of the slide [Figure 4]c and [Figure 5]b. Dropouts of mucus and blood with or without dissociated cellular elements [Figure 5]b, [Figure 5]c and [Figure 6]b are typically of insufficient physical integrity to be diagnostic. Preanalytical tissue processing steps can remediate edge dropouts, including trimming tissue blocks to permit a few millimeters of clearance across the slide width and instructing histology technicians to maintain this clearance during embedding and sectioning. When they occur, the significance of slide edge overhangs of large tissue slabs [Figure 5]a and [Figure 6]a may be informed by examination of the visible portions of tissue. If the tissue context across the visible fragment is relatively uniform, and there are no concerns for critical spatial information (such as inked margins), the edge dropout is likely to be insignificant. Stray dropped intact tissue slices, such as the examples from cervical cone biopsies [Figure 4]b and [Figure 6]d, were also nonunique and could easily be questioned by the pathologist as floating contaminants. More concerning are occasional failures of the scanner to recognize large areas of translucent tissue such as fat [Figure 4]a, raising the possibility that primary fatty lesions and fatty node dissections could be underrepresented.

Of the 212 slides scanned on three platforms, there were only three instances of unique dropped tissue [Table 5]. All omissions were generated by the S210 scanner under default settings, with one glass slide having an identical dropout on the GT450 [Figure 8]a. Of these, 67% (2/3) were determined by context to be probable “floater” contaminants from another case: (1) placental villi in an endometrial sample from a 71-year-old patient [Figure 8]a and (2) a folded simple epithelium fragment in a vulvectomy with reactive stratified squamous epithelium [Figure 8]c. The third slide, from a patient with an endocervical neoplasm, had detached neoplastic epithelium in the digital image, but the single intact fragment of stroma plus glands was excluded. It is unlikely the neoplasm would have been missed altogether in the digital image, but the lack of intact architecture compromises its classification. This dropout was seen only with the default S210. We conclude that unique dropouts were most common with default S210 scans and were reduced in frequency, and in clinical relevance, by use of the GT450 or by optimization of S210 scanner settings.

Our study suggests several approaches to the mitigation of digital dropouts. First, laboratories should avoid tissue placement on glass slide regions (such as extreme edges) inaccessible to the scanner. Second, scanner settings for user-configurable instruments such as the Hamamatsu S210 can be empirically adjusted to achieve the desired balance between scan completeness and throughput. We found that vendor default S210 settings prioritized speed and file size (average 2.5 GB files in 376 s) at the expense of completeness of tissue capture, such that 40% of all scans had detectable dropouts. Empirical modification of S210 scan profile settings to increase tissue detection (“optimized”) resulted in high-resolution scanning of nearly the entire slide, with a decrease in dropout frequency from 40% to 14% and a decline in the proportion of missed tissue [Table 4]. This came at a high cost to throughput, nearly doubling the average scan time (to 721 s) and increasing file size by 37% (to 3.4 GB) relative to vendor defaults. Finally, for more comprehensively automated scanners without user adjustment options, like the Leica GT450, performance benchmarking by objective measures allows intercomparison with alternative instruments. Leica GT450 throughput performance was excellent, with the fastest scan times (85 s) and smallest file sizes (1.4 GB). Overall dropout rates were intermediate for the GT450 (35%), but fully a third of these were on slide edges [Table 2] and could be avoided by changing tissue processing methods.

The test set of 212 slides (212 randomly selected surgical pathology cases, one random slide per case) represents a specific service stream (gynecologic and obstetrical pathology) at our institution and would require some extrapolation to other practice environments. There is a strong linear relationship between scan time and file size [Table 3] on all platforms. These operational benchmarks, however, vary systematically with specimen mix, including tissue type (anatomic source) and specimen (biopsy/resection) scale [Table 3]. Sampling practices, especially the average number of total blocks per case, vary greatly. In our hospital, for example, a cervical biopsy averages 2.3 blocks/case, whereas a full cervical profile sliced from a larger specimen averages 9.9 blocks/case. All of these factors must be taken into consideration to accurately estimate the number of scanners and amount of file storage required to support a pathology practice.
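As an illustration of such an estimate, the back-of-envelope sketch below combines a hypothetical slide volume and duty cycle with the average file size and scan time measured here; every input is an assumption to be replaced with local values:

```python
# Back-of-envelope sizing sketch. All inputs are placeholders except the file
# size and scan time, which match the S210 default averages from this study.
slides_per_year = 100_000        # assumed annual slide volume
mean_file_gb = 2.5               # average file size (S210 default, this study)
mean_scan_s = 376                # average scan time (S210 default, this study)
scan_hours_per_day = 8           # assumed productive scanning window
working_days_per_year = 250      # assumed

storage_tb_per_year = slides_per_year * mean_file_gb / 1000
slides_per_scanner_year = int(scan_hours_per_day * 3600 / mean_scan_s) * working_days_per_year
scanners_needed = -(-slides_per_year // slides_per_scanner_year)   # ceiling division

print(f"~{storage_tb_per_year:.0f} TB/year of storage; >= {scanners_needed} scanner(s)")
```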

This is a study of scanning operations and tissue capture fidelity in digital pathology, with the goal of establishing benchmark metrics for prepathologist digital scan production. There are already many excellent studies documenting glass-digital equivalence of the human pathologist diagnostic endpoint, evidence that digital pathology implementation can succeed under a variety of infrastructure types and across diverse practice settings. For those groups contemplating such a transition, we have shown that scanning speed and resultant file size vary greatly by scanner type, scanner operation settings, and specimen mix – parameters of high relevance to the throughput and overhead cost of a digital pathology operation. Correspondingly, digital image fidelity as measured by tissue dropout frequency and dropout type also varies according to the same tissue and scanning parameters.

Acknowledgments

The authors would like to acknowledge the assistance of Delia Liepins (Director of Clinical Operations, BWH Pathology), for retrieval of tissue blocks, and Qing Sun (Histology Laboratory Manager, BWH), for preparation of histological sections.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.


Description of additional data files


Supplemental Data

Contents:

  1. Scanner Settings
  2. Average Block Numbers In Selected Cases, By Case Type



Supplement 1: Scanners and Profile Settings Used





Supplement 2: Average Block Numbers in Selected Cases, by Case Type


The 212 slides in the test slide set represented different tissue types and specimen sizes (Big/bx) as designated in [Table 1]. Each of these was in turn randomly selected as one slide from a “case” that often contained more than one block. The Table below shows the average number of paraffin blocks sampled in the full case from which the test slide was selected for scanning.





 
References

1. Cimadamore A, Lopez-Beltran A, Scarpelli M, Cheng L, Montironi R. Digital pathology and COVID-19 and future crises: Pathologists can safely diagnose cases from home using a consumer monitor and a mini PC. J Clin Pathol 2020;73:695-6.
2. Browning L, Fryer E, Roskell D, White K, Colling R, Rittscher J, et al. Role of digital pathology in diagnostic histopathology in the response to COVID-19: Results from a survey of experience in a UK tertiary referral hospital. J Clin Pathol 2021;74:129-32.
3. Hanna MG, Reuter VE, Ardon O, Kim D, Sirintrapun SJ, Schüffler PJ, et al. Validation of a digital pathology system including remote review during the COVID-19 pandemic. Mod Pathol 2020;33:2115-27.
4. Henriksen J, Kolognizak T, Houghton T, Cherne S, Zhen D, Cimino PJ, et al. Rapid validation of telepathology by an academic neuropathology practice during the COVID-19 pandemic. Arch Pathol Lab Med 2020;144:1311-20.
5. Stathonikos N, Nguyen TQ, van Diest PJ. Rocky road to digital diagnostics: Implementation issues and exhilarating experiences. J Clin Pathol 2021;74:415-20.
6. Stathonikos N, Nguyen TQ, Spoto CP, Verdaasdonk MA, van Diest PJ. Being fully digital: Perspective of a Dutch academic pathology laboratory. Histopathology 2019;75:621-35.
7. Chong T, Palma-Diaz MF, Fisher C, Gui D, Ostrzega NL, Sempa G, et al. The California telepathology service: UCLA's experience in deploying a regional digital pathology subspecialty consultation network. J Pathol Inform 2019;10:31.
8. Voelker HU, Stauch G, Strehl A, Azima Y, Mueller-Hermelink HK. Diagnostic validity of static telepathology supporting hospitals without local pathologists in low-income countries. J Telemed Telecare 2020;26:261-70.
9. Williams BJ, Hanby A, Millican-Slater R, Nijhawan A, Verghese E, Treanor D. Digital pathology for the primary diagnosis of breast histopathological specimens: An innovative validation and concordance study on digital pathology validation and training. Histopathology 2018;72:662-71.
10. Lee JJ, Jedrych J, Pantanowitz L, Ho J. Validation of digital pathology for primary histopathological diagnosis of routine, inflammatory dermatopathology cases. Am J Dermatopathol 2018;40:17-23.
11. Snead DR, Tsang YW, Meskiri A, Kimani PK, Crossman R, Rajpoot NM, et al. Validation of digital pathology imaging for primary histopathological diagnosis. Histopathology 2016;68:1063-72.
12. Loughrey MB, Kelly PJ, Houghton OP, Coleman HG, Houghton JP, Carson A, et al. Digital slide viewing for primary reporting in gastrointestinal pathology: A validation study. Virchows Arch 2015;467:137-44.
13. Sağlam A, Usubütün A, Dolgun A, Mutter GL, Salman MC, Kurtulan O, et al. Diagnostic and treatment reproducibility of cervical intraepithelial neoplasia/squamous intraepithelial lesion and factors affecting the diagnosis. Turk Patoloji Derg 2017;1:177-91.
14. Usubutun A, Mutter GL, Saglam A, Dolgun A, Ozkan EA, Ince T, et al. Reproducibility of endometrial intraepithelial neoplasia diagnosis is good, but influenced by the diagnostic style of pathologists. Mod Pathol 2012;25:877-84.
15. Carlson JW, Jarboe EA, Kindelberger D, Nucci MR, Hirsch MS, Crum CP. Serous tubal intraepithelial carcinoma: Diagnostic reproducibility and its implications. Int J Gynecol Pathol 2010;29:310-4.
16. Duggan MA, Brashert P, Ostor A, Scurry J, Billson V, Kneafsey P, et al. The accuracy and interobserver reproducibility of endometrial dating. Pathology 2001;33:292-7.
17. Mills AM, Gradecki SE, Horton BJ, Blackwell R, Moskaluk CA, Mandell JW, et al. Diagnostic efficiency in digital pathology: A comparison of optical versus digital assessment in 510 surgical pathology cases. Am J Surg Pathol 2018;42:53-9.

