ORIGINAL ARTICLE
J Pathol Inform 2014,  5:21

Validation of a novel robotic telepathology platform for neuropathology intraoperative touch preparations


Department of Pathology and Genomic Medicine, Houston Methodist Hospital, Houston, TX 77030, USA

Date of Submission01-Apr-2014
Date of Acceptance10-Jun-2014
Date of Web Publication28-Jul-2014

Correspondence Address:
Michael J Thrall
Department of Pathology and Genomic Medicine, Houston Methodist Hospital, Houston, TX 77030
USA

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/2153-3539.137642

   Abstract 

Background: Robotic telepathology (RT) allows a remote pathologist to control and view a glass slide over the internet. This technology has been demonstrated to be effective on several platforms, but we present the first report on the validation of RT using the iScan Coreo Au whole slide imaging scanner.

Methods: One intraoperative touch preparation slide from each of 100 cases was examined twice (200 total examinations) using glass slides and RT, with a 3-week washout period between viewings, on two different scanners at two remote sites. The cases comprised 75 consecutive neuropathology cases and 25 consecutive general surgical pathology cases. Interpretations were compared using intraobserver variability.

Results: Of the 200 total examinations, one failed on RT. There were 47 total interpretive variances. Most of these were the result of less specific interpretations or an inability to identify scant diagnostic material on RT. Nine interpretive variances had potentially significant clinical implications (4.5%). Using the final diagnosis as the basis for comparison to evaluate these nine cases, three RT interpretations and three glass slide interpretations were considered discrepant; in the remaining three cases, both modalities were discrepant. This distribution of discrepancies indicates that underlying case difficulty, not the RT technology, probably accounts for these major variances. For the subset of 68 neoplastic neuropathology cases, the unweighted kappa of agreement between glass slides and RT was 0.68 (good agreement). RT took 225 s on average versus only 71 s per glass slide.

Conclusions: This validation demonstrates that RT using the iScan Coreo Au system is a reasonable method for supplying remote neuropathology expertise for the intraoperative interpretation of touch preparations, but it is limited by the slowness of the robotics, crude focusing, and the challenge of determining where to examine the slide using small thumbnail images.

Keywords: Intraoperative consultation, neuropathology, remote robotic microscopy, touch preparation, validation


How to cite this article:
Thrall MJ, Rivera AL, Takei H, Powell SZ. Validation of a novel robotic telepathology platform for neuropathology intraoperative touch preparations. J Pathol Inform 2014;5:21

How to cite this URL:
Thrall MJ, Rivera AL, Takei H, Powell SZ. Validation of a novel robotic telepathology platform for neuropathology intraoperative touch preparations. J Pathol Inform [serial online] 2014 [cited 2019 Dec 16];5:21. Available from: http://www.jpathinformatics.org/text.asp?2014/5/1/21/137642


   Introduction


Robotic telepathology (RT) enables a pathologist at a distance to use a personal computer to simultaneously control a robotic microscope and view the microscopic fields on a monitor screen for the purpose of diagnosis. This technology allows for the employment of pathology expertise not readily available at the site of slide production. It was originally described in the late 1990s [1],[2] and early 2000s. [3],[4],[5],[6],[7],[8] Of the studies that have been published, several relate to neuropathology [4],[9],[10],[11] or to cytologic preparations [8],[12],[13],[14] comparable to intraoperative touch preparations (ITPs).

Robotic telepathology has not achieved widespread adoption for a variety of reasons. One major reason is the existence of competing technologies that can perform similar functions with equal or greater diagnostic accuracy. Transmission of images from a camera mounted on a microscope under the control of a remote human operator, usually another pathologist, pathology trainee, or cytotechnologist, has been shown to be similarly effective as a means of remote consultation for cytology specimens comparable to ITP. [15],[16],[17] This approach has the advantage of lower equipment cost than RT. Human manipulation of the glass slides with verbal guidance from the remote pathologist is also faster than any robotic equipment developed to date. There are, however, several disadvantages. The need for someone trained to operate a microscope at the remote site may be prohibitively expensive due to skilled labor costs. Furthermore, the remote pathologist might be shown an incorrect slide without any way of recognizing the error because the slide label is not transmitted or recorded. Finally, the interpreting pathologist has limited control over the slides and images, possibly resulting in important material not being seen.

Another competing technology that emerged after the introduction of RT, and largely filled the needs for which RT was initially designed, is whole slide imaging (WSI). This technology uses robotics to image an entire glass slide for transmission over a network and viewing on a computer monitor. WSI is clearly superior for many possible RT applications because it provides more comprehensive image information to the remote pathologist with faster response times to operator instructions. The utility of WSI for intraoperative neuropathology consultation has been demonstrated by multiple centers. [9],[18] The fastest scanners may be able to produce a ×20 magnification single-plane image suitable for frozen section interpretation in only 2-3 min. Cytologic smears, however, present greater difficulties in the intraoperative setting. [19] Smears typically encompass a large area of the slide, need good resolution at high power to see individual cells and small clusters, and require focus control to see multiple layers of thick cell aggregates (the Z-stack). Some scanners are capable of creating ×40 images with a complete-slide Z-stack of levels, but doing so requires considerably longer than the 10-15 min that is acceptable in the intraoperative setting.

Therefore, there would appear to be a residual niche for RT: Situations where the remote pathologist wants or needs complete control over the slide and results are needed promptly. Unfortunately, it is difficult to justify the cost and complexity of setting up RT for such a narrow application. However, a new option has recently entered the market that makes more economic sense. The iScan Coreo Au WSI scanner manufactured by Ventana Medical Systems (Tucson, AZ; formerly BioImagene) has an integrated software module, known as LiveMode, capable of performing RT. The ability of a single machine to perform both WSI and RT increases its utility. In a setting where WSI capability is desired for other purposes and justifies the investment, such as in our hospital system, the RT capability can be exploited with little additional cost.

Our institution consists of a large central hospital and four smaller branch hospitals. Two of the smaller hospitals perform occasional neurosurgical procedures, but the cost of maintaining neuropathology expertise on site at both hospitals is prohibitive. The main hospital has frequent neuropathology specimens and multiple certified neuropathologists are present there to provide constant coverage. Traditionally, we have employed a camera mounted on a microscope operated by one of the pathologists from the smaller hospital as a means of remote consultation on ITP and frozen sections. A system-wide WSI initiative created an opportunity to evaluate the use of RT as an alternative method for evaluating ITP.

Robotic telepathology has already been shown to be effective. We have performed a validation study on the iScan Coreo Au RT system to test its performance prior to clinical use, with previous studies providing a baseline for comparison. We hypothesize that the performance of this novel platform will be similar to what has been previously demonstrated for other RT platforms.


   Methods


Validation of RT was performed on two iScan Coreo Au scanners. This scanning platform was chosen, in part, because it offered the additional RT feature. This scanner is available from the manufacturer on a payment model based on the number of cases scanned; however, our hospital system purchased the equipment outright. The cost of the scanner is comparable to other systems, within the previously published range of $35,000-100,000 for WSI equipment. [20] The scanners contain internal robotics capable of moving the slide and changing the microscope lenses between ×4, ×10, ×20, and ×40 options. The resolution is 0.46 μm/pixel at ×20, similar to other widely used systems, but lower than some others on the market. [20] A low-power thumbnail image is produced as a first step and serves to guide the pathologist as he or she decides where to examine the slide. The thumbnail includes the slide label, enabling an identification check. The thumbnail image is not stored. Each scanner was placed in a suburban hospital that requires remote neuropathology expertise from the central hospital for immediate interpretations. The scanners were connected to the hospital intranet that links the remote and central hospitals with high-speed dedicated lines. Unlike WSI, RT does not require a server. The images were viewed using the companion LiveMode software (Ventana Medical Systems, Tucson, AZ) on the desktop computers of the participating pathologists at the main hospital. The LiveMode software is password-protected to ensure confidentiality. All of the computers exceeded the minimum specifications for running LiveMode, including a Windows XP (Microsoft, Redmond, WA, USA) operating system and 2 GB of RAM. The monitors had a screen resolution of 1280 × 1084 pixels. Each pathologist used his or her everyday workstation; special monitors or settings were not employed.

Each scanner was validated using a set of 100 slides from 100 archived cases with ITP. These cases were originally performed without telepathology. Seventy-five consecutive neuropathology ITP were selected for validation by the three practicing neuropathologists (SP, HT, AR) and an additional 25 general surgical pathology ITP were selected for validation by a practicing general surgical pathologist (MT). Although neuropathology is the primary intended use for RT in our institution, general surgical pathology cases were included in the validation so that rare nonneuropathology cases could potentially also be reviewed by this method without having to revalidate the system. Each pathologist viewed 25 cases in each round of validation for one scanner, for a total of 50 cases. The two rounds of validation occurred 9 months apart. The touch prep slides selected from the archive were loaded into the scanner and the validating pathologist was able to view them at any time. The slide labels were not altered, but the pathologists did not look up patient information in hospital systems. They used only a brief clinical history provided on validation worksheets when interpreting the slides. Additional information such as radiology images and the intraoperative surgical findings was not available.

In the absence of any widely accepted standards for the validation of RT, we chose to model our RT validation on the draft College of American Pathologists (CAP) consensus statements for WSI that were available at the time. Since then, a final version of the guideline has been published. [21] Based on the draft, we chose 100 cases for validation. If we had used the final version, we might instead have chosen only 60 cases. Other portions of the same draft version were also followed: The cases were assigned to be viewed half first as glass slides and half first by RT, with at least 3 weeks (21 days) elapsing before the cases were viewed again with the other modality. In the final version, the minimum period between viewings was reduced to 2 weeks.

Pathologists were trained in the use of LiveMode prior to the validation, but had little other previous experience with RT. Each pathologist provided an interpretation, date, and time spent for each case. For RT, the time included everything from the moment the slide was selected from the LiveMode menu for review until a final interpretation was rendered, including the time needed to generate a thumbnail, time spent waiting for the robotics, and the time used by the pathologist to actually view the selected fields. If a case needed to be reloaded in the scanner, the time for that process was also included. A comment section was also available for each case, but comments were not mandatory. Cases that required the slide to be reloaded in the scanner were recorded in the comment section. A comment that the case was not viewable by RT after reloading led to its exclusion.

Intraobserver variability was used as the basis for comparison of RT with traditional glass slide microscopy, along the lines recommended for WSI validation by the CAP. [21] Perfect agreement was defined as identical interpretations on glass slides and RT. Minor variances included ITP interpretations that were more vague or hedged on one modality than the other, such as "glioma" by glass slide, but "hypercellular glial tissue" and/or "possible glioma" by RT. Minor variances also included cases deemed nondiagnostic by RT, but with focal interpretable material seen on the glass slide. Major variances were defined as interpretive differences between modalities that would result in misclassification of the specimen into a category with significant differences in patient care, such as "glioma" by glass slide, but "lymphoma" by RT.
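For illustration only, the sketch below shows one way the paired interpretations and their assigned variance categories could be tabulated; the category for each case is a pathologist judgment applied per the definitions above, not something derived algorithmically, and the example data are hypothetical.

from collections import Counter

# Hypothetical paired readings: (case ID, glass interpretation, RT interpretation, category).
# The variance category ("perfect", "minor", or "major") reflects the reviewing
# pathologist's judgment applied per the definitions above; it is not computed from the text.
paired_readings = [
    ("case_001", "glioma", "glioma", "perfect"),
    ("case_002", "glioma", "hypercellular glial tissue, possible glioma", "minor"),
    ("case_003", "glioma", "lymphoma", "major"),
]

def summarize_variances(readings):
    """Tally perfect agreements and minor/major variances across paired readings."""
    counts = Counter(category for *_, category in readings)
    total = len(readings)
    return {cat: (counts.get(cat, 0), round(100.0 * counts.get(cat, 0) / total, 1))
            for cat in ("perfect", "minor", "major")}

print(summarize_variances(paired_readings))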

Kappa statistics were calculated with the unweighted method of Cohen on the VassarStats website. [22] The study was approved by the Institutional Review Board of the hospital system.
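As a reproducibility aid, the following is a minimal sketch of Cohen's unweighted kappa, assuming a common large-sample approximation for the 95% confidence interval; results may differ slightly from the VassarStats output, and the example data are hypothetical rather than the study data.

import numpy as np

def cohen_kappa_unweighted(ratings_a, ratings_b):
    """Cohen's unweighted kappa with an approximate 95% confidence interval.

    ratings_a and ratings_b are sequences of category labels from two readings
    of the same cases. The standard error uses the simple large-sample
    approximation sqrt(po * (1 - po) / (n * (1 - pe)**2)).
    """
    a, b = np.asarray(ratings_a), np.asarray(ratings_b)
    categories = np.unique(np.concatenate([a, b]))
    index = {c: i for i, c in enumerate(categories)}
    n = len(a)

    # Build the agreement (confusion) matrix between the two readings.
    table = np.zeros((len(categories), len(categories)))
    for x, y in zip(a, b):
        table[index[x], index[y]] += 1

    po = np.trace(table) / n                                      # observed agreement
    pe = np.sum(table.sum(axis=0) * table.sum(axis=1)) / n ** 2   # chance agreement
    kappa = (po - pe) / (1 - pe)
    se = np.sqrt(po * (1 - po) / (n * (1 - pe) ** 2))
    return kappa, kappa - 1.96 * se, kappa + 1.96 * se

# Hypothetical three-category data (neoplastic / suspicious / nondiagnostic), 68 paired readings.
glass = ["neoplastic"] * 50 + ["suspicious"] * 10 + ["nondiagnostic"] * 8
rt = ["neoplastic"] * 46 + ["suspicious"] * 14 + ["nondiagnostic"] * 8
print(cohen_kappa_unweighted(glass, rt))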


   Results


In the first validation of one scanner, one slide failed RT and 78 of the remaining 99 cases had perfect intraobserver agreement. There were 18 minor variances. The remaining three cases were major variances with potentially significant clinical implications, as listed in [Table 1]. In the second validation of the other scanner, all 100 cases were successfully evaluated. There was perfect agreement for 74 cases. Minor intraobserver variances were seen in 20 cases. Six cases had major variances, shown in [Table 1].
Table 1: Potentially clinically significant intraobserver discrepancies between remote robotic microscopy and conventional microscopy



Overall, nine potentially clinically significant major intraobserver variances occurred out of 200 cases (199 of which were successful by RT). Intraobserver variability is a common phenomenon in the review of glass slides even without the use of any additional technology, so it cannot be assumed that all of these variances represent an error attributable to RT. In an attempt to determine whether the glass slide or RT interpretation was closer to the "true" diagnosis for these nine cases, and by extension which one is most likely to represent an error, both interpretations were compared to the final diagnosis for each case, as shown in [Table 1]. The final diagnosis has the benefit of incorporating a full review of all material including history and radiology findings, frozen and permanent histology, and any special or immunohistochemical stains that were performed, making it the best available standard of the "true" interpretation. Compared with the original sign-out diagnosis of the corresponding case, the glass slide interpretation was correct or nearly correct and the RT interpretation was incorrect in three cases. Conversely, the RT interpretation was correct or nearly correct and the glass slide interpretation was incorrect for another three cases. For the remaining three cases, both modalities produced results that differed significantly from the original sign-out diagnosis.

Kappa statistics were calculated for the 75 neuropathology cases, each of which was reviewed twice (150 total), comparing glass slide and RT interpretations. For purposes of kappa analysis, interpretations were divided into three discrete categories: Neoplastic, suspicious for neoplasm (including "hypercellular glial tissue"), and nondiagnostic. Six of the cases were excluded because the target was a nonneoplastic lesion. One other case was excluded because RT failed to work during one round of validation. This left a total of 68 cases, each evaluated twice by glass slides and RT, once on each scanner (136 total). The overall unweighted kappa was 0.68 (good agreement; 95% confidence interval: 0.54-0.82). Excluding relatively straightforward pituitary cases, almost all of which were adenomas, the unweighted kappa decreased slightly to 0.66 (good agreement; 95% confidence interval: 0.51-0.81).

The time needed to examine the slides was recorded for most of the cases. Examination of the glass slides took an average of 71 s (1 min and 11 s) and examination of RT took an average of 225 s (3 min and 45 s). This includes two cases that were delayed by a stall in the robotics requiring reloading of the slides (683 s and 769 s). The median examination time for RT was 201 s (3 min and 21 s). When only neuropathology cases are included, the results for the average times are essentially the same: 68 s for glass slides and 208 s for RT. The slower times recorded for RT reflect the relative difficulty of finding and viewing areas of interest on the slide using the remote robotics as opposed to manually moving a slide on a microscope. [Figure 1] illustrates a slide with abundant diagnostic material requiring relatively little searching of the slide and hence needing only a short time to reach an interpretation. [Figure 2] illustrates the opposite situation, a sparse slide requiring extensive effort to find diagnostic material before it can be viewed on high power. The LiveMode viewer thumbnail, used by the remote pathologist when deciding where to examine the slide in detail, is small and has low magnification. This makes identification of areas of interest difficult and results in wasted time when examining sparse touch preps. Pathologists struggling to find diagnostic foci using the thumbnail accounted for 15 outlier cases that required more than double the median time (more than 402 s) to reach an interpretation.
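A brief sketch of how the timing summary and the outlier criterion (more than double the median RT time) might be computed is shown below; the times are placeholders, not the study data.

import statistics

# Placeholder per-case examination times in seconds (not the actual study data).
glass_times = [55, 60, 70, 85, 90]
rt_times = [150, 180, 201, 240, 683]

rt_median = statistics.median(rt_times)
summary = {
    "glass_mean_s": statistics.mean(glass_times),
    "rt_mean_s": statistics.mean(rt_times),
    "rt_median_s": rt_median,
    # Outliers were defined in the study as cases needing more than double the median RT time.
    "rt_outliers": [t for t in rt_times if t > 2 * rt_median],
}
print(summary)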
Figure 1: LiveMode work station image from a readily interpretable case. This screen capture shows a ×40 view of a cluster of epithelioid cells, stained with H&E, with a vague whorling pattern and nuclear pseudoinclusions, consistent with meningioma. The thumbnail on the left shows many small clumps on the slide, most of which have an appearance similar to the shown example, making this case readily interpretable

Figure 2: LiveMode work station image from a challenging sparsely cellular case. This screen capture shows a ×40 view of a touch preparation, stained with H&E, containing mostly blood with only a few nondiagnostic cells. Most of the aggregates seen in the thumbnail image have a similar appearance, making interpretation of this slide by remote robotic microscopy very laborious. The final diagnosis for this case was intracerebral hematoma



Comments by the pathologists were not required for each case, but many were recorded. A total of 53 comments were registered over the 200 total case examinations, including 10 comments about glass slides. The most frequent comment was that the material on the slide was scant, hindering the interpretation. This comment accounted for 23 of the total, including nine of the comments about glass slides. Another comment made repeatedly (12 times) was that RT could not obtain good focus on cells of interest. Two comments were made to note instances when the RT equipment failed, requiring a restart to continue the examination of the slide. Four comments were positive, stating that a good interpretation could be achieved quickly with RT for that particular case.


   Conclusions


This validation study has demonstrated the feasibility of RT using the iScan Coreo Au LiveMode platform. The system worked well for both neuropathology and general surgical pathology ITP slides. Although a small number of potentially clinically significant errors were identified (nine of 199 cases; 4.5%), closer analysis of these cases, including comparison with the final diagnoses, showed that the glass slides were about as frequently incorrectly interpreted as the RT images. This indicates that most of the difficulties are attributable to the underlying challenges of the case itself rather than the RT technology. It should also be kept in mind that for this validation only one ITP slide was used per case. In actual clinical practice, multiple ITP and frozen sections could be examined. There would also be more history and clinical information available. Finally, in many instances, a nonspecific "hedge" interpretation favoring an entity or providing a differential would be sufficient for the purposes of intraoperative consultation, reducing the likelihood of significant errors. Considering everything, the findings are felt to be sufficient to validate the use of RT for the intended application in our practice.

A large study investigating the clinical experience with neurotelepathology compared intraoperative interpretations with final diagnoses over a 5-year period. [11] The authors predominantly used the Nikon (Melville, NY) Coolscope RT system for telepathology. This study found a "discrepant" rate, essentially equivalent to the "major variances" in our study, of 3.2% for telepathology and 3.3% for conventional microscopy. This is slightly lower than our intraobserver major variance rate of 4.5%. That study also reported an "inexact" rate of 21.1% for telepathology and 16.6% for conventional microscopy, comparable to our "minor variance" rate of 19%. Although the designs of the two studies are very different, the comparability of the frequency of major and minor variances in our study and in the clinical experience reported by these authors, both with and without the use of telepathology, supports our assertion that much of the intraobserver variability seen in our study is attributable to underlying case difficulty rather than technical limitations of RT.

To our knowledge, ours is the first published report of the performance of this RT platform. The prior study most comparable to this report, a validation study of cytology specimens using the Trestle platform, showed similarly high intraobserver agreement of 92.5-95%. [14] A small study involving 40 pulmonary specimens found three significant intraobserver discrepancies (92.5% agreement) using the Nikon system. [7] Other studies report intraobserver agreement approaching 100% for surgical pathology cases with multiple slides using the Apollo, [23] Trestle, [6] and Motic [24] systems. Overall, it seems that the iScan Coreo Au RT module performs at approximately the same level of effectiveness as previous RT-specific modalities tested by intraobserver agreement, while also having the capacity to perform WSI. Unfortunately, this device cannot perform RT and WSI simultaneously, limiting its application in the intraoperative setting to one mode or the other.

The pathologists who used this system found it to be slow and frustrating. Three major problems were encountered. The first was the delayed responsiveness of the RT system. The remote microscope does not move with the speed and efficiency pathologists expect from glass slides under a microscope. The remote connection to the robotic microscope means that commands must travel via the intranet (or internet) before being executed. This delay is compounded by the slow responses of the robotics that move the slide and change the microscope lenses and focus. These delays preclude systematic scanning of the entire slide in a reasonable amount of time, requiring selective viewing that may miss potentially significant areas. Previous reports have also emphasized the relative slowness of RT systems, [5],[7] though one report noted that times improved dramatically with greater experience. [23] The second problem relates to focusing. The microscope is not capable of the fine focus control available on a standard microscope. The scanner automatically finds a "best" plane of focus and only allows limited changes of focal plane in a fairly crude stepwise manner. Finally, the thumbnail image provided to help the pathologist decide where to look on the slide has significant limitations. The thumbnail itself is quite small when viewed on the computer screen, and small cell clusters are not readily visualized. Sparsely cellular or widely distributed touch preparations proved to be especially problematic for this reason. Together, these issues led pathologists to feel that despite spending extra time looking at the slide, they saw much less of it than when using conventional microscopy. Despite these limitations, we found RT using the iScan Coreo Au platform to be a reasonable means of providing intraoperative neuropathology expertise to remote sites.

 
   References

1. Dunn BE, Almagro UA, Choi H, Sheth NK, Arnold JS, Recla DL, et al. Dynamic-robotic telepathology: Department of Veterans Affairs feasibility study. Hum Pathol 1997;28:8-12.
2. Della Mea V, Cataldi P, Pertoldi B, Beltrami CA. Dynamic robotic telepathology: A preliminary evaluation on frozen sections, histology and cytology. J Telemed Telecare 1999;5 Suppl 1:S55-6.
3. Demichelis F, Barbareschi M, Boi S, Clemente C, Dalla Palma P, Eccher C, et al. Robotic telepathology for intraoperative remote diagnosis using a still-imaging-based system. Am J Clin Pathol 2001;116:744-52.
4. Szymas J, Wolf G, Papierz W, Jarosz B, Weinstein RS. Online Internet-based robotic telepathology in the diagnosis of neuro-oncology cases: A teleneuropathology feasibility study. Hum Pathol 2001;32:1304-8.
5. Chorneyko K, Giesler R, Sabatino D, Ross C, Lobo F, Shuhaibar H, et al. Telepathology for routine light microscopic and frozen section diagnosis. Am J Clin Pathol 2002;117:783-90.
6. Kaplan KJ, Burgess JR, Sandberg GD, Myers CP, Bigott TR, Greenspan RB. Use of robotic telepathology for frozen-section diagnosis: A retrospective trial of a telepathology system for intraoperative consultation. Mod Pathol 2002;15:1197-204.
7. Leong FJ, Nicholson AG, McGee JO. Robotic telepathology: Efficacy and usability in pulmonary pathology. J Pathol 2002;197:211-7.
8. Singh N, Akbar N, Sowter C, Lea KG, Wells CA. Telepathology in a routine clinical environment: Implementation and accuracy of diagnosis by robotic microscopy in a one-stop breast clinic. J Pathol 2002;196:351-5.
9. Evans AJ, Chetty R, Clarke BA, Croul S, Ghazarian DM, Kiehl TR, et al. Primary frozen section diagnosis by robotic microscopy and virtual slide telepathology: The University Health Network experience. Hum Pathol 2009;40:1070-81.
10. Horbinski C, Wiley CA. Comparison of telepathology systems in neuropathological intraoperative consultations. Neuropathology 2009;29:655-63.
11. Horbinski C, Fine JL, Medina-Flores R, Yagi Y, Wiley CA. Telepathology for intraoperative neuropathologic consultations at an academic medical center: A 5-year report. J Neuropathol Exp Neurol 2007;66:750-9.
12. Kim B, Chhieng DC, Crowe DR, Jhala D, Jhala N, Winokur T, et al. Dynamic telecytopathology of on site rapid cytology diagnoses for pancreatic carcinoma. Cytojournal 2006;3:27.
13. Slodkowska J, Pankowski J, Siemiatkowska K, Chyczewski L. Use of the virtual slide and the dynamic real-time telepathology systems for a consultation and the frozen section intra-operative diagnosis in thoracic/pulmonary pathology. Folia Histochem Cytobiol 2009;47:679-84.
14. Cai G, Teot LA, Khalbuss WE, Yu J, Monaco SE, Jukic DM, et al. Cytologic evaluation of image-guided fine needle aspiration biopsies via robotic microscopy: A validation study. J Pathol Inform 2010;1:4.
15. Kerr SE, Bellizzi AM, Stelow EB, Frierson HF Jr, Policarpio-Nicolas ML. Initial assessment of fine-needle aspiration specimens by telepathology: Validation for use in pathology resident-faculty consultations. Am J Clin Pathol 2008;130:409-13.
16. Alsharif M, Carlo-Demovich J, Massey C, Madory JE, Lewin D, Medina AM, et al. Telecytopathology for immediate evaluation of fine-needle aspiration specimens. Cancer Cytopathol 2010;118:119-26.
17. Heimann A, Maini G, Hwang S, Shroyer KR, Singh M. Use of telecytology for the immediate assessment of CT guided and endoscopic FNA cytology: Diagnostic accuracy, advantages, and pitfalls. Diagn Cytopathol 2012;40:575-81.
18. Gould PV, Saikali S. A comparison of digitized frozen section and smear preparations for intraoperative neurotelepathology. Anal Cell Pathol (Amst) 2012;35:85-91.
19. Thrall M, Pantanowitz L, Khalbuss W. Telecytology: Clinical applications, current challenges, and future benefits. J Pathol Inform 2011;2:51.
20. Jara-Lazaro AR, Thamboo TP, Teh M, Tan PH. Digital pathology: Exploring its applications in diagnostic surgical pathology practice. Pathology 2010;42:512-8.
21. Pantanowitz L, Sinard JH, Henricks WH, Fatheree LA, Carter AB, Contis L, et al. Validating whole slide imaging for diagnostic purposes in pathology: Guideline from the College of American Pathologists Pathology and Laboratory Quality Center. Arch Pathol Lab Med 2013;137:1710-22.
22. Lowry R. VassarStats.net. Available from: http://www.vassarstats.net/. [Last cited on 2014 Apr 01].
23. Dunn BE, Choi H, Almagro UA, Recla DL, Krupinski EA, Weinstein RS. Routine surgical telepathology in the Department of Veterans Affairs: Experience-related improvements in pathologist performance in 2200 cases. Telemed J 1999;5:323-37.
24. Li X, Gong E, McNutt MA, Liu J, Li F, Li T, et al. Assessment of diagnostic accuracy and feasibility of dynamic telepathology in China. Hum Pathol 2008;39:236-42.

