Journal of Pathology Informatics
COMMENTARY
J Pathol Inform 2019,  10:38

Clinical-grade Computational Pathology: Alea Iacta Est


Department of Pathology, Cannizzaro Hospital, Catania, Italy

Date of Submission: 18-Sep-2019
Date of Acceptance: 18-Nov-2019
Date of Web Publication: 11-Dec-2019

Correspondence Address:
Dr. Filippo Fraggetta
Department of Pathology, Cannizzaro Hospital, Via Messina 829, Catania 95126
Italy

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/jpi.jpi_54_19


How to cite this article:
Fraggetta F. Clinical-grade Computational Pathology: Alea Iacta Est. J Pathol Inform 2019;10:38




Many things have changed in the last few years in the field of surgical pathology. It has been suggested that, with the advent of artificial intelligence (AI), we are approaching the third revolution in pathology.[1] To borrow from the dreamlike film Blade Runner, "I've seen things you people wouldn't believe": computers are now rendering histological diagnoses and predicting genomic mutations simply by analyzing whole-slide images stained with hematoxylin and eosin.[2] Nowadays, terms such as computational pathology, deep learning, convolutional neural network (CNN), and AI are as common as "differential diagnosis" and "immunohistochemistry" in the pathologist's vocabulary. When and how to use these new tools in day-to-day practice, however, remains unclear.

The recent article by Campanella et al.[3] shows that computers, CNNs, and AI can indeed be implemented in pathology laboratories. Moreover, the authors give a more stringent and more practical definition of clinical-grade performance for AI tools in routine use. They collected three datasets of slides: (1) a prostate core biopsy dataset; (2) a skin dataset; and (3) a breast metastasis to lymph node dataset, in order to train deep learning models that accurately predict, at the slide level, benign versus malignant. The results are based on a total of almost 45,000 slides, one of the largest datasets evaluated to date. Campanella et al. propose a framework for training classification models at very large scale without the need for pixel-level annotations. This is a step forward in the development of AI tools, as it overcomes a major obstacle to obtaining consistent results: the labor-intensive annotation process.
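The core idea of training from slide-level labels alone can be sketched as follows. This is an illustrative toy example, not the authors' code: under the standard multiple-instance learning assumption, a slide is malignant if at least one of its tiles is malignant, so a slide's score is the maximum of its per-tile scores. The tile scores here are placeholders; in practice they would come from a CNN.

```python
def slide_prediction(tile_scores):
    """Aggregate per-tile malignancy scores into one slide-level score.

    Max-pooling over tiles encodes the multiple-instance learning
    assumption: one malignant tile makes the whole slide malignant.
    """
    return max(tile_scores)


def is_flagged(tile_scores, threshold=0.5):
    """Slide-level benign vs. malignant call, no pixel annotations needed."""
    return slide_prediction(tile_scores) >= threshold


# Example: a mostly benign slide with a single suspicious tile is flagged.
benign_slide = [0.02, 0.10, 0.05]     # all tiles look benign
malignant_slide = [0.03, 0.91, 0.07]  # one tile looks malignant
```

Because only the slide-level label is needed for training, the pathologist's sign-out diagnosis itself can serve as supervision, which is what removes the annotation bottleneck.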

The results are very encouraging in terms of sensitivity and specificity, with a sensitivity of 100% (i.e., no false-negative case) for the prostate dataset. This result leads to the formalization of the concept of clinical-grade decision support systems, proposing, in contrast with the existing literature, a new measure of clinical applicability (which is one of the several reasons to implement digital pathology in routine practice).[4],[5] Current approaches gauge performance by quantifying sensitivity, specificity, and positive predictive value, as well as measures such as the area under the receiver operating characteristic curve, the area under the precision-recall curve, and calibration.[6] However, none of these measures ultimately reflects what matters most to patients, namely whether use of the model results in a beneficial change in patient care.[7]
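For a screening or triage tool, the operating threshold is chosen so that sensitivity reaches 100%, i.e., no malignant slide is dismissed, and specificity is whatever remains at that threshold. A minimal sketch with made-up labels and scores (1 = malignant, 0 = benign):

```python
def sensitivity_specificity(labels, scores, threshold):
    """Compute sensitivity and specificity at a given score threshold."""
    tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= threshold)
    fn = sum(1 for y, s in zip(labels, scores) if y == 1 and s < threshold)
    tn = sum(1 for y, s in zip(labels, scores) if y == 0 and s < threshold)
    fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)


def threshold_for_full_sensitivity(labels, scores):
    """Largest threshold that still catches every positive slide."""
    return min(s for y, s in zip(labels, scores) if y == 1)
```

At the threshold returned by `threshold_for_full_sensitivity`, every malignant slide is flagged; the fraction of benign slides that can then be safely set aside is the clinically relevant quantity the authors emphasize, rather than the area under a curve alone.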

The authors argue that, in the future, in a fully digital pathology department, all cases and slides will be presented to pathologists already sorted, through an interface that flags positive slides and sets aside benign ones.

Although this workflow could be considered "futuristic," it has in many ways already been realized at the Department of Pathology, Cannizzaro Hospital, Catania, Italy. In this fully digital workflow, histological slides are routinely scanned and presented to pathologists in a virtual slide tray within the laboratory information system (LIS).[8] The integration of this workflow with a commercial AI tool (INIFY™ Prostate, ContextVision, Sweden)[9] for the detection of prostate cancer has been validated and was recently presented at the European Congress of Pathology in Nice.[10] The integration with the LIS was validated by testing >5000 slides, resulting in a robust bidirectional HL7 connection between the LIS and the AI tool.

This workflow recapitulates the one described by Campanella et al.: once slides belonging to the category of prostatic biopsies have been scanned, the AI tool is run and the pathologist is presented with the model's recommendations directly in the virtual tray. If the AI tool detects cancer, the corresponding slide in the virtual tray displayed in the LIS is flipped. The integration of the decision support tool was relatively simple, and it now runs automatically with no additional workload for technicians or pathologists. An updated version of the tool has since been implemented, showing higher sensitivity with very few false-negative cases (limited to small foci of carcinoma). Moreover, in the clinical setting, pathologists can review the "suspicious" areas detected by the ContextVision algorithm and highlighted in its proprietary viewer, using the AI tool as a template to search for and verify suspicious areas they might otherwise have missed. In this setting, the AI tool acts as a computer-aided diagnosis tool for pathologists.
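The triage step described above can be pictured schematically. This is a toy sketch with hypothetical field names, not the actual LIS or HL7 interface: slides the AI scores above a threshold are marked as flagged (the "flipped" slide in the virtual tray) and surfaced first for review.

```python
def sort_virtual_tray(slides, threshold=0.5):
    """Mark and order slides for review.

    slides: list of dicts with 'slide_id' and AI 'score' keys
    (hypothetical field names for illustration only).
    """
    for slide in slides:
        # The flag corresponds to the "flipped" slide shown in the LIS tray.
        slide["flagged"] = slide["score"] >= threshold
    # Most suspicious first, so flagged slides are reviewed before the rest.
    return sorted(slides, key=lambda s: s["score"], reverse=True)
```

The pathologist still sees every slide; the AI only changes the order and the visual flag, which is what keeps the tool in a supporting rather than a deciding role.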

Independently of the method used to develop the algorithm, the article by Campanella et al. raises several points worth noting:

  1. Although digital pathology has been said to represent the third revolution in pathology,[1] fully digital pathology laboratories remain very few worldwide.[11],[12] This is probably due to several reasons, including cost, information technology requirements, and, last but not least, the skepticism toward digitalization that still exists in the pathology community.[13] As a consequence, we face a proliferation of applications and tools not yet adopted or integrated into the clinical workflow
  2. Although the possibility of setting aside "negative" slides could be of great utility, one should ask whether, in real day-to-day practice, pathologists are willing to dismiss all the negative cases/slides so defined by the AI tool. It should also be kept in mind that the AI tool can only analyze what has been recognized by the slide scanner's tissue finder, and that failure of this detection could represent a rare source of error impacting the diagnosis of clinically significant lesions [14]
  3. The use of AI tools in routine practice will make it possible not only to screen for cancer but also to forecast molecular abnormalities and prognosis,[15] underlining the need to move to a fully digital approach [16]
  4. AI tools will, in my opinion, change even the clinical scenario of disease. Future clinical studies should verify whether cases defined as false negative or false positive by the AI tool may per se represent new clinical categories that deserve a particular clinical approach.


“Once in a while, someone teaches you something that forces you to re-evaluate the integrity of your intellectual foundations. If proven correct, this moment of cognitive disquietude can eventually transform our understanding of the world for the better, but the initial instinct is typically more akin to fear and trembling.”[17]

It is clear that pathologists and nonpathologists alike will increasingly be faced with a "digital approach" to disease: "alea iacta est" (the die is cast; we are at a point of no return).

We just need to be part of it.



 
   References

1. Salto-Tellez M, Maxwell P, Hamilton P. Artificial intelligence-the third revolution in pathology. Histopathology 2019;74:372-6.
2. Coudray N, Ocampo PS, Sakellaropoulos T, Narula N, Snuderl M, Fenyö D, et al. Classification and mutation prediction from non-small cell lung cancer histopathology images using deep learning. Nat Med 2018;24:1559-67.
3. Campanella G, Hanna MG, Geneslaw L, Miraflor A, Werneck Krauss Silva V, Busam KJ, et al. Clinical-grade computational pathology using weakly supervised deep learning on whole slide images. Nat Med 2019;25:1301-9.
4. Sanghvi AB, Allen EZ, Callenberg KM, Pantanowitz L. Performance of an artificial intelligence algorithm for reporting urine cytopathology. Cancer Cytopathol 2019;127:658-66.
5. Saito T, Rehmsmeier M. The precision-recall plot is more informative than the ROC plot when evaluating binary classifiers on imbalanced datasets. PLoS One 2015;10:e0118432.
6. Shah NH, Milstein A, Bagley SC. Making machine learning models clinically useful. JAMA 2019;322:1351-2.
7. Kelly CJ, Karthikesalingam A, Suleyman M, Corrado G, King D. Key challenges for delivering clinical impact with artificial intelligence. BMC Med 2019;17:195.
8. Fraggetta F, Garozzo S, Zannoni GF, Pantanowitz L, Rossi ED. Routine digital pathology workflow: The Catania experience. J Pathol Inform 2017;8:51.
9. Burlutskiy N, Pinchaud N, Gu F, Hägg D, Andersson M, Björk L, et al. Segmenting potentially cancerous areas in prostate biopsies using semi-automatically annotated data. Proc Mach Learn Res 2019;102:92-108.
10. Fraggetta F, Lionti S, Giuffrida G, Emmanuele C, Pepe P. Implementation of the ContextVision INIFY™ tool for the automatic detection of prostatic cancer in a fully digital routine workflow. Virchows Arch 2019;475:S60. Abstract CP-03-007.
11. Retamero JA, Aneiros Fernandez J, Del Moral RG. Complete digital pathology for routine histopathology diagnosis in a multicenter hospital network. Arch Pathol Lab Med, in press. doi:10.5858/arpa.2018-0541-OA.
12. Cheng CL, Azhar R, Sng SH, Chua YQ, Hwang JS, Chin JP, et al. Enabling digital pathology in the diagnostic setting: Navigating through the implementation journey in an academic medical centre. J Clin Pathol 2016;69:784-92.
13. Griffin J, Treanor D. Digital pathology in clinical use: Where are we now and what is holding us back? Histopathology 2017;70:134-45.
14. Fraggetta F, Yagi Y, Garcia-Rojo M, Evans AJ, Tuthill JM, Baidoshvili A, et al. The importance of eSlide macro images for primary diagnosis with whole slide imaging. J Pathol Inform 2018;9:46.
15. Mobadersany P, Yousefi S, Amgad M, Gutman DA, Barnholtz-Sloan JS, Velázquez Vega JE, et al. Predicting cancer outcomes from histology and genomics using convolutional networks. Proc Natl Acad Sci U S A 2018;115:E2970-9.
16. Madabhushi A, Lee G. Image analysis and machine learning in digital pathology: Challenges and opportunities. Med Image Anal 2016;33:170-5.
17. Cho CS. Radiomics: A well intentioned leap of faith. Ann Surg Oncol 2019;26:4178-9.