Journal of Pathology Informatics




 
ABSTRACTS
J Pathol Inform 2020,  11:30

What did we expect from Porto's ECDP2020


Date of Web Publication: 18-Sep-2020


Source of Support: None, Conflict of Interest: None


DOI: 10.4103/2153-3539.295359


How to cite this article:
Eloy C, Campelos S. What did we expect from Porto's ECDP2020. J Pathol Inform 2020;11:30

How to cite this URL:
Eloy C, Campelos S. What did we expect from Porto's ECDP2020. J Pathol Inform [serial online] 2020 [cited 2020 Oct 26];11:30. Available from: https://www.jpathinformatics.org/text.asp?2020/11/1/30/295359



Catarina Eloy1,2, Sofia Campelos1

1Department of Pathology, Ipatimup Diagnostics, Ipatimup - Institute of Molecular Pathology and Immunology of the University of Porto, Porto, Portugal, 2Medical Faculty of Porto University, University of Porto, Portugal. E-mail: [email protected]

The European Society of Digital and Integrative Pathology (ESDIP) planned the 16th edition of the European Congress on Digital Pathology (ECDP) to be held in Porto, Portugal. Due to the worldwide coronavirus pandemic, this edition had to be cancelled. The core theme of the congress, which served as a frame for presenting abstracts, was “The Augmented Pathologist: empowering for a better patient care”, epitomizing the idea that the digital transformation of pathology is expected to contribute to strengthening the pathologist's role in providing newer and better information regarding the clinical management of patients.

The abstracts submitted for presentation at ECDP2020, reported herein, were peer reviewed by the members of the Scientific Committee of ECDP2020 (António Polónia, Arvydas Laurinavicius, Gloria Bueno, Johan Lundin, Jose Aneiros, Norman Zerbe, Sofia Campelos, Vincenzo Della Mea).


Short Abstracts



Artificial Intelligence Based Rapid on Site Evaluation for Endobronchial Ultrasound-Transbronchial Needle Aspiration


Harshal Nishar1, Avaneesh Meena1, Dev Kumar Das1, Uttara Joshi1

1Image Processing, AIRA Matrix, Mumbai, Maharashtra, India. E-mail: [email protected]

Introduction: Endobronchial ultrasound (EBUS)-guided transbronchial needle aspiration (TBNA) of mediastinal and hilar lymph nodes is an important procedure for surgical mediastinal staging of lung masses. Rapid on-site evaluation (ROSE) of aspirates improves adequacy rate, diagnostic yield, and accuracy, but depends on the on-site availability of an expert pathologist. In routine workflow, adequacy may be reported only after 2-4 days, together with the diagnostic report. A repeat procedure needed for “inadequate” aspirates then increases i) turnaround time for diagnosis, ii) patient morbidity, and iii) hospital stay and expenses. To overcome this, we propose an automated deep learning-based evaluation for ROSE. Materials and Methods: 35 Papanicolaou (PAP)-stained smears (20 training and 15 testing) received from three oncology hospitals were digitized using a NanoZoomer XR (Hamamatsu). Semantic segmentation of lymphocytes, large epithelial cells, and pigmented macrophages was performed by training a customized variant of a fully convolutional network (FCN8s) on training images at 40x magnification. Classification of the specimen as “adequate” or “inadequate” was based on a random forest classifier using the mean lymphocyte density over 10 high-power fields and the presence of large epithelial cells and pigmented macrophages as adequacy criteria. Results: The proposed system achieved an agreement of 0.92 (Cohen's Kappa) with pathologist adequacy reports. An average accuracy of 83% was achieved for detection of the different parameters. Conclusions: The proposed method provides objective, accurate, and precise adequacy assessment of EBUS-TBNA with faster turnaround, also expediting the diagnostic workflow by triaging specimens. The semantic segmentation output can be used by the pathologist as a second read, improving the performance of ROSE.
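
As an illustration of the adequacy-classification step described above, the following minimal Python sketch (not the authors' code; all feature values, labels, and hyperparameters are placeholder assumptions) shows how slide-level features such as mean lymphocyte density could feed a random forest classifier:

```python
# Hedged sketch (not the authors' code): slide-level adequacy classification from
# hypothetical per-slide features derived from a segmentation step, using scikit-learn.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical feature vectors per slide:
# [mean lymphocyte density over 10 HPFs, fraction of HPFs with large epithelial cells,
#  fraction of HPFs with pigmented macrophages]
X_train = rng.random((20, 3))
y_train = rng.integers(0, 2, 20)          # 1 = "adequate", 0 = "inadequate" (dummy labels)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

X_test = rng.random((15, 3))
adequacy = clf.predict(X_test)            # slide-level adequacy calls
print(adequacy)
```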


An Artificial Intelligence-Based Prostate Diagnostic Tool: Performance and Detection of Missed Cancers in France's Largest Pathology Network


Daphna Laifenfeld1, Judith Sandbank1,2, Chaim Linhart1, Frédéric Neumann3, Lilach Bien1, Cobi Reouven1, Roei Harduf1, Mahul Amin4, Olivier Levrel3, Stéphane Rossat3, Delphine Raoux3

1Ibex Medical Analytics Ltd., Tel Aviv, Israel, 2Institute of Pathology, Maccabi Healthcare Services, Rehovot, Israel, 3MediPath, France, 4UTHSC, Memphis, TN, USA. E-mail: [email protected]

Introduction: Ibex Medical Analytics developed an AI-based prostate algorithm that detects and grades adenocarcinoma, in addition to detecting inflammation, high-grade PIN, and perineural invasion. Ibex, together with Medipath, the largest network of pathology institutes in France, assessed the performance of the algorithm on slides diagnosed as benign from 5 different labs within the Medipath network. Materials and Methods: Calibration to the Medipath preanalytic processes: 150 cases (100 benign, 50 cancer), amounting to 1,140 H&E slides. Test: 100 consecutive cases (801 slides) reported as benign were scanned with a Philips UFS scanner and analyzed by the prostate algorithm. Slides that passed the threshold were sent for review by three pathologists (M.A., O.L., and M.S.). Results: The algorithm demonstrated 98.8%/96.9% specificity/sensitivity. 25 of the study set slides passed the threshold and were sent for review. Following the independent review, cancer was diagnosed in 10 unique cases by at least two reviewers. The missed cancers included both low- and high-grade tumors. Additional statistics on missed cancers and algorithm performance will be shown. Conclusions: The Ibex AI-based algorithm demonstrated high sensitivity and specificity in the detection of adenocarcinoma. The algorithm detected cancers that had been missed in the original diagnosis, some of which were high grade. AI algorithms, such as the one developed by Ibex, should be used at various stages of the diagnostic process to ensure patient safety and to enable higher accuracy with a fraction of the pathologist effort and time, resulting in a more reliable, objective diagnosis and faster turnaround time.


Towards a Framework for Continuous Real-Time Image Quality Assurance


David Ameisen1, Julie Auger-Kantor1, Emmanuel Ameisen1

1imginIT SAS, France. E-mail: [email protected]

Introduction: Now that image quality assurance is deemed essential to the practice of digital pathology – especially since the US Food and Drug Administration's advocacy on this subject and European Union regulation 2017/746 – the digital pathology community needs to define specifications for reliable and efficient continuous image quality assurance tools. Materials and Methods: We benchmarked 11 image processing and machine learning quality assurance methods published between 2004 and 2020. For each method, we compared focus quantification accuracy, reliability, and speed; other quality parameters assessed; supported image formats; minimum requirements; processing and memory footprint; and ease of implementation in a digital pathology workflow. Results: We found the best image processing algorithms to be faster, more specific, and more reliable than the best machine learning algorithms. However, machine learning algorithms can estimate image quality without requiring a strict definition, and may even highlight and provide new image quality criteria.

By weighing the strengths and flaws of the available methods, we developed a framework for continuous real-time image quality assurance in digital pathology. Conclusions: This framework is applicable to any software and hardware architecture, image acquisition device, and laboratory workflow. Quality assurance solutions following this framework would enable faster acquisition, management, and visualization systems, better laboratory workflows, and more relevant image analysis and diagnostic tools, for better patient care. Such benefits should encourage the digital pathology community to continue this work and draft specifications in order to standardize image quality assurance.
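
For readers unfamiliar with the image-processing focus measures benchmarked here, the following minimal Python sketch illustrates one widely used example, the variance of the Laplacian; it is not one of the eleven evaluated methods, and the tile size and threshold are arbitrary assumptions:

```python
# Minimal sketch of a classical image-processing focus measure (variance of the
# Laplacian); tile size and threshold are illustrative assumptions, not study values.
import numpy as np
from scipy.ndimage import laplace

def focus_score(gray_tile: np.ndarray) -> float:
    """Higher variance of the Laplacian = sharper (more in-focus) tile."""
    return float(laplace(gray_tile.astype(float)).var())

def flag_blurry_tiles(gray_wsi_region, tile=256, threshold=5.0):
    """Yield (row, col, score) for tiles whose focus score falls below a threshold."""
    h, w = gray_wsi_region.shape
    for r in range(0, h - tile + 1, tile):
        for c in range(0, w - tile + 1, tile):
            s = focus_score(gray_wsi_region[r:r + tile, c:c + tile])
            if s < threshold:
                yield r, c, s
```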


Automated Prediction of Malignancy in Specimens of Melanocytic Lesions Using Weakly-Supervised Deep Neural Networks


Saul A. Kohn1, Ramachandra Vikas Chamarthi1, Kameswari Devi Ayyagari1, Siva Sankarapandian1, Mike J. Bonham1, Clay J. Cockerell2, Wonwoo Shon3, Julianna D. Ianni1, Rajath Elias Soans1

1Proscia Inc., Philadelphia, PA, USA, 2Cockerell Dermatopathology, Dallas, TX, USA, 3Department of Pathology and Laboratory Medicine, Cedars-Sinai Medical Center, Los Angeles, CA, USA. E-mail: [email protected]

Introduction: Over 70,000 people are diagnosed with melanoma each year, and the survival rate of patients with metastatic malignant melanoma is less than 20%. Unfortunately, rates of diagnostic discordance are high when discriminating between melanoma and benign melanocytic lesions. We present a deep convolutional neural network, based on Ianni et al. (2020), that identifies melanoma in unannotated whole slide images (WSIs) of hematoxylin and eosin-stained histopathology specimens. Materials and Methods: Our dataset comprised 1688 specimens (2330 WSIs), representing the diversity expected to be encountered in clinical practice. This dataset included: conventional (322 specimens), blue (27) and halo (3) nevi, solar lentigo (62), various subtypes of melanoma (434; in situ: 348, invasive: 86), and mimickers of melanoma such as dysplastic (823) and Spitz (17) nevi. To ensure the robustness of the model against typical variations, no WSIs were excluded based on image quality or artifacts. We trained a neural network under a multiple-instance learning paradigm to classify specimens as “likely benign” or “likely malignant”. Results: Our model achieved 91% accuracy (F1=0.89; ROC-AUC=0.96) on 254 validation specimens when distinguishing between melanoma and benign nevi. This performance is almost identical to that of the 23-gene expression test, while predicting malignancy within minutes rather than days. Distinguishing between melanoma and dysplastic nevi, we achieved 74% accuracy. Conclusions: By training atop the network developed by Ianni et al., our model can operate on all pathologic entities typically seen in a dermatopathology lab; such a framework could boost lab efficiency by sorting cases prior to pathologist review.
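
As a hedged illustration of the multiple-instance idea (not the Ianni et al. architecture), the following sketch shows one common way to aggregate unannotated tile-level probabilities into a specimen-level call; the pooling rule, threshold, and data are assumptions:

```python
# Hedged sketch: top-k mean pooling over tile-level malignancy probabilities to obtain
# a specimen-level call, one common aggregation under a multiple-instance-learning view.
import numpy as np

def specimen_score(tile_probs: np.ndarray, k: int = 10) -> float:
    """Average the k most suspicious tiles of one specimen."""
    k = min(k, tile_probs.size)
    return float(np.sort(tile_probs)[-k:].mean())

tile_probs = np.random.rand(500)                 # dummy tile probabilities for one WSI
label = "likely malignant" if specimen_score(tile_probs) > 0.5 else "likely benign"
print(label)
```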


Identification of HER2 Positive from HER2 Negative Breast Cancers Based Solely on Their Morphology


Azadeh Alavi1,2, David Ascher1,2

1Computational Biology and Clinical Informatics laboratory, Baker Heart and Diabetes Institute, Melbourne, Australia, 2Bio21 Molecular Science & Biotechnology Institute, University of Melbourne, Melbourne, Australia. E-mail: [email protected]

Introduction: Identification of HER2-positive breast cancer has important implications for patient treatment. However, the costs and side effects of targeted treatments make rapid and accurate identification desirable. As part of the HEROHE challenge, we propose a novel approach to classify HER2-positive versus HER2-negative cancers using H&E slides, without using any related external dataset. Materials and Methods: The H&E images were divided into overlapping tiles, and descriptive features were extracted using transfer learning. A CNN architecture was trained on the CIFAR-10 dataset, with an intermediate layer used for feature extraction to describe each tile. Then, for each image, all the resulting tile descriptors were clustered into 20-70 unique clusters using the k-means algorithm. The tile descriptor closest to each centre was selected, and the selected descriptors were concatenated to provide the final feature vector of each image. This was used as evidence within an XGBoost machine learning algorithm to distinguish between HER2 positives and negatives. Results: The primary results on the blind test set were: F1 score = 0.31068, precision = 0.37209, recall = 0.26667. To enhance performance, we sought more descriptive features by using a greater number of clusters and randomly choosing positive and negative images (centers), computing the correlation of each tile descriptor to the centers' tile descriptors. This improved performance markedly: F1 score = 0.62353, precision = 0.48182, recall = 0.88333. Conclusions: This work highlights the power of transfer learning within the framework of characterising clinical images, and presents valuable tools to help guide real-time screening of breast cancer HER2 status, with scope for significant improvement with further training and additional data.
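
A minimal sketch of the described tile-clustering and classification idea is shown below; the CNN feature extractor is replaced with random vectors, and the cluster count, feature size, and classifier settings are assumptions rather than the study's configuration:

```python
# Hedged sketch: cluster tile descriptors per slide with k-means, keep the descriptor
# closest to each centre, concatenate them into one slide-level feature vector, and
# classify with XGBoost. The CNN feature extractor is replaced by random vectors.
import numpy as np
from sklearn.cluster import KMeans
from xgboost import XGBClassifier

def slide_feature(tile_descriptors: np.ndarray, n_clusters: int = 20) -> np.ndarray:
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(tile_descriptors)
    # For each centre, take the closest real tile descriptor and concatenate them.
    closest = [
        tile_descriptors[np.linalg.norm(tile_descriptors - c, axis=1).argmin()]
        for c in km.cluster_centers_
    ]
    return np.concatenate(closest)

rng = np.random.default_rng(0)
X = np.stack([slide_feature(rng.random((300, 64))) for _ in range(40)])  # 40 dummy slides
y = rng.integers(0, 2, 40)                                               # HER2 +/- labels

clf = XGBClassifier(n_estimators=200)
clf.fit(X, y)
print(clf.predict(X[:5]))
```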


Automatic Grading of Esophageal Dysplasia Using Mass Spectrometry Imaging and Optical Microscopic Imaging


Manon Beuque1, Benjamin Balluff2, Marta Martin-Lorenzo2, Henry C. Woodruff1,3, Marit Lucas4, Sybren L Meijer5, Ron M. A. Heeren2, Philippe Lambin1,3

1Department of Precision Medicine, The D-Lab, GROW Research Institute, University of Maastricht, Maastricht, The Netherlands, 2Maastricht MultiModal Molecular Imaging Institute (M4I), University of Maastricht, Maastricht, The Netherlands, 3Department of Radiology and Nuclear Medicine, Maastricht University Medical Centre, Maastricht, The Netherlands, 4Department of Biomedical Engineering and Physics, Amsterdam UMC, University of Amsterdam, Amsterdam, The Netherlands, 5Department of Pathology, Amsterdam UMC, University of Amsterdam, Amsterdam, The Netherlands. E-mail: [email protected]

Introduction: Barrett's esophagus (BE) is a dysplastic condition that can lead to esophageal adenocarcinoma. Grading dysplasia is therefore of crucial prognostic value and is currently based on the visual evaluation of optical microscopic images from bioptic material. This study aims to investigate the potential of machine learning (ML) using data from mass spectrometry imaging (MSI) and optical microscopy (H&E) for an objective diagnosis of BE. Materials and Methods: The dataset consists of 176,027 tiles extracted from both MSI (50x50 μm) and H&E images (96x96 pixels at 0.5 μm/pixel) of tissue material from 60 patients, equally divided into non-dysplastic, low-grade non-progressive, low-grade progressive, and high-grade BE. ML models were trained on the two modalities individually, both at tile and at patient level, to distinguish tissue type, dysplastic grade, and low-grade progressors from low-grade non-progressors. Their performances were compared using the area under the curve (AUC) on the test set. Results: At tile level, ML models could distinguish glandular tissue from non-glandular tissue with AUCs of 0.90 (MSI) and 0.96 (H&E). Automatic grading of glandular tissue reached AUCs of 0.86 (MSI) and 0.68 (H&E). The predictions per tile did not improve upon combining MSI and H&E features. At patient level, MSI data from glandular tissue were best for grading (AUC=0.92) and predicting progression to high-grade dysplasia (AUC=0.69). Conclusions: The classifier based on H&E data gives the best results for distinguishing tissue types, whereas MSI shows superior classification of dysplastic grade and progressor status. This demonstrates the complementarity of both types of data for different clinical tasks.


Detecting Helicobacter pylori Using Deep Learning in H and E-Stained Histological Images


Rui Dias Quintino1, Lígia Prado e Castro2

1DevScope – Under Coordination of Rui Quintino, 2LAP/UNILABS – Under coordination of Lígia Prado e Castro. E-mail: [email protected]

Introduction: Identifying Helicobacter pylori (H. pylori) on single auto-focus Haematoxylin and Eosin (H&E) + Giemsa-stained whole-slide images (WSI) using digital pathology software is a challenging, costly, and labor-intensive task. Pathology experts often need to analyze whole H&E slides in detail, which is expensive in terms of time and resources, and the diagnostic assessment may differ among experts. To alleviate these issues, we present the development and evaluation of a computer-aided diagnosis (CAD) pipeline supported by a deep learning (DL) algorithm. Materials and Methods: We sampled 60 WSIs from 48 different cases at 40x magnification and, using expert-annotated regions positive for H. pylori, developed a U-Net-based model for segmenting H. pylori. Among the 60 H&E WSIs, 5 were selected for model training and validation, and 55 for the CAD pipeline study. Results: We evaluated our pipeline with the help of five pathology experts on 55 different cases. For each case, we created three evaluation scenarios - H&E, Giemsa, and H&E with the help of our pipeline - and finally compared against immunohistochemistry (IHC) staining as the ground-truth evaluation criterion. Conclusions: Our work confirmed that H. pylori diagnosis suffers from suboptimal interobserver and intraobserver variability. We show that it is possible to use DL algorithms to identify H. pylori, significantly reducing the time required for analyzing each slide and the diagnostic variance among pathologists. Hence, an opportunity for CAD emerges, showing that it is possible to improve the diagnostic process, easing the pathologist's task while ensuring good qualitative results.


Identifying HER2 Overexpression Using Deep Learning and Nucleus-Filtering Algorithm


Jiwon Jung1, Jin Roh2, Chan-Sik Park3

1Asan Medical Center, Asan Institute for Life Sciences, University of Ulsan College of Medicine, Seoul, South Korea, 2Department of Pathology, Ajou University School of Medicine, Suwon, South Korea, 3Department of Pathology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, South Korea. E-mail: [email protected]

Introduction: In breast cancer (BC), HER2 status has been associated with aggressive clinical behavior, and patients with HER2-positive BC are expected to benefit from targeted therapy. Materials and Methods: In this study, we propose an image-analysis algorithm that identifies HER2 status by evaluating only the morphological features present on the hematoxylin and eosin (H&E) slide. We hypothesized that the histological features of HER2 overexpression are mainly present in the nuclei and nucleoli of the tumor area. Accordingly, we designed a deep-learning architecture to learn HER2-representing features from nucleus-filtered images. The proposed algorithm runs in three stages: tumor segmentation, stain separation, and HER2 classification of the nucleus-filtered images. For tumor segmentation, we trained a multiclass residual network using partial tissue annotations drawn by an expert pathologist. As a training set for the patch-level HER2 classifier, nucleus-filtered images were obtained by isolating the hematoxylin channel using a popular stain-separation technique. For slide-level inference, we identified specimens as HER2-positive if the prediction score exceeded a threshold, and as HER2-negative otherwise. Results: In evaluation, our model successfully sorted tumor areas with a patch-level accuracy of 0.99 and achieved a patch-level HER2-classification accuracy of 0.70 and a slide-level F1 score of 0.79. Test performances were 0.46, 0.53, and 0.49 in precision, recall, and F1 score, respectively. The implementation had a median running time of less than two minutes per specimen. Conclusions: In a rapid, fully automated way, our identification algorithm showed promising capability to evaluate morphological features relevant to HER2 overexpression.
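
As an illustration of the stain-separation stage, the sketch below isolates the hematoxylin channel with the classic Ruifrok and Johnston color deconvolution as implemented in scikit-image; the abstract does not name its exact technique, so this is only one common choice:

```python
# Hedged sketch of the nucleus-filtering step: isolate the hematoxylin channel with the
# classic Ruifrok & Johnston color deconvolution (scikit-image's rgb2hed).
import numpy as np
from skimage.color import rgb2hed

def hematoxylin_image(rgb_patch: np.ndarray) -> np.ndarray:
    """Return a [0, 1]-scaled hematoxylin-only image from an RGB H&E patch."""
    hed = rgb2hed(rgb_patch)                      # channels: H, E, DAB
    h = hed[..., 0]
    return (h - h.min()) / (h.max() - h.min() + 1e-8)

patch = np.random.rand(256, 256, 3)               # dummy RGB patch in [0, 1]
nucleus_filtered = hematoxylin_image(patch)       # input for a patch-level HER2 classifier
```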


Artificial Intelligence-Facilitated Quantification of Immunoprofile in Breast Cancer Using 4-Plexed Chromogenic Immunohistochemistry


Patricia Switten Nielsen1, Jeanette Baehr Georgsen1, Torben Steiniche1, Trine Tramm1

1Department of Pathology, Aarhus University Hospital, Aarhus, Denmark. E-mail: [email protected]

Introduction: Immunoprofiles of the tumor microenvironment are under intense investigation, e.g., to optimize cancer therapy. To preserve valuable contextual tissue information and the advantages of brightfield microscopy, 4- and 5-colored immunoprofiles with conventional immunohistochemical chromogens have been developed. Yet, stain complexity calls for AI-powered analysis. Materials and Methods: Paraffin-embedded biopsies from 234 breast cancer patients with matching tissue microarrays (TMA) were stained for CD66b (neutrophil granulocytes, brown), CD20 (B lymphocytes, blue), CD68 (macrophages, purple), and PCK (tumor cells, yellow) with automated sequential immunohistochemistry without counterstaining. The convolutional neural network U-Net with Adam optimization was used for training on 50 TMA cores. On average, 137 (range 82-264) objects were manually defined for each class (cells, stroma, white background). On 60 different TMA cores, the areas identified manually and by AI were compared. In all full-cut patient samples, the AI application calculated percentage levels of immune cells within the tumor stroma and in close connection to tumor cells. Results: The mean difference between AI and manual detection, including lower and upper 95% limits of agreement, was 7 (-125; 139) μm2 for CD66b, 14 (-258; 285) μm2 for CD20, -77 (-279; 125) μm2 for CD68, -165 (-1400; 1066) μm2 for PCK, 347 (-2400; 3078) μm2 for stroma, and -215 (-1600; 1184) μm2 for white background. On manual inspection of full-cut slides, AI performance was highly convincing. Conclusions: AI provided very accurate results for this immunoprofile. Additional training, e.g., with a focus on artifacts, could nonetheless optimize results. Further analysis will reveal whether the immunoprofile holds therapeutic potential.
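
The mean differences with 95% limits of agreement reported above are Bland-Altman statistics; a minimal sketch of their computation on placeholder paired measurements is shown below:

```python
# Hedged sketch: mean difference and 95% limits of agreement (Bland-Altman style)
# from paired AI and manual area measurements. Values below are dummy data.
import numpy as np

def limits_of_agreement(ai: np.ndarray, manual: np.ndarray):
    diff = ai - manual
    mean_diff = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return mean_diff, mean_diff - half_width, mean_diff + half_width

ai_area = np.random.normal(1000, 120, 60)            # µm² per core, dummy
manual_area = ai_area + np.random.normal(7, 60, 60)
print(limits_of_agreement(ai_area, manual_area))
```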


A Fully Automated Pipeline for Human Epidermal Growth Factor Receptor 2 Expression Prediction in Invasive Breast Cancer


Mustafa Umit Oner1,2, Hwee Kuan Lee1,2,3,4, Wing Kin Sung1,5

1Department of Computer Science, School of Computing, National University of Singapore, Singapore, 2Bioinformatics Institute, Singapore, 3Image and Pervasive Access Lab, Singapore, 4Singapore Eye Research Institute, Singapore, 5Genome Institute of Singapore, Singapore. E-mail: [email protected]

Introduction: Overexpression of the human epidermal growth factor receptor 2 (HER2) protein is an important predictive and prognostic marker and is present in approximately 15% of early invasive breast cancer cases. In clinical practice, HER2 expression is evaluated by examining cell membrane staining in invasive tumor regions on immunohistochemistry (IHC) slides; this study checks the feasibility of predicting HER2 expression from hematoxylin and eosin (H&E)-stained slides, which are routine in the clinical diagnostic workflow and much cheaper than IHC. Materials and Methods: A fully automated pipeline, which accepts H&E-stained whole slide images (WSIs) as input and predicts HER2 expression with a confidence score, has been developed. This pipeline consists of three deep learning modules: a cancerous region segmentation module, a feature extractor module, and a HER2 expression prediction module. The first module segments out the cancerous regions in the WSI. The second module extracts features of all patches in the cancerous regions. Lastly, the third module, an end-to-end trainable, novel multiple-instance-learning-based deep learning model, obtains feature distributions by employing a kernel density estimation layer on the enriched features and predicts HER2 expression with a confidence score by processing these feature distributions. Results: Our pipeline has been tested in a 10-fold cross-validation setup on the HEROHE challenge dataset with 360 H&E-stained WSIs. The corresponding F1 score is 0.731±0.059. Conclusions: Our results show that it is promising to use our pipeline on H&E-stained slides to filter out some cases before IHC.


Unsupervised Joint Clustering and Representation Learning for Survival Analysis in Colorectal Cancer


Christian Abbet1, Behzad Bozorgtabar1, Jean Philippe Thiran1, Inti Zlobec2

1Departement of Electrical Engineering, LTS5 – Signal Processing Laboratory 5, EPFL, Lausanne, Switzerland, 2Institute of Pathology, Translational Research Unit, Bern, Switzerland. E-mail: [email protected]

Introduction: Colorectal cancer (CRC) is one of the most common causes of cancer death worldwide. There is a need to more accurately predict patients' clinical outcomes. We aim to use machine learning to learn the distributions of histomorphological patterns in CRC. By linking the pattern distributions to survival data, we hope to highlight relevant features that can help clinical decision making. Materials and Methods: Firstly, we propose a transfer learning solution trained using 100,000 publicly available labelled images[1] to predict and extract tumour regions on 665 in-house unlabelled whole slide images (WSIs) from a total of 377 patients with adenocarcinoma. Secondly, we propose an unsupervised clustering method that jointly learns the deep representation and cluster assignments of the histomorphological features. Clusters obtained by our approach can be used as descriptors of patients and linked to survival and hazard ratio (HR). Results: The use of external data allows us to properly isolate tumours within tissue slides. Moreover, we find 4 clusters that are statistically relevant to survival prediction: one cluster is linked to positive outcomes (HR = 0.62) and 3 to negative outcomes (HR ∈ [1.41, 1.71]). Thus, the distribution of tissue patch clusters in WSIs is an indicator of survival. Conclusions: In our work, we demonstrate that we can benefit from external datasets to locate tumours on unlabelled WSIs and thus avoid tedious annotation tasks. Moreover, we show that our model can learn, in an unsupervised fashion, features that are discriminative for survival analysis. This may give pathologists an additional tool during diagnosis.

Reference

1. Available from: https://zenodo.org/record/1214456. [Last accessed on 2020 Aug 26].
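
As a hedged illustration of how per-patient cluster proportions, as described in the abstract above, can be linked to survival and hazard ratios, the following sketch fits a Cox proportional hazards model with the lifelines library; variable names and data are placeholders, not the study's analysis:

```python
# Hedged sketch: linking per-patient tissue-cluster proportions to survival with a Cox
# proportional hazards model (lifelines). Column names and data are illustrative.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 120
df = pd.DataFrame({
    "cluster_0": rng.random(n),            # proportion of WSI patches in cluster 0
    "cluster_1": rng.random(n),
    "time": rng.exponential(36, n),        # follow-up in months (dummy)
    "event": rng.integers(0, 2, n),        # 1 = death observed
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
print(cph.hazard_ratios_)                  # HR per cluster-proportion covariate
```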



The BT-Hotspot Graph Dataset: Investigating the Relation of Tumor-Buds and T-Cells in Colorectal Cancer Tumor Budding Hotspots


Linda Studer1,2,3, John-Melle Bokhorst4,5, Francesco Ciompi4,5, Andreas Fischer1,3, Heather Dawson2

1iCoSys, University of Applied Sciences and Arts Western Switzerland, Sierre, Switzerland, 2Institute of Pathology, Faculty of Medicine, University of Bern, Switzerland, 3DIVA Research Group, Department of Informatics, University of Fribourg, Switzerland, 4Radboud University Medical Center, Nijmegen, Netherlands, 5Department of Pathology, Radboud University Medical Center, Nijmegen, Netherlands. E-mail: [email protected]

Introduction: Tumor budding at the invasive tumor front has been proposed as a biomarker for risk stratification in colorectal cancer. However, assessing tumor budding alone may not adequately characterize the tumor-host interface. The host immune response has also been extensively examined as a protective biomarker. Graph-based representations allow us to describe these interactions in an abstract way, because they capture the geometry and relationships of the tumor-buds and T-cells in the hotspot. Materials and Methods: We collected paraffin-embedded tissue blocks from 348 patients with known pT1 colorectal cancer. Tissue slides were cut from these blocks and double-stained with AE1-AE3 pan-cytokeratin and CD8. In every whole-slide image, a pathologist selected a hotspot (0.785 mm2) with the highest tumor-bud count according to the ITBCC standard. We used convolutional neural networks to automatically detect all tumor-buds and T-cells within these hotspots. Results: We have created a dataset containing 348 graphs. Each graph is an abstract representation of the hotspot, where the T-cells and tumor-buds are represented as nodes. Each node has three attributes: type (tumor-bud or lymphocyte) and the x and y coordinates on the slide. Conclusions: Based on this dataset, a number of different graph-based representations can be derived by inserting edges; for example, tumor-buds can be connected to all T-cells within a certain radius, with the edges labelled with the distance between the nodes. In future work, we will investigate the potential of different graph-based representations for end-point predictions, while further extending the dataset.
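
A minimal sketch of one such graph construction, with nodes carrying type and coordinates and radius-limited, distance-labelled edges, is shown below (dummy coordinates; the radius is an assumption):

```python
# Hedged sketch: nodes are tumour-buds and T-cells with (x, y) coordinates; edges connect
# each bud to every T-cell within a radius and carry the Euclidean distance.
import math
import networkx as nx

nodes = [
    ("b0", {"type": "tumor-bud", "x": 10.0, "y": 12.0}),
    ("t0", {"type": "lymphocyte", "x": 14.0, "y": 15.0}),
    ("t1", {"type": "lymphocyte", "x": 60.0, "y": 40.0}),
]

def hotspot_graph(nodes, radius=25.0):
    g = nx.Graph()
    g.add_nodes_from(nodes)
    attrs = dict(nodes)
    buds = [n for n, d in nodes if d["type"] == "tumor-bud"]
    cells = [n for n, d in nodes if d["type"] == "lymphocyte"]
    for b in buds:
        for t in cells:
            dist = math.hypot(attrs[b]["x"] - attrs[t]["x"], attrs[b]["y"] - attrs[t]["y"])
            if dist <= radius:
                g.add_edge(b, t, distance=dist)
    return g

print(hotspot_graph(nodes).edges(data=True))
```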


Intestinal Gland Classification Using Graph Neural Networks


Linda Studer1,2,3, Jannis Wallau2, Heather Dawson2, Inti Zlobec2, Andreas Fischer1,3

1iCoSys, University of Applied Sciences and Arts Western Switzerland, Sierre, Switzerland, 2Institute of Pathology, Faculty of Medicine, University of Bern, Switzerland, 3DIVA Research Group, Department of Informatics, University of Fribourg, Switzerland. E-mail: [email protected]

Introduction: Graphs have become popular in the field of digital pathology and have been used for a variety of tasks, such as segmentation and classification. Graph-based image representations are able to capture the geometry and topology of the tissue and offer a smaller, more abstract representation. In previous work, we created a cell-graph dataset[1] based on H&E images of normal and dysplastic intestinal glands. The established baseline uses graph edit distance coupled with k-NN and forward-search feature selection and achieves a classification accuracy of 83.3%. Recently, the notion of convolutional neural networks has been extended to graphs, and we investigated such graph neural networks to improve the classification. Materials and Methods: We used two well-known graph neural network architectures, GraphSAGE and the Graph Convolutional Network (GCN). GraphSAGE is a spatial-based message passing network, which takes the mean of the features to aggregate the information of the local neighborhood. GCN is a spectral-based message passing network; it uses a weighted average aggregation determined by the global node degree. We trained them with and without jumping knowledge, which adds skip-connections. Results: GCN achieved the best performance with 84.0% using the same four features as the baseline. However, using the full feature set further improved the performance to 94.3%, with GraphSAGE being the best-performing architecture. Conclusions: Going from classical pattern recognition methods to deep learning methods improved the classification accuracy by 11.0%. Furthermore, the graph neural networks perform better using the full set of available node features.

Reference

  1. Available from: https://github.com/LindaSt/pT1-Gland-Graph-Dataset. [Last accessed on 2020 Aug 27].
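
As a hedged illustration of the kind of graph-classification network compared in the abstract above, the sketch below defines a two-layer GCN with global mean pooling in PyTorch Geometric; feature sizes and the toy graph are assumptions, and it is not the trained model from the study:

```python
# Hedged sketch: a two-layer GCN graph classifier with global mean pooling
# (PyTorch Geometric). The toy cell-graph and feature sizes are illustrative.
import torch
from torch import nn
from torch_geometric.nn import GCNConv, global_mean_pool

class GlandGCN(nn.Module):
    def __init__(self, in_dim=4, hidden=32, n_classes=2):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x, edge_index, batch):
        x = torch.relu(self.conv1(x, edge_index))
        x = torch.relu(self.conv2(x, edge_index))
        return self.head(global_mean_pool(x, batch))   # one logit vector per graph

# Toy cell-graph: 3 nodes with 4 features each, 2 undirected edges.
x = torch.randn(3, 4)
edge_index = torch.tensor([[0, 1, 1, 2], [1, 0, 2, 1]])
batch = torch.zeros(3, dtype=torch.long)                # all nodes belong to graph 0

logits = GlandGCN()(x, edge_index, batch)
print(logits.shape)                                     # (1, 2): normal vs dysplastic
```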



The Route to Routine for Digital Pathology? Customized Open Source Workflow with Ki-67


Stefan Reinhard1, Annika Blank1, Bastian Dislich1, Matteo Montani1, Martin Wartenberg1, Inti Zlobec1, Tilman T. Rau1

1Institute of Pathology, University Bern, Bern, Switzerland. E-mail: [email protected]

Introduction: Several algorithms for digital pathology exist to replace the “eyeballing” of Ki-67 for breast cancer, mostly as costly software solutions. Open source tools like QuPath provide flexibility for research purposes, but are not adapted to the time constraints of routine pathology. Hence, we wanted to tailor QuPath to the pathologists' needs, create a workflow from the lab to the sign-out room, and test the algorithm under real diagnostic conditions. Materials and Methods: The workflow consists of three parts: first, a management script to retrieve clinical data from the LIS and organize new slides from the scanner; second, a web application as an interface for the pathologist to view the status of the cases and open them in QuPath; and third, a script in QuPath that guides the pathologist through the analysis and initiates the algorithm. The algorithm followed the recommendations of a trial of the Ki-67 Breast Cancer Working Group. Results: The pathologists adopted the system and their feedback is continuously implemented. To date, a series of 40 cases has been analyzed and compared to routine data. The average difference from routine was 2.1%, the correlation was significant (Spearman r=0.91, p<0.0001), and the intraclass correlation coefficient was ICC=0.92, indicating excellent agreement between manual and digital scores. Conclusions: QuPath can be used for routine diagnostics in combination with a web-based application. Continuous improvement and validation of the classifier is mandatory. Additionally, IVD regulations for medical products must be met to allow for institutional accreditation of “home-made” digital pathology solutions.
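
A minimal sketch of the comparison statistics reported above (mean difference and Spearman correlation between routine and digital Ki-67 scores) is shown below on placeholder data; the ICC can be obtained with a dedicated routine such as pingouin.intraclass_corr:

```python
# Hedged sketch: comparing digital and routine Ki-67 scores with SciPy's Spearman
# correlation; the paired scores are dummy values, not the study's data.
import numpy as np
from scipy.stats import spearmanr

routine = np.random.uniform(5, 60, 40)               # % Ki-67, eyeballed (dummy)
digital = routine + np.random.normal(2.1, 3.0, 40)   # QuPath scores (dummy)

rho, p = spearmanr(routine, digital)
print(f"mean difference = {np.mean(digital - routine):.1f}%, Spearman r = {rho:.2f}, p = {p:.2g}")
```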


Automated Gastric Lumen Segmentation in Giemsa Stained Images Using Deep Learning


Tim Ogi1, Huu Giao Nguyen1, Inti Zlobec1, Bastian Dislich1

1Translational Research Unit (TRU), Institute of Pathology, University of Bern, Bern, Switzerland. E-mail: [email protected]

Introduction: Aiming to limit the search space for the detection of the pathogenic bacterium H. pylori in Giemsa-stained slide images, we target an automated segmentation of the gastric lumen. The major challenge of this task is the diversity of the lumen in size, shape, and structure. Materials and Methods: In this study, we investigated a U-Net architecture, one of the most accurate and fastest convolutional networks for image segmentation. Nine Giemsa-stained slides were scanned. A set of 768x768 images was cut out of the tissue region and downscaled to match the input size of our model (384x384). The lumen regions were labelled manually in all images. We created a training set with 794 images, a validation set with 169 images, and a test set with 85 images. Results: To evaluate our method, we used two overlap evaluation metrics, namely the Dice and Jaccard indices. We obtained results of 0.89 ± 0.07 and 0.80 ± 0.10 for Dice and Jaccard, respectively, when comparing the prediction to the manual annotation. Our model detected 253 (98%) of all lumina. The proportion of lumen in the total area of the test images was 3.7%. Conclusions: We presented a tool for the automatic and effective segmentation of the gastric lumen using U-Net deep learning. It restricts the search region for the automatic detection of H. pylori in Giemsa-stained images. The method can be extended to detect lumina in other tissue types and stains.
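
For reference, the two overlap metrics used for evaluation can be computed as in the short sketch below (dummy masks):

```python
# Hedged sketch: Dice and Jaccard indices on binary masks; masks below are dummy arrays.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    inter = np.logical_and(pred, truth).sum()
    return 2 * inter / (pred.sum() + truth.sum() + 1e-8)

def jaccard(pred: np.ndarray, truth: np.ndarray) -> float:
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / (union + 1e-8)

pred = np.random.rand(384, 384) > 0.5
truth = np.random.rand(384, 384) > 0.5
print(dice(pred, truth), jaccard(pred, truth))
```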


Automated Quantification of Tumour Area and Cellularity in Nonsmall Cell Lung Cancer Digital Slides


Garazi Serna1, Roberta Fasani1, Sara Simonetti1, Lidia Sanchez1, Lidia Alonso1, Eloy García1, Irene Sansano2, Paolo Nuciforo1

1Molecular Oncology Group, VHIO – Vall d'Hebron Institute of Oncology, Barcelona, Spain, 2Department of Pathology, Vall d'Hebron University Hospital, Barcelona, Spain. E-mail: [email protected]

Introduction: Histopathological assessment of tumour area (TA) and cellularity (TC) in tissue biopsies is used to select the optimal sample for molecular analyses, to determine the amount of residual tumour after therapy, and to provide important prognostic information. Currently, TA and TC are estimated by the pathologist manually “eyeballing” haematoxylin and eosin (H&E)-stained slides. Such quantification is laborious, time-consuming, and subjective, and most practicing pathologists are not trained to perform it. Materials and Methods: In this work, we describe two methodologies for automatic quantification of TA and TC on a tissue microarray containing 54 NSCLC cores: 1) a more traditional digital image analysis (DIA) pipeline, which uses a cytokeratin mask for tumour area identification, and 2) a deep learning-based (AI) approach in which features are learned automatically using H&E image data alone. Two-way comparisons between the two approaches and the manual assessments of three expert pathologists were performed using intraclass correlation (ICC) coefficients. Results: The average inter-rater agreement between the study pathologists was .950 (.937-.961) for TA and .906 (.869-.927) for TC. The average agreements between the automated approaches and the study pathologists were .864 (.826-.911) and .937 (.931-.945) for DIA TA and AI TA, and .831 (.792-.857) and .805 (.744-.867) for DIA TC and AI TC, respectively. Agreement between DIA and AI was .865 and .847 for TA and TC, respectively. Conclusions: Our results show strong agreement between automated and manual analyses, with superior performance of AI for TA assessment, suggesting that automated scoring has a significant potential to improve the diagnostic workflow.


Does a Scanner Affect the Results of Lymph Node Segmentation?


Amjad Khan1, Huu Giao Nguyen1, Annika Blank1, Heather Dawson1, Alessandro Lugli1, Jean-Philippe Thiran2,3, Inti Zlobec1

1Institute of Pathology, University of Bern, CH-3008 Bern, Switzerland, 2Swiss Federal Institute of Technology Lausanne (EPFL)-Signal Processing Laboratory (LTS5), Lausanne, Switzerland, 3Department of Radiology - Center of Biomedical Imaging, Centre Hospitalier Universitaire Vaudois, Lausanne, Switzerland. E-mail: [email protected]

Introduction: Histopathology slides prepared under various staining conditions and scanned with different digital scanners exhibit variability in terms of stains and contrast. Here, we investigate the impact of such variability in histopathology data used for lymph node (LN) segmentation. Materials and Methods: 450 H&E-stained slides containing LNs from 69 colon cancer patients were scanned with three scanners (Scanner1, Scanner2, and Scanner3) and included 188 positive (p) and 1146 negative (n) LNs. An unsupervised segmentation method was developed to segment all LNs within each slide for further computational analysis. Initial seeds were generated by applying a threshold on the hematoxylin channel separated by stain deconvolution. The seeds were then fed to morphological active contouring to expand their contours to the boundaries of each LN on a grayscale version of the image. The segmentation results were compared using Dice coefficient scores. Results: Average Dice scores for LN segmentation on data from Scanner1, Scanner2, and Scanner3 were 0.972±0.017 [pLNs: 0.954±0.008, nLNs: 0.980±0.014], 0.919±0.123 [pLNs: 0.965±0.011, nLNs: 0.899±0.154], and 0.974±0.023 [pLNs: 0.979±0.005, nLNs: 0.972±0.030], respectively. Scanner3 slides were computationally very expensive (30 minutes/slide) compared with Scanner2 (7 minutes) and Scanner1 (2 minutes). Segmentation performed less well on slides from Scanner2, where blood vessels are larger than LNs. Conclusions: This preliminary study of the same slides scanned with three different scanners for LN segmentation shows that Scanner1 and Scanner3 performed best overall. In particular, these findings also show that different scanners can affect results, indicating a need for normalization prior to segmentation.
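
A hedged sketch of the unsupervised pipeline described above, thresholding the deconvolved hematoxylin channel for seeds and growing them with a morphological geodesic active contour in scikit-image, is shown below; parameters are illustrative and the slide thumbnail is a placeholder:

```python
# Hedged sketch: stain-deconvolution seeds plus morphological geodesic active contours
# (scikit-image); iteration count, smoothing, and balloon force are assumptions.
import numpy as np
from skimage.color import rgb2hed, rgb2gray
from skimage.filters import threshold_otsu
from skimage.segmentation import (inverse_gaussian_gradient,
                                  morphological_geodesic_active_contour)

def segment_lymph_nodes(rgb_slide_thumbnail: np.ndarray) -> np.ndarray:
    hematoxylin = rgb2hed(rgb_slide_thumbnail)[..., 0]
    seeds = hematoxylin > threshold_otsu(hematoxylin)          # initial level set
    edge_map = inverse_gaussian_gradient(rgb2gray(rgb_slide_thumbnail))
    # Second positional argument = number of iterations (keeps the call compatible
    # across scikit-image versions that renamed the keyword).
    return morphological_geodesic_active_contour(edge_map, 100,
                                                 init_level_set=seeds,
                                                 smoothing=2, balloon=1)

mask = segment_lymph_nodes(np.random.rand(256, 256, 3))        # dummy thumbnail
print(mask.shape, mask.dtype)
```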


A Color Accurate Extended Depth of Field Method for Automated Digital Cytology


Alexandre Bouyssoux1,2, Riadh Fezzani2, Jean-Christophe Olivo-Marin1

1BioImage Analysis Unit, CNRS UMR 3691 – Institut Pasteur, Paris, France, 2VitaDX International, Paris, France. E-mail: [email protected]

Introduction: In bright-field digital microscopy, images are frequently acquired with a depth of field narrower than the objects of interest. To avoid losing too much information, a solution is to acquire multiple images at different focal planes and use multi-focus image fusion (MFIF) algorithms to recover an “all-in-focus” image. Among the commonly used MFIF algorithms, spatial-domain approaches preserve color fidelity but produce artifacts or fail to retrieve information in overlapping transparent objects. Transformation-based methods often recover information well even in the presence of overlapping objects but require a color reconstruction and suffer from low color fidelity. Materials and Methods: This study presents an extended depth of field method belonging to the transformation-based approaches, relying on the Stationary Wavelet Transform (SWT) and a new coefficient selection strategy. It allows precise information fusion and high color fidelity. Results: A comparative evaluation of detail recovery shows that the proposed method produces few artifacts and allows good recovery of details in the volumes, even with overlapping transparent cells. In addition, a color accuracy assessment shows that the proposed method's color fidelity is close to that of spatial-domain methods and higher than that of other transformation-based approaches. Conclusions: Based on an automatic segmentation experiment using synthetic volumes of thick cellular material, the necessity of volume analysis in cytology to achieve precise analysis was demonstrated. In this context, the good performance of the proposed approach as such a volume analysis method is asserted, along with the importance of color fidelity in fused images.
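
As a simplified, single-level illustration of SWT-based multi-focus fusion (not the study's coefficient selection strategy or its color-fidelity handling), the sketch below keeps, per pixel, the wavelet coefficients of the sharpest focal plane:

```python
# Hedged sketch: single-level stationary-wavelet multi-focus fusion on a grayscale
# z-stack. The study's coefficient selection and color handling are not reproduced.
import numpy as np
import pywt

def fuse_stack_swt(stack: np.ndarray, wavelet: str = "db2") -> np.ndarray:
    """stack: (n_planes, H, W) grayscale focal planes, H and W even."""
    coeffs = [pywt.swt2(p.astype(float), wavelet, level=1)[0] for p in stack]
    approx = np.stack([c[0] for c in coeffs])                         # (n, H, W)
    details = [np.stack([c[1][k] for c in coeffs]) for k in range(3)]

    energy = sum(np.abs(d) for d in details)                          # per-plane sharpness
    best = np.argmax(energy, axis=0)[None]                            # sharpest plane index

    fused_a = np.take_along_axis(approx, best, axis=0)[0]
    fused_d = tuple(np.take_along_axis(d, best, axis=0)[0] for d in details)
    return pywt.iswt2([(fused_a, fused_d)], wavelet)

stack = np.random.rand(5, 128, 128)        # dummy z-stack
print(fuse_stack_swt(stack).shape)
```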


Quantifying the Clonal Evolution of Gastric Cancer Precursors through Single Cell Segmentation and Three-Dimensional Modelling


Panagiotis Barmpoutis1,2, William Waddingham2, Joshua Jaffe2, Abdallah Abbas2, Marnix Jansen2

1Department of Computer Science, Centre for Medical Image Computing, University College London, London, England, UK, 2Department of Research Pathology, Cancer Institute, University College London, London, England, UK. E-mail: [email protected]

Introduction: Prolonged exposure to exogenous carcinogens drives the evolution of adaptive tissue phenotypes better suited to the harsh environment imposed by tissue-damaging agents. These metaplastic tissue responses also constitute the initial step in the progression to cancer provoked by chronic carcinogen exposure. How carcinogen exposure drives adaptation and clonal selection remains incompletely understood. Materials and Methods: Here we investigate the origin and evolution of gastric intestinal metaplasia in the Helicobacter-infected stomach using bespoke single cell segmentation and 3D reconstruction techniques. We employ a dataset of prospectively collected en face embedded mucosal specimens from gastric resection specimens. CDX2 detects patches of precancerous gastric intestinal metaplasia. After CDX2 immunolabelling of consecutive sections, images are filtered using a Gaussian filter, and differences between nuclei and background are enhanced by applying histogram equalization to the filtered image and using top-hat and bottom-hat transformations. We then model each cell using a Gaussian distribution and an improved ellipsoidal fitting model to split overlapping or merged nuclei. The registration of the consecutive images is based initially on a rigid alignment and then on a non-rigid B-spline transformation at multiple grid sizes. Finally, a cubic interpolation method is used for 3D reconstruction and modelling. Results: Our 3D reconstructions reveal that metaplastic intestinal lineages clonally emerge from stem cells in chronically inflamed gastric niches and follow neutral drift dynamics. Conclusions: These results demonstrate that adaptation and selection drive clonal expansion of precancerous intestinal metaplasia.
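
As an illustration of the nucleus-enhancement step (Gaussian filtering, histogram equalization, top-hat and bottom-hat transforms), the sketch below uses SciPy and scikit-image; the sigma and structuring-element radius are assumptions:

```python
# Hedged sketch: Gaussian smoothing, histogram equalization, then adding the white
# top-hat and subtracting the black (bottom-) hat to enhance nuclei against background.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import exposure
from skimage.morphology import disk, white_tophat, black_tophat

def enhance_nuclei(gray_cdx2_image: np.ndarray, sigma: float = 1.0, radius: int = 7):
    smoothed = gaussian_filter(gray_cdx2_image.astype(float), sigma)
    equalized = exposure.equalize_hist(smoothed)
    selem = disk(radius)
    return equalized + white_tophat(equalized, selem) - black_tophat(equalized, selem)

enhanced = enhance_nuclei(np.random.rand(256, 256))   # dummy grayscale CDX2 image
```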


Virtual Gross Pathology Specimens: A Cheap and Easy Protocol


Adrian Vaduva1, Alis Dema1

1Department of Microscopic Morphology, Discipline of Morphopathology, Victor Babes University of Medicine and Pharmacy Timisoara, Timisoara, Romania. E-mail: [email protected]

Introduction: Pathology museum collections across the world are facing a decline in the rate of renewal and enrichment of their specimen collections, correlated with the decrease in the number of autopsies performed, changes in legislation, and monetary issues. On the upside, there is increased interest in creating virtual pathology museums. The aim of the present study was to design a cheap and easy protocol for acquiring new virtual pathology gross specimens, replicas of resection specimens submitted to the pathology lab. Materials and Methods: We photographed resection specimens submitted to the Pathology Department of the Emergency County Hospital Pius Branzeu Timisoara using a commercially available iPhone 7. Afterwards, the datasets were processed in 3dFlow Zephyr photogrammetry software in order to create the virtual specimens. Results: We successfully created high-resolution 3D reconstructed replicas of resected colon specimens using the free version of 3dFlow Zephyr, which is limited to datasets of 50 images per subject. The replicas can be used as individual objects that one can spatially manipulate (pan, tilt, zoom), as subjects for orbital cinematic shots, or as base representations for 3D printed models. Conclusions: We successfully created a very cheap and easy-to-use protocol to digitally store and 3D reconstruct image datasets of gross pathology specimens. The reconstructed virtual specimens can further be used as teaching materials in both traditional and digital, formalin-free pathology museums.


Nontumor Segmentation to Improve Tumor Detection and Analysis Using Modified U-NET Network


Auranuch Lorsakul1, Margaret Zhao1, Kien Nguyen1

1Roche Tissue Diagnostics, Imaging and Algorithms, Digital Pathology, Santa Clara, CA, USA. E-mail: [email protected]

Introduction: In digital pathology, whole-slide analysis of positive and negative tumor cells requires pathologists to initially provide tumor annotations that exclude non-target regions, such as normal tissue. It is often difficult to exclude “lymphoid aggregate regions (LARs),” which are clusters of immune cells whose morphology is frequently similar to that of groups of negative tumor cells. As a result, image-analysis algorithms often produce false detections in these LARs. In this study, we propose a deep-learning approach to automatically detect and mask out LARs before running standard whole-slide analysis algorithms. This proposed method can therefore improve accuracy and reduce false non-tumor detections. Materials and Methods: A modified version of the U-Net model was used; its novelties included a reduced number of channels, spatial dropout, and a step-decay learning rate schedule. The network was trained using 2,762 patches (patch size 256x256) from 28 whole-slide images, split into 80% training and 20% testing sets. The binary cross-entropy loss function was used with 100 epochs, a batch size of 1, and a learning rate of 1x10^-5 with the Adam optimizer. Two image resolution levels, 20X and 10X, were used to optimize the network parameters. Results: The testing results achieved an average intersection-over-union (IoU) score of 0.97 across the tested resolution levels, with the 20X image resolution providing better results. Our method thus improves classification results by substantially reducing false-positive detection of LARs. Conclusions: Our proposed method locates and identifies LARs to improve tumor classification tasks. This approach is not limited to segmenting LARs in tissue; it can easily be adapted to other non-tumor areas such as necrosis, scanner artifacts, and tissue folds.
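
A hedged sketch of the training configuration described above (binary cross-entropy, Adam at 1x10^-5, batch size 1, step-decay learning rate schedule, spatial dropout) is shown below in Keras; a one-layer stand-in replaces the modified U-Net, and the decay factor and interval are assumptions:

```python
# Hedged sketch of the training setup only; a tiny stand-in model replaces the modified
# U-Net, and the dummy patches stand in for the whole-slide training data.
import numpy as np
import tensorflow as tf

def step_decay(epoch: int, lr: float, drop: float = 0.5, every: int = 20) -> float:
    """Halve the learning rate every `every` epochs (drop and interval are assumptions)."""
    return lr * drop if epoch > 0 and epoch % every == 0 else lr

inputs = tf.keras.Input((256, 256, 3))
x = tf.keras.layers.Conv2D(8, 3, padding="same", activation="relu")(inputs)
x = tf.keras.layers.SpatialDropout2D(0.2)(x)                      # spatial dropout, as described
outputs = tf.keras.layers.Conv2D(1, 1, activation="sigmoid")(x)   # LAR probability mask
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer=tf.keras.optimizers.Adam(1e-5), loss="binary_crossentropy")
model.fit(np.random.rand(4, 256, 256, 3), np.random.randint(0, 2, (4, 256, 256, 1)),
          epochs=2, batch_size=1,
          callbacks=[tf.keras.callbacks.LearningRateScheduler(step_decay)])
```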


Tissue Imaging with Differential Ion Mobility Spectrometry and Laser Sampling


Maiju Lepomäki1, Anna Anttalainen2, Artturi Vuorinen3, Markus Karjalainen2,3, Anton Kontunen2,3, Teemu Tolonen4, Antti Vehkaoja3, Niku Oksala1,2,5, Antti Roine1,2

1Surgery, Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland, 2Olfactomics Ltd, Tampere, Finland, 3Sensor Technology and Biomeasurements, Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland, 4Department of Pathology, Fimlab Laboratories, Tampere, Finland, 5Department of Vascular Surgery, Tampere University Hospital, Tampere, Finland. E-mail: [email protected]

Introduction: Pathological examination of clinical tissue samples is time-consuming and does not allow comprehensive analysis of the specimen. A tissue pre-mapping method could enhance sampling. Differential ion mobility spectrometry (DMS) is a rapid and affordable technology for complex gas mixture analysis. In this study, an automated tissue laser analysis system (ATLAS), which couples computer-controlled laser evaporation with DMS, was applied to create and analyze smoke samples from porcine tissues and a series of human breast carcinomas. The aim is to present a novel system for automated tissue imaging with DMS in an animal model and to demonstrate its feasibility in human breast cancer imaging. Materials and Methods: Fresh tissue samples from 9 landrace pigs, including skeletal muscle, adipose tissue, and normal breast tissue, were incised with a laser beam based on a pre-designed matrix (spatial resolution 1-3 mm). The produced smoke was analyzed with DMS. An analogous procedure was applied to demonstrate the feasibility of ATLAS for human breast cancer imaging in 3 carcinomas. Results: Porcine skeletal muscle (n=849), adipose tissue (n=1194), and normal breast tissue (n=235) were identified with 88% out-of-sample accuracy using shrinkage linear discriminant analysis (sLDA). The sensitivity and specificity were 89% and 96% for skeletal muscle, 91% and 91% for adipose tissue, and 72% and 95% for breast tissue, respectively. The device is demonstrated on a human breast specimen along with the corresponding histology. Conclusions: Porcine breast tissue can be identified with ATLAS. This study presents a viable method for automated tissue imaging in an animal model and lays the foundation for human breast cancer sampling.


Comparison of Two Digitalization Systems Applied to Cytological Slides


Angel Estebanez-Gallo1, M. Antonia Revuelta1, Carmen Azpiazu1, Irene Hernández-Alconchel1, M. Luisa Cagigal1

1Department of Pathology, University Hospital Marqués de Valdecilla, Santander, Spain. E-mail: [email protected]

Introduction: The implementation of digital pathology systems is a reality. In our environment, however, few pathology departments have replaced the microscope in daily practice. Easier specimen handling and improved management of second opinions are two of its advantages. The characteristics of cytological samples compared with histological preparations mean that the digitalization process sometimes does not give the expected results. Materials and Methods: A comparison of two digitalization systems applied to cytological specimens was made, using the Vision Cyto® Pap Pro equipment developed by West Medica and the Roche Ventana DP 200. The Vision Cyto® Pap Pro scanner is specifically designed for cytological samples and performs continuous focusing for each of the images that form the digital preparation. The Ventana DP 200 unit performs dynamic focusing on certain points. Results: A total of 50 cytological preparations were digitalized, with 25 slides processed on each system. On the West Medica scanner, 7.7% of images showed a focus defect and 23.1% showed focus problems in three-dimensional groups. These percentages increased with the Roche equipment to 60% and 20%, respectively. Conclusions: In our opinion, cytological samples have characteristics that recommend the use of systems with continuous focusing. This method offers good results and optimizes image storage compared with solutions such as the Z-stack scanning method.


Image Analysis-Based Assessment of Perfluorooctanoic Acid-Induced Liver Pathology in Piscine Model


Maurizio Manera1, Bahram Sayyaf Dezfuli2, Giuseppe Castaldelli2, Luisa Giari2

1Faculty of Biosciences, Food and Environmental Technologies, University of Teramo, Teramo, Italy, 2Department of Life Sciences and Biotechnology, University of Ferrara, Ferrara, Italy. Email: [email protected]

Introduction: Perfluorooctanoic acid (PFOA) is an emerging pollutant in waters, and fish, as a representative aquatic vertebrate, may serve as a model to assess its toxicity both in environmental monitoring programs and in translational biomedical research. Materials and Methods: Ultrathin sections from 5 specimens of common carp (Cyprinus carpio) for each PFOA exposure group (ctr, unexposed; low dosage, 200 ng L−1 PFOA; high dosage, 2 mg L−1 PFOA) were assessed by light and transmission electron microscopy, for box-counting fractal analysis and ultrastructural investigation, respectively. The fractal dimension and lacunarity of the cytoplasm outline were evaluated, including the interface between the glycogen-rich cytoplasm and the remnant perinuclear, organelle-rich cytoplasm. Numeric results were statistically analyzed using ANOVA and linear discriminant analysis. Results: Ctr vs. low dosage and ctr vs. high dosage showed significant differences only for lacunarity (ANOVA; p<0.01), whereas low vs. high dosage differed only in fractal dimension (ANOVA; p<0.01). Linear discriminant analysis resulted in the correct classification of 100% of the original data and 73.3% of both the cross-validated and jackknifed data sets. Sensitivity was 100% (no false negative cases), whereas specificity was 71.4% (2 false positive, low dosage misclassified cases). At the ultrastructural level, a relative increase of the perinuclear organelle-rich area, mitochondrial alteration, enlargement of the cisternae of the endoplasmic reticulum, autophagosomes, and myelin figures were observed according to treatment. Conclusions: Fractal analysis and ultrastructural investigation could assess PFOA exposure even at ecologically relevant concentrations (low dosage), where previously sensitive, chemical-based methods failed to discriminate low-dosage-exposed from unexposed fish.
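
For readers unfamiliar with box-counting fractal analysis, the sketch below estimates the fractal dimension of a binary outline image; the outline and box sizes are placeholders, and lacunarity estimation is not shown:

```python
# Hedged sketch: standard box-counting estimate of fractal dimension for a binary
# cytoplasm-outline mask; the outline below is a dummy cross, box sizes are illustrative.
import numpy as np

def box_counting_dimension(outline: np.ndarray) -> float:
    """outline: 2D boolean array (True on the cytoplasm outline)."""
    size = min(outline.shape)
    sizes = [2 ** k for k in range(1, int(np.log2(size)))]
    counts = []
    for s in sizes:
        h, w = (outline.shape[0] // s) * s, (outline.shape[1] // s) * s
        blocks = outline[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))   # occupied boxes
    slope, _ = np.polyfit(np.log(1 / np.array(sizes, float)), np.log(counts), 1)
    return slope

outline = np.zeros((256, 256), bool)
outline[64, 32:224] = outline[32:224, 64] = True          # dummy outline
print(box_counting_dimension(outline))
```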


Feedback on Digital Undergraduate Pathology Course at University of Tartu


Ave Minajeva1

1Institute of Biomedicine and Translational Medicine, University of Tartu, Tartu, Estonia. E-mail: [email protected]

Introduction: For the last two years, the undergraduate pathology course for third-year medical students at the University of Tartu has been fully based on digital microscopic slides. Materials and Methods: The digital slides were created using 3D Histech slide scanners, subsequently converted into the OpenSlide format and uploaded to the university server. Online study at the university is generally based on the Moodle open-source learning management system. In Moodle, database modules were created comprising systematic information for each particular slide, including the diagnosis, a link to the digital slide, a description, and figures. Each slide in the database also had a link to a tutorial video created in the Panopto recording system. Starting from the second year, a computer classroom was installed. Results: Feedback was collected from 87 (70% of 124) students studying in Estonian and 16 (73% of 22) from the English-language groups. 97% of the Estonian and 100% of the English-group students agreed that digital slides provided a better overview than learning under the microscope. The tutorial videos and the possibility to study the material at home were most highly appreciated. Over half of the students reported that they had been preparing at home to actively discuss the digital slides in class. Their feedback revealed they would prefer more case- or problem-based learning to fill the leftover class time. Conclusions: Digital microscopy slides are highly welcomed by students, facilitate independent learning in the undergraduate pathology course, and leave more time for case-based discussions in class.


Can Artificial Intelligence Help Cervical Cytopathologist to Detect High-Grade Squamous Intraepithelial Lesions on the Atrophic Background?


Ilknur Centinasalan Turkmen1, Gizem Dursun2, Ufuk Ozkaya2, Abdulkerim Capar3, Bahar Muezzinoglu1, Beheet Ugur Toreyin3

1Department of Pathology, Istanbul Medipol University, Istanbul, Turkey, 2Department of Electrical and Electronics Engineering, Suleyman Demirel University, Isparta, Turkey, 3Informatics Institute, Istanbul Technical University, Istanbul, Turkey. E-mail: [email protected]

Introduction: Cervical cytology is one of the most useful, if not the best, screening tests for cancer prevention. Early digital pathology and artificial intelligence (AI) studies were carried out in the cervical cytology field, leading to the first commercially available AI product. Although screening programs are heading towards Human Papilloma Virus (HPV) testing, the Papanicolaou (PAP) test is still widely used as a screening tool, and cervical biopsy is the gold standard for the diagnosis of dysplasia and cancer. Diagnosis of dysplasia is challenging on an atrophic background, as the morphology of the two entities is similar. It would be very useful for pathologists, and of course for patients, if we could set up an automated screening program able to differentiate atrophy from High-Grade Squamous Intraepithelial Lesion (HSIL). Materials and Methods: In this study, we aimed to differentiate HSIL from atrophy using machine learning. For this implementation, 1238 atrophy and 1832 HSIL images from 9 patients were used. All data were randomly divided into three sets (70%-15%-15%) for the training, validation, and testing steps, respectively. Furthermore, test datasets consisting of 100 atrophy images and a varying number of HSIL images (ranging from 1 to 10) were created. Results: The results show that every HSIL image was successfully differentiated from atrophy. However, 1.5% of atrophic cells were misclassified as HSIL. Conclusions: Our study is a preliminary study towards an automated WSI screening program to increase the accuracy of HSIL detection on an atrophic background.


   MISS - Minimal Information about Slides and Scans Top


Markus Plass1, Robert Reihs1,2, Roxana Merino Martinez3, Heimo Müller1,2

1Medical University Graz, Institute of Pathology, Graz, Austria, 2BBMRI-ERIC - Biobanking and BioMolecular resources Research Infrastructure - European Research Infrastructure Consortium, Graz, Austria, 3Department of Laboratory Medicine, Karolinska Institute, Solna, Sweden. E-mail: [email protected]

Introduction: High-quality metadata and provenance information are essential to support product quality in almost all areas of digital pathology. Especially when datasets are used in computational pathology, we need the appropriate information to document technical and medical validation and to support the regulatory approval process. Several standards are available that cover dedicated parts, e.g. MIABIS for sample and donor metadata and DICOM or vendor-specific attributes for file formats and scanning metadata. Our aim is not to propose yet another metadata standard, but to describe a small and minimal dataset across different standardization activities and to initiate a community-driven approach to collect and harmonize existing ontologies. Materials and Methods: MISS was defined within the use cases of a large-scale digitization effort for machine learning. Through several cycles with stakeholders from biobanking and machine learning we generated a first proposal. Results: The minimal information about glass slides and their scanned representation is divided into three parts: Pre-Scanning (Slide) Metadata, e.g. metadata from biobanks, glass slide labeling, cleaning; Scanning Metadata, e.g. technical parameters, resolutions and focus points; Post-Scanning (File) Metadata, e.g. image quality indicators. A first version of MISS and examples can be found at https://github.com/human-centered-ai-lab/MISS/wiki. Conclusions: We invite the digital pathology community to comment on and contribute to the MISS GitHub repository and to provide examples of their scanning metadata in specific application scenarios.
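As an illustration of the three-part structure, a slide record might be serialized as below; the field names are hypothetical examples, not the actual MISS attributes (see the GitHub wiki for the proposed set):

```python
import json

# Hypothetical example record following the three MISS parts; the field names are
# illustrative only and do not reproduce the actual MISS attribute set.
slide_record = {
    "pre_scanning": {          # slide/biobank metadata before scanning
        "biobank_sample_id": "BB-2020-000123",
        "staining": "H&E",
        "label_transcription": "Liver, wedge biopsy",
        "cleaning_performed": True,
    },
    "scanning": {              # technical acquisition parameters
        "scanner_model": "ExampleScanner X1",
        "objective_magnification": 40,
        "resolution_um_per_pixel": 0.25,
        "focus_points": 25,
    },
    "post_scanning": {         # file-level quality indicators
        "file_format": "TIFF",
        "image_quality_score": 0.93,
        "out_of_focus_area_percent": 1.2,
    },
}

print(json.dumps(slide_record, indent=2))
```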


   Evaluation of Automatic Tumor Cell Detection in Ki-67 Stained Neuroendocrine Tumors of the Gastrointestinal Tract Top


Nazanin Mola1, Erlend Hodneland2, Valentin Krasontovitsch3, Sehine Leh1

1Department of Pathology, Haukeland University Hospital, Bergen, Norway, 2Norwegian Research Centre, University of Bergen, Bergen, Norway, 3Department for Medical Genetics, Haukeland University Hospital, Bergen, Norway. E-mail: [email protected]

Introduction: We trained Aiforia™, a cloud-based machine-learning platform, to detect tumor cells in Ki-67 stained gastrointestinal neuroendocrine tumors. To evaluate its performance, the number and coordinates of automatically detected cells were compared with manual counting: even if automatic detection returns the same number of cells as manual counting, there is no guarantee that the same cells were detected. Materials and Methods: Automatic detection on the test dataset was compared with manual counting, which was considered the ground truth. For the manual counting, an experienced pathologist labelled the tumor cells in ImageJ. The coordinates of detected tumor cells were extracted from ImageJ and Aiforia. To compare these two sets of coordinates, we globally scaled the coordinates, since automatic detection was performed on whole slide images while manual counting was conducted on cropped slide images. The assignment method used K-means clustering and vector quantization to return matching cell pairs with their corresponding inter-distance. Results: In one sample, Aiforia detected 260 tumor cells, while manual counting found 249. After matching the coordinates, we found that Aiforia detected 235 true tumor cells, detected 25 false (non-tumor) cells, and missed 14 true tumor cells. This corresponds to an overall precision of 90% and sensitivity of 94%. Conclusions: The evaluation method was a valuable tool for measuring the performance of automated cell detection and will be applied to future samples in our test dataset. Results from this pilot study showed good concordance between manual and automatic tumor cell detection.
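The authors matched coordinates with K-means clustering and vector quantization; the sketch below illustrates the same evaluation idea with a simple nearest-neighbour assignment and a distance cut-off, using synthetic coordinates and a hypothetical tolerance:

```python
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
manual = rng.uniform(0, 1000, size=(249, 2))            # synthetic "ground truth" cell centres
close = manual[:235] + rng.normal(0, 2, (235, 2))       # 235 detections near true cells ...
spurious = rng.uniform(0, 1000, size=(25, 2))           # ... plus 25 spurious detections
detected = np.vstack([close, spurious])

# Assign each detected cell to its nearest manually labelled cell and accept the pair
# only if the inter-distance is below a tolerance (10 pixels here, hypothetical).
dist = cdist(detected, manual)
nearest = dist.argmin(axis=1)
matched = dist[np.arange(len(detected)), nearest] < 10.0

matched_manual = set(nearest[matched].tolist())   # manual cells recovered by at least one detection
tp = int(matched.sum())                           # detections matched to a manual cell
fp = len(detected) - tp                           # detections with no nearby manual cell

precision = tp / (tp + fp)
sensitivity = len(matched_manual) / len(manual)
print(f"precision={precision:.2f}, sensitivity={sensitivity:.2f}")
```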


   Assessing Student Learning Performance when Switching to Virtual Microscopy Top


Adrian Vaduva1, Costela Lacrimioara Serban2, Codruta Lazureanu1, Remus Cornea1, Octavia Vita1, Adelina Gheju1, Aura Jurescu1, Ioana Mihai1, Emilian Olteanu1, Vlad Lupu1, Cristina Avram1, Marioara Cornianu1, Anca Muresan1, Sorina Taban1, Alis Dema1

1Department of Microscopic Morphology, Discipline of Morphopathology, Victor Babes University of Medicine and Pharmacy, Timisoara, Romania, 2Department of Functional Studies, Discipline of Medical Informatics and Biostatistics, Victor Babes University of Medicine and Pharmacy, Timisoara, Romania. E-mail: [email protected]

Introduction: Virtual microscopy is gaining ground in medical universities, replacing traditional microscopy classes. We aimed to assess the impact of changing from traditional PowerPoint (PPT) presentations to virtual slides (VS) as teaching materials in the tutorial part of our medical students' labs. Materials and Methods: Third-year medical students from the Victor Babes University of Medicine and Pharmacy in Timisoara were recruited to participate in this study and were randomized into two groups: PPT and VS. Two tutorial delivery methods were used: continuous and split (presentation of a lesion followed by individual slide review, then the next lesion and corresponding review, and so forth). We assessed student learning performance with questionnaires covering the information presented in the tutorial part of the microscopy lab (maximum score = 25). Results: A total of 610 students were recruited. We identified no significant difference (p=0.053) in overall student performance when comparing the PPT group (mean score=20.0, n=351) with the VS group (mean score=20.8, n=259). Interestingly, when evaluating the effect of tutorial type on student performance, we identified a significant improvement (p<0.001) with the split delivery method (mean score=21.5, n=286) versus the continuous one (mean score=19.3, n=324). Conclusions: The lack of significant variation in student performance when using VS versus PPT tutorials indicates that VS can safely be used as teaching materials without negatively affecting student performance. Moreover, VS combined with the split delivery method of the tutorial part seems to be a better approach for teaching microscopy labs.
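The abstract does not name the statistical test used; as an illustration of the kind of two-group comparison reported, an independent-samples t-test on simulated scores might look like this:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
# Simulated questionnaire scores (0-25) for the two groups; values are illustrative only,
# loosely matching the reported group sizes and means.
ppt_scores = np.clip(rng.normal(20.0, 3.5, 351), 0, 25)
vs_scores = np.clip(rng.normal(20.8, 3.5, 259), 0, 25)

t_stat, p_value = ttest_ind(ppt_scores, vs_scores)
print(f"t={t_stat:.2f}, p={p_value:.3f}")
```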


   CADIA: Intelligent Tumor Characterization from WSIs at a Health System Scale Top


Rodrigo Cilla1, Maria Jesus García-Gonzalez1, Karen Lopez-Lineares1,2, Blanca Zufiria-Gerboles1, Iván Macía-Oliver1,2

1Fundación Vicomtech, Basque Research and Technology Alliance, Spain, 2BioDonostia Health Research Institute. E-mail: [email protected]

Introduction: The CADIA project has created an intelligent WSI processing system for the Sistema Galego de Saúde (Galician Regional Health System) to locate and characterize tumor regions in routine H&E-stained breast biopsy tissue. The system classifies the regions as fibroadenoma, ductal carcinoma in situ, lobular carcinoma in situ, invasive ductal carcinoma, or invasive lobular carcinoma. Materials and Methods: A large dataset of slides was retrospectively collected from Galician biobanks to obtain a representative sample of tumor phenotypes. We scanned the slides at multiple zoom levels and stored them in a custom PACS server based on Orthanc. Each slide was independently annotated by at least two expert pathologists. Custom convolutional neural networks were trained with TensorFlow from pairs of WSIs and tumor annotations. The networks navigate across WSI zoom levels to discard image regions where tumors are unlikely to appear, reducing false-positive rates while increasing processing speed. The system was evaluated on annotated WSIs not used for training, achieving high intersection-over-union scores. Results: Preliminary evaluation shows that the system examines WSIs at a clinically relevant level, with high intersection-over-union scores. Conclusions: The CADIA project shows that modern deep learning tools can produce intelligent WSI analysis systems ready to be deployed at a health-system scale. After further clinical validation, the CADIA system has the potential to be incorporated into multiple slide-reading assessment protocols to reduce the effort required from pathological anatomy professionals.

This work has been funded by FEDER.
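The intersection-over-union metric used to evaluate CADIA can be computed directly from binary tumor masks; a minimal sketch with synthetic masks (not the CADIA code):

```python
import numpy as np

def intersection_over_union(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
    """IoU between two binary masks (1 = tumor region, 0 = background)."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    union = np.logical_or(pred, true).sum()
    if union == 0:                      # both masks empty: define IoU as 1
        return 1.0
    return np.logical_and(pred, true).sum() / union

# Synthetic example: a predicted tumor region shifted slightly from the annotation
true_mask = np.zeros((512, 512), dtype=np.uint8)
true_mask[100:300, 100:300] = 1
pred_mask = np.zeros((512, 512), dtype=np.uint8)
pred_mask[110:310, 110:310] = 1
print(f"IoU = {intersection_over_union(pred_mask, true_mask):.2f}")
```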


   Validation and Performance of Digital Microscopy in the Histopathological Evaluation of Tumoral and Nontumoral Ovary Surgical Specimen Top


Gabriela Izabela Bălţătescu1,2, Mariana Aşchie1,3, Madalina Boşoteanu1,3, Manuela Enciu1,3, Anca Mitroi1,2, A. Nicolau1,2, Lucian Petcu2,4, Nicolae Dobrin2,5, Marina Deacu1,3

1Clinical Service of Pathology, “Sf. Apostol Andrei” Emergency County Hospital, Constanţa, Romania, 2Center for Research and Development of the Morphological and Genetic Studies of Malignant Pathology, “Ovidius” University of Constanţa, Constanţa, Romania, 3Department of Pathology, Faculty of Medicine, “Ovidius” University of Constanţa, Constanţa, Romania, 4Department of Biostatistics and Biophysics, Faculty of Dental Medicine, “Ovidius” University of Constanţa, Constanţa, Romania, 5TEM Laboratory, Faculty of Medicine, “Ovidius” University of Constanţa, Constanţa, Romania. E-mail: [email protected]

Introduction: Digital pathology is a constantly expanding research field that is used increasingly in daily practice and for immunohistochemical analysis. The aim of our study was to analyze the performance of WSI and to validate it in our department according to the CAP guidelines. Materials and Methods: Our study included 80 oophorectomy cases; H&E-stained glass slides were scanned with a Huron TissueScope 4000XT. All slides were analyzed in two stages by two pathologists: first, the H&E-stained slides by conventional microscopy (CM); second, the WSIs, after a 3-week washout period. Our validation study is based on intra-observer variability between CM and digital microscopy (DM) as the primary method of assessment, applying Cohen's κ statistics (SPSS 20.0). Results: The results demonstrated excellent agreement, with kappa coefficients above 0.8 for both pathologists. Although both pathologists would have preferred a CM diagnosis, mostly out of habit, they appreciated the clarity and whole-slide overview provided by DM, as well as the ease of performing additional measurements. Two major pitfalls regarding differential diagnosis were identified (the lack of immunohistochemistry was considered a source of disagreement), along with three minor discrepancies (lack of image clarity, inadequate routine laboratory processing). Conclusions: The current study provides further evidence of the high performance of DM for primary pathological diagnosis of oophorectomy specimens. Diagnosis on WSI is a reliable method.

Acknowledgments

This work is supported by the project ANTREPRENORDOC, Human Resources Development Operational Program 2014–2020, financed by the European Social Fund, contract number 36355/23.05.2019, HRDOP/380/6/13, SMIS Code: 123847.
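For illustration, the intra-observer Cohen's κ described above can be computed as follows (the study used SPSS 20.0; scikit-learn and made-up diagnosis labels are shown here purely as a sketch):

```python
from sklearn.metrics import cohen_kappa_score

# Illustrative paired diagnoses for one pathologist on the same 10 cases,
# first by conventional microscopy (CM), then by digital microscopy (DM).
cm_diagnoses = ["benign", "benign", "borderline", "malignant", "benign",
                "malignant", "benign", "borderline", "malignant", "benign"]
dm_diagnoses = ["benign", "benign", "borderline", "malignant", "benign",
                "malignant", "benign", "malignant", "malignant", "benign"]

kappa = cohen_kappa_score(cm_diagnoses, dm_diagnoses)
print(f"Cohen's kappa = {kappa:.2f}")   # values above 0.8 are usually read as excellent agreement
```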


   An Automatic Patch-Based Approach for HER-2 Scoring in Immunohistochemical Breast Cancer Images Top


Sergio Ossamu Ioshii1,2, Caroline Quadros Cordeiro2, Milena Massumi Kozonoe1, Lucas Ferrari Oliveira2

1Graduate Program on Health Technology, Pontifical Catholic University of Parana, Curitiba, PR, Brazil, 2Department of Informatics, Federal University of Parana, Curitiba, PR, Brazil. E-mail: [email protected]

Introduction: Most classical approaches for automatic evaluation of HER-2 scores include a segmentation step, which is a known source of errors in the subsequent stages of analysis. To reduce these errors, we propose a fully automated, segmentation-free system based on several types of features to score HER-2 in WSI samples. Materials and Methods: Samples of breast carcinoma, stained by immunohistochemistry for HER-2 expression, were scanned to WSI. We divided the analysis into image and patient levels. At the image level, a set of training patches was created (feat_tr). At this level, ten feature vectors and four classic classifiers were employed. The feature vectors included color and texture descriptors as well as features extracted from CNNs. Moreover, we adopted two approaches for class determination: clinical decision and HER-2 scoring. For the clinical decision, the classes are 'negative', 'borderline', and 'positive'; HER-2 scoring distinguishes the 0, 1+, 2+, and 3+ classes. Results: Promising results were obtained on the Warwick dataset, with 90.20% accuracy. Our method avoids segmentation and does not require manual intervention, unlike several of the works reviewed. In addition, it is fully automated and runs easily on a standard desktop computer. Conclusions: The findings presented in this work support the idea of inexpensive techniques to help pathologists in routine work. Furthermore, we propose additional research to compare different patch sizes and to use an end-to-end CNN that includes the classification task.
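A minimal sketch of the patch-level idea, with hand-crafted colour features feeding a classic classifier; the feature set, labels, and data below are illustrative stand-ins rather than the authors' feat_tr pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def color_features(patch: np.ndarray) -> np.ndarray:
    """Simple per-channel mean/std descriptor for an RGB patch (illustrative only)."""
    return np.concatenate([patch.mean(axis=(0, 1)), patch.std(axis=(0, 1))])

# Synthetic RGB patches standing in for HER-2 stained image tiles
patches = rng.integers(0, 256, size=(300, 64, 64, 3), dtype=np.uint8)
labels = rng.choice(["negative", "borderline", "positive"], size=300)  # clinical-decision classes

X = np.array([color_features(p) for p in patches])
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(f"patch-level accuracy: {clf.score(X_test, y_test):.2f}")
```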


   Multi-Task Learning Using Point Label for Nuclei Detection and Segmentation in Immunofluorescence Image Top


Yao Nie1, Alireza Chaman Zar1,2

1Digital Pathology, Roche Tissue Diagnostic, Santa Clara, CA, USA, 2Electrical and Computer Engineering Department, Carnegie Mellon University, Pittsburgh, PA, USA. E-mail: [email protected]

Introduction: In multiplexed immunofluorescence image analysis, nuclei detection and segmentation are fundamental for cell localization and morphological characterization. However, obtaining pixel-level ground truth for training cell segmentation algorithms is extremely labor intensive. To overcome this challenge, we developed an end-to-end deep learning algorithm that performs both nuclei detection and segmentation using only point-label annotations. Materials and Methods: 232 field-of-view images of size 256×256 from the DAPI channel of a 5-plex immunofluorescence panel were used in the experiment. The dataset was split into training (80%), validation (10%), and testing (10%) sets. Data augmentation was performed, resulting in ~3000 small images for training. We treated nuclei detection and segmentation as separate tasks and trained the model through a multi-task training process. In addition, we employed multiple point-label encoding methods, i.e., Voronoi transformation, local pixel clustering, and repel coding, to generate task-oriented pixel-level labels that facilitate the multi-task training. We used a U-Net architecture in which the encoder consisted of pre-trained convolutional layers from ResNet50. Results: For the segmentation task, the pixel-level accuracy and the object-level Dice score were 93.2% and 0.76, respectively. For the detection task, precision and recall were 76.2% and 73.7%, respectively. Conclusions: The proposed method achieves good detection and segmentation performance using only point-label annotations. The high pixel-level accuracy indicates that it handles large variations in nucleus-to-background contrast well. The suboptimal detection performance indicates that additional highly clustered nuclei should be included in the training data to further improve the model's ability to separate touching nuclei.
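One way to turn point annotations into pixel-level labels, in the spirit of the Voronoi encoding mentioned above (a sketch only, not the authors' exact coding; scipy's distance transform assigns each pixel to its nearest annotated point):

```python
import numpy as np
from scipy import ndimage

# Synthetic 128x128 image with three point annotations marking nucleus centres
height, width = 128, 128
points = [(30, 40), (80, 90), (100, 20)]   # (row, col) point labels, illustrative

seed_mask = np.zeros((height, width), dtype=bool)
for r, c in points:
    seed_mask[r, c] = True

# distance_transform_edt with return_indices gives, for every pixel, the coordinates
# of the nearest annotated point, i.e. a Voronoi partition of the image.
distance, (nearest_r, nearest_c) = ndimage.distance_transform_edt(
    ~seed_mask, return_indices=True)

voronoi_label = nearest_r * width + nearest_c           # unique id per Voronoi cell
print(np.unique(voronoi_label).size, "Voronoi regions")  # -> 3

# Pixels far from any point could be treated as background when building training labels
background = distance > 25                               # hypothetical radius
print(f"background fraction: {background.mean():.2f}")
```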


   Computer-Aided Identification of Nasal and Paranasal Tumors using Endoscopic Images Top


Panagiotis Barmpoutis1, Konstantinos Geronatsios2, Tania Stathaki3, Ai Bo3, Spyridon Gougousis2

1Department of Computer Science, Centre for Medical Image Computing, University College London, London, England, UK, 2Department of Otolaryngology, Papanikolaou General Hospital, Thessaloniki, Greece, 3Department of Electrical and Electronic Engineering, Imperial College London, London, England, UK. E-mail: [email protected]

Introduction: Inflammatory sinonasal polyposis and sinonasal inverted papilloma are common lesions of the nasal cavity and paranasal sinuses. Sinonasal polyposis is a benign inflammatory disease, while sinonasal inverted papilloma is a benign sinonasal tumor with macroscopic characteristics similar to nasal polyps. Consequently, the diagnosis of the two lesions can be made with certainty only after histopathologic examination of tissue specimens. The development of AI-assisted identification tools could therefore play a critical role in early cancer detection, as inverted papilloma can occasionally undergo malignant transformation. Materials and Methods: The proposed methodology first removes noise and specular reflection areas from the endoscopic images after converting their color space to HSV. Noisy areas are detected based on the saturation and value channels, and a bilateral filter is applied, replacing the values of noisy pixels with the average of neighboring pixel values, in order to effectively remove the noise. Subsequently, an energy minimization technique based on graph cuts is applied to identify areas within the nasal cavity and paranasal sinuses that are potentially affected by lesions. Finally, for the classification of nasal and paranasal tumors, the identified regions of interest are divided into blocks of size 32×32, which feed a convolutional neural network built according to the VGG-19 architecture. Results: Evaluating the proposed method, we achieved a classification rate of 82.4% on a dataset of 820 images created using nasal endoscopes and data augmentation techniques. Conclusions: Experimental results show that the proposed approach can effectively be used as a diagnostic aid for clinicians.
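The pre-processing step can be sketched with OpenCV as follows; the saturation/value thresholds are hypothetical, as the abstract does not report them:

```python
import cv2
import numpy as np

# Synthetic frame standing in for a real endoscopic image
image = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)

# Convert to HSV and flag bright, low-saturation pixels as specular reflections
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
saturation, value = hsv[:, :, 1], hsv[:, :, 2]
specular_mask = (saturation < 40) & (value > 220)     # hypothetical thresholds

# Smooth the image with an edge-preserving bilateral filter and replace the
# flagged pixels with their filtered (neighbourhood-averaged) values
smoothed = cv2.bilateralFilter(image, d=9, sigmaColor=75, sigmaSpace=75)
cleaned = image.copy()
cleaned[specular_mask] = smoothed[specular_mask]
```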


   Deep Learning-Based Quantification of Calcification in Femoral Plaques Reveals an Association of the Area Proportion of Nodular Calcification with the Severity of the Peripheral Arterial Disease Top


Mae Azeez1,2, A. Inkeri Lokki1,2,3, Mirjami Laivuori4, Johanna Tolva1, Nina Linder5, Johan Lundin5,6, Mikko I. Mäyränpää7, Marja-Liisa Lokki1, Juha Sinisalo2

1Department of Pathology, Transplantation Laboratory, University of Helsinki, Helsinki, Finland, 2Department of Cardiology, Heart and Lung Center, Helsinki University Hospital, University of Helsinki, Helsinki, Finland, 3Research Programs Unit, Translational Immunology Research Program, University of Helsinki, Helsinki, Finland, 4Department of Vascular Surgery, Helsinki University Hospital, University of Helsinki, Helsinki, Finland, 5Institute for Molecular Medicine Finland, HILIFE, University of Helsinki, Helsinki, Finland, 6Department of Public Health Sciences, Global Health/IHCAR, Karolinska Institute, Stockholm, Sweden, 7Pathology, Helsinki University Hospital, Helsinki University, Helsinki, Finland. E-mail: [email protected]

Introduction: Calcification categories and their clinical associations have previously been analyzed mainly on the basis of their presence or absence. We aimed to quantify nodular calcification (NodCa) in femoral plaque sections and to analyze its association with patient characteristics and clinical profile. Materials and Methods: Longitudinal sections of common femoral endarterectomy plaques (n=90), stained with hematoxylin and eosin, were digitized as whole slide images on a deep learning platform. A deep learning algorithm was developed to localize NodCa and calculate its area relative to the sectioned plaque. As an indicator of the rate of plaque progression, the maximum internal elastic vessel diameter (IEVD) of the obstructed or ≥90% stenosed vessels was measured using the platform's measurement tool. Clinical characteristics were retrieved from patient records. Results: The area percentage of NodCa correlated with toe pressure readings (R=0.647, P<0.001). Furthermore, this area percentage was significantly smaller in urgently operated patients than in electively operated patients (0.142±0.126 versus 0.253±0.138, respectively, P<0.001). In the obstructed and ≥90% stenosed samples, a positive correlation was also observed between the NodCa area percentage and both the IEVD (R=0.711, P<0.001) and the plaque section area (R=0.582, P<0.001). Conclusions: The amount of NodCa is negatively associated with the severity of the disease, possibly reflecting slower plaque progression. Its association with larger plaque section area and IEVD in the obstructed and semi-obstructed samples may confirm slow disease progression and may implicate an effective compensatory vascular remodeling mechanism.
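The R values above are correlation coefficients; as a minimal illustration (assuming Pearson correlation, which the abstract does not state explicitly, and using simulated rather than study data):

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
# Simulated NodCa area fractions and toe pressures for 90 plaques (illustrative only)
nodca_fraction = rng.uniform(0.0, 0.5, 90)
toe_pressure = 40 + 120 * nodca_fraction + rng.normal(0, 15, 90)

r, p = pearsonr(nodca_fraction, toe_pressure)
print(f"R = {r:.3f}, P = {p:.3g}")
```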


   Standardization of Quantitative Immunohistochemistry with Nano-Fluorescence Particles: An Activity in ISO/TC 229/WG 5 Top


Hiroki Nakae1

1JMAC Japan bio Measurement and Analysis Consortium, Tokyo, Japan. E-mail: [email protected]

Introduction: Quantification of biomolecules, mainly proteins, is a fundamental technology for the generalization of digital pathology. Considering emerging technologies such as AI applications, the quality of the quantified values is a key factor for digital pathology platforms in future pathology laboratories. Conventionally, various fluorescent dyes, including FITC (fluorescein isothiocyanate), have been used for immunohistochemical staining to identify the localization of target biomolecules in qualitative analyses. They are also applied to quantitative analysis in combination with various algorithms that calculate signal strength correlated with the quantity of the target biomolecules. For reliable measurement results, the performance of the fluorescent material is a key factor for the whole quantification system. Materials and Methods: In this work, fluorescent nanoparticles were highlighted for use in immunofluorescence. They generally show higher brightness and longer photobleaching times than conventional fluorescent dyes. These characteristics should be an advantage for quantitative analysis by immunohistochemical methods combined with quantification algorithms. To meet the strong need for standardization that ensures the compatibility of signals from different systems, a standardization activity has been started in ISO/TC 229 “Nanotechnology”, WG 5 “Products and applications”. Results: After submission of the proposal, the preliminary work item was registered as PWI 23366. It is intended to describe minimum requirements for the performance evaluation of products and applications of fluorescent nanoparticles and is being discussed in the working group. Conclusions: As a first step, work has started on a standard with the tentative title “Nanotechnologies - Performance evaluation of quantification methods of biomolecules using fluorescent nanoparticles”.


   Preview Station: A Device for Provenance Documentation in Slide Digitization Workflows Top


Robert Reihs1,2, Markus Plass1, Birgit Pohn1, Kurt Zatloukal1, Heimo Müller1,2

1Medical University Graz, Institute of Pathology, Graz, Austria, 2BBMRI-ERIC - Biobanking and BioMolecular resources Research Infrastructure - European Research Infrastructure Consortium, Graz, Austria. E-mail: [email protected]

Introduction: Weakly supervised machine learning algorithms demand a huge number of scanned histopathological slides. Such collections are only available in biobanks that have collected slides over a very long period, in our case since 1983. However, the digitization of large historical collections also entails a number of slide quality challenges, such as dirt, contamination, handwritten labels, and ink markers. Materials and Methods: We built a “preview station” with two light sources to capture both the label and the entire tissue area. In slide digitization, the preview station supports the following steps: a) documentation of the slides as received from the archive, b) registration and label transcription, c) application of a study barcode, and d) documentation of slide cleaning. With the help of the preview station, the full provenance information (original/barcoded label, ink markers, slide defects) is documented in a machine-readable format. Results: So far we have generated ~600,000 preview images and trained ML models for semi-automatic metadata generation, working on the automatic classification of specimen type (biopsy, operational sample, …), tissue type (tumor, lymph node, ...) and slide staining. The resolution of a preview image is 4208 x 3120 pixels, and the generation of the two preview images plus label transcription takes on average 13 seconds per slide. Conclusions: With our solution, the full provenance of the digitization workflow can be captured. This increases efficiency and documentation quality while reducing error rates. The preview station can also be used to generate glass slide catalogues in biobanks without scanning.


   Reducing the Annotation Workload with Transfer Learning: a Feasibility Study for Intestinal Gland Segmentation Top


Linda Studer1,2,3, Sven Wallau2, Inti Zlobec2, Heather Dawson2, Andreas Fischer1,3

1iCoSys, University of Applied Sciences and Arts Western Switzerland, Delémont, Switzerland, 2Institute of Pathology, Faculty of Medicine, University of Bern, Switzerland, 3DIVA Research Group, Department of Informatics, University of Fribourg, Switzerland. E-mail: [email protected]

Introduction: Convolutional neural networks require large amounts of training data to achieve good performance. However, acquiring these data can be very time consuming and costly. Publicly available datasets can help overcome this problem when used for pre-training and transfer learning. In this study, we investigate the feasibility of using the publicly available GlaS dataset to pre-train our model for intestinal gland segmentation, in order to reduce the manual annotation workload. Materials and Methods: We created a dataset of 165 images with specifications similar to the GlaS dataset, using 16 slides from pT1 colorectal cancer patients. Using a state-of-the-art U-Net architecture for image segmentation, three training scenarios were compared: a) pre-training on the GlaS dataset, b) pre-training followed by fine-tuning on 30 images of our dataset, and c) training from scratch on 85 images of our dataset. The remaining images were used to optimize hyper-parameters and to evaluate the final performance. Results: The model trained only on the GlaS dataset achieved a Jaccard index of 57.5%. Fine-tuning further improved this performance to 88.3%. In contrast, the model trained only on our data achieved a performance of 87.4%. Conclusions: Fine-tuning the GlaS pre-trained network helped adjust the network to the local cohort and achieved the same performance as training on the local cohort alone. We also showed that using publicly available datasets considerably decreases the amount of annotated data needed. Making datasets publicly available is thus of great help to the research community.
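Scenario b), pre-training followed by fine-tuning, follows the usual transfer-learning recipe; a minimal PyTorch-flavoured sketch (the tiny network, checkpoint path, and learning rate are placeholders, not the authors' U-Net setup):

```python
import torch
import torch.nn as nn

# Tiny stand-in for the segmentation network; the study used a full U-Net.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 1),            # one output channel: gland vs. background
)

# a) pre-training on GlaS would produce a checkpoint such as this (hypothetical path)
# model.load_state_dict(torch.load("unet_glas_pretrained.pt"))

# b) fine-tune on the locally annotated images with a small learning rate
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.BCEWithLogitsLoss()

# Synthetic batch standing in for locally annotated tiles and gland masks
images = torch.rand(4, 3, 128, 128)
masks = (torch.rand(4, 1, 128, 128) > 0.5).float()

for _ in range(10):                 # a few illustrative fine-tuning steps
    optimizer.zero_grad()
    loss = criterion(model(images), masks)
    loss.backward()
    optimizer.step()
```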


   Visualization of the “Decision Making Path” as a Tool for Training and Education in Digital Pathology Top


Birgit Pohn1, Robert Reihs1,2, Markus Plass1, Marie-Christina Mayer1, Farah Nader1, Helmut Denk1, Kurt Zatloukal1, Andreas Holzinger3, Heimo Müller1,2

1Medical University Graz, Institute of Pathology, Graz, Austria, 2BBMRI-ERIC - Biobanking and BioMolecular resources Research Infrastructure - European Research Infrastructure Consortium, Graz, Austria, 3Institute for Medical Informatics, Statistics and Documentation, Medical University Graz, Graz, Austria. E-mail: [email protected]

Introduction: Medical training is based on the acquisition of theoretical and practical skills. The latter is often accomplished through the direct transfer of knowledge between experts and beginners. Classical teaching methods usually concentrate on explicit knowledge that can be identified and conveyed easily. Materials and Methods: Our work deals with the detection of implicit expert knowledge by tracking the diagnostic process in pathological examinations: microscopic examinations are recorded on video, which allows further analysis. The pathologist's navigation path through the tissue sample can be visualized by comparing the observed areas with the whole slide image (WSI), showing the route an expert has taken to arrive at a diagnosis. Meta-information can be derived from panning, zooming, and observation time; every event serves as a “landmark” on the route to a decision, annotated by the recorded audio comments. Results: The tracking of examination routes enables applications for medical training in the form of virtual mentoring, such as “stepping into someone's shoes” by viewing the recordings or simulating a “flight” through the WSI. Furthermore, the visualization of the diagnostic path on the WSI highlights the areas that led to a diagnosis. Experts' routes can be used as a reference in medical training to teach best practices and serve as a comparison model for prospective specialists. Conclusions: The approach has shown that the traceability of diagnostic processes has considerable potential for medical education through the analysis of explicit and implicit knowledge. Moreover, the data can be used as a basis for machine learning (ML) and artificial intelligence (AI) in digital pathology workflows.


   Summary of 1-Year Operation of WSI Telepathology and Telecytology Diagnosis of Vietnamese Health Evaluation Center from Japan Top


Ichiro Mori1, Ryosuke Matsuoka1, Takayuki Shiomi1

1Department of Pathology, School of Medicine, International University of Health and Welfare, Japan. E-mail: [email protected]

Introduction: Our university opened a health check facility in Ho Chi Minh City in 2018, providing the Japanese-standard health check system in Vietnam. All radiology and pathology images are supervised from Japan. We report the results of one year of operation of WSI telepathology and telecytology. Materials and Methods: We cover endoscopic biopsies of the gastrointestinal (GI) tract and cytology from the uterine cervix. We use a NanoZoomer S210 as the WSI scanner and WebPath as the pathology information system. Cytology specimens are prepared using the Liqui-PREP LBC system. Cytology slides are scanned as a 3-layer Z-stack with 2 μm spacing. Vietnamese pathologists trained at our Mita hospital make the primary diagnosis using a conventional microscope; the WSIs are then reviewed from Japan and the final diagnosis is made. Results: Up to the end of November 2019, we performed pathology diagnoses on 245 biopsy and 573 cytology specimens. The concordance rate between the Vietnamese and Japanese pathologists was fairly good. We found 7 cancers among the 245 GI tract biopsies (2.9%). The proportion of NILM in cervical cytology was about 98%. Conclusions: The WSI quality was fairly good. Compared to Japan, the positive rate of GI tract biopsies was extremely high, while the positive rate of cervical cytology was almost the same. Regarding endoscopic examination, unlike the usual health check setting, in which basically healthy people come to check their condition, it is possible that people with symptoms are coming to try the Japanese-style medical service.

