Journal of Pathology Informatics

J Pathol Inform 2020,  11:1

Pathology Vision 2019

Date of Web Publication: 20-Jan-2020


Source of Support: None, Conflict of Interest: None

DOI: 10.4103/2153-3539.276115


How to cite this article:
. Pathology Vision 2019. J Pathol Inform 2020;11:1


   Oral and Poster Abstracts

Attendees of the Pathology Visions 2019 (PV19) meeting of the Digital Pathology Association (DPA) were welcomed to Orlando, where they spent two days learning about the Evolution and Revolution of Digital Pathology!

With the FDA approval of two systems for clinical use of digital pathology in the USA, we are now seeing progress in the application of digital pathology that will form the basis for advanced computational technologies. PV19, the annual meeting of the DPA, celebrates its 10th year as the leading event dedicated to advancing the field of digital pathology. This conference brings pathologists, scientists, technologists, administrators and industry partners together to share cutting-edge knowledge of digital pathology applications in healthcare and life sciences.

This year, participants heard from keynote presenter Anil Parwani who discussed “Digital Pathology and Artificial Intelligence: The Evolution towards Digital Disruption of Diagnostic Pathology”. Plenary presenter Thomas Fuchs brought us up to date on “Medical Artificial Intelligence at Scale: Changing Clinical Practice One Petabyte at a Time”. A special lecture by DPA founder Dirk Soenksen took us “Walking Down Memory Lane” on “A Historical Perspective of Digital Pathology and the Founding of the DPA”. These highlights paved the way for many more timely presentations and workshops by distinguished speakers in the simultaneous Clinical Track and the Education & Research Track. Additional presentations included preconference and breakfast workshops. The distinguished presenters came from all over the world including the United States, Canada, Europe and Asia. Travel award recipients and poster award winners were recognized at PV19; please join us in congratulating them.

2019 Travel Award Recipients:

  • Kingsley Ebare, MD, MPH, Zucker School of Medicine/Northwell Health at Staten Island University Hospital
  • Jennifer Jakubowski, BA, CT(ASCP), Drexel University
  • Simon Thomas, BSc MBioinf, Institute for Molecular Bioscience

2019 Poster Award Winners:

  • Clinical: How Thin is a ThinPrep® Slide? Challenges of Achieving High Focus Quality on a Digital Whole Slide Imaging System – presented by Dr. Shaoqing Peng, Hologic Inc.
  • Education: Transition of Medical Pathology Rounds: From the Microscope to Digital Pathology – presented by Mary Melnyk, DynaLIFE Medical Labs
  • Research: Lessons Learned from Validation of an Image Search Engine for Histopathology – presented by Dr. Shivam Kalra, University of Waterloo
  • Image Analysis: Machine learning for real-time search and prediction of disease state to aid pathologist collaboration on social media – presented by Dr. Andrew Schaumberg, Weill Cornell Graduate School of Medical Sciences
  • Resident: Automated Diagnosis of Lymphoma with Digital Pathology Images Using Deep Learning – presented by Hanadi El Achi, UTHealth.

This conference offered a wide range of topics including whole slide imaging, image analysis, and deep learning for clinical diagnosis, education and research. PV19 provided attendees with the opportunity to meet with experts and peers in digital pathology through networking events, including two receptions, refreshment breaks and lunches, which also provided opportunities to meet leading industry vendors exhibiting their newest products. Attendees were impressed by how much the technology has advanced. Connect-a-thon, round-table discussions, regulatory and standards updates, and timely hot topics rounded out the program.

Pathology Visions provides an excellent archive of learning material. Following the conference, recorded presentations are posted to the DPA website. Oral and poster presentation abstracts are published in this edition of the Journal of Pathology Informatics.

Together, we are leading the evolution of the field of digital and computational pathology as we revolutionize the practice of pathology! A special thanks to this year's Program Committee:

Sylvia Asa, MD, PhD, University of Toronto (Co-Chair)

Liron Pantanowitz, MD, UPMC (Co-Chair)

Ulysses Balis, MD, University of Michigan

Marilyn Bui, MD, PhD, Moffitt Cancer Center

Junya Fukuoka, MD, PhD, Nagasaki University School of Medicine

Eric Glassy, MD, Affiliated Pathologists Medical Group

Mike Isaacs, Washington University School of Medicine

Lisa Manning, MLT, BSc. (Hon), Shared Health

Anil Parwani, MD, PhD, MBA, Ohio State University

Christopher Ung, MSc, MBA, HistoGenex

Bethany Williams, MBBS BSc, Leeds Teaching Hospitals NHS Trust

Mark Zarella, PhD, Drexel University College of Medicine

   Oral Abstracts

   Prostate Cancer Grading on Gigapixel Microscopy Images Using Machine Learning

Wenchao Han1, 2, 3, Carol Johnson1, Mena Gaed4, Jose Gomez-Lemus4, Madeleine Moussa4, Joseph Chin5,6, Stephen Pautler5,6, Glenn Bauman3,5, Aaron Ward1, 2, 3, 5

1Baines Imaging Research Laboratory, London Regional Cancer Program, London, Canada,2Lawson Health Research Institute, London, Ontario, Canada,3Department of Medical Biophysics, Western University, London, Ontario, Canada,4Department of Pathology and Laboratory Medicine, Western University, London, Ontario, Canada,5Department of Oncology, Western University, London, Ontario, Canada,6Department of Surgery, Western University, London, Ontario, Canada.

E-mail: [email protected]

Introduction: Radical prostatectomy surgical pathology interpretation is typically qualitative and subject to intra- and inter-observer variability. Identification of high-grade cancer is important since the presence of high-grade cancer usually leads to very different prognostic outcomes and influences post-surgical treatment. We developed and validated a software platform that can automatically grade cancerous regions on whole-mount digital histopathology images to support quantitative and visual reporting. Methods: Our study used hematoxylin- and eosin-stained digital histology images scanned at 0.5 μm/pixel from 245 mid-gland tissue sections from 70 radical prostatectomy patients, comprising 27,228 480 μm×480 μm regions-of-interest (ROIs) covering all high- and low-grade cancerous regions. Each cancerous region was annotated and graded by an expert genitourinary pathologist. We used our previously proposed algorithm to label each pixel as nuclei, lumen, or stroma/other to generate tissue component maps (TCMs). We used 7 different machine learning approaches (3 non-deep learning and 4 deep learning) to classify each cancerous ROI as high- or low-grade with validation against the expert annotations using leave-one-patient-out cross-validation. Results: Fine-tuned AlexNet with ROIs of raw images yielded the best results in grading high (Gleason-Pattern 4 (G4)) vs. low (G3), with an area-under-the-receiver-operating-characteristic-curve (AUC) of 0.934. Fine-tuned AlexNet with ROIs of TCMs yielded the best results in grading high (G4 and G5 involved tissue) vs. low (G3) with an AUC of 0.923. Conclusion: Deep learning-based approaches outperformed non-deep learning-based approaches for prostate cancer grading. TCMs provide the primary cues for prostate cancer grading. The system is ready for multi-center validation and a user study toward translational application.
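The leave-one-patient-out protocol described above can be sketched as follows. The features, labels, and nearest-centroid stand-in classifier below are synthetic illustrations of the validation scheme only, not the study's TCM features or fine-tuned AlexNet:

```python
# Sketch of leave-one-patient-out cross-validation: ROIs are grouped by
# patient so no patient contributes to both training and testing.
import numpy as np

rng = np.random.default_rng(0)
n_rois = 200
X = rng.normal(size=(n_rois, 8))                 # stand-in ROI feature vectors
y = (X[:, 0] > 0).astype(int)                    # 1 = high grade, 0 = low grade
patients = rng.integers(0, 10, size=n_rois)      # patient ID for each ROI

correct = total = 0
for p in np.unique(patients):
    test = patients == p                         # hold out one patient's ROIs
    train = ~test
    mu0 = X[train & (y == 0)].mean(axis=0)       # low-grade centroid
    mu1 = X[train & (y == 1)].mean(axis=0)       # high-grade centroid
    d0 = np.linalg.norm(X[test] - mu0, axis=1)
    d1 = np.linalg.norm(X[test] - mu1, axis=1)
    pred = (d1 < d0).astype(int)                 # nearest-centroid decision
    correct += int((pred == y[test]).sum())
    total += int(test.sum())

accuracy = correct / total                       # pooled over held-out patients
```

Grouping by patient rather than by ROI matters here: ROIs from the same section are correlated, so a plain ROI-level split would overstate performance.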

   Predicting Response to Neoadjuvant Chemotherapy using Machine Learning Models Integrated with Image-Based Tumor Microenvironment Features, Biomarkers and Clinical Features in HER2-Positive Breast Cancers

Zhi Huang1, 2, 3, Zhi Han2, 4, †, Kun Huang2, 4, 5, Anil V. Parwani6, Zaibo Li6

1Department of Electrical and Computer Engineering, School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, USA,2Department of Medicine, Indiana University School of Medicine, Indianapolis, IN, USA,3Department of Electrical and Computer Engineering, Indiana University - Purdue University Indianapolis, Indianapolis, IN, USA,4Regenstrief Institute, Indianapolis, IN, USA,5Health Science Center, National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Shenzhen University, Shenzhen, China,6Department of Pathology, The Ohio State University Wexner Medical Center, Columbus, OH, USA.

E-mail: [email protected]

Background: Pathologic complete response (pCR) to anti-HER2 neoadjuvant chemotherapy (NAC) is a presumptive surrogate for disease-free survival in breast cancer (BC) patients. Potential factors associated with pCR have been investigated separately. We aimed to develop machine learning models integrating image-based tumor microenvironment features, biomarkers and clinical features to predict the response to NAC. Methods: Sixty-four HER2-positive BC patients treated with anti-HER2 NAC and subsequent resection were included. A multiplex immunohistochemistry (IHC) assay simultaneously detecting PD-L1, CD8 and CD163 was performed on pretreatment biopsies before NAC to evaluate the tumor microenvironment. Various image features were extracted from H&E and IHC whole slide images (WSI). Multiple machine learning models integrating image-based tumor microenvironment features, biomarkers and clinical features were evaluated. Results: Machine learning models using image-based tumor microenvironment features can predict NAC response, and the predictive power was further improved by integrating biomarkers and clinical features. LASSO-regularized logistic regression outperformed other machine learning models and demonstrated that pCR was positively associated with HER2/CEP17 ratio, CD163, CD8 and tumoral PD-L1 expression, while negatively associated with age, PR, ER, and the ratio of stromal area to tissue area. The algorithm is freely available as an online web tool. Conclusion: Our data demonstrated that machine learning models integrated with image-based tumor microenvironment features, biomarkers and clinical features can predict NAC response in HER2-positive BC patients, and the accuracy was improved when integrating all features.
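A minimal sketch of the LASSO-regularized logistic regression step, using scikit-learn on synthetic stand-in features; the column layout, regularization strength, and data below are assumptions for illustration, not the study's actual variables or configuration:

```python
# L1-regularized ("LASSO") logistic regression: the L1 penalty drives the
# coefficients of uninformative features to exactly zero, yielding a sparse,
# interpretable model over the integrated feature set.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 200
# hypothetical columns: e.g. HER2/CEP17 ratio, CD8 density, age, plus noise
X = rng.normal(size=(n, 6))
logit = 1.5 * X[:, 0] + 1.0 * X[:, 1] - 1.0 * X[:, 2]
y = (logit + rng.normal(scale=0.5, size=n) > 0).astype(int)  # pCR yes/no

model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
coef = model.coef_.ravel()
n_zero = int(np.sum(np.abs(coef) < 1e-8))  # number of pruned features
```

The signs of the surviving coefficients are what support statements like "pCR was positively associated with HER2/CEP17 ratio": a positive coefficient raises the predicted probability of response.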

   Machine Learning Algorithms as an Adjunct Tool for Prostate Cancer Diagnosis in Core Needle Biopsy

Patricia Raciti1, Jillian Sue1, Christopher Kanan1, Rodrigo Ceballos1, Ran Godrich1, Victor Reuter1,2, Leo Grady1, David Klimstra1,2, Thomas Fuchs1,2

1Paige.AI, New York, New York, USA,2Department of Pathology, Memorial Sloan-Kettering Cancer Center, New York, New York, USA.

E-mail: [email protected]

Background: Prostate cancer is the second most common cancer among men in the United States, and the gold standard for its diagnosis is prostate needle core biopsy. Diagnosis can be challenging, however, especially on small, well differentiated foci. Deep learning algorithms can be used as a tool to facilitate the review and diagnosis of prostate cancer on core needle biopsy. Methods: Three AP board-certified pathologists were each given 8 hours to access the same collection of digitized H&E-stained prostate needle core biopsy slides in each of two phases. Ground truth was established from corresponding pathology reports from the diagnosing institution. In Phase I, pathologists classified images as cancerous or benign. After approximately 4 weeks, in Phase II, pathologists received the same instructions; however, the digitized images they assessed were prescreened by an algorithm which identified possible cancerous foci. The AI algorithm was a variant of the weakly supervised method presented in Campanella et al. and was trained on 36,644 WSI (7514 had cancerous foci). Results: Without the aid of the AI, the pathologists had an average sensitivity of 74% and an average specificity of 97%. With AI assistance, the average sensitivity of the pathologists increased to 90% while their specificity was 95%. With AI assistance, pathologists more often correctly classified smaller, lower grade tumors and spent less time analyzing each image. Conclusions: Machine learning algorithms trained on large datasets can predict cancerous slides with high sensitivity and specificity. While this study is small, it demonstrates that machine learning decision support systems can increase the sensitivity of pathology diagnosis and decrease the time it takes to analyze a slide for the presence or absence of cancer.
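The reader-study metrics above reduce to simple confusion-matrix arithmetic; a minimal sketch, with illustrative counts rather than the study's data:

```python
# Sensitivity and specificity from per-slide calls versus ground truth,
# as used to compare unassisted and AI-assisted reads.
def sens_spec(truth, calls):
    """Return (sensitivity, specificity) for binary slide-level calls."""
    tp = sum(1 for t, c in zip(truth, calls) if t and c)
    tn = sum(1 for t, c in zip(truth, calls) if not t and not c)
    fn = sum(1 for t, c in zip(truth, calls) if t and not c)
    fp = sum(1 for t, c in zip(truth, calls) if not t and c)
    return tp / (tp + fn), tn / (tn + fp)

# e.g. 10 cancerous and 10 benign slides, with one miss and one false alarm
truth = [1] * 10 + [0] * 10
calls = [1] * 9 + [0] + [0] * 9 + [1]
sens, spec = sens_spec(truth, calls)  # (0.9, 0.9)
```

The reported shift (sensitivity 74% to 90% at a small specificity cost, 97% to 95%) corresponds to the prescreening algorithm converting false negatives into true positives while introducing a few false positives.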

   CytoProcessor: A New Cervical Cancer Screening System for Remote Diagnosis

Elizabeth F. Crowell1, Cyril Bazin1, Romain Brixtel1, Yann Caillot1, Matthieu Toutain1, Francois Saunier2, Joelle Depardon3, Arnaud Renouf1

1R&D, DATEXIM, Caen, France,2Regulatory Affairs, SOFOS, Charbonnieres Les Bains, France,3CytoPathology, Technipath Laboratory, Limonest, France.

E-mail: [email protected]

Current automated cervical cytology screening systems depend heavily on manipulation of glass slides. We developed CytoProcessor, which takes advantage of virtual slide technology and artificial intelligence to detect, classify and sort Pap smear cells in order to increase sensitivity while saving time in diagnosis. A gallery of abnormal cell thumbnails is presented to the user in a web application, who can then interact with it to visualize the given cells in the whole slide image. We set out to compare CytoProcessor and the ThinPrep Imaging System. A representative population of 1352 cases was selected from the routine workflow for diagnosis by both methods. All discordances were resolved by a consensus committee. CytoProcessor significantly improves diagnostic sensitivity without compromising specificity. If the CytoProcessor diagnosis had been used, 1.5% of patients would have been missed. In contrast, 4% of patients were missed with the ThinPrep Imaging System (a 2.6-fold decrease). CytoProcessor performs better on cases where abnormal cells are isolated, specifically on LSIL lesions with koilocytes. With CytoProcessor, 2.2 hours of human resources are saved per 100 slides thanks to the completely digitized workflow and its computer-assisted screening tool. Moreover, the first pathologist to use the solution routinely in a private laboratory reported working six times faster. CytoProcessor is the first of a new generation of remote and automated screening systems, demonstrating improved sensitivity and gains in time. The fully digital nature of this solution also allows diagnoses to be made remotely.

   What Can We Learn from Human Visual Interaction with Digital Pathology Media?

Sharon Elizabeth Fox1,2, Beverly Faulkner-Jones3,4

1Department of Pathology and Laboratory Medicine Service, Southeast Louisiana Veterans Healthcare System, New Orleans, LA, USA,2Department of Pathology, LSU Health Sciences Center, New Orleans, LA, USA,3Department of Pathology, Beth Israel Deaconess Medical Center, Boston, MA, USA,4Quantum Pathology, Waltham, MA, USA. E-mail: [email protected]

Background: Data from human eye-tracking is a valuable tool in our understanding of human-image interaction, as well as how humans learn to efficiently extract information for morphology-based diagnosis. Analytical methods for understanding complex features of eye-tracking data can be applied to explore the influence of human gaze path visualizations and provide a means for further incorporating this type of information into computer presentation of relevant image features. Methods: Eye-tracking data were collected with a Tobii X2-60 eye tracker from pathologists and trainees, who were allowed to view various forms of digital pathology media alone, with guidance from an instructor, and finally with guidance augmented by gaze path visualization of the instructor. Raw eye-tracking data were processed through several novel methods to explore different clustering, trajectory simplification, and map construction techniques to analyze similarity between trainee and expert gaze patterns under these alternate forms of training. Gaze patterns associated with novel forms of image presentation were also explored. Results: Similarity of gaze paths occurred most frequently, and most rapidly, following training on digital pathology media augmented by gaze path visualization. In addition, trainees expressed greater confidence in their understanding following the didactics utilizing gaze visualization. Conclusions: Visualization of expert gaze patterns upon medical images can lead to efficient adoption of similar search patterns among trainees, and improved analytical methods for eye-tracking data can allow us to improve convergence of gaze patterns in education, as well as understand the potential for human-computer interaction in the educational setting.
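One way to score trainee-versus-expert gaze-path similarity is dynamic time warping (DTW) over fixation coordinates. The abstract does not name its similarity metric, so DTW is an illustrative choice here, chosen because it tolerates paths of different lengths and speeds:

```python
# Dynamic time warping distance between two gaze paths, each given as a
# list of (x, y) fixation points; smaller values mean more similar paths.
import math

def dtw(path_a, path_b):
    """DTW distance between two gaze paths via the standard DP recurrence."""
    n, m = len(path_a), len(path_b)
    inf = float("inf")
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = math.dist(path_a[i - 1], path_b[j - 1])  # Euclidean step cost
            cost[i][j] = d + min(cost[i - 1][j],          # repeat a point of b
                                 cost[i][j - 1],          # repeat a point of a
                                 cost[i - 1][j - 1])      # advance both paths
    return cost[n][m]
```

Identical paths score 0, so a falling DTW distance across training sessions would quantify the convergence of trainee scan paths toward the expert's that the study reports.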

   Artificial Intelligence 101

Stanley Cohen1, 2, 3, 4

1Department of Pathology, Rutgers-NJMS, Newark, NJ, USA,2Department of Pathology, University of Pennsylvania, Philadelphia, PA, USA,3Department of Pathology, Jefferson University School of Medicine, Philadelphia, PA, USA,4Department of Pathology, Northwestern School of Medicine, Chicago, IL, USA. E-mail: [email protected]

It is now well-accepted that artificial intelligence's role in pathology is rapidly expanding. However, there is still limited knowledge among pathologists as to exactly what artificial intelligence is and how it is implemented. The purpose of this presentation is to bridge the gap between pathologists and computer scientists by providing a guide for the former to the underpinnings of machine learning with only minimal reference to the underlying mathematics involved. The difference between artificial intelligence and deep learning will be framed in terms of such basic issues as classification, detection, inference and prediction. We will see that since machines can currently “learn” rather than “think”, the computer will become a silicon pathology assistant rather than a replacement for the pathologist. The basic concepts and strategies covered include (1) general varieties of machine learning, (2) “shallow learning” via geometric, probabilistic, and stratification models, and (3) “deep learning” via fully connected and convolutional neural networks. Issues relating to unsupervised and transfer learning, ensemble and hybrid learning, as well as practical considerations relating to dimensionality and overfitting, will also be discussed.

   Establishing an Image Analysis Lab: Developing Quantitative Image Analysis for CD8 Cells in Tissue

Douglas J. Hartman1

1Department of Pathology, University of Pittsburgh Medical Center, Pittsburgh, PA, USA. E-mail: [email protected]

Background: Numerous papers have been published on the clinical significance of CD8 cells within various tumor types. Our clinical colleagues have been requesting that we begin to provide an assessment of the quantity of CD8 cells within tissue. By light microscopy, this is generally performed in a semi-quantitative fashion. We set out to create an automated method for the quantification of CD8 cells within tissue sections for head and neck squamous cell carcinoma cases. Methods: Using the Leica image analysis platform, we optimized a cytoplasmic image analysis algorithm. This was done by manually counting all of the CD8 cells (as identified by a CD8 immunostain) within multiple cores of a lung cancer tissue microarray. Once the algorithm was optimized, we tested it on 74 cases of head and neck squamous cell carcinoma with known outcome. We then created a quality control system to ensure that the system is performing correctly. Results: We went live with reporting CD8 cell density for head and neck squamous cell carcinomas in September 2018. Feedback from numerous stakeholders was necessary throughout the process, through which we gained insight into the deployment of image analysis in a diagnostic pathology department. Conclusions: Establishing a clinically directed image analysis service required engagement with numerous stakeholders to ensure that the image algorithms performed properly and satisfied the clinical needs. Performing ongoing quality control is necessary to ensure that the performance of image analysis is robust and reliable and can be used for clinical care.
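The two computational pieces described, converting per-region counts into a reportable density and checking the algorithm against manual counts, can be sketched in a few lines. The 10% tolerance is a hypothetical threshold for illustration, not the program's actual QC rule:

```python
# CD8 density reporting and a simple QC comparison against manual recounts.
def cd8_density(cell_count, area_um2):
    """CD8+ cells per mm^2 for one annotated region (1 mm^2 = 1e6 um^2)."""
    return cell_count / (area_um2 / 1e6)

def qc_pass(algorithm_count, manual_count, tolerance=0.10):
    """Flag regions where algorithm and manual counts disagree by > tolerance."""
    if manual_count == 0:
        return algorithm_count == 0
    return abs(algorithm_count - manual_count) / manual_count <= tolerance
```

For example, 250 cells in a 0.5 mm^2 region reports as 500 cells/mm^2, and a core where the algorithm finds 80 cells against a manual count of 100 would be flagged for review under this tolerance.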

   Importance of Pre-Imaging Factors in the Histology Laboratory

Liron Pantanowitz1, Lisa Manning2

1Department of Pathology, University of Pittsburgh Medical Center, Pittsburgh, PA, USA,2Department of Pathology, Shared Health Diagnostics, Winnipeg, Manitoba, Canada. E-mail: [email protected]

With the adoption of digital pathology and the promise of artificial intelligence, more labs are moving towards primary diagnosis and utilizing image analysis. As a result, we need to focus on producing high quality glass slides that are optimized for digitization. Whole slide imaging is an enabling tool, but introducing this technology into the pathology workflow has shed light on the fact that the tissue sections, staining and coverslipping of glass slides produced in histology labs are not always consistent and often contain artifacts (e.g. folds, holes and debris). Ensuring that pre-analytical handling practices in the histology laboratory are optimal and that labs are producing high quality work is essential through all stages of the process including grossing, processing, embedding, microtomy, staining and coverslipping. This represents a paradigm shift for labs everywhere that are undergoing a digital transformation. The need to standardize histology lab practices is imperative for optimal whole slide imaging. To help address some of these issues, histology labs can now partake in proficiency testing. Sites can assess their whole slide imaging histology quality by participating in the new CAP NSH Whole Slide Image Quality Improvement Program. The program evaluates the quality of H&E sections cut and stained in a lab and the WSI scanned from that slide. This talk will cover all of the aforementioned pre-imaging aspects of digital pathology from the perspective of a histotechnologist and pathologist.

   AI and Digital Pathology: Regulatory Perspective

Esther Abels1, Shyam Kalavar2

1VP Regulatory Affairs, Clinical Affairs and Strategic Business Development, MA, USA,2Center for Devices and Radiological Health, US Food and Drug Administration, Silver Spring, MD, USA. E-mail: [email protected]

Oversight and regulation of health care AI systems are required and remain based on the benefit-risk ratio, which accounts for factors such as intended use and evidence of safety and efficacy. To begin to address these issues, the DPA is continuing discussions with the FDA on regulatory pathways for AI, working closely with DICOM WG26, and forming a temporary alliance with MDIC, the FDA, and many other stakeholders to move digital pathology and precision medicine forward. During this presentation, we will discuss the current regulatory landscape in the US.

   Implementation of Large-Scale Routine Diagnostics using Whole Slide Imaging in Sweden: Continuing Digital Pathology Experiences 2011–2019

Anna Bodén1,2, Sofia Jarkman1,2, Stina Garvin1,2, Jesper Molin3, Claes Lundström4, Darren Treanor5, 6, 7

1Department of Clinical Pathology, Region Östergötland, Linköping, Sweden,2Department of Clinical and Experimental Medicine (IKE), Linköping University, Linköping, Sweden,3Sectra AB, Linköping, Sweden,4Center for Medical Image Science and Visualization, Linköping University, Linköping and Sectra AB, Linköping, Sweden,5Department of Clinical Pathology, Region Östergötland, Linköping, Sweden,6Department of Clinical and Experimental Medicine (IKE), Linköping University, Linköping, Sweden,7Department of Cellular Pathology, St. James's University Hospital, Leeds, UK.

E-mail: [email protected]

Background: The pathology department in Linköping, Sweden has been a leader in digital pathology since 2011. Objectives: This presentation will define (i) the digital diagnostic workflow, (ii) the components of a digital pathology system integrated with the LIS and (iii) how pathologists adopt the digital system. Results: The complete histopathology production, about 1000 slides each day, is digitized to whole slide images (WSIs) together with macroscopic images to support grossing. Locally, 21 consultants and 7 residents review WSIs for routine diagnosis, consultations and tumor boards, and 6 remote pathologists support the workload. To date, 9 pathologists use the digital system exclusively for primary diagnosis, and most use it for other activities, including image analysis. The challenges have been to create a smooth integration with the LIS and to perform validation while carrying a heavy workload, to ensure a secure transformation from one diagnostic modality to another. User logs that track activity on cases and digital slides show a user rate of up to 95% and an adoption rate for primary diagnosis of 60%. Conclusions: Every pathologist in Linköping University Hospital uses digital pathology, the majority to provide primary diagnosis and MDT presentations, most to access the digital archive for previous relevant cases, and many for access to digital image tools. The residents are a digitized generation; they have more confidence using a digital system than the conventional microscope. Lessons learned include awareness of potential artifacts and limitations and recognizing the need for new workflow triggers when the slide is no longer the “master”.

   Peering into the “Black Box”: A Systems-Based Approach to Understanding Convolutional Neural Networks

Mark D. Zarella1, Anthony Knesis2, David E. Breen3, Fernando U. Garcia4

1Department of Pathology and Laboratory Medicine, Drexel University, Philadelphia, PA, USA,2Department of Electrical and Computer Engineering, Drexel University, Philadelphia, PA, USA,3Department of Computer Science, Drexel University, Philadelphia, PA, USA,4Department of Pathology and Laboratory Medicine, Eastern Regional Medical Center, Cancer Treatment Centers of America, Philadelphia, PA, USA. E-mail: [email protected]

Background: Deep learning has been responsible for significant growth in computational pathology, in which artificial neural networks are trained to identify and process features with prognostic potential that may be present in histologic images. However, one of the major shortcomings of deep learning is that it produces classifiers that are not easily interpretable; that is, they are considered “black boxes.” As a result, it may be difficult to inspire confidence in novel AI algorithms in pathology. Methods: A number of methods have been devised to analyze deep networks and the image features that contribute to a decision although, at present, few have been applied specifically in pathology. We employed methods commonly used in general image classification to characterize deep networks trained on three distinct pathology tasks. We also introduce a novel method that uses established insights from tumor histology to probe network activations by relating synthetic modulations at the input to the activations generated at the output. We independently modulated color/staining, nuclear geometry, and histologic organization to relate these complex feature spaces to network outputs. Results: We found that conventional network visualization methods, when not constrained by known histologic parameters, reveal artificial features with arguably little value to pathologists. By using synthetic modulation of histologic images, we identified output activations strongly dependent on staining and histologic content. Using an occlusion test, we confirmed that the presence and absence of these features, even in a very small portion of the image, can alter classification. Conclusions: By characterizing the features used by a deep network, a more transparent understanding of AI can be achieved. However, these techniques also reveal the impact of what may be considered spurious features. 
Nevertheless, these insights can have an impact on tuning deep learning approaches in the future, including promoting new data augmentation strategies and regularization techniques and identifying potential shortcomings in the training data set.
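The occlusion test described above can be sketched in a few lines: slide a masking patch across the image and record how much the model's class score drops at each position. The model here is an arbitrary callable; the toy mean-intensity scorer in the usage example stands in for a real network:

```python
# Occlusion sensitivity map: the per-patch drop in model score when that
# patch is masked; larger drops mark regions the decision depends on.
import numpy as np

def occlusion_map(image, model, patch=8, fill=0.0):
    """Score drop for each non-overlapping patch of a 2-D image."""
    h, w = image.shape
    base = model(image)
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = fill   # mask this patch
            heat[i // patch, j // patch] = base - model(occluded)
    return heat

# toy usage: a bright region in the top-left corner drives a mean-intensity
# "model", so masking that corner produces the largest score drop
img = np.zeros((16, 16))
img[:8, :8] = 1.0
heat = occlusion_map(img, lambda im: float(im.mean()))
```

The abstract's finding, that masking even a very small portion of the image can flip the classification, corresponds to a sharply peaked map of this kind.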

   Walking Down Memory Lane: A Historical Perspective of Digital Pathology and the Founding of the Digital Pathology Association

Dirk Soenksen1

1Ceresti Health, Carlsbad, CA, USA. E-mail: [email protected]

For the last decade, the Digital Pathology Association (DPA) has been instrumental in advancing the adoption of digital pathology by helping establish best practices and standards. This presentation will provide a historical perspective of digital pathology and the motivations for establishing the DPA. An overview of accomplishments and opportunities will be integrated into a walk down memory lane.

   The Critical Role of the Pathologist in Digital Pathology and AI

Liron Pantanowitz1, Mohamed Salama2

1Department of Pathology, University of Pittsburgh Medical Center, Pittsburgh, PA, USA,2Department of Laboratory Medicine and Pathology, Mayo Clinic, Rochester, MN, USA. E-mail: [email protected]

With digital pathology being adopted by many labs around the world there is much interest in AI. Not surprisingly, funding in digital health has escalated, there has been a shift in vendors towards developing AI tools, and a noticeable uptick in AI startup companies developing AI algorithms. This talk will discuss the critical role of pathologists in the emerging era of AI and explain why their engagement at all stages including AI design, development, and validation is important. This talk will also explain how pathologists can help answer the following fundamental questions facing the field of AI: What are the right tasks for AI in Pathology? What are the right data for AI? What is the right evidence standard for AI? and What are the right approaches for integrating AI into clinical care? As medical center-industry partnerships are key to completing the innovation cycle in AI it is important for stakeholders to be aware of the rules of engagement. This talk will therefore also address important issues related to conflict of interest, intellectual property, transparency and professionalism for pathologists engaged in developing AI tools.

   Workshop on Deep Learning (Primer for Beginners)

Metin Gurcan1

1Center for Biomedical Informatics, Wake Forest School of Medicine, Winston Salem, NC, USA. E-mail: [email protected]

Deep learning is very popular these days because of its recent successes in medical and non-medical applications. In this workshop, we will provide a primer on deep learning and how it could be useful for pathology. Attendees will be able to understand the differences between artificial intelligence, machine learning and deep learning. We will also discuss the question of whether artificial intelligence is going to replace pathologists. Topics: 1) basics of image analysis for pathology; 2) artificial intelligence and deep learning for pathology; 3) demos and discussion.

   Implementing Digital Pathology and AI: Dollars and Sense

Anil Parwani1, Sylvia L. Asa2, Anna Bodén3,4, Mohamed Salama5, Giovanni M. Lujan6

1Department of Pathology, Wexner Medical Center, Ohio State University, Columbus, OH, USA,2Department of Pathology, University Hospitals Cleveland Medical Center, Cleveland, OH, USA,3Department of Clinical Pathology, Region Östergötland, Linköping, Sweden,4Department of Clinical and Experimental Medicine (IKE), Linköping University, Linköping, Sweden,5Department of Laboratory Medicine and Pathology, Mayo Clinic, Rochester, MN, USA,6Gastrointestinal Pathology, Inform Diagnostics, Irving, TX, USA.

E-mail: [email protected]

The practice of pathology is rapidly undergoing a transformation in which tools such as digital imaging, advanced algorithms, and computer-aided diagnostic techniques are being linked with molecular pathology. The result is increased diagnostic power, based on the ability of modern-day pathologists to use these new tools and to interpret the data they generate. Automated whole slide imaging (WSI) scanners now render diagnostic-quality, high-resolution images of entire glass slides, and combining these images with innovative digital pathology tools makes it possible to integrate imaging into all aspects of pathology reporting, including anatomical, clinical, and molecular pathology. The recent FDA approval of two WSI scanners for primary diagnosis has paved the way in the United States for incorporating this exciting technology into primary diagnosis and many other clinical applications. The C-suite and the administrators responsible for the bottom line want to know the costs and benefits of these new tools, and the business case for going digital is still not clear to many organizations. The focus of this workshop and panel discussion is to bring the attendees many different perspectives on using and implementing these tools in pathology practice, getting to the practical dollar amounts involved and outlining the benefits with objective, real-world data. This is an especially exciting time in pathology at OSU, as these systems will become an integral component of our pathology practice and will serve as a platform for innovations and advances in anatomical and clinical pathology. In summary, this workshop will highlight key financial and economic issues to help organizations and pathologists understand the value proposition of going digital.

   Using Whole Slide Images for Pathology Education Top

Marilyn M. Bui1, Rajendra Singh2, Eric F. Glassy3

1Department of Pathology, Moffitt Cancer Center, Tampa, FL, USA,2Department of Pathology and Dermatology, Mt. Sinai School of Medicine, New York, NY,3Affiliated Pathologists Medical Group, Rancho Dominguez, CA, USA.

E-mail: [email protected]

With advancements in digital pathology, whole slide images (WSI) and web-based technologies are gradually being incorporated into pathology education beyond simply sharing digital images. This workshop (1.5 hours) is intended to give participants a practical, interactive experience exploring the utility of WSI in pathology education, including the Digital Surgical Pathology Academy developed by the DPA (which targets pathology residents and fellows), mobile pathology apps, social media platforms, publications, and new methods of e-learning. In the first hour (CME-earning), faculty will discuss commonly encountered implementation issues and offer solutions and recommendations. In the last 30 minutes (non-CME-earning), two vendors will share their experience establishing web-based training programs for their pathologist customers. Objectives: 1. Raise awareness of the Digital Surgical Pathology Academy by the DPA. 2. Discuss the challenges, approaches, and opportunities in establishing WSI and web-based educational tools in pathology. 3. Become familiar with other resources in WSI and web-based pathology education. 4. Explore new social media platforms that integrate WSI into the comment stream. 5. Explore the future of pathology education and e-learning.

   LIS Integration for Enterprise Digital Pathology and Whole Slide Imaging Top

J. Mark Tuthill1

1Department of Pathology and Laboratory Medicine, Henry Ford Health System, Detroit, MI, USA. E-mail: [email protected]

We integrated a wide variety of digital pathology solutions (grossing, Milestone; X-rays, Faxitron; camera-on-a-stick systems) with our LIS (Sunquest CoPath 6.3.2) using an enterprise media management solution (Apollo EPMM® v9.4.3), and worked with our whole slide imaging vendor partners (Mikroscan, Ventana-Roche) to create a suite of integrated solutions that leverage standardized pathology work. Key to this is the integration of barcode labeling solutions that can be used on any platform, along with device interfaces. Database servers were deployed in our data center, with client software installed on workstations in histology, pathologists' offices, grossing areas, and autopsy. Most recently, we have enabled seamless viewing of WSI through integration with the LIS.

   Similar Image Search for Histopathology Top

Narayan Hegde1, Jason D. Hipp1, Yun Liu1, Michael E. Buck2, Emily Reif1, Daniel Smilkov1, Michael Terry1, Carrie J. Cai1, Mahul B. Amin3, Craig H. Mermel1, Phil Q. Nelson1, Lily H. Peng1, Greg S. Corrado1, Martin C. Stumpe1

1Google AI Healthcare, Mountain View, CA, USA,2Avoneaux Medical Institute, Baltimore, MD, USA,3Department of Pathology and Laboratory Medicine, University of Tennessee Health Science Center, Memphis, TN, USA. E-mail: [email protected]

The growing adoption of digital pathology provides opportunities to create digital archives of pathology images. Similar-image retrieval can help users effectively search and understand these large histopathology datasets. The SMILY tool uses a deep neural network to assess similarity between histopathology images. In our work, we show SMILY's ability to retrieve images that share clinical features with a query image. Such a tool can be used for education, research, and diagnosis. We also show the importance of human-computer interaction tools that bridge the semantic gap by giving end-users interactive refinement tools to guide, on the fly, what similarity means.
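The core retrieval step in such a tool can be sketched as nearest-neighbor search over patch embeddings produced by a pretrained network. A minimal sketch, assuming cosine similarity over precomputed embeddings (the embedding model, dimensions, and similarity metric are illustrative assumptions, not details from SMILY):

```python
import numpy as np

def top_k_similar(query_emb, archive_embs, k=5):
    """Return indices of the k archive patches most similar to the query.

    Similarity is cosine similarity between embedding vectors, which are
    assumed to come from a pretrained CNN applied to image patches.
    """
    q = query_emb / np.linalg.norm(query_emb)
    a = archive_embs / np.linalg.norm(archive_embs, axis=1, keepdims=True)
    sims = a @ q                       # cosine similarity to every archive patch
    return np.argsort(sims)[::-1][:k]  # best matches first

# Toy usage: 100 archived patches with 128-dim embeddings
rng = np.random.default_rng(0)
archive = rng.normal(size=(100, 128))
query = archive[42] + 0.01 * rng.normal(size=128)  # near-duplicate of patch 42
print(top_k_similar(query, archive, k=3))          # patch 42 should rank first
```

Interactive refinement, as described in the abstract, would then reweight or re-rank these neighbors based on user feedback.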

   Multimodal Data Fusion for Pathology Applications Top

Faisal Mahmood1,2, Richard Chen1,2

1Department of Pathology, Harvard Medical School, Boston, Massachusetts, USA,2Brigham and Women's Hospital, Boston, Massachusetts, USA.

E-mail: [email protected]

Subjective clinical diagnosis is often based on multimodal information: microscopic and molecular findings as well as patient and familial histories. Most recent work in objective pathology image analysis does not take into account this additional information, which often acts as an important source of diagnostic and prognostic cues. In this talk we will present a variety of computational paradigms for fusing information from microscopic images of tissue biopsies, corresponding genomic data, and patient and familial histories. We demonstrate that fusing multimodal information significantly improves survival prediction, characterization, and prognostication. We further demonstrate that such a multimodal fusion paradigm can be used to identify new biomarkers and morphological features that can lead to the development of new grading schemes.
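One simple fusion scheme in this family is a bilinear (Kronecker-product) combination of two modality embeddings, which captures all pairwise feature interactions. This is only a minimal sketch of the idea; the vectors below are made up, and the talk's actual architectures may differ:

```python
import numpy as np

def kronecker_fuse(img_feat, gen_feat):
    """Fuse histology and genomic embeddings via an outer (Kronecker) product.

    Appending a constant 1 to each vector keeps the unimodal terms in the
    fused representation alongside every pairwise interaction term.
    """
    a = np.append(img_feat, 1.0)
    b = np.append(gen_feat, 1.0)
    return np.outer(a, b).ravel()

img = np.array([0.2, 0.7])        # hypothetical histology embedding
gen = np.array([1.5, -0.3, 0.9])  # hypothetical genomic embedding
fused = kronecker_fuse(img, gen)
print(fused.shape)                # (12,) = (2+1) * (3+1) interaction terms
```

The fused vector would then feed a downstream survival or classification head.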

   Leeds Digital Pathology Workshop Top

Bethany Williams1,2, Darren Treanor1, 2, 3, 4

1Department of Histopathology, Leeds Teaching Hospitals NHS Trust, Leeds, UK,2Department of Pathology, Faculty of Medicine and Health, University of Leeds, Leeds, UK,3Department of Clinical Pathology, and Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden,4Center for Medical Image Science and Visualization (CMIV), Linköping University, Linköping, Sweden. E-mail: [email protected]

Digital pathology is a transformative technology which is set to revolutionize the way in which pathology services are delivered across the globe, offering significant quality, safety and efficiency improvements to patients and clinicians. Pathologists and laboratory managers need to ensure they understand the advantages and challenges of digital workflows and reporting in order to maintain professional standards. In this session, we will build on our experience on the Leeds 100% digitization project, offering guidance (and support!) for anyone undertaking a laboratory deployment, or switching to digital diagnosis.

   Clinical-Grade Artificial Intelligence: Hype or Hope for Cancer Care Top

Thomas Fuchs1

1Department of Pathology, Memorial Sloan Kettering Cancer Center, New York, New York, USA. E-mail: [email protected]

Artificial intelligence is revolutionizing healthcare and will fundamentally transform how cancer patients are diagnosed and treated. One of the disciplines most impacted is pathology, which is in the midst of evolving from a qualitative to a quantitative science. This transformation is driven by machine learning in general and computer vision and deep learning in particular. In this talk, we will analyze what it takes to build clinical-grade artificial intelligence, how deep learning at petabyte scale enables new ways of diagnosing cancer, and how these systems impact clinical practice and the work of medical doctors. We will try to separate the vast potential of machine learning in healthcare from the current hype and address crucial issues of ethics and patient privacy. Finally, we will look into the future of medicine and how artificial intelligence can impact and hopefully improve cancer care for patients.

   Poster Abstracts Top

   Image Analysis Strategy for Multiplexed Immunofluorescence Image: A Case Study Top

Yao Nie1, Auranuch Lorsakul1, Konstanty Korski2, Irina Klaman2, Gabriele Hoelzlwimmer2, Natascha Rieder2, Claudia Ferreira2

1Digital Pathology, Roche Tissue Diagnostics, Santa Clara, CA, USA,2EBDO, pRED Oncology, Roche Innovation Center Munich, Penzberg, Germany. E-mail: [email protected]

Background: Multiplex immunofluorescence (IF) is an effective and efficient way to simultaneously identify specific immune cell types, their spatial distributions, and their states of activation. Due to the complexity of the biomarkers presented in multiplexed images, it is desirable to design an image analysis pipeline that utilizes stain morphology and co-localization information to enhance analysis efficiency and accuracy. In this study, we present our approach to processing images from a 5-plex IF panel. Method: A six-channel immunofluorescence (IF) panel was used to detect 5 biomarkers with three types of staining patterns: T-cell membrane markers (CD3, CD8 and PD1), a tumor cell cytoplasmic marker (cytokeratin), and a proliferating cell nucleus marker (Ki67). DAPI was used to stain all cell nuclei. Considering that the tumor cell marker does not co-localize with the T-cell markers, while Ki67 can co-localize with any marker, the image analysis pipeline was designed as follows: Step 1. Preprocessing is applied to all channels to remove the autofluorescence (AF) signal. This is based on the assumption that the relative strength profile of pure AF signal across the image channels is consistent for the panel and can be assessed from unstained tissue images. At each pixel, the channel with the lowest signal contains the pure AF signal, from which the AF signal in the other channels can be estimated based on the relative AF strength profile. Step 2. An enhanced cell boundary image is derived by applying the Frangi filter[1] to the cytokeratin, Ki67 and T-cell channels and taking the maximum response across the filter outputs. This enhanced cell boundary image is subtracted from the DAPI channel to better separate the nuclei, on which nuclei detection and segmentation are performed for all cells. Step 3. Tumor cell identification is performed by extracting intensity, texture and context features from the DAPI, cytokeratin and a combination of the T-cell channels for each cell.
These features are used to train a classifier to differentiate tumor cells from non-tumor cells. Note that even though the cytokeratin channel is sufficient to identify tumor cells, it is important to include T-cell channel information in this classifier to ensure tumor-infiltrating T-cells are not misclassified as tumor cells. Step 4. For the non-tumor cells, since CD3, CD8 and PD1 can co-localize as pairs or triples, image features are extracted from the DAPI and the separate T-cell channels to train a classifier to differentiate the following mutually exclusive phenotypes: CD3+ single, CD3+CD8+, CD3+PD1+ and CD3+CD8+PD1+. Thus a single classifier can handle all types of T-cells. Step 5. Finally, since Ki67 is a nuclear biomarker, the Ki67 channel intensity within each segmented nucleus is used to identify Ki67+ cells by applying a predefined threshold. Results: See the visual assessment in [Figure 1] and the quantified results in [Table 1]. Conclusion: Understanding biomarker morphology and co-localization is important for designing image analysis pipelines for multiplexed immunofluorescence images. In this case study, we show that membrane, nucleus and cytoplasmic stains can be utilized to enhance cell boundaries in the nuclei channel. Biomarkers that do not co-localize in the same cell can be processed independently, but care must be taken if they can appear adjacent to each other. Co-localized biomarkers can be identified by a single classifier by defining mutually exclusive phenotypes, but a dedicated classifier for each biomarker may be preferable if some co-localization phenotypes are very rare.
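Step 2 of the pipeline can be sketched with off-the-shelf tools. A minimal illustration, assuming scikit-image's Frangi filter; the sigmas and the rescaling heuristic are assumptions, not the study's parameters:

```python
import numpy as np
from skimage.filters import frangi  # requires scikit-image

def enhance_nuclei_separation(dapi, marker_channels, sigmas=(1, 2, 3)):
    """Enhance cell boundaries from marker channels and subtract from DAPI.

    The Frangi filter responds to ridge-like structures (membranes,
    cytoplasmic rims); the pixelwise maximum over channels combines them,
    and subtracting from DAPI darkens boundaries between touching nuclei.
    """
    boundary = np.max(
        [frangi(ch, sigmas=sigmas, black_ridges=False) for ch in marker_channels],
        axis=0,
    )
    # Rescale the boundary response to the DAPI range before subtracting
    boundary = boundary / (boundary.max() + 1e-8) * dapi.max()
    return np.clip(dapi - boundary, 0, None)

# Toy usage on random images standing in for cytokeratin, Ki67 and T-cell channels
rng = np.random.default_rng(0)
dapi = rng.random((64, 64))
markers = [rng.random((64, 64)) for _ in range(3)]
enhanced = enhance_nuclei_separation(dapi, markers)
print(enhanced.shape)  # (64, 64)
```

Nuclei detection and watershed segmentation would then run on the enhanced DAPI image.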
Figure 1: Example of multiplexed immunofluorescence image channels and the output of the Frangi filter. (a) Sample IF image (b) DAPI (c) T-cell combined (d) Frangi_T-cell (e) Cytokeratin (f) Frangi_Cytokeratin (g) Ki67 (h) Frangi_Ki67 (i) Cell boundary enhanced DAPI (j) Cell detection results

Table 1: Verification Results on 548 Field-of-View Images Covering 24 Indications



  1. Frangi AF, Niessen WJ, Vincken KL, Viergever MA. Multiscale vessel enhancement filtering. Proceedings of the 1st International Conference on Medical Image Computing and Computer-Assisted Intervention; 1998 Oct 11-13; MA, USA.

   Establishing the Need for Artificial Intelligence Applications in Clinical Pathology Microscopy Based Tests Top

Jessica Kohan1, Mark Astill1, Adam Barker1,2, Orly Ardon1,2

1Digital Diagnostics, ARUP Institute for Clinical and Experimental Pathology, Salt Lake City, UT, USA,2Department of Pathology, University of Utah, Salt Lake City, UT, USA. E-mail: [email protected]

Background: Digital pathology and artificial intelligence (AI) based tools can improve the interpretation accuracy, precision and reproducibility of microscopy slides, resulting in better patient care. While innovations in digital pathology applications are becoming more accepted for anatomic pathology clinical use, the adoption of artificial intelligence and digital microscopy based testing in clinical pathology is lagging behind. There are several reasons for this, including the lack of resources, expertise and an obvious return on investment (ROI). One practical attribute of clinical pathology testing is its relatively high complexity and variability, which requires investment of extensive resources for diagnostic-test development. On the other hand, the ROI for the development of historically low-priced clinical pathology tests is not as evident in comparison with other, high-revenue microscopy-based tests. The lack of commercially available AI based tools for our clinical pathology microscopy tests and our need to develop these tests initiated a feasibility study to prioritize internal test development. Methods: Analysis of microscopy based test operational and financial metrics at ARUP was done for the period of July 2019-June 2019. Results: ARUP's test menu has over 300 microscopy based tests. These tests are part of operations in about 30 labs that are grouped into 6 divisions. Candidate tests for machine learning based test augmentation were identified based on the operational needs of ARUP as well as financial feasibility studies. Metrics used included cost of labor, test volume, market demand, and impact on patient care. Conclusions: AI based microscopy tests can help labs improve their productivity and competitiveness and adjust to market trends by increasing capacity and driving down costs. These can result in increased efficiency and the ability to respond to the price pressures of the competitive diagnostic testing market and healthcare expense containment.
For labs facing a technologist shortage, it is essential to develop AI capabilities to help maintain operational standards. With most development of AI based microscopy tools targeting the anatomic pathology market, labs need to identify and focus on developing AI based tools for their specific internal test needs. With limited available resources, a careful study of multiple test metrics is required to determine specific investments in development and the ROI for new AI based tools.
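A prioritization over the metrics named above (cost of labor, test volume, market demand, patient impact) can be sketched as a simple weighted score used to rank candidate tests. The test names, metric values, and weights below are hypothetical illustrations, not ARUP's data or model:

```python
# Hypothetical candidate tests, each scored 0-1 on the metrics named above
tests = [
    {"name": "manual differential", "labor_cost": 0.9, "volume": 0.8,
     "market_demand": 0.7, "patient_impact": 0.6},
    {"name": "urine microscopy", "labor_cost": 0.6, "volume": 0.9,
     "market_demand": 0.5, "patient_impact": 0.4},
]
# Illustrative weights; a real feasibility study would derive these
weights = {"labor_cost": 0.35, "volume": 0.25,
           "market_demand": 0.20, "patient_impact": 0.20}

def priority(test):
    """Weighted sum of a test's metrics: higher means develop first."""
    return sum(weights[k] * test[k] for k in weights)

for t in sorted(tests, key=priority, reverse=True):
    print(f"{t['name']}: {priority(t):.3f}")
```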

   Small Round Cell Tumors: How Small, How Round, How Blue? A Classification Problem Top

Christopher M. Chandler1, Jonathan C. Henriksen1, Nicholas Reder1, Robert Ricciotti1

1Department of Pathology, University of Washington, Seattle, WA, USA.

E-mail: [email protected]

Background: Small round cell tumors (SRCTs) are diverse neoplasms that are morphologically similar, having basophilic nuclei, scant cytoplasm, and crowded small cells often described as “small, round and blue.”[1] Immunohistochemical staining and molecular studies are frequently necessary to arrive at a diagnosis.[2] We hypothesized that machine learning techniques could separate SRCTs using features extracted from hematoxylin and eosin-stained slides alone. Methods: Four cases each of alveolar rhabdomyosarcoma, round cell liposarcoma, olfactory neuroblastoma, Ewing sarcoma, and mesenchymal chondrosarcoma were scanned at 40x resolution using an Aperio ScanScope AT2 whole slide scanning platform. Regions of interest (N=20) were annotated using QuPath, and watershed cell detection was used to define cell boundaries [Figure 1].[3] Features including nuclear area, circularity, eccentricity, and optical density were measured (33 in total). The data were used to train supervised machine learning algorithms to create a classification model in MATLAB. The model was tested using holdout validation (33% of cells held out). Results: A Gaussian support vector machine algorithm achieved 88.5% overall accuracy in classifying cells from different SRCTs [Figure 2]. The algorithm classified round cell liposarcoma cells most accurately, with a 95% true positive rate, followed by alveolar rhabdomyosarcoma (89%), mesenchymal chondrosarcoma (83%), olfactory neuroblastoma (80%), and Ewing sarcoma (24%). Conclusions: Despite their similarities, some SRCTs can be successfully classified using machine learning algorithms trained on morphologic features, a cost-effective alternative to molecular assays. Neoplasms with significant morphologic variability, such as Ewing sarcoma, may not be well suited to such approaches.
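The modeling recipe (a Gaussian, i.e. RBF-kernel, support vector machine on 33 per-cell features with a ~33% holdout) can be sketched as below. The study used MATLAB; this scikit-learn version with synthetic features is an illustration of the same setup, not the authors' code:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for 33 morphologic features per cell (nuclear area,
# circularity, eccentricity, optical density, ...) across 5 tumor classes
rng = np.random.default_rng(0)
n_cells, n_features, n_classes = 1000, 33, 5
X = rng.normal(size=(n_cells, n_features))
y = rng.integers(0, n_classes, size=n_cells)
X += 0.5 * y[:, None]  # inject class-dependent signal so the toy model learns

# Gaussian (RBF-kernel) SVM with a 33% holdout set, mirroring the abstract
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=0)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(X_tr, y_tr)
print(f"holdout accuracy: {model.score(X_te, y_te):.2f}")
```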
Figure 1: Representative region of interest (ROI, left pane) and segmentation of round cell liposarcoma (middle, with cell and nucleus outlines, and right, without)

Figure 2: Confusion matrix showing results of Gaussian support vector machines model in classifying SRCT types



  1. Callaghan RC, Carda C, Peydró-Olaya A, Triche T, Llombart-Bosch A. Small round cell tumors of bone and soft tissue. A morphometric and stereometric comparative analysis of 119 cases. Anal Quant Cytol Histol 1995;17:374-82.
  2. Thompson LD. Small round blue cell tumors of the sinonasal tract: A differential diagnosis approach. Mod Pathol 2017;30:S1-26.
  3. Bankhead P, Loughrey MB, Fernández JA, Dombrowski Y, McArt DG, Dunne PD, et al. QuPath: Open source software for digital pathology image analysis. Sci Rep 2017;7:16878.

   Automated Diagnosis of Lymphoma with Digital Pathology Images using Deep Learning Top

Hanadi El Achi1, Tatiana Belousova1, Lei Chen1, Amer Wahed1, Iris Wang1, Zhihong Hu1, Zeyad Kanaan2, Adan Rios2, Andy N.D. Nguyen1

1Department of Pathology & Laboratory Medicine, University of Texas HSC at Houston - Texas, USA,2Department of Internal Medicine, University of Texas HSC at Houston - Texas, USA. E-mail: [email protected]

Introduction: Due to subtle differences in histologic findings between various types of lymphoma, initial microscopic assessment often presents a challenge to pathologists. Automated diagnosis applying machine learning algorithms to digital images would help pathologists in daily screening. Recent studies have shown promising results using deep learning to detect malignancy in whole slide imaging; however, they were limited to predicting positive or negative findings for a specific neoplasm.[1] Objective: We attempted to use deep learning to build a lymphoma diagnostic model for four diagnostic categories. Methods: Deep learning with a convolutional neural network (CNN) algorithm[2] was used to build a lymphoma diagnostic model for four diagnostic categories: (1) benign lymph node, (2) diffuse large B-cell lymphoma, (3) Burkitt lymphoma, and (4) small lymphocytic lymphoma. Our algorithm was coded in Python. We obtained digital whole-slide images of hematoxylin and eosin-stained slides from 128 cases, 32 for each diagnostic category. Four sets of 5 representative images, 40 x 40 pixels in dimension, were taken for each case. Of the 2,560 images obtained in total, 1,856 were used for training, 464 for validation, and 240 for testing. For each test set of 5 images, the predicted diagnosis was derived from the predictions for the five images by majority voting. Results: The test results showed diagnostic accuracy of 100% for set-by-set prediction [Table 1] and 95% for image-by-image prediction [Table 2]. Conclusion: Our study expanded on prior studies, including more tumor types and achieving diagnostic accuracies nearing 100%. The inclusiveness and accuracy of our model provide pathologists a reliable and practical tool for daily practice.[3] This preliminary study provides proof of concept for incorporating automated lymphoma diagnostic screening into future pathology workflows to enhance productivity.
Due to the generic nature of the CNN algorithm, the results from this study are readily extensible to histopathology images of other malignancies.
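The set-level decision rule, combining the five per-image CNN predictions for a case by majority voting, can be sketched in a few lines (the label strings are illustrative):

```python
from collections import Counter

def vote_diagnosis(image_preds):
    """Return the case-level diagnosis as the most common per-image label.

    With five images per case, a clear majority usually exists; on a tie,
    Counter.most_common returns the label encountered first.
    """
    return Counter(image_preds).most_common(1)[0][0]

# Four of five image-level calls agree, so the case-level call is "DLBCL";
# this is how set-by-set accuracy can exceed image-by-image accuracy.
print(vote_diagnosis(["DLBCL", "DLBCL", "Burkitt", "DLBCL", "DLBCL"]))
```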
Table 1: Accuracy in predicting diagnoses for sets of 5 images using majority voting

Table 2: Accuracy in predicting diagnoses using one single image at a time



  1. Al-Janabi S, Huisman A, Van Diest PJ. Digital pathology: Current status and future perspectives. Histopathology 2012;61:1-9.
  2. Krizhevsky A, Sutskever I, Hinton G. ImageNet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems. Vol. 25. Edited by F. Pereira and C.J.C. Burges and K.Q. Weinberger: Curran Associates, Inc.; 2012. p. 1106-14.
  3. Fauzi MF, Pennell M, Sahiner B, Chen W, Shana'ah A, Hemminger J, et al. Classification of follicular lymphoma: The effect of computer aid on pathologists grading. BMC Med Inform Decis Mak 2015;15:115.

   Preliminary Findings of QuPath Digital Immunohistochemical Analysis of Placental Tissue Top

Ana Yuil-Valdes1, Maheswari Mukherjee2, Ashley L. Hein1, Jesse Cox1, Annelisse Santiago Pintado1, Mariam A. Molani1, Geoffrey A. Talmon1, Corrine K. Hanson3, Elizabeth Lyden4, Tara Nordgren5, Aunum Akhter6, Ann Anderson-Berry6

1Department of Pathology and Microbiology, University of Nebraska Medical Center, College of Medicine, Omaha, NE, USA,2University of Nebraska Medical Center, Cytotechnology Education, College of Allied Health Professions, Omaha, NE, USA,3University of Nebraska Medical Center, Medical Nutrition, College of Allied Health Professions, Omaha, NE, USA,4Department of Biostatistics, University of Nebraska Medical Center, College of Public Health, Omaha, NE, USA,5Division of Biomedical Sciences, School of Medicine, University of California Riverside, Riverside, CA, USA,6Department of Pediatrics, University of Nebraska Medical Center, College of Medicine, Omaha, NE, USA.

E-mail: [email protected]

Background: The field of pathology is becoming increasingly computerized with the use of digital image analysis (DIA) of immunohistochemistry (IHC) stained slides. The reference standard for determination of IHC staining continues to be visual scoring.[1],[2],[3] One problem with visual scoring is that the data produced are affected by human sources of both cognitive and visual bias.[4] Use of DIA software reduces potential sources of human error, leading to reduced inter-observer variability and increased scoring accuracy. DIA also allows for the creation of quantitative instead of ordinal data,[5],[6] reduced cost of analysis, and reduced time of analysis.[7] QuPath is an open source software package for whole slide image analysis.[8] Among DIA tools, it is notable for its user-friendly design and its hierarchical “object-based” data model, which allows users to perform more complex evaluation through scripting.[8] While QuPath has been applied in over 100 publications since it was first released in 2016, it has never before been used in the study of placental tissue. Herein we report the preliminary findings of our study, in which we compared G-protein coupled receptor 18 (GPR18) IHC staining intensity found using the QuPath digital analyzer to traditional visual scoring in vascular smooth muscle (VSM) and extravillous trophoblast (EVT) in placental tissue. We hypothesized that IHC scoring using QuPath analysis provides accuracy similar to visual scoring in placental tissue. Methods: Twenty formalin-fixed, paraffin-embedded placental tissues obtained from third-trimester placentas were used in this study. After traditional glass slides were prepared from four-micron sections of these tissues, IHC staining for GPR18 (polyclonal; Thermo Fisher Scientific; 1:75 dilution) was performed using standard autostaining protocols on a Ventana Discovery Ultra autostainer.
The IHC stained glass slides were then digitized at single focal plane level and 40x magnification using VENTANA iScan HT scanner. The digital images (DI) were stored in an encrypted and password-protected external hard drive. Among 20 DI, VSM were annotated (digitally marked) in ten DI and EVT were annotated in ten DI using image viewer software. Four participants (two pathologists and two pathology residents) performed visual analysis of all 20 DI. Four participants (one pathologist, one cytotechnologist, one pediatrics resident, and one medical student) performed digital analysis of all 20 annotated DI using QuPath software. Both groups analyzed GPR18 staining as percentages in the low, medium, and high intensity staining categories. A linear mixed model with random effects for participant and image and a fixed effect for method (QuPath or visual) was used to analyze differences in method. Model adjusted means and standard errors were used to summarize the percentage of staining classified by the participants. Statistical analyses were performed using SAS 9.4. Results: As seen in [Figure 1]a, VSM visual analysis mean and standard error of low, medium and high intensity were, 46.3 (9.2), 39.4 (5.8), and 14.4 (4.5), respectively. VSM QuPath analysis mean and standard error of low, medium and high intensity were, 35.0 (9.2), 53.2 (5.8), and 11.8 (4.5), respectively. As seen in [Figure 1]b, EVT visual analysis mean and standard error of low, medium and high intensity were, 25.0 (6.6), 45.1 (5.4), and 29.9 (9.3), respectively. EVT QuPath analysis mean and standard error of low, medium and high intensity were, 21.9 (6.6), 40.3 (5.4), and 38.2 (9.3), respectively. Comparison of QuPath and visual scoring means between staining intensity categories produced p-values >0.05 for all categories, although VSM medium staining level had a borderline statistically significant difference (p=0.079). 
Representative images of QuPath analysis of VSM and EVT using the object tool are shown in [Figure 2]. Conclusions: There was no statistically significant difference between QuPath and visual scoring mean intensities within any of the staining intensity categories. Our study results demonstrated that QuPath accuracy, as represented by mean staining intensity, was similar to that obtained through traditional visual scoring. Furthermore, QuPath's user-friendly design, ease of access, and ability to reduce sources of bias demonstrate this software's potential for future research and pathology practice. Additional study is needed, with data collected from more participants, to further evaluate reproducibility and increase the power of this study.
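The abstract's statistical model (a linear mixed model with a fixed effect for method and random effects for participant and image, fitted in SAS 9.4) can be sketched in Python with statsmodels. The data below are synthetic, and for brevity only the image random effect is modeled; this illustrates the approach, not the study's code:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic scores: % medium-intensity staining for 20 images, each scored
# by both methods (the real study had 20 images and 8 participants)
rng = np.random.default_rng(1)
image = np.repeat(np.arange(20), 2)
method = np.tile(["visual", "qupath"], 20)
pct = (40 + 5 * rng.normal(size=20)[image]   # per-image random effect
       + 3 * (method == "qupath")            # method effect (synthetic)
       + 4 * rng.normal(size=40))            # residual noise
df = pd.DataFrame({"image": image, "method": method, "pct": pct})

# Random intercept per image, fixed effect for scoring method
fit = smf.mixedlm("pct ~ method", df, groups=df["image"]).fit()
print(fit.params["method[T.visual]"])  # estimated visual-minus-QuPath difference
```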
Figure 1: Comparison of staining mean (±SE) between visual scoring and QuPath by staining category. (a) VSM analysis. Comparison of scoring methods between low, medium, and high intensity staining produced P-values of 0.383, 0.079, and 0.689, respectively. (b) EVT analysis. Comparison of scoring methods between low, medium, and high intensity staining produced P-values of 0.446, 0.345, and 0.288, respectively

Figure 2: Representative images of QuPath analysis of GPR18 in placental tissue using object tool. Cell segmentation with intensity expression into: Negative (blue), low (yellow), medium (orange), and high (red). (a) EVT analysis (b) VSM analysis



  1. Fleming MG. Pigmented lesion pathology: What you should expect from your pathologist, and what your pathologist should expect from you. Clin Plast Surg 2010;37:1-20.
  2. Daunoravicius D, Besusparis J, Zurauskas E, Laurinaviciene A, Bironaite D, Pankuweit S, et al. Quantification of myocardial fibrosis by digital image analysis and interactive stereology. Diagn Pathol 2014;9:114.
  3. Varghese F, Bukhari AB, Malhotra R, De A. IHC profiler: An open source plugin for the quantitative evaluation and automated scoring of immunohistochemistry images of human tissue samples. PLoS One 2014;9:e96801.
  4. Aeffner F, Wilson K, Martin NT, Black JC, Hendriks CLL, Bolon B, et al. The gold standard paradox in digital image analysis: Manual versus automated scoring as ground truth. Arch Pathol Lab Med 2017;141:1267-75.
  5. Taylor CR, Levenson RM. Quantification of immunohistochemistry – issues concerning methods, utility and semiquantitative assessment II. Histopathology 2006;49:411-24.
  6. Bankhead P, Fernández JA, McArt DG, Boyle DP, Li G, Loughrey MB, et al. Integrated tumor identification and automated scoring minimizes pathologist involvement and provides new insights to key biomarkers in breast cancer. Lab Invest 2018;98:15-26.
  7. Meyerholz DK, Beck AP. Principles and approaches for reproducible scoring of tissue stains in research. Lab Invest 2018;98:844-55.
  8. Bankhead P, Loughrey MB, Fernández JA, Dombrowski Y, McArt DG, Dunne PD, et al. QuPath: Open source software for digital pathology image analysis. Sci Rep 2017;7:16878.

   Remote Access for Whole Slide Imaging: Resident Group Experience Top

Ifeoma Ndidi Onwubiko1, Rand Abou Shaar1, Ashish Mishra1, J. Mark Tuthill1

1Department of Pathology and Laboratory Medicine, Henry Ford Hospital Health System, Detroit, Michigan, USA. E-mail: [email protected]

Background: Digital pathology slides produced by scanning conventional glass slides, also referred to as whole slide imaging (WSI), were introduced in the late 1990s and have gradually gained acceptance among pathologists. Most modern WSI instruments can produce high-resolution digital slides within minutes. Compared with static digital images, WSI is preferred for diagnostic, educational, and research purposes, providing an opportunity to expand user tools including digital annotation, rapid navigation, magnification, viewing, and analysis. At Henry Ford Health System, residents in the Department of Pathology and Laboratory Medicine have successfully utilized WSI in tumor board preparation, multidisciplinary team meeting presentations, unknown conferences, Performance Improvement Program (PIP) presentations, gross conferences, frozen section, autopsy conferences, digital gross conferences, and research projects. Despite this extensive usage, residents performed all WSI functions within the hospital, increasing residents' duty hours. In this study, we proposed giving all residents VPN-enabled secure remote access to WSI. Methods: We surveyed all residents (n=14) [Table 1] at the Department of Pathology and Laboratory Medicine, Henry Ford Hospital prior to granting VPN-enabled remote access to WSI. Results: Analysis of the collated data revealed 100% resident use of WSI digital pathology in daily workflow. 100% of the residents indicated that remote access to WSI was perceived to improve their time management for digital pathology slide review. All 14 residents used WSI for several functions, including: unknown teaching slides (79%, n=11), tumor board presentations (64%, n=9), research projects (43%, n=6), picture taking (57%, n=8) and other educational purposes not specified (43%, n=6) [Figure 1]. 57% expressed frustration with making extra trips to the hospital for slide review [Figure 2].
79% of the residents spent additional time reviewing slides after duty hours, of whom 21% spent more than two hours per weekend visit [Figure 2]. Conclusion: We anticipate that providing residents remote access to WSI will reduce after-hours time spent on work-related activities, reduce resident frustration, and improve time management and wellbeing. Overall usage of the system is projected to significantly reduce residents' on-site work hours. The highest usage of the system was typically for unknown educational slides.
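The reported percentages follow directly from the raw counts; a minimal sketch (counts taken from the abstract, variable and key names illustrative):

```python
# Reproducing the survey percentages reported above from the raw counts.
n_residents = 14
uses = {
    "unknown teaching slides": 11,
    "tumor board presentations": 9,
    "research projects": 6,
    "picture taking": 8,
    "other educational purposes": 6,
}

for purpose, count in uses.items():
    pct = round(100 * count / n_residents)
    print(f"{purpose}: {count}/{n_residents} ({pct}%)")
```

For example, 11 of 14 residents rounds to the 79% reported for unknown teaching slides.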
Table 1: Resident composition by year of training

Figure 1: Resident use of whole slide imaging by function. Reference Onwubiko. 2019

Figure 2: Dissatisfaction from slide review outside regular duty hours. Reference Onwubiko. 2019


   Performance Assessment of Various Digital Pathology Whole Slide Imaging Systems in Real-Time Clinical Setting Top

Sathyanarayanan Rajaganesan1, Rajiv Kumar1, Vidya Rao1, Trupti Pai1, Neha Mittal1, Ayushi Shahay1, Santosh Menon1, Sangeeta Desai1

1Department of Pathology, Tata Memorial Centre, Mumbai, Maharashtra, India. E-mail: [email protected]

Introduction: Interest in validating available digital pathology systems (DPSs) before adopting them in the clinical setting has increased recently. However, limited information is available on the comparative performance of these DPSs. To make suitable decisions and judicious investments in appropriate hardware and software, it would be prudent to undertake a comprehensive evaluation of the various whole slide imaging (WSI) platforms. Aims and Objectives: 1) to perform a real-time comparative evaluation of various DPSs to assess their technical performance; 2) to evaluate the ability of DPSs to handle different types of pathology specimens, i.e., biopsy, resection, frozen section, IHC, and cytology; 3) to evaluate diagnostic accuracy and inter-observer and intra-observer concordance; 4) to identify which technologies (software and hardware) were associated with effective use of digital imaging. Materials and Methods: We performed a comprehensive real-time comparative evaluation of 4 different DPSs (anonymized as DPS 1, 2, 3, and 4) using a total of 240 cases (604 glass slides) comprising 60 cases in each specimen category (i.e., biopsy, resection, frozen section, and cytology), assessed by 7 pathologists (two specialists and five general). Cases from four organ systems, i.e., breast, thoracic, gastrointestinal tract (GIT), and genitourinary tract (GUT), were included in this evaluation. Each platform was evaluated after a minimum wash-off period of 2 weeks. Results: A total of 2376 digital images were generated using the 4 DPSs (excluding 40 failed scans), and a total of 15,575 image reads (optical microscopy [OM] and WSI) were evaluated. The results were recorded as follows: 1.
Onsite technical evaluation of digital scanner capability: a) Slide scanning performance: The first-time successful scanning rate for all specimen types except cytology followed the sequence (maximum to minimum): scanner 1 > scanner 4 > scanner 2 > scanner 3. Except for scanner 1, all scanners had difficulty handling cytology slides, especially scanner 2 (41% failure rate). b) Scanning time: The mean scanning time per slide followed the sequence (minimum to maximum): scanner 4 > scanner 1 > scanner 2 > scanner 3. c) Storage space: Overall, digital image output from scanner 3 occupied the least space across all specimen types, followed by scanner 2 > scanner 1 > scanner 4. Interestingly, among the specimen types, cytology slides took more time to scan and more storage space than H&E and IHC slides. Further, the mean scan time and storage for IHC slides were significantly less than for the corresponding H&E slides. 2. Diagnostic accuracy for WSI versus OM: Overall diagnostic accuracy compared with the reference standard was 95.44% for OM and 93.32% for WSI. The discordance rate was 4.56% for OM (including 2.48% minor and 2.08% major discordances) and 6.68% for WSI (including 4.28% minor and 2.4% major discordances). Both inter- and intra-observer agreement between WSI and OM for primary diagnosis of biopsy, resection, and frozen section specimens was substantial to near perfect. WSI was inferior to OM for primary diagnosis of cytology specimens. Diagnostic assessment time was less for OM than for WSI across all specimen types. 3. Assessment of digital image quality and level of confidence: a) Overall image quality was best with scanner 1. No statistically significant correlation between the number of discrepancies and the image quality of a particular scanner could be established.
b) Colour variation in WSI: Scanners 1 and 2 were the most consistent in reproducing the original colour of the glass slides. c) Digital artifacts: The mean digital image artifact rate was 6.8% (163/2376 digital images) across all scanners. The maximum number of digital artifacts was noted with scanner 2 (n=77), followed by scanner 3 (n=36). Common artifacts were out-of-focus images (focal or diffuse), observed in H&E slides on scanners 4 and 3, and stitching errors, in cytology/H&E slides on scanner 2. d) Image viewer software: Most of the pathologists preferred the viewing software of scanners 1 and 2, as the pattern of case arrangement and display resembled routine OM reporting. e) Level of confidence: Based on the mean scores of the participating pathologists, the level of confidence was highest for scanner 1, followed by scanner 2, for biopsy, resection, and frozen section cases. The overall level of confidence for cytology evaluation was average irrespective of scanner type. Discussion and Conclusion: We performed a comprehensive validation study (240 cases) assessing every component of WSI, including technical performance as well as diagnostic capability, on the available DPSs, as per CAP recommendations for adopting digital pathology in clinical use. Based on the results of this assessment, WSI was non-inferior to OM for primary diagnosis of biopsy, resection, and frozen section specimens, as the mean difference in diagnostic accuracy between WSI and OM against the reference standard was <4%, and WSI can be safely adopted for reporting these specimen types. Further training and improvements are required for handling cytology specimens by WSI. The results of our study are concordant with the published literature. Each scanner had its own pros and cons.
Based on overall performance, DPS 1 most closely emulated the real-world clinical environment when compared with conventional OM. The results of this study thus provide a comprehensive basis for overcoming the challenges faced during adoption and implementation of digital pathology in routine diagnosis.
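Discordance rates like those reported above reduce to simple tallies against the reference standard; a minimal sketch with illustrative (not study) data:

```python
# Sketch: tallying minor/major discordances against a reference standard,
# as in the accuracy figures reported above. Diagnoses below are invented
# for illustration, not study data.
def discordance_rates(reads):
    """reads: list of (diagnosis, reference, severity) tuples, where
    severity is None for concordant reads, or 'minor'/'major' otherwise."""
    total = len(reads)
    minor = sum(1 for _, _, sev in reads if sev == "minor")
    major = sum(1 for _, _, sev in reads if sev == "major")
    return {
        "accuracy": 100 * (total - minor - major) / total,
        "minor_pct": 100 * minor / total,
        "major_pct": 100 * major / total,
    }

reads = [("IDC", "IDC", None)] * 96 + \
        [("DCIS", "IDC", "minor")] * 2 + \
        [("benign", "IDC", "major")] * 2
print(discordance_rates(reads))
# {'accuracy': 96.0, 'minor_pct': 2.0, 'major_pct': 2.0}
```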


  1. Pantanowitz L, Sinard JH, Henricks WH, Fatheree LA, Carter AB, Contis L, et al. Validating whole slide imaging for diagnostic purposes in pathology: Guideline from the College of American Pathologists Pathology and Laboratory Quality Center. Arch Pathol Lab Med 2013;137:1710-22.
  2. Goacher E, Randell R, Williams B, Treanor D. The diagnostic concordance of whole slide imaging and light microscopy: A systematic review. Arch Pathol Lab Med 2017;141:151-61.
  3. Mills AM, Gradecki SE, Horton BJ, Blackwell R, Moskaluk CA, Mandell JW, et al. Diagnostic efficiency in digital pathology: A comparison of optical versus digital assessment in 510 surgical pathology cases. Am J Surg Pathol 2018;42:53-9.
  4. Campbell WS, Lele SM, West WW, Lazenby AJ, Smith LM, Hinrichs SH. Concordance between whole-slide imaging and light microscopy for routine surgical pathology. Hum Pathol 2012;43:1739-44.
  5. Bauer TW, Schoenfield L, Slaw RJ, Yerian L, Sun Z, Henricks WH. Validation of whole slide imaging for primary diagnosis in surgical pathology. Arch Pathol Lab Med 2013;137:518-24.

   Transition of Medical Pathology Rounds: From the Microscope to Digital Pathology Top

Brenda Galbraith1, Mary Melnyk2, Roland Maier1

1Science and Technology, DynaLIFE Medical Labs, Edmonton, Alberta, Canada,2Medical Affairs, DynaLIFE Medical Labs, Edmonton, Alberta, Canada.

E-mail: [email protected]

Background: Medical pathology rounds provide an opportunity to collaborate with pathologist peers and residents while reviewing interesting and challenging pathology cases. Cases were traditionally viewed using a multi-head microscope with limited viewing stations to review glass slides[1] and case details. More recently, cases were viewed using a microscope with a mounted camera and a large HD monitor[2] in a meeting room setting to accommodate more participants. Now, with the availability of digital whole slide imaging (WSI), pathologists can submit cases for WSI scanning and present them during a digital slide conference. Here we describe our transition to presenting rounds on a digital pathology platform. Methods: Medical pathology rounds participants were the medical staff pathologists, anatomical/general pathology residents, and pathology assistants. The learning activities and goals for rounds sessions were: to review interesting, rare, or diagnostically difficult cases with a question-and-answer period; to review problem stains or fixation issues in the lab; and to review Performance Improvement Program (PIP) slides and material from the College of American Pathologists (CAP). Pathology rounds via microscope: Equipment used: Olympus BX51 microscope with mounted DP72 12.8 MP high-resolution camera, Olympus cellSens camera software, and a 55-inch HD television. The microscope with mounted camera and monitor were located in a meeting room. The camera software translated the microscope field of view to the large TV monitor for observation. The TV monitor image needed constant fine-focus adjustment to be viewed clearly by the group. Pathology rounds via digital pathology: Equipment used: Aperio ScanScope CSO scanner (Leica Biosystems Imaging, Vista, CA), eSlide Manager database software version 12.3, computer monitors, keyboards, mice, a conference phone with 2 microphones, and an overhead projector and wall surface.
All staff and pathology residents were provided log-in credentials for the eSlide Manager database. Presenting pathologists provided glass slides for WSI scanning prior to a rounds session. Pathologists presented cases using the Digital Slide Conference function of eSlide Manager [Figure 1]. Participants used individual computer stations located in the computer lab with a conference phone, or joined from their remote office locations by calling into the conference number [Figure 2]. Results: Staff participated in their first digital pathology rounds session in December 2017. The consensus was that the new digital platform was far superior to the microscope and TV monitor. Pathologists began to scan all ancillary slides for discussion during the rounds presentation. This led to a more detailed and educational review of cases. See [Table 1] for a comparison of medical pathology rounds presented using a microscope versus digital pathology. Conclusions: Medical pathology rounds have become an integral part of the collaboration between pathology peers and an excellent opportunity for pathology residents to participate in learning activities. The use of digital pathology has also significantly increased the number of participants at every session. Using digital pathology for rounds has given staff pathologists familiarity and comfort with scanning and viewing diagnostic tissue on a digital platform. As the uptake of digital pathology grows in the pathology community, these pathologists will be strong advocates for the technology.
Figure 1: eSlide manager digital slide conference

Figure 2: Pathologists using computer lab for rounds

Table 1: Comparison of medical pathology rounds presentations using a microscope versus digital pathology



  1. Fung KM, Hassell LA, Talbert ML, Wiechmann AF, Chaser BE, Ramey J, et al. Whole slide images and digital media in pathology education, testing, and practice: The Oklahoma experience. Anal Cell Pathol (Amst) 2012;35:37-40.
  2. Fung K, Hassell L. Digital pathology for educational, quality improvement, research & other settings. In: Pantanowitz L, Parwani A, editors. Digital Pathology [S.I.]: American Society for Clinical Pathology; Chicago, IL, 2017. p. 95.

   Google AutoML versus Apple CreateML for Histopathologic Cancer Diagnosis: Which Algorithms are Better? Top

Andrew A. Borkowski1,2, Catherine P. Wilson1, Steven A. Borkowski1, L. Brannon Thomas1,2, Lauren A. Deland1, Stefanie J. Grewe2, Stephen M. Mastorides1,2

1 Department of Pathology, James A. Haley VA Hospital, Tampa, Florida, USA,2Department of Pathology and Cell Biology, University of South Florida, Morsani College of Medicine, Tampa, Florida, USA.

E-mail: [email protected]

Background: Artificial intelligence (AI) is set to revolutionize multiple fields in the coming years. One subset of AI, machine learning, shows immense potential for application in a diverse set of medical specialties, including diagnostic pathology. In this study, we investigate the utility of Apple CreateML and Google Cloud AutoML, two machine learning platforms, in a variety of pathological scenarios involving lung and colon pathology. Materials and Methods: We evaluate the ability of the platforms to differentiate normal lung tissue from cancerous lung tissue; to accurately distinguish two subtypes of lung cancer (adenocarcinoma from squamous cell carcinoma); to differentiate colon adenocarcinoma from normal colon tissue; to evaluate cases of colon adenocarcinoma for the presence or absence of KRAS mutation; and to differentiate between adenocarcinomas of lung origin and adenocarcinomas of colon origin. Results: In our trained models for lung and colon cancer diagnosis, both the Apple and Google machine learning algorithms performed very well individually, with no statistically significant differences found between the two platforms. However, some critical factors set them apart. Apple CreateML can be used on local computers but is limited to the Apple ecosystem. Google AutoML is not platform specific but runs only in Google Cloud, with associated computational fees. Conclusions: Both platforms are excellent machine learning tools with great potential in the field of diagnostic pathology; which one to choose would depend on personal preference, programming experience, and available storage space.
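The abstract does not state which statistical test was used; McNemar's exact test is one standard way to compare two classifiers scored on the same cases, sketched here with illustrative counts (not study data):

```python
# Sketch: McNemar's exact test for two classifiers evaluated on the same
# slides. b = cases only model A classified correctly; c = cases only
# model B classified correctly. Concordant cases do not enter the test.
from math import comb

def mcnemar_exact(b, c):
    """Two-sided exact (binomial) p-value for the discordant pairs."""
    n = b + c
    if n == 0:
        return 1.0
    tail = sum(comb(n, k) for k in range(min(b, c) + 1)) / 2 ** n
    return min(2 * tail, 1.0)

# Illustrative counts: 6 slides only model A got right, 4 only model B.
print(mcnemar_exact(6, 4))  # 0.75390625 -> no significant difference
```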

   Integrating Cytology into Routine Digital Pathology Workflow: An Appraisal from the Nagasaki-Kameda DP Network Top

Andrey Bychkov1,2, Takashi Hori1, Yoko Masuzawa1, Akira Yoshikawa1, Junya Fukuoka1,2

1Department of Pathology, Kameda Medical Center, Kamogawa, Japan,2Department of Pathology, Nagasaki University Graduate School of Biomedical Sciences, Nagasaki, Japan. E-mail: [email protected]

Background: Despite recent advances in digital imaging, adoption of digital cytology remains challenging due to technical limitations. To date, there are few reports of successful implementation, mainly dealing with remote rapid on-site evaluation (ROSE). The Nagasaki-Kameda Digital Pathology Network achieved a 100% digital workflow for biopsies and surgical specimens in 2018. Herein, we describe our early experience with the adoption of digital cytology. Methods: Kameda Medical Center, with approximately 17,000 histopathology and 22,000 cytopathology cases annually, served as the model institution. The routine cytologic workflow included two-step screening (by junior and senior cytotechnologists, respectively) followed by sign out by a pathologist. Equipment suitable for digital cytology included a digital microscope with live video output (Olympus), a robotic microscope (Sakura), the Panoptiq microscopic digital imaging platform, and a slide scanner with Z-stack mode (Motic). Results: A switch to liquid-based cytology (LBC) was the initial step. Several directions for digital cytology were then selected and maintained over recent years. First is sign out of cytologic cases using a live digital microscope operated by a cytotechnologist and reviewed remotely by pathologists via video streaming (1134 cases in 10 months). Second is providing cytologic correlation to support WSI-based remote primary sign out of histopathological specimens (2087 cases in 26 months) and a daily pathology-radiology conference (125 cases in 10 months); this was particularly helpful for lung biopsies and body fluids. Finally, we archived positive cytology cases for integration into the LIS and for prospective AI studies (1112 cases in 26 months). Conclusions: While adoption of a fully digital mode for primary cytodiagnosis remains limited, we recommend these other use cases of digital cytology for practical and educational purposes, which proved successful in our setting.

   Automated De-Identification of Digital Pathology Data: The Honest Broker for Bioinformatics Technology Top

Luke Geneslaw1, Jennifer Samboy1, Dig Vijay Kumar Yarlagadda1, Matthew Hanna1, Evangelos Stamelos1, Thomas Fuchs1,2

1Department of Pathology, Memorial Sloan Kettering Cancer Center, New York, NY, USA,2Computational Biology and Medicine, Weill Cornell Graduate School of Medical Sciences, New York, NY, USA. E-mail: [email protected]

Background: At Memorial Sloan Kettering Cancer Center, we have incorporated digital pathology into clinical workflows and grown an archive of digital images in the process. This creates an opportunity for computational pathology research projects, which often require extremely large datasets to build effective deep learning models. Curating these datasets poses a challenge, as we must be prepared to de-identify terabytes of image data and associated labels. Methods: We have developed an Honest Broker for BioInformatics Technology (HoBBIT) which serves as a de-identification pipeline between the Laboratory Information System and investigators. Specifically, we employ (1) a SQL Server database for storing pathology reports in identified and de-identified form and (2) a Python application for de-identifying and transferring pathology reports and digital images to IRB-approved research projects. Results: The HoBBIT database contains de-identified pathology reports from over 300,000 cases and 1,000,000 digital images, adding 10,000 cases each month. The HoBBIT application can de-identify images and link them to pathology reports on demand. Conclusion: By building an automated de-identification pipeline attached to our Laboratory Information System, we have enabled investigators at Memorial Sloan Kettering Cancer Center to engage in large-scale computational pathology research projects.
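As a hedged illustration of the kind of rule-based scrubbing a de-identification pipeline like HoBBIT's Python application might apply to report text (the patterns, replacement tokens, and example string below are assumptions for illustration, not MSK's actual implementation):

```python
# Sketch: regex-based removal of common identifiers from pathology report
# text. Patterns here are illustrative assumptions; a production pipeline
# would use a vetted, much broader PHI rule set.
import re

PATTERNS = [
    (re.compile(r"\b[A-Z]{1,3}\d{2}-\d{4,6}\b"), "[ACCESSION]"),  # e.g. S19-12345
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),
    (re.compile(r"\bMRN[:# ]*\d+\b"), "[MRN]"),
]

def deidentify(report_text):
    """Replace each matched identifier with a neutral token."""
    for pattern, token in PATTERNS:
        report_text = pattern.sub(token, report_text)
    return report_text

print(deidentify("Case S19-12345, MRN: 884421, received 03/14/2019."))
# Case [ACCESSION], [MRN], received [DATE].
```

In a real pipeline the same pass would also be linked to the image headers, so the de-identified report and its WSI files share a study-specific surrogate ID.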

   How thin is a ThinPrep® Slide? Challenges of Achieving High Focus Quality on a Digital Whole Slide Imaging System Top

Shaoqing Peng1, Sid Mayer1

1Diagnostic Instrument Engineering, Product Development, Hologic Inc., Marlborough, Massachusetts, USA. E-mail: [email protected]

Background: The emergence of digital whole slide imaging (WSI) systems is set to revolutionize the fields of pathology and cytology. The ability to obtain high-quality whole slide images quickly will be a vital step in a successful clinical workflow, especially for high-volume screening applications like the Pap test. ThinPrep liquid-based cytology slides present a near monolayer visually to the reviewer, but cytology is inherently 3-dimensional. These slides can be challenging[1] for WSI because the focal depth of closely juxtaposed material can be an order of magnitude greater than the depth of field (DOF) of a high-power microscope objective. For this reason, cytology slides are more challenging to image than histologic tissue slides. Slides with film coverslips can also add to the scanning depth requirement due to curvature across the slide cell spot region. Most current WSI systems require repeated scans to cover multiple focal planes in order to acquire quality images, greatly increasing imaging time. In this study, we will present the key challenges of obtaining high-focus-quality images for ThinPrep Pap test slides, including maps of focal depth and data on its statistical distribution. We will also present our methods for efficiently scanning ThinPrep slides with a novel methodology using a digital imaging system in development by Hologic, including analysis and discussion of the advantages and limitations of these methods. What is ThinPrep technology?: ThinPrep technology includes a state-of-the-art cell separation and deposition technique that dramatically reduces problems[2] seen in the traditional Pap smear.
It produces near monolayer slides for diagnostic review that significantly increase sensitivity of detecting high-grade squamous intraepithelial lesion (HSIL) and cancer (from 77.8% and 90.9% to 92.9% and 100% respectively).[2] What are the Challenges?: Because cells may be suspended in the mounting medium and may also stack up, cytology slides are inherently 3-dimensional. Since a microscope objective has such a small depth of field (DOF), cells cannot all be captured in focus in a single image. In fact, individual cells may be thicker than a single DOF. A 40X microscope objective lens with NA 0.75 has a depth of field of less than 2 microns. Methods: 23 ThinPrep Pap slides were scanned on a computer-controlled microscope with a digital camera (Hologic Integrated Imager) to gather cell preparation depth data, using the following procedure:

  • Scan the cell spot area using the computer-driven XY stage
  • At each location, capture a stack of images at a wide range (> 40 microns) of Z height
  • Divide every image into small regions (35 microns square) and evaluate the Brenner focus score metric for every level in the Z stack. Determine the optimal focus for that tile
  • By focusing on the fiducial marks printed on the slide, determine the overall slide glass plane and subtract that from the focus data to determine relative heights of cell content.
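The Brenner focus metric in the steps above can be sketched as follows; the synthetic tiles and crude smoothing are illustrative stand-ins for camera frames, not Hologic's implementation:

```python
# Sketch: Brenner focus score per tile and best-focus plane selection,
# mirroring the Z-stack procedure described above (synthetic data).
import numpy as np

def brenner(tile):
    """Brenner gradient: sum of squared differences between pixels
    two positions apart along each row; higher = sharper."""
    d = tile[:, 2:].astype(float) - tile[:, :-2].astype(float)
    return float(np.sum(d * d))

def best_focus_plane(z_stack):
    """z_stack: list of 2-D arrays, one per Z height for a single tile.
    Returns the index of the sharpest plane."""
    return int(np.argmax([brenner(t) for t in z_stack]))

rng = np.random.default_rng(0)
sharp = rng.integers(0, 255, (35, 35))          # a 35x35-pixel tile
blurred = sharp.copy()
blurred[:, 1:] = (blurred[:, 1:] + blurred[:, :-1]) // 2  # crude smoothing
print(best_focus_plane([blurred, sharp]))  # 1 -> the sharper plane wins
```

Subtracting the glass-plane height found from the fiducial marks then converts each tile's best Z index into a relative cell-content height, which is what the focal depth maps summarize.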

Results: Our data show the average cell depth of ThinPrep slides is 11.09 microns for slides with a glass coverslip and 23.6 microns for slides with film coverslips. In some cases, cell depth can be greater than 40 microns, far exceeding some of the earlier findings.[3] A surface-plot focus map for each slide was created and reviewed. [Table 1] presents a summary of the data. Volumetric Scanning: Most current WSI systems scan a single focal plane at a time. A typical scanner completes a 15 x 15 mm scan in 1 minute.[4] At this rate, scanning a circular ThinPrep cell spot region to a depth of 14 focal planes would take at least 26 minutes. Using a tilt-plane volumetric scanning method (in development) significantly decreases the acquisition time for scanning the full cell content area: a ThinPrep Pap slide can be completed in approximately 2.5 minutes. The imaging optics and camera are tilted with respect to the slide, so the region of the image at one edge of the camera frame acquires images closer to the slide glass than the region at the other edge. A tilt angle of 48 milliradians and an image frame width at the slide of 0.5 mm provide a scan depth of 0.5 mm x sin(0.048), which is approximately 24 microns. Discussion: One of the key challenges in moving to digital cytology is obtaining high-focus-quality WSI in the time frame demanded by today's high-throughput lab workflow. The volumetric scanning system described provides a practical solution meeting the needs of digital cytology and pathology WSI, where a ThinPrep Pap slide can be scanned and processed within 2.5 minutes. The speed of slide scanning is limited primarily by the camera rate and image processing speed. As hardware and software technologies advance, still faster slide scan rates will become possible.
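The tilt-plane geometry can be checked directly: a camera frame of width w tilted by angle θ spans a depth of w·sin(θ) in Z:

```python
# Verifying the stated tilt-plane scan depth: depth = frame width x sin(tilt).
import math

frame_width_mm = 0.5
tilt_rad = 0.048  # 48 milliradians
depth_um = frame_width_mm * math.sin(tilt_rad) * 1000  # mm -> microns
print(round(depth_um, 1))  # 24.0 microns, matching the stated scan depth
```

A 24-micron sweep per pass comfortably covers the 23.6-micron average cell depth measured for film-coverslipped slides, which is why a single tilted pass can replace many single-plane scans.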
Table 1: Summary of cell depth results



  1. Higgins C. Applications and challenges of digital pathology and whole slide imaging. Biotech Histochem 2015;90:341-7.
  2. Andy C, Turner LF, Neher JO. Clinical inquiries. Is the ThinPrep better than conventional Pap smear at detecting cervical cancer? J Fam Pract 2004;53:313-5.
  3. Fan Y, Bradley AP. A method for quantitative analysis of clump thickness in cervical cytology slides. Micron 2016;80:73-82.
  4. Evans AJ, Salama ME, Henricks WH, Pantanowitz L. Implementation of whole slide imaging for clinical purposes: Issues to consider from the perspective of early adopters. Arch Pathol Lab Med 2017;141:944-59.

   Probing Intra- and Inter-Tumor Variability in Image Analysis Quantification of Immune Cell Infiltration: Implications for Preclinical Immuno-Oncology Studies Top

Alan Opsahl, Sepideh Mojtahedzadeh, Germaine Boucher, Dingzhou Li, Dusko Trajkovic, Joan Aguilar, Timothy Affolter, Timothy Coskran, Shawn P. O'Neil, Sripad Ram

Drug Safety Research and Development, Pfizer Inc., La Jolla, USA.

E-mail: [email protected]

Background: Digital image analysis (DIA) of immunohistochemistry (IHC) assays is routinely performed to quantify immune cell infiltration in the tumor microenvironment for immuno-oncology projects. A high degree of variation is frequently observed in DIA estimates of immune cell density among treatment groups. This has implications for study design, particularly for determining the group size in each treatment arm needed to achieve statistically meaningful results from IHC analyses, and it also impacts the interpretation of results. Methods: To understand the sources of variation in our IHC DIA estimates, we designed a study in which IHC was performed for several immune-cell antigens using CT26 tumor blocks. Tumors were collected at necropsy, immersion-fixed in 10% neutral buffered formalin for 24-48 hours, and processed to paraffin blocks. IHC was performed on a Leica Bond-III automated IHC instrument, using 5 μm sections of tumor tissue and established internal IHC protocols for each antigenic target. Slides were scanned on an Aperio AT2 whole slide digital scanner using the 20X setting, and images were analyzed using custom algorithms created in Visiopharm software and optimized for each IHC antigen of interest. Results: For each of the target antigens, we assessed: 1) IHC assay variability, by repeating the IHC protocol on 15 serial sections; 2) intra-animal variability, by performing the IHC protocol on 10 step sections (100 microns/step); and 3) inter-animal variability, by comparing DIA results among different CT26 tumor blocks. For each scenario, the variability of the DIA endpoint was evaluated by analysis of the coefficient of variation (CV) for the series of measurements for each antigen. The magnitude of each source of variability was benchmarked against CV = 20%, which is typically used to assess the performance of IHC assays.
Our preliminary analysis reveals that inter-animal variability may contribute significant variance to the results observed for the immune-cell antigens evaluated in the CT26 tumor model. We also performed a power analysis to understand the effects of group size (number of animals per group) and CV on the statistical power for detecting a meaningful fold-change in immune cell density among groups. Conclusion: Our results reveal the complex relationship between intrinsic biological variability, pre-analytic variables, and sample size, and how these can impact the interpretation of DIA results.
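The two quantities above, the coefficient of variation and power for detecting a fold-change, can be sketched as follows; the effect sizes, seed, and normal-approximation test are illustrative assumptions, not the study's actual analysis:

```python
# Sketch: CV computation and a simulation-based power estimate for a
# two-group fold-change comparison (illustrative parameters only).
import numpy as np

def cv(values):
    """Coefficient of variation, as a percentage of the mean."""
    values = np.asarray(values, dtype=float)
    return 100 * values.std(ddof=1) / values.mean()

def power_two_group(n, cv_pct, fold_change, n_sim=2000, seed=1):
    """Fraction of simulated two-group comparisons that detect the given
    fold-change (two-sided, normal approximation to the t critical value)."""
    rng = np.random.default_rng(seed)
    sd = cv_pct / 100
    hits = 0
    for _ in range(n_sim):
        a = rng.normal(1.0, sd, n)                      # control group
        b = rng.normal(fold_change, fold_change * sd, n)  # treated group
        se = np.sqrt(a.var(ddof=1) / n + b.var(ddof=1) / n)
        hits += abs(a.mean() - b.mean()) / se > 1.96
    return hits / n_sim

print(cv([8, 10, 12]))  # 20.0 -> exactly the 20% benchmark used above
```

For example, at CV = 20% a 2-fold change is detected almost always with 10 animals per group, while smaller fold-changes or higher CVs quickly erode power, which is the design trade-off the abstract describes.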

   Validation of Digital Pathology for Secondary Diagnosis in a Consultative Pulmonary Pathology Practice Top

Taofic Mounajjed1, Marie-Christine Aubry1, Vera J. Suman1, Jennifer M. Boland1, Joseph J. Maleszewski1, Anja C. Roden1, Eunhee S. Yi1, Charlene L. Brown1, Mark Norman1, Thomas J. Flotte1

1Department of Laboratory Medicine and Pathology, Mayo Clinic, Rochester, MN, USA. E-mail: [email protected]

Background: Although studies[1] have shown whole slide imaging (WSI) non-inferiority to microscopy for primary diagnosis in surgical pathology, large studies[2] evaluating WSI for secondary consultation are limited. We aimed to compare WSI to microscopy in a subspecialized secondary consultation practice. Methods: As part of a large multi-specialty validation study of digital pathology in consultation practice encompassing 11 specialties, 101 consultative cases directed to the section of pulmonary pathology (5 pathologists) were included (50 consecutive cases and 51 cases from targeted categories selected to capture case types of uncommon histology or poor inter-evaluator agreement). A total of 1555 slides were scanned using an Aperio AT Turbo scanner at ×40 scanning power. Each case was reviewed by 2 pathologists; each pathologist reviewed a case twice, once by light microscopy and once digitally (Aperio eSlide Manager software) using scanned images from the glass slides on a medical-grade color-calibrated BARCO Coronis Fusion 6MP LED display (30.4-inch screen size, 3280 x 2048 pixel resolution, 178-degree viewing angle [H and V], 500 cd/m2 DICOM-calibrated luminance [720 cd/m2 maximum luminance], 1000:1 contrast ratio). Pathologists were randomized to which modality they would use first, with a minimum 2-week washout period between glass and digital reviews. Identical resources (stains/opinions) were available for either review modality; if, during evaluation with a given modality, the pathologist ordered additional studies, only the specified studies were provided. Diagnoses were evaluated for disagreement; disagreements that would significantly alter the patient's treatment or prognosis were considered major. Results: Each pathologist performed 37 to 46 reads (median = 40). Additional studies were obtained on 16 cases (16%), including 16 times during glass review and 3 times during digital review.
A second opinion was obtained from a colleague in 21.6% of glass reviews and 13.4% of digital reviews. Intra-pathologist agreement (glass vs. digital) was 96% (range: 89% to 100%); there were 5 minor and 3 major disagreements (major disagreement rate: 1.5%). Inter-pathologist agreement was 99% for glass diagnosis (one major disagreement) and 93% for digital diagnosis (3 minor and 4 major disagreements; major disagreement rate: 4%). Six of the eight major diagnostic disagreements involved lymphoproliferative disorders. Conclusions: Digital review of pulmonary consultative cases showed high concordance with glass (1.5% major disagreement rate). Although digital inter-observer agreement was lower than for glass, the major disagreement rate was still low (4%). These findings suggest that digital review of pulmonary consultative cases is non-inferior to glass review; a limitation is the diagnosis of lymphoproliferative disorders.
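Raw agreement percentages like those above are often complemented by Cohen's kappa, which corrects for chance agreement; a minimal sketch with illustrative label sequences (not study data):

```python
# Sketch: Cohen's kappa for paired reads (e.g. glass vs. digital diagnoses
# of the same cases). Labels below are invented for illustration.
from collections import Counter

def cohen_kappa(reads_a, reads_b):
    """Chance-corrected agreement between two equal-length label lists."""
    n = len(reads_a)
    observed = sum(a == b for a, b in zip(reads_a, reads_b)) / n
    pa, pb = Counter(reads_a), Counter(reads_b)
    expected = sum(pa[k] * pb[k] for k in pa) / (n * n)
    return (observed - expected) / (1 - expected)

glass   = ["adeno", "adeno", "squamous", "benign", "adeno", "squamous"]
digital = ["adeno", "adeno", "squamous", "benign", "squamous", "squamous"]
print(round(cohen_kappa(glass, digital), 3))  # 0.739 -> substantial agreement
```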


  1. Mukhopadhyay S, Feldman MD, Abels E, Ashfaq R, Beltaifa S, Cacciabeve NG, et al. Whole slide imaging versus microscopy for primary diagnosis in surgical pathology: A multicenter blinded randomized noninferiority study of 1992 cases (Pivotal study). Am J Surg Pathol 2018;42:39-52.
  2. Zhao C, Wu T, Ding X, Parwani AV, Chen H, McHugh J, et al. International telepathology consultation: Three years of experience between the University of Pittsburgh Medical Center and KingMed Diagnostics in China. J Pathol Inform 2016;6:63.

   Role of Digital Pathology in Drug Development Process Top

Aleksandra Zuraw1

1Department of Pathology, Charles River Laboratories, Montréal, Canada. E-mail: [email protected]

Digital pathology as a medical discipline is most visible, and receives the most press, in the context of clinical diagnostics; however, it is also highly relevant in all phases of the drug development process. The aim of this poster is to review the role of digital pathology in the pharmaceutical industry. With the increasing capability of pharmaceutical companies and contract research organizations to rapidly perform whole slide imaging and convert glass slide content into digital images, there is great potential to unlock the benefits of digital pathology for drug development. Pathologists are crucial members of drug development teams and are engaged in every step of the process, contributing significantly to the discovery, preclinical, and clinical phases. Implementation of digital pathology workflows within and across organizations empowers them and benefits the drug development process in multiple ways. Pathologists involved in pharmacological studies are often geographically dispersed; digital pathology allows them to communicate quickly and perform slide consultation in real time regardless of location,[1] which increases the efficiency of their work. Multiple pathologists can view, annotate, and comment on the same slide simultaneously. The digitization of slides enables image analysis-powered quantitative measurement of biomarkers,[2] lesions, and abnormalities, which increases the throughput, reproducibility, and objectivity of pathologists' scoring.[3],[4] Traditionally, hand-crafted, hard-coded computer algorithms were, and still are, used to provide these enhancements, but as technology advances, digital pathology is beginning to benefit from artificial intelligence applications.[5] As tissue research is cross-disciplinary, access to digital pathology for the different groups involved in drug development, especially DVM and MD pathologists, helps them work more collaboratively and better understand each other's contributions.
Pharmaceutical companies also work closely with contract research organizations and image analysis companies. For each project, thousands of slides are transferred between these organizations, or remote data access points such as cloud slide repositories are created.[6] In the era of outsourced preclinical development and multi-site clinical trials, digital pathology is becoming indispensable for streamlining the drug development process. [Figure 1] shows the areas in which digital pathology plays a role in drug development. Despite its many advantages, the adoption of digital pathology in the pharmaceutical industry has been slow. The technology is still considered novel, and in many institutions its implementation is met with skepticism. Nevertheless, making it accessible and more user-friendly for pathologists, standardizing scoring algorithms, and applying it broadly across organizations will undoubtedly accelerate and advance drug development. Additionally, technology maturation, inclusion of digital pathology in pathologists' training curricula, and decreasing costs will increase acceptance and adoption, which will help to harness the full potential of digital pathology within the pharmaceutical industry.
Figure 1: Role of digital pathology in drug development. IA – image analysis; AI – artificial intelligence



  1. Webster JD, Dunstan RW. Whole-slide imaging and automated image analysis: Considerations and opportunities in the practice of pathology. Vet Pathol 2014;51:211-23.
  2. Hamilton PW, Bankhead P, Wang Y, Hutchinson R, Kieran D, McArt DG, et al. Digital pathology and image analysis in tissue biomarker research. Methods 2014;70:59-73.
  3. Van Eycke YR, Allard J, Salmon I, Debeir O, Decaestecker C. Image processing in digital pathology: An opportunity to solve inter-batch variability of immunohistochemical staining. Sci Rep 2017;7:42964.
  4. Aeffner F, Wilson K, Martin NT, Black JC, Hendriks CLL, Bolon B, et al. The gold standard paradox in digital image analysis: Manual versus automated scoring as ground truth. Arch Pathol Lab Med 2017;141:1267-75.
  5. Tizhoosh HR, Pantanowitz L. Artificial intelligence and digital pathology: Challenges and opportunities. J Pathol Inform 2018;9:38.
  6. Aeffner F, Adissu HA, Boyle MC, Cardiff RD, Hagendorn E, Hoenerhoff MJ, et al. Digital microscopy, image analysis, and virtual slide repository. ILAR J 2018;59:66-79.

   An Automated Image Analysis Pipeline for Plasma Cell Quantitation and Multiple Myeloma Prognostication

Vahid Azimi1, Ngoc Tran2, Guillaume Thibault3, Young Hwan Chang3, 4, 5, Eva Medvedova5, Kevin R. Turner6, Philipp W. Raess6

1Department of Pathology, School of Medicine, Oregon Health and Science University, Portland, OR, USA,2Department of Medical Informatics and Clinical Epidemiology, Oregon Health and Science University, Portland, OR, USA,3Department of Biomedical Engineering and OHSU Center for Spatial Systems Biomedicine (OCSSB), Oregon Health and Science University, Portland, OR, USA,4Department of Computational Biology, Oregon Health and Science University, Portland, OR, USA,5Department of Hematology/Oncology, Knight Cancer Institute, Oregon Health and Science University, Portland, OR, USA,6Department of Pathology, Oregon Health and Science University, Portland, OR, USA. E-mail: [email protected]

Background: Diagnosing plasma cell myeloma (PCM) and differentiating it from monoclonal gammopathy of undetermined significance (MGUS) requires precise quantitation of plasma cells (PCs) in bone marrow biopsies. Currently, the determination of PC percentage and clonality is performed via visual estimation of CD138 immunohistochemistry (IHC) and kappa/lambda in-situ hybridization (ISH). Additionally, many recent studies have shown that computationally derived quantitative histologic features from H&E digital whole-slide images (WSIs) are associated with clinical outcomes and can serve as potential prognostic and predictive biomarkers in breast and lung cancer.[1],[2],[3] To the best of our knowledge, no study to date has demonstrated that computer-assisted PC quantification is more strongly associated with clinical outcomes than conventional visual estimation by a pathologist in PCM. Furthermore, most existing software for quantifying PC percentage from bone marrow biopsy WSIs requires significant user input and is thus not practical for clinical implementation. Additionally, no study to date has demonstrated automated quantitation of PC clonality via analysis of kappa/lambda ISH WSIs. Development of an automated workflow to determine PC percentage and clonality and to extract quantitative histologic features from bone marrow biopsy WSIs could standardize PC quantitation and more accurately predict PCM prognosis. Methods: We used a previously published nuclei segmentation method based on an unsupervised machine learning algorithm to identify and extract quantitative features from H&E WSIs of PCM bone marrow biopsies.[4],[5] Additionally, we extended this method to perform segmentation on a small number of CD138 IHC and kappa/lambda ISH images. Results: Our fully automated pipeline extracts 380 morphological features from H&E bone marrow biopsy WSIs. 
Additionally, our pipeline identifies PCs from CD138 IHC WSIs and determines clonality from kappa/lambda light-chain ISH WSIs. Conclusions: We demonstrate a novel automated quantitative image analysis pipeline capable of extracting quantitative features from H&E images and determining PC percentage and clonality from IHC and ISH WSIs. To the best of our knowledge, this is the first report of a fully automated pipeline for quantitatively determining PC percentage and clonality. The pipeline described here requires no manual input from a pathologist and no manual tuning of parameters. Future work will focus on three aims: 1) determining the accuracy of the pipeline by comparing its cell segmentation output with annotated ground-truth images, 2) comparing the prognostic significance of automated PC and PC clonality quantification with pathologists' visual estimation, and 3) using image features extracted by the algorithms described herein to develop improved models predictive of clinical outcomes for patients with PCM.
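As an illustration only (the abstract does not publish its formulas), the final reported quantities could be derived from per-cell classification counts along these lines; all counts and the kappa/lambda cut-offs below are assumptions, not values from the study:

```python
# Hypothetical sketch: turning per-cell counts (e.g., from CD138 IHC and
# kappa/lambda ISH segmentation steps) into PC percentage and clonality.
# All numbers below are illustrative, not from the study.

def pc_percentage(cd138_positive: int, total_nucleated: int) -> float:
    """Plasma cell burden as a percentage of nucleated marrow cells."""
    return 100.0 * cd138_positive / total_nucleated

def kappa_lambda_ratio(kappa_cells: int, lambda_cells: int) -> float:
    """Light-chain ratio from ISH-positive plasma cell counts."""
    return kappa_cells / lambda_cells

def is_light_chain_restricted(ratio: float, low: float = 0.5, high: float = 4.0) -> bool:
    """Flag clonality when the ratio falls outside an assumed reference band
    (cut-offs vary by laboratory; [low, high] here is illustrative)."""
    return ratio < low or ratio > high

burden = pc_percentage(cd138_positive=1200, total_nucleated=8000)  # 15.0%
ratio = kappa_lambda_ratio(kappa_cells=1100, lambda_cells=100)     # 11.0
print(burden, ratio, is_light_chain_restricted(ratio))
```

A clonal PC burden of at least 10% is among the criteria separating PCM from MGUS, which is why both quantities matter for the diagnosis.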


  1. Beck AH, Sangoi AR, Leung S, Marinelli RJ, Nielsen TO, van de Vijver MJ, et al. Systematic analysis of breast cancer morphology uncovers stromal features associated with survival. Sci Transl Med 2011;3:108ra113.
  2. Mobadersany P, Yousefi S, Amgad M, Gutman DA, Barnholtz-Sloan JS, Velázquez Vega JE, et al. Predicting cancer outcomes from histology and genomics using convolutional networks. Proc Natl Acad Sci U S A 2018;115:E2970-9.
  3. Yu KH, Zhang C, Berry GJ, Altman RB, Ré C, Rubin DL, et al. Predicting non-small cell lung cancer prognosis by fully automated microscopic pathology image features. Nat Commun 2016;7:12474.
  4. Azimi V, Chang YH, Thibault G, Smith J, Tsujikawa T, Kukull B, et al. Breast cancer histopathology image analysis pipeline for tumor purity estimation. Proc IEEE Int Symp Biomed Imaging 2017;2017:1137-40.
  5. Chang YH, Thibault G, Azimi V, Johnson B, Jorgens D, Link J, et al. Quantitative analysis of histological tissue image based on cytological profiles and spatial statistics. Conf Proc IEEE Eng Med Biol Soc 2016;2016:1175-8.

   Implementation of Digital Pathology for Primary Diagnosis at the CHUM

Bich N. Nguyen1, Natalie Dion1, Jean-François Pomerol2, Dominique Trudel1

1Pathology Service, Centre hospitalier de l'Université de Montréal (CHUM), Montreal, QC, Canada,2TRIBVN Healthcare, Paris, France.

E-mail: [email protected]

Modern pathology services are under pressure to provide efficient, high-quality diagnoses while managing increasing workloads and diagnostic complexity. Recent studies[1],[2],[3],[4] have shown the utility of digital pathology (DP) in pathology services and in improving laboratory workflow and efficiency. In partnership with the private sector, we initiated the conversion of our entire service to a DP platform for primary pathology diagnosis. To achieve this goal, we implemented an incremental roll-out of DP and used the CaloPix software solution to interpret the scanned images, starting with small tissue sections (biopsies) and progressing to larger ones (surgical specimens). The project consists of three major phases: collection of data before DP implementation (Pre-I), implementation of DP (I), and post-implementation (Post-I). The expected rate of adoption by our pathologists is 25% in the first year, rising progressively to 90% over a 5-year period. Issues and successful corrective actions are documented daily. We measure the human cost-effectiveness of DP by recording personnel time spent on DP as well as pathologists' productivity and turnaround time. The data collected in the Post-I phase will be compared with the Pre-I data. The implementation, in parallel with the pathologists' adjustment to on-screen analysis of digital images, required significant laboratory workflow changes. Our experience highlights the importance of pathologist and personnel training, integrated software and prompt technical assistance, as well as significant investment from our healthcare organization. Lastly, we emphasize the importance of effective communication in engaging stakeholders for a successful transition to DP.


  1. Baidoshvili A, Bucur A, van Leeuwen J, van der Laak J, Kluin P, van Diest PJ. Evaluating the benefits of digital pathology implementation: Time savings in laboratory logistics. Histopathology 2018;73:784-94.
  2. Baidoshvili A, Stathonikos N, Freling G, Bart J, 't Hart N, van der Laak J, et al. Validation of a whole-slide image-based teleconsultation network. Histopathology 2018;73:777-83.
  3. Griffin J, Treanor D. Digital pathology in clinical use: Where are we now and what is holding us back? Histopathology 2017;70:134-45.
  4. Ho J, Ahlers SM, Stratman C, Aridor O, Pantanowitz L, Fine JL, et al. Can digital pathology result in cost savings? A financial projection for digital pathology implementation at a large integrated health care organization. J Pathol Inform 2014;5:33.

   A Novel Deep Learning Approach to Quantifying Intratumoral Histologic Heterogeneity

Drew F. K. Williamson1, Fei Dong1

1Department of Pathology, Brigham and Women's Hospital, Boston, MA, USA. E-mail: [email protected]

Background: With the proliferation of cancer genomics and interest in single-cell technologies, there has been a parallel increase in the appreciation of and interest in intratumoral heterogeneity, i.e., how cancer cells within the same tumor may be clustered into discrete populations based on differences from one another. These populations may be defined by genomic, transcriptomic, or phenotypic differences, for example.[1] In anatomic pathology, intratumoral heterogeneity in histology has long been recognized, and shifts in tumor cell cytology and histology are commonly observed. Groups have shown that these changes in histology may correlate with the spatial distribution of mutations.[2] A reproducible method for quantifying this intratumoral histological heterogeneity (IHH) has not yet emerged, and there is an unmet need for such quantification in order to correlate IHH with molecular definitions of intratumoral heterogeneity and elucidate the connections between the two. Methods: We have developed a deep learning-based algorithm that leverages the ability of a convolutional autoencoder to identify meaningful information in an image and discard unnecessary noise. A convolutional autoencoder is composed of two pathways: first, an encoder pathway that accepts an image as input and returns a compressed representation; and second, a decoder pathway that accepts a compressed representation and attempts to reconstruct the input image, with a loss function that penalizes the difference between the input image and the reconstruction. Such models have been used in a variety of areas, including unsupervised radiology image analysis[3] and image denoising.[4] One can quantitate how similar two images passed through the autoencoder are by comparing their latent space representations: if their encodings are similar, then the two images must share a similar structure. 
We utilize this feature of autoencoders to compare patches of tumors to one another and develop a metric based on this comparison. A schematic flowchart of the algorithm is presented in [Figure 1]. We quantify the total amount of heterogeneity in a slide by aggregating the distances between encodings across all patches; slides demonstrating IHH have a larger average distance in the latent space than those with homogeneous tumors. Results: We applied our algorithm to whole slide images from The Cancer Genome Atlas, using diffuse large B-cell lymphomas (DLBCL) as a relatively histologically homogeneous population and squamous cell carcinomas of the lung (LUSC) as a relatively histologically heterogeneous population. After training the autoencoder on one set of images, we applied the model to produce encodings of patches from an entirely separate set of 20 DLBCL and 20 LUSC cases. We found a statistically significant difference in mean encoding distance between the LUSC and DLBCL groups (absolute difference: 3.3, p-value: 0.003, 95% CI: [1.2, 5.4]), with the mean value for LUSC (17.9 ± 3.5) being greater than that for DLBCL (14.6 ± 2.8), indicating that, on average, the LUSC tumors displayed greater IHH. Conclusions: Our novel deep learning-based method quantifies IHH in the expected fashion for a dataset composed of relatively histologically homogeneous DLBCL and heterogeneous LUSC cases. Based on this validation, we plan to apply our model to other cancer types and histologies and to determine how IHH may correlate with genomic measures of intratumoral heterogeneity. This has the potential to increase our understanding of the biology underlying these tumors and the heterogeneity of their response to therapy.
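The aggregation step described above can be sketched in a few lines, with random vectors standing in for the autoencoder's latent encodings; the abstract does not specify the distance function, so Euclidean distance is an assumption here:

```python
# Minimal sketch of the heterogeneity metric: a slide whose patch encodings
# scatter across latent space scores higher (more heterogeneous) than one
# whose encodings cluster tightly. Encodings below are synthetic stand-ins.
import numpy as np

def ihh_score(encodings: np.ndarray) -> float:
    """Mean pairwise Euclidean distance between patch encodings."""
    n = len(encodings)
    dists = [np.linalg.norm(encodings[i] - encodings[j])
             for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(dists))

rng = np.random.default_rng(0)
homogeneous = rng.normal(0.0, 0.1, size=(40, 16))                 # one tight cluster
heterogeneous = np.vstack([rng.normal(0.0, 0.1, size=(20, 16)),
                           rng.normal(3.0, 0.1, size=(20, 16))])  # two populations
assert ihh_score(heterogeneous) > ihh_score(homogeneous)
```

On this toy data the two-population slide scores higher, mirroring the expected LUSC-versus-DLBCL separation.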
Figure 1: Flowchart for the IHH quantification algorithm. (1) Whole slide images are divided into non-overlapping patches at ×20 magnification. (2) Patches from across the tumor are fed into a pre-trained autoencoder. (3) The autoencoder outputs encodings of each patch. (4) The distance between the encodings quantitates how similar they are



  1. Stenman A, Hysek M, Jatta K, Bränström R, Darai-Ramqvist E, Paulsson JO, et al. TERT promoter mutation spatial heterogeneity in a metastatic follicular thyroid carcinoma: Implications for clinical work-up. Endocr Pathol 2019;30:246-8.
  2. Dietz S, Harms A, Endris V, Eichhorn F, Kriegsmann M, Longuespée R, et al. Spatial distribution of EGFR and KRAS mutation frequencies correlates with histological growth patterns of lung adenocarcinomas. Int J Cancer 2017;141:1841-8.
  3. Chen M, Shi X, Zhang Y, Wu D, Guizani M. Deep features learning for medical image analysis with convolutional autoencoder neural network. IEEE Transactions on Big Data; 2017 Jun 20.
  4. Gondara L. Medical image denoising using convolutional denoising autoencoders. 2016 IEEE 16th International Conference on Data Mining Workshops. 12-15 December, 2016. Barcelona, Spain, New York: Curran Associates; 2017.

   Quantitative Nuclear Feature is Effective for Discrimination of Dysplastic Nodule and Well Differentiated Hepatocellular Carcinoma in Liver

Kyoungbun Lee1, Won-Ki Jeong2, Hyungjoon Jang2, Choyeon Hong3, Sunggyu Min3

1Department of Pathology, Seoul National University Hospital, Seoul, Republic of Korea,2School of Electrical and Computer Engineering, Ulsan National Institute of Science and Technology (UNIST), Ulsan, Republic of Korea,3Department of Pathology, Seoul National University Hospital, Seoul, Republic of Korea.

E-mail: [email protected]

Background: The “regenerating nodule (RN)-low grade dysplastic nodule (LGDN)-high grade dysplastic nodule (HGDN)-early HCC-HCC” sequence is a multistep carcinogenesis model established by the gradual change of histology and the accumulation of molecular alterations.[1] Diagnosis of dysplastic nodules depends mainly on histologic assessment by pathologists, but its reproducibility is not as high as for overt HCC: although the diagnostic criteria are clearly described, the changes between steps are too subtle for pathologists to evaluate consistently. The objective of this research was to develop a diagnostic prediction algorithm for premalignant lesions using image-based convolutional neural network (CNN) analysis and cell-based machine learning built on histologic criteria, and to compare the two approaches.[2],[3] Methods: 78 hepatocellular neoplasms, including 22 grade 1 HCCs, 33 HCCs arising in high grade dysplastic nodules (HCC-DN), 14 high grade dysplastic nodules (HGDN), 3 low grade dysplastic nodules (LGDN), and 6 regenerating nodules (RN), were enrolled for cell-based image analysis, and 50 HCCs were used for image-based CNN analysis. All slides were scanned on a Leica Aperio AT2 scanner at ×20 magnification and stored as whole slide images in SVS format. Diagnostic consensus on the data was obtained from ten pathologists. Model 1 was a CNN for whole slide images (WSIs) that used a ResNet101 model as a binary patch classifier; it was trained to distinguish tumor from non-tumor using 256 × 256 pixel patches. Model 2 was feature-based machine learning: the 78 WSIs were segmented with QuPath,[5] and 19 quantitative nuclear features were extracted and used for machine learning.[4] Results: For Model 1, the CNN model for WSIs, overall accuracy averaged 78%; all HCCs and HCC-DNs were classified as HCC, and the average Jaccard indices were 56.0% and 38.3% for HCC and HCC-DN, respectively [Table 1]. Accuracy on HGDN, LGDN, and RN was 14%, 33%, and 50%, respectively. 
The Jaccard index of the non-HCC groups (HGDN, LGDN, RN) was lower than that of HCC or HCC-DN. For Model 2, feature-based machine learning, nine nuclear features, including cellularity, mean and standard deviation of nuclear size, variation of circularity, mean eccentricity, hematoxylin intensity, and nuclear-to-cytoplasmic ratio, were selected for modeling, and both support vector machine and random forest methods achieved a high prediction accuracy of 85% [Table 1] and [Table 2]. Conclusions: The extracted nuclear features achieved higher prediction accuracy than pathologists' agreement for premalignant lesions mimicking normal liver. The CNN model was effective for predicting HCC but could not effectively exclude the non-HCC groups (e.g., HGDN, LGDN, RN). The whole-image-based CNN model was effective for training a single classifier but performed worse on negative cases than cell-based machine learning. This is a preliminary study, with many limitations to a direct comparison of the two approaches, including the small number of cases, different training sets, and different validation groups. Nevertheless, automatic quantification of well-known histologic features is a good approach to increasing the diagnostic consistency of ambiguous lesions with low diagnostic agreement.
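To make the feature definitions concrete, here is a sketch (not the study's QuPath pipeline) of a few of the nuclear features named above, computed from hypothetical per-nucleus area and perimeter measurements; the helper names and toy values are assumptions:

```python
# Illustrative computation of quantitative nuclear features of the kind
# used by the authors. Measurements below are synthetic, not study data.
import math
import statistics

def circularity(area: float, perimeter: float) -> float:
    """4*pi*A / P^2: equals 1.0 for a perfect circle and decreases as the
    nuclear contour becomes more irregular."""
    return 4.0 * math.pi * area / perimeter ** 2

def nuclear_feature_vector(areas, perimeters, cytoplasm_areas):
    """A handful of per-lesion summary features in the spirit of the 19
    used in the study (cellularity, staining intensity, etc. omitted)."""
    circ = [circularity(a, p) for a, p in zip(areas, perimeters)]
    nc = [a / c for a, c in zip(areas, cytoplasm_areas)]
    return {
        "mean_nuclear_area": statistics.mean(areas),
        "sd_nuclear_area": statistics.stdev(areas),
        "sd_circularity": statistics.stdev(circ),
        "mean_nc_ratio": statistics.mean(nc),
    }

# Toy measurements for three segmented nuclei.
features = nuclear_feature_vector(areas=[30.0, 40.0, 50.0],
                                  perimeters=[21.0, 24.0, 27.0],
                                  cytoplasm_areas=[300.0, 320.0, 310.0])
print(features)
```

A vector like this, aggregated per lesion, is what the support vector machine or random forest classifier then consumes as input.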
Table 1: Results of model 1: Convolutional neural network and model 2: Feature-Based Machine Learning

Table 2: Confusion Matrix of model 2 (a test dataset including 78 images)



  1. Shen Q, Eun JW, Lee K, Kim HS, Yang HD, Kim SY, et al. Barrier to autointegration factor 1, procollagen-lysine, 2-oxoglutarate 5-dioxygenase 3, and splicing factor 3b subunit 4 as early-stage cancer decision markers and drivers of hepatocellular carcinoma. Hepatology 2018;67:1360-77.
  2. Araújo T, Aresta G, Castro E, Rouco J, Aguiar P, Eloy C, et al. Classification of breast cancer histology images using convolutional neural networks. PLoS One 2017;12:e0177544.
  3. Yamada M, Saito A, Yamamoto Y, Cosatto E, Kurata A, Nagao T, et al. Quantitative nucleic features are effective for discrimination of intraductal proliferative lesions of the breast. J Pathol Inform 2016;7:1.
  4. Yang HD, Eun JW, Lee KB, Shen Q, Kim HS, Kim SY, et al. T-cell immune regulator 1 enhances metastasis in hepatocellular carcinoma. Exp Mol Med 2018;50:e420.
  5. Bankhead P, Loughrey MB, Fernández JA, Dombrowski Y, McArt DG, Dunne PD, et al. QuPath Ver. 0.2.0-m2; 2017. Available from:

   Laboratory Information System Leads Whole Slide Image Diagnosis: Integrating Digital Pathology with a Paperless Pathology Workflow

Kyoungbun Lee1, Jinwook Choi2, Eunsung Kim3, Peom Park4, Sunggyu Min5, Choyeon Hong6

1Department of Pathology, Seoul National University Hospital, Seoul, Republic of Korea,2Department of Biomedical Engineering, Seoul National University Hospital, Seoul, Republic of Korea,3Digital Pathology Development Team, HuminTec Co., Ltd. Suwon, Republic of Korea,4Industrial Engineering, HuminTec Co., Ltd. Suwon, Republic of Korea,5,6Department of Pathology, Seoul National University Hospital, Seoul, Republic of Korea. E-mail: [email protected]

Background: Whole slide imaging (WSI) has been adopted as an alternative to conventional histopathology, but its implementation for primary diagnosis is not yet widespread. Seoul National University Hospital (SNUH) has implemented whole slide imaging for primary diagnosis over the past two years, building on the pathology laboratory information system for a paperless pathology workflow established three years earlier. Method: The pathology laboratory system was designed for three purposes: 1) tracking and modulation of slide preparation from tissue, 2) ordering and tracking of ancillary tests among laboratory units, and 3) automatic report generation for formatted or repeated diagnoses, with step-wise monitored reporting from residents' preliminary diagnoses to the electronically signed-out final diagnosis by a specialist. After five years of complete implementation and adoption of the paperless, digitalized laboratory workflow, the WSI system for primary diagnosis was implemented. Result: 35% of cases were signed out by on-monitor WSI diagnosis. Hepatobiliary-pancreas, genitourinary, central nervous system, and renal pathology were the major participants in WSI diagnosis. It took three to six months to adapt to the viewer and to reconcile on-monitor histology with that of light microscopy. 45% of all tissue slides over a year were scanned for primary diagnosis, archiving, or secondary use. All pathologists use WSI for review, consultation, or conferences. Conclusion: A laboratory information system optimized for anatomic pathology not only has immediate effects on laboratory efficiency but also increases user friendliness toward WSI and the adoption of related software. This research was supported by a grant from the Korea Health Technology R&D Project through the Korea Health Industry Development Institute (KHIDI), funded by the Ministry of Health & Welfare, Republic of Korea (HI16C1501).

Figure 1: Milestones of digital pathology at SNUH


   Machine Learning for Real-Time Search and Prediction of Disease State to Aid Pathologist Collaboration on Social Media

Andrew J. Schaumberg1, 2, 3, *, Wendy Juarez3, 4, α, Sarah J. Choudhury3, 4, α, Laura G. Pastrián5,β, Bobbi S. Pritt6,β, Mario Prieto Pozuelo7,β, Ricardo Sotillo Sánchez8,β, Khanh Ho9,β, Nusrat Zahra10,β, Betul Duygu Sener11,β, Stephen Yip12,β, Bin Xu13,β, Srinivas Rao Annavarapu14,β, Aurélien Morini15,β, Karra A. Jones16,β, Kathia Rosado-Orozco17,β, Sanjay Mukhopadhyay18,β, Carlos Miguel19,β, Hongyu Yang20,β, Yale Rosen21,β, Rola H. Ali22,β, Olaleke O. Folaranmi23,β, Jerad M. Gardner24,β, Corina Rusu25,β, Celina Stayerman26,β, John Gross27,β, Dauda E. Suleiman28,β, S. Joseph Sirintrapun29, Mariam Aly30,31,δ,*, Thomas J. Fuchs2,29,δ,*

1Memorial Sloan Kettering Cancer Center and the Tri-Institutional Training Program in Computational Biology and Medicine, NY, USA,2Weill Cornell Graduate School of Medical Sciences, NY, USA,3Weill Cornell High School Science Immersion Program,4Manhattan/Hunter Science High School, NY, USA,5University Hospital La Paz, Department of Pathology, Madrid, Spain,6Mayo Clinic, Department of Laboratory Medicine and Pathology, Rochester, MN, USA,7University Hospital HM Sanchinarro, Therapeutic Target Laboratory, Madrid, Spain,8Virgin of Altagracia Hospital, Department of Pathology, Manzanares, Spain,9Hospital Center of Mouscron, Department of Pathology, Belgium,10Allama Iqbal Medical College, Department of Pathology, Lahore, Pakistan,11Konya Training and Research Hospital, Department of Pathology, Konya, Turkey,12BC Cancer, Department of Pathology, British Columbia, Canada,13Sunnybrook Health Sciences Center, Department of Pathology, Toronto, Ontario, Canada,14Royal Victoria Infirmary, Department of Cellular Pathology, Newcastle upon Tyne, England, UK,15University Paris East Creteil, Faculty of Medicine of Creteil, France,16University of Iowa, Department of Pathology, IA, USA,17HRP Labs, San Juan, Puerto Rico, USA,18Cleveland Clinic, Department of Pathology, Cleveland, OH, USA,19Asturias Medical Center, Department of Pathology, Oviedo, Spain,20St Vincent Evansville Hospital, Department of Pathology, Evansville, IN, USA,21SUNY Downstate Medical Center, Department of Pathology, NY, USA,22Kuwait University, Faculty of Medicine, Kuwait,23University of Ilorin Teaching Hospital, Department of Pathology, Nigeria,24University of Arkansas for Medical Sciences, Department of Pathology, Little Rock, AR, USA,25Augusta Hospital, Department of Pathology, Bochum, Germany,26TechniPath Laboratory, San Pedro Sula, Honduras,27Mayo Clinic, Bone and Soft Tissue and Surgical Pathology, Rochester, MN, USA,28Abubakar Tafawa Balewa University Teaching Hospital, Department of Histopathology, 
Bauchi, Nigeria,29Memorial Sloan Kettering Cancer Center, Department of Pathology, NY, USA,30Columbia University, Department of Psychology, NY, USA,31Affiliate Member of the Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA. Email: schaumberg.andrew+[email protected]

αThese authors contributed equally to this work.

βThese pathologist authors generously donated cases.

δThese authors are Principal Investigators of this work.

Pathologists are responsible for providing a diagnosis, which is an opinion rendered from the interpretation of microscopic images of a tissue specimen. Challenging cases require pathologists to seek additional opinions from colleagues. On-site colleagues are not always immediately available. However, there is an active worldwide community of pathologists on social media. This vibrant community is remarkably valuable for pathologists in developing countries, who often use social media to request opinions on a diagnosis.[1] Such access to pathologists worldwide has the capacity to (i) improve diagnosis accuracy and (ii) generate greater consensus on next steps in patient care. We leveraged data mining, text analysis, machine learning, and a social media bot (@pathobot on Twitter) to aid pathologists to obtain opinions [Figure 1]. Mentioning this bot in a social media post causes the bot to search social media and PubMed for images similar to the triggering post. The bot also posts predicted disease state (non-tumoral, benign/low grade malignant potential [low grade], or malignant) for each image in the triggering post, and for the case overall. Pathologists with similar cases are notified, so they may provide additional opinions. International pathologists (e.g. our collaborators in Nigeria and Pakistan), who do not have access to expensive diagnostics, have found this engine for crowd-sourced opinions especially valuable. We trained our bot on social media data. From Twitter, we assembled a dataset of 13,626 images from 6,351 Tweets from 25 pathologists from 13 countries. Each Tweet message includes both images and text commentary. To demonstrate the utility of these data for computational pathology, we apply machine learning to test whether we can (i) accurately identify histopathology stains, (ii) discriminate between tissue types, and (iii) differentiate non-tumoral, low grade, and malignant diseases. 
Using a Random Forest, we report (i) 0.967 ± 0.005 [mean ± stdev] Area Under Receiver Operating Characteristic [AUROC] (n=10,609) for ten repetitions of leave-one-pathologist-out [LOO] cross validation [CV] when differentiating human hematoxylin and eosin [H&E] stained microscopy images from all other types of images, e.g. natural scenes, and (ii) 0.996 ± 0.002 AUROC (n=7,526) when distinguishing H&E from immunohistochemistry stained microscopy images. We distinguish all ten tissue types on Twitter (bone and soft tissue, breast, dermatological, gastrointestinal, genitourinary, gynecological, head and neck, hematological, neurological, and pulmonary[2]), with 0.815 ± 0.010 AUROC (n=8,331) via 10-fold CV. For our most difficult and clinically relevant task of distinguishing non-tumoral, low grade, and malignant disease states, we report 0.750 ± 0.012 AUROC (n=6,549, 10-fold CV, with tissue type as covariate features for learning disease state). For pathology image search, we use this classifier for Random Forest similarity, and report precision@k=1 of 0.655 ± 0.003 ([Figure 2], LOO CV). To help find rare disease entities, we expanded our bot's search to PubMed, from which we downloaded 1,074,484 articles. Our “H&E vs other” classifier identified 30,585 articles with at least one H&E figure; these H&E figures (113,161 images) comprise our PubMed dataset. A PubMed image has a title, abstract, and figure caption. To complement image similarity search, our hand-engineered algorithms match tissue type (e.g. “lung” is pulmonary) and entity keywords (“HIV” is a disease entity, but “WOW!” is not). In total, we maintain a large pathology-focused dataset of 126,787 images with associated text, from patients the world over, to facilitate disease prediction and search. We believe this is the first use of social media data for pathology case search and the first pathology study prospectively tested in full public view on social media. 
This approach facilitates diagnoses and decisions about next steps in patient care by connecting pathologists all over the world, searching for similar cases, and generating predictions about disease states in their shared images. We expect our project to cultivate a more connected world of physicians and improve patient care worldwide.
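The precision@k=1 retrieval metric reported above can be sketched as follows; plain Euclidean nearest-neighbor search stands in here for the Random Forest similarity the authors actually use, and the features and labels are synthetic:

```python
# Hedged sketch of precision@k=1 for case search: for each query image,
# retrieve its single nearest neighbor and score a hit when the labels match.
import numpy as np

def precision_at_1(features: np.ndarray, labels: list) -> float:
    """Fraction of queries whose nearest neighbor (excluding the query
    itself) carries the same label."""
    hits = 0
    for i in range(len(features)):
        dists = np.linalg.norm(features - features[i], axis=1)
        dists[i] = np.inf                  # never retrieve the query itself
        hits += labels[int(np.argmin(dists))] == labels[i]
    return hits / len(features)

# Two synthetic disease-state clusters in a toy 2-D feature space.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
y = ["malignant", "malignant", "low_grade", "low_grade"]
print(precision_at_1(X, y))  # → 1.0 on this separable toy set
```

Swapping the Euclidean distance for a learned similarity (such as Random Forest proximity) changes only how `dists` is computed; the scoring loop is the same.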
Figure 1: Our pipeline begins with pathologist recruitment (A). If a pathologist consents to having their images used (B), we download those images (C) and manually annotate them (D). Next, we train a Random Forest classifier to predict image characteristics, e.g., disease state (E). This classifier is used to predict disease and search. If a pathologist posts a case to social media and mentions @pathobot (F), our bot will use the post's text and images to find similar cases on social media and PubMed (G). The bot then posts summaries and notifies pathologists with similar cases (H). Pathologists discuss the results (I), and some also decide to share their cases, initiating the cycle again (A)

Figure 2: Image similarity search metrics. Precision@k=1 with small error bars showing standard error of the mean of ten leave-one-pathologist-out cross validation trials. Random Forest similarity performs markedly better than an image feature vector L1 norm baseline for the 3-class disease state prediction task, but not for the 10-class tissue type prediction task. We use the disease state Random Forest for pathobot's image similarity search. Pathobot search is further informed by text keyword matching



This study was approved by the Institutional Review Board at Memorial Sloan Kettering Cancer Center.

A.J.S. was supported by NIH/NCI grant F31CA214029 and the Tri-Institutional Training Program in Computational Biology and Medicine (via NIH training grant T32GM083937). This research was funded in part through the NIH/NCI Cancer Center Support Grant P30CA008748.

S.Y. is a consultant and advisory board member for Bayer, receiving an honorarium and travel allowance.

T.J.F. is a founder, equity owner, and Chief Scientific Officer of Paige.AI.

We gratefully acknowledge Prof. Takehiko Fujisawa of Chiba University for sharing cases freely via Patholwalker on Twitter.


  1. Nix J, Gardner J, Costa F, Soares A, Rodriguez F, Moore B, et al. Neuropathology education using social media. J Neuropathol Exp Neurol 2018;77:454-60.
  2. Gardner J. Pathology Tag Ontology. [cited 2019 Aug 31]. Available from

   Telepathology Validation for Intraoperative Consultation in Multi-Facility Hospital Systems Top

Fnu Alnoor1, Jacob Abel2, Rudolfo Laucirica1, Mahul Amin1

1Department of Pathology and Laboratory Medicine, University of Tennessee Health Science Center, Memphis, TN, USA,2Department of Pathology and Clinical Laboratories, Medical School University of Michigan, Ann Arbor, MI, USA. E-mail: [email protected]

Background: Methodist Healthcare System (MHS) is a network of hospitals located throughout Memphis, TN and Olive Branch, MS. To provide the pathologists covering these outlying institutions with subspecialty expertise, we are instituting a dynamic telepathology (DT) system for intraoperative consultations.[1],[2],[3],[4],[5] We completed a validation study before implementing this protocol. Methods: 56 cases were selected randomly from 2018 frozen section cases. We used GoToMeeting software for online consultation and Captavison as the live image viewer. A pathologist in the frozen section room consulted a second pathologist, showing live microscopic images of the sample cases, and the communication was documented. After a washout period of one month, the consulted pathologist reviewed all cases by direct microscopy (DM). The data were tabulated in Microsoft Excel and analyzed. Results: The concordance between DT and DM was 93%. Three of 56 cases (5%) were deferred to permanent sections on DT; of these, one had a definitive diagnosis on DM. Two cases (3.5%) called malignant on DT were indeterminate for malignancy on DM and were deferred. One case called negative for malignancy on DT was positive for malignancy on DM. The overall image quality was satisfactory. Conclusion: Although there were discrepancies in four cases, the overall performance of DT was satisfactory. We believe that improved communication between the pathologists could further improve performance. This setup is also cost-effective and could serve multiple sites within the same healthcare system.
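As a toy illustration of the concordance analysis (the paired reads below are invented, not the study's data), the DT-versus-DM comparison reduces to counting matching diagnoses:

```python
# Illustrative concordance calculation for a telepathology validation
# study: paired diagnoses by dynamic telepathology (DT) and direct
# microscopy (DM). These example reads are invented.
dt = ["malignant", "benign", "deferred", "malignant", "benign"]
dm = ["malignant", "benign", "malignant", "indeterminate", "benign"]

concordant = sum(a == b for a, b in zip(dt, dm))
concordance_pct = 100.0 * concordant / len(dt)
```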


  1. Chandraratnam E, Santos LD, Chou S, Dai J, Luo J, Liza S, et al. Parathyroid frozen section interpretation via desktop telepathology systems: A Validation study. J Pathol Inform 2018;9:41.
  2. Cima L, Brunelli M, Parwani A, Girolami I, Ciangherotti A, Riva G, et al. Validation of remote digital frozen sections for cancer and transplant intraoperative services. J Pathol Inform 2018;9:34.
  3. Dietz RL, Hartman DJ, Zheng L, Wiley C, Pantanowitz L. Review of the use of telepathology for intraoperative consultation. Expert Rev Med Devices 2018;15:883-90.
  4. Li X, Gong E, McNutt MA, Liu J, Li F, Li T, et al. Assessment of diagnostic accuracy and feasibility of dynamic telepathology in China. Hum Pathol 2008;39:236-42.
  5. Słodkowska J, Pankowski J, Siemiatkowska K, Chyczewski L. Use of the virtual slide and the dynamic real-time telepathology systems for a consultation and the frozen section intra-operative diagnosis in thoracic/pulmonary pathology. Folia Histochem Cytobiol 2009;47:679-84.

   Semi-Supervised Deep Multiple Instance Learning for Breast Cancer Diagnosis Top

Ming Y. Lu1, Richard Chen1, Jing W. Wang1, Faisal Mahmood1

1Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA. E-mail: [email protected]

Background: One in eight women in the United States is diagnosed with some form of breast cancer during her lifetime. The standard of care for diagnosis and prognosis of breast cancer is the subjective analysis of histology slides, which is both time-consuming and suffers from inter- and intraobserver variability. Deep learning has been widely applied to histology classification at the level of patches and small regions of interest (ROIs), yielding remarkable performance when sufficient labeled training data are provided. However, patch-level annotations are often difficult and costly to curate. By considering each labeled image as a collection of many smaller, unlabeled patches, multiple instance learning (MIL)[1] enables training of neural networks for histopathology image classification without patch-level annotations. In this work, we construct a semi-supervised pipeline for overcoming limited labeled data by combining the power of data-efficient self-supervised feature learning via contrastive predictive coding (CPC)[2] with the interpretability and flexibility of attention-based MIL.[3] Methods: For classifying breast histology images as either positive (carcinoma) or negative (non-carcinoma), we use a convolutional neural network consisting of a modified ResNet50 encoder, a dense multi-layer attention network, and a final classification layer. We first segment each image and then crop inside the foreground contours to form bags of unlabeled 256 x 256 patch instances. The encoder is pretrained in a self-supervised manner using CPC. During supervised learning, the patches for each image are encoded into 1024-dimensional embeddings. The attention network aggregates them into a single image-level feature representation by computing a weighted average using their respective attention scores. Finally, the classification layer predicts the probability score for the image using this aggregated representation.
Results: We apply our two-stage CPC + MIL semi-supervised pipeline to the classification of breast cancer data.[4] For each split, we train on 75% of the data (300 images) and validate on the remaining 25% (100 images). We report an average validation accuracy of 96% and area under the ROC curve of 0.983 across 5 random splits. Conclusions: We demonstrate that a deep semi-supervised approach using CPC + attention-based MIL can be effectively applied to the classification of breast cancer histology images without requiring patch-level annotation. Given the flexibility of our approach, we hope to scale to whole slide images in the future and provide a data-efficient deep learning tool that can potentially serve as an additional reader to help pathologists improve reproducibility and diagnostic accuracy.
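The attention-based aggregation step can be sketched in numpy — a simplified, untrained illustration of Ilse et al.-style attention pooling, with made-up weights and patch embeddings (in the real model, V and w are learned jointly with the encoder and classifier):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(patch_embeddings, V, w):
    """Attention pooling (simplified Ilse et al. form):
    score_i = w . tanh(V @ h_i); weights = softmax(scores);
    bag embedding = sum_i weight_i * h_i."""
    scores = np.array([w @ np.tanh(V @ h) for h in patch_embeddings])
    weights = softmax(scores)
    return weights, (weights[:, None] * patch_embeddings).sum(axis=0)

rng = np.random.default_rng(1)
n_patches, dim, hidden = 8, 1024, 64     # 1024-d embeddings as in the abstract
H = rng.normal(size=(n_patches, dim))    # mock CPC-encoded patch embeddings
V = rng.normal(size=(hidden, dim)) * 0.01
w = rng.normal(size=hidden)

weights, bag = attention_pool(H, V, w)
# `bag` is the single image-level representation fed to the classifier layer.
```

Because the weights are produced per patch, they double as an interpretability signal: high-attention patches show which regions drove the image-level prediction.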


  1. Carbonneau MA, Cheplygina V, Granger E, Gagnon G. Multiple instance learning: A survey of problem characteristics and applications. Pattern Recognition 2018;77:329-53.
  2. van den Oord A, Li Y, Vinyals O. Representation Learning with Contrastive Predictive Coding. arXiv preprint arXiv: 1807.03748; 2019.
  3. Ilse M, Tomczak JM, Welling M. Attention-based deep multiple instance learning. arXiv preprint arXiv:1802.04712; 2018.
  4. Aresta G, Araújo T, Kwok S, Chennamsetty SS, Safwan M, Alex V, et al. BACH: Grand challenge on breast cancer histology images. Med Image Anal 2019;56:122-39.

   Multimodal Fusion of Genotypic and Phenotypic Features for Survival Outcome Prediction Top

Richard J. Chen1, Ming Y. Lu1, Jing W. Wang1, Faisal Mahmood1

1Department of Computational Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA. E-mail: [email protected]

Background: The spatial heterogeneity of histopathology tissue has enormous potential for capturing the invasion and progression of cancer cells, and for solving tasks in bioinformatics such as cancer subtyping, biomarker discovery, and survival outcome prediction.[1],[2] Unlike molecular characterization, which only provides average genome-wide profiling, whole-slide images (WSIs) can reveal the inherent phenotypic intratumoral heterogeneity only visible in tissue histology, which exists due to subclonality and tumor-initiating cells in the microenvironment.[3] Despite the wealth of metadata available in WSIs, transcriptome profiling is still the mainstay for many analyses. Through gene expression profiling, the discovery of biomarkers such as IDH1 mutation and MGMT methylation has led to the identification of small-molecule inhibitors that target vulnerabilities in glioblastoma and to incremental improvements in survival outcomes for certain groups of patients. Although these biomarkers partially explain the variance in glioblastoma survival outcomes, why some patients respond poorly to therapy, and why disease recurs in most patients, has yet to be delineated. While the theory on how to computationally integrate histopathology and molecular features is still in its nascent stage, stratifying patients with glioblastoma into signatures that are similar both molecularly and histopathologically would potentially lead to better-defined treatment groups for precision medicine. Multimodal learning has emerged as an interdisciplinary, computational field that seeks to correlate and combine disparate heterogeneous data sources to solve difficult supervised learning tasks such as survival outcome prediction.
Recent advances in survival outcome prediction have been made by incorporating some histology features into survival models using deep learning approaches, but these have been limited to using only image regions of interest (ROIs).[4] In this work, we propose a strategy for multimodal fusion of morphological graph, histological image, and genomic features called Tensor Fusion, which calculates the outer product space of feature embeddings to explicitly model interactions of all features across modalities. We propose the first application of graph convolutional networks in histopathology for the task of survival outcome prediction, and demonstrate that by integrating graph features with histopathology image and genomic features, we are able to outperform unimodal approaches. Methods: We used data from the Cancer Genome Atlas (TCGA) Pan-Cancer Atlas dataset, which contains 769 glioma samples with paired diagnostic WSI, genotyping and transcriptome data and ground-truth survival time labels. ROIs from the diagnostic slides in TCGA were used as representative tissue regions of the WSI. From each image ROI, we modeled the spatial distribution of nuclei as a graph, constructed using K-nearest neighbors. We limited the genomic representation for each glioma case to mutation, copy number variation, and chromosome deletion data. For the histology image, morphological graph and genomic modalities, we trained a unimodal deep survival neural network (SNN) for each, supervised using the Cox partial likelihood loss and the Adabound optimizer for 100 epochs. We then implemented an end-to-end deep multimodal SNN, in which we fuse each feature representation at the penultimate layer using the outer product. A final decision network is built on top of this fusion representation, and we compare the performance of our fusion strategy against vector concatenation as a baseline.
Results: Our results demonstrate that multimodal learning with our proposed fusion mechanism improves on vector concatenation for survival outcome prediction. Our Tensor Fusion approach outperformed all unimodal benchmarks, while fusion via concatenation performed worse than histology image features alone. We are able to use our network to identify histopathological regions associated with tumor invasion and angiogenesis in whole-slide images. Conclusion: We propose a learning strategy for fusing genomic and histology data, which we validate on the TCGA glioma dataset. Future work will include more rigorous investigation of fusion strategies for heterogeneous datasets, as well as a comprehensive pan-cancer analysis using multimodal SNNs on other cancer types in TCGA and on OncoPanel data from the Dana-Farber Cancer Institute.
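The outer-product fusion can be sketched in a few lines of numpy — an illustrative simplification with tiny toy embeddings; appending a constant 1 to each modality vector (an assumption carried over from the general tensor-fusion formulation, not stated in the abstract) preserves unimodal and bimodal interaction terms alongside the trimodal ones:

```python
import numpy as np

def tensor_fusion(img, graph, genomic):
    """Outer product of the three modality embeddings, each augmented
    with a constant 1 so lower-order interactions survive the product."""
    a = np.append(img, 1.0)
    b = np.append(graph, 1.0)
    c = np.append(genomic, 1.0)
    # Outer product over all three modalities, then flatten for the
    # downstream survival decision network.
    fused = np.einsum('i,j,k->ijk', a, b, c)
    return fused.ravel()

h_img, h_graph, h_gen = np.ones(4), np.ones(3), np.ones(2)  # toy embeddings
z = tensor_fusion(h_img, h_graph, h_gen)   # length (4+1)*(3+1)*(2+1) = 60
```

Note the fused dimension grows multiplicatively with the modality dimensions, which is why the fusion is applied at the low-dimensional penultimate layer rather than to raw features.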


  1. Marusyk A, Almendro V, Polyak K. Intra-tumour heterogeneity: A looking glass for cancer? Nat Rev Cancer 2012;12:323-34.
  2. Mikkelsen VE, Stensjøen AL, Berntsen EM, Nordrum IS, Salvesen Ø, Solheim O, et al. Histopathologic features in relation to pretreatment tumor growth in patients with glioblastoma. World Neurosurg 2018;109:e50-8.
  3. Wei JW, Tafe LJ, Linnik YA, Vaickus LJ, Tomita N, Hassanpour S, et al. Pathologist-level classification of histologic patterns on resected lung adenocarcinoma slides with deep neural networks. Sci Rep 2019;9:3358.
  4. Mobadersany P, Yousefi S, Amgad M, Gutman DA, Barnholtz-Sloan JS, Velázquez Vega JE, et al. Predicting cancer outcomes from histology and genomics using convolutional networks. Proc Natl Acad Sci U S A 2018;115:E2970-9.

   Midas Touch or Fool's Gold: Can Digital Pathology Capture the $223 Billion Digital Health Market? A Regulatory Science Perspective Top

Richard Huang1,2, Veronica E. Klepeis1,2

1Department of Pathology, Massachusetts General Hospital, Boston, MA, USA,2Harvard Medical School, Boston, MA, USA.

E-mail: [email protected]

Background: With the global digital health market estimated to reach $223 billion by 2023,[1] penetration of digital pathology into this space has been limited. Of all the FDA-approved artificial intelligence/machine learning (AI/ML)-based medical devices from the past several years, few have come from digital pathology.[2] FDA has proposed new regulatory frameworks to account for the expected growth and iterative nature of AI/ML-based medical devices.[3] It is vital that companies stay on top of the new regulatory paradigms so they can gain the regulatory approvals necessary to bring their products to market. Until a medical technology reaches the market, it cannot help real patients in real-world clinical settings. Will digital pathology take advantage of this change and turn all that it touches into gold, or will it lose out on the quarter-trillion-dollar digital future? Methods: We examined and summarized FDA's proposed precertification (“Developing a Software Precertification Program: A Working Model”) and AI/ML-based SaMD (“Proposed Regulatory Framework for Modifications to AI/ML-Based Software as a Medical Device”) regulatory frameworks.[4],[5] We selected three digital pathology companies that have had a significant presence in or contribution to the Digital Pathology Association or Pathology Visions conferences.[6] The three companies were also selected for their publicized AI/ML-based products that would be categorized as SaMD under FDA's aforementioned frameworks. Publicly available company and product descriptions, as well as news releases about the companies and their products, obtained from the organizations' own websites, were examined in order to perform our mock Excellence Appraisal and Review Pathway Determination.[7],[8],[9] Recommendations on how to demonstrate the Excellence Principles were given where appropriate.
Results: FDA outlines a new approach to organizational precertification based on five “Excellence Principles,” and a four-category risk framework for determining the review pathway for SaMD product approval. The three companies and products examined in our mock Excellence Appraisal and Review Pathway Determination were Paige.AI and its Paige Modules, Huron Digital Pathology and its Index & Search, and Proscia and its DermAI. Paige Modules was categorized as Type IV (highest risk), and both Index & Search and DermAI were categorized as Type III. Under the traditional regulatory model, these products could all require burdensome premarket approval. If these companies were precertified, even the highest-risk product (Type IV) would be eligible for streamlined premarket review. However, none of the organizations examined were compliant with precertification requirements. Conclusions: We are entering an era of “high-performance medicine”,[10] in which advanced technologies such as artificial intelligence could dramatically amplify our natural human abilities to diagnose, treat, and manage patients. FDA has taken the forward-thinking step of proposing new regulatory pathways to embrace the new digital health paradigm. These new pathways would enable companies to gain regulatory approval faster, and therefore enter the market faster, ultimately increasing and improving the digital diagnostic and therapeutic options available to patients. However, digital pathology companies need to be proactive as well. FDA sought public comments on the two proposed regulatory frameworks discussed in our work.[11],[12] Despite these regulatory frameworks having a direct future impact on digital pathology, the field is severely underrepresented in the public comments.
When the Pre-Cert Program first launched in 2017, the FDA started with 9 companies in its pilot program.[13] Now, the FDA is actively soliciting new companies to join their Pre-Cert Test Plan.[14] Digital pathology companies should take advantage of this opportunity and volunteer test cases. If digital pathology is to thrive in the booming digital health market, companies need to be at the forefront of adapting to new regulatory changes.


  1. PRNewswire. Global Digital Health Market is Expected to Attain a Size of $223.7 Billion by 2023; 2018. Available from: [Last accessed on 2019 Sep 06].
  2. The Medical Futurist. FDA Approvals for Smart Algorithms in Medicine in One Giant Infographic; 2019. Available from: [Last accessed on 2019 Sep 06].
  3. Gottlieb S. FDA Announces New Steps to Empower Consumers and Advance Digital Healthcare; 2017. Available from: [Last accessed on 2019 Sep 06].
  4. Food and Drug Administration. Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) - Discussion Paper and Request for Feedback; 2019. Available from: [Last accessed on 2019 Sep 06].
  5. Food and Drug Administration. Developing a Software Precertification Program: A Working Model. Food and Drug Administration; 2019. Available from: [Last accessed on 2019 Sep 06].
  6. Digital Pathology Association. Vendor Directory. Digital Pathology Association; 2019. Available from: [Last accessed on 2019 Sep 06].
  7. Huron Digital Pathology. Huron Digital Pathology; 2019. Available from: [Last accessed on 2019 Sep 06].
  8. Paige.AI. Paige.AI; 2019. Available from: [Last accessed on 2019 Sep 06].
  9. Proscia. Proscia; 2019. Available from: [Last accessed on 2019 Sep 06].
  10. Topol EJ. High-performance medicine: The convergence of human and artificial intelligence. Nat Med 2019;25:44-56.
  11. FDA-2019-N-1185 Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) – Discussion Paper and Request for Feedback; 2019. Available from: [Last accessed on 2019 Sep 06].
  12. FDA-2017-N-4301 Fostering Medical Innovation: A Plan for Digital Health Devices; Software Precertification Pilot Program; 2019. Available from: [Last accessed on 2019 Sep 06].
  13. Food and Drug Administration. FDA Selects Participants for New Digital Health Software Precertification Pilot Program. Food and Drug Administration; 2017. Available from: [Last accessed on 2019 Sep 06].
  14. Food and Drug Administration. Digital Health Software Precertification (Pre-Cert) Program: Participate in 2019 Test Plan. Food and Drug Administration; 2019. Available from: [Last accessed on 2019 Sep 06].

   A 2-Step Full Digital Pathology Implementation in a Multi-Site Academic Pathology Department: First Lessons from the Second Step Top

Catherine Guettier, Sophie Prevot, Eric Poullier1, Eric Adnet1, Olivier Trassard, Jean-François Pomerol2, Pauline Baldo2

Department of Pathology,1IT Department, University Hospital of South-Paris (APHP), France,2TRIBVN Healthcare, Paris, France.

E-mail: [email protected]

Background: University Hospital of South-Paris is organized around 3 sites (Bicêtre, Antoine-Béclère and Paul-Brousse) located on the southern outskirts of Paris within a 15 km perimeter. For better resource allocation, increased technical efficiency and savings, it was decided in 2013 to merge the 3 Pathology Departments on a single site at Bicêtre Hospital. The department includes 12 pathologists, 3 residents and 21 technicians handling 30,000 cases annually. We will discuss how digital pathology has allowed the reorganization of pathology activities across the 3 sites. This presentation covers the key issues that must be addressed to make it possible, such as technical requirements, workflow restructuring, storage strategy, management adaptation, training and staff participation. The perspectives towards full DP adoption and AI in the service will also be explored. Methods: The 1st step: Digital pathology (DP) was prioritized from the beginning as a tool allowing remote frozen sections and enabling multi-site staff meetings and tumor boards. Since July 2013, remote frozen sections using digital slide technology have been performed with an Aperio scanner (Leica) and CaloPix IMS (TRIBVN Healthcare). This initial step proved to be a success both in terms of efficiency for the lab and in terms of physicians' acceptance. The 2nd step: In March 2018, all staff were regrouped in a brand new facility at Bicêtre Hospital. High-throughput scanners (3D Histech P250 & P1000) and pathologists' digital workstations (CaloPix) were purchased thanks to financial support from ARS IdF. The IT infrastructure was completely replaced and the LIS integration (Diamic CS, Dedalus) was updated, with the aim of creating a fully digital, modern pathology service. Results: The use of digital pathology brought benefits to the pre- and post-analytic workflows of the service, mainly concerning service organization, medical time savings, easy access to cases and associated information, and the sharing of cases.
Various issues were encountered during the project, in particular technical ones. These concern digital slide quality, linked to slide preparation and scanning, but also post-scanning issues such as the LIS/IMS interface, slide storage retention shortages and overall adaptation to the digital workflow. Corrective actions have already been taken, mainly in the pre-analytic workflow. There are still opportunities for improvement to address the remaining issues, mainly involving scanner software, the LIS/IMS interface and digitization of the whole workflow. Conclusions: The use of digital pathology has proven effective for the staff concerned. The service's main improvements concern pre-analytic workflow standardization and robustness. Its major perspective involves the development of artificial intelligence projects.

   Quantitative Image Analysis for BCL-2 Immunohistochemistry for Breast Cancer Top

Kareem Hosny1, Yan Xiang1, Robert Freund2, Kerri-Ann Latchmansingh2, Ashley Lentini2

1Department of Pathology and Laboratory Medicine, Hospital of the University of Pennsylvania,2Department of Pathology and Laboratory Medicine, Drexel University College of Medicine, Philadelphia, Pennsylvania, USA.

E-mails: [email protected], [email protected]

Background: The B-cell lymphoma 2 (Bcl-2) proteins are essential for maintaining the balance between cell death and cell proliferation.[1],[2] The best method for calculating Bcl-2 in breast cancer samples that contain large amounts of variance and noise is still the subject of debate. We identified image characteristics of regions of interest (ROIs) associated with diagnostic accuracy and efficiency.[1],[2] Materials and Methods: Forty-eight primary breast cancer specimens were selected and assessed for Bcl-2 protein expression. ROIs were manually defined and annotated to include tumor cells of four Bcl-2 staining intensity levels. After validation, the algorithms were used to examine the impact of the structure, size and number of areas selected as ROIs on diagnostic accuracy and efficiency. Results: We found that the algorithm developed in this study significantly improved correlation coefficients between immunohistochemistry-based and gene expression-based methods (Oncotype DX recurrence score) for predicting breast cancer recurrence risk, and avoided the data skew of the conventional scoring model that we reported earlier (randomly selected ROIs). Access to the algorithm allowed rapid comparison of Bcl-2 counts across ROIs that varied in the number of cells and the selection of fields; the outputs demonstrated that the results vary with the number of cells and ROIs counted. Conclusions: In summary, our in-depth evaluation of viewing patterns and ROI characteristics will be critical to understanding diagnostic errors and sources of distraction. This study indicates that standardization of the number of cells and number of regions selected for analysis should be incorporated into guidelines for Bcl-2 calculations.
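The conclusion that results vary with the number of cells and ROIs counted can be illustrated with a toy simulation (entirely synthetic intensities, not the study's data): the spread of the slide-level Bcl-2 score estimate shrinks as more ROIs are pooled.

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic slide: 200 candidate ROIs, each with a mean Bcl-2 staining
# intensity drawn around a true slide-level score of 2.0 (0-3 scale).
roi_scores = np.clip(rng.normal(loc=2.0, scale=0.6, size=200), 0, 3)

def score_spread(n_rois, n_trials=500):
    """Std-dev of the slide score when estimated from n_rois random ROIs."""
    estimates = [rng.choice(roi_scores, size=n_rois, replace=False).mean()
                 for _ in range(n_trials)]
    return float(np.std(estimates))

spread_small, spread_large = score_spread(3), score_spread(30)
# Pooling more ROIs yields a more stable (lower-variance) score estimate,
# which is the rationale for standardizing ROI counts in guidelines.
```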


  1. Xiang Y, Li L, Hou S, Hosny K, Garcia F, Zarella M. Bcl-2 expression correlates with Oncotype DX recurrence score in mammary carcinoma [abstract]. USCAP; 2018.
  2. Cardoso F, Di Leo A, Larsimont D, Gancberg D, Rouas G, Dolci S, et al. Evaluation of HER2, p53, bcl-2, topoisomerase II-alpha, heat shock proteins 27 and 70 in primary breast cancer and metastatic ipsilateral axillary lymph nodes. Ann Oncol 2001;12:615-20.

   Video Compression for the Expansion of Whole-Slide Imaging into Cytology Top

Jennifer Jakubowski1,2, Mark D. Zarella2

1School of Biomedical Engineering, Science and Health Systems, Drexel University, Philadelphia, PA, USA,2Department of Pathology and Laboratory Medicine, Drexel University College of Medicine, Philadelphia, PA, USA.

E-mail: [email protected]

Background: The use of whole-slide imaging (WSI) to produce digital images of glass slides allows for quick, easy access to slides without the use of a microscope. WSI has many beneficial applications, allowing for electronic sharing and collaboration, annotation for education, artificial intelligence and computational pathology, archival, and integration with electronic workflows and health records.[1] WSI works effectively for many laboratory applications that involve slides with a flat single layer of tissue. However, the 3D-oriented cellular clusters seen on cytology slides for cancer screening pose a unique challenge: multiple focal planes must be captured so that all cells are visible to a pathologist. Modern scanners have z-stacking capabilities that allow multiple focal planes to be captured and saved to a single image file. However, this generates large data files that can be ~10x larger than conventional whole-slide images, requiring increased bandwidth for viewing. This data storage and bandwidth burden has deterred the expansion of WSI to cytology. Methods: To overcome this obstacle, we tested a video compression algorithm that reduces image file size by harnessing the redundancy across focal planes. To analyze the effectiveness of compression, we captured ten z-stacks of cytology whole-slide images and transformed them into sequential video-like frames that emulate the focusing of a microscope. Using open source software (ffmpeg), high efficiency video coding (HEVC) compression[2] was applied to these sequential frame sets. ffmpeg default configurations were used to demonstrate proof of concept, with each frame set converted into a single .mp4 video file as a storage medium. Slides were scanned at 40x using the Hamamatsu NanoZoomer S210. Cytology z-stacks were captured at 2 μm intervals, allowing for an 18 μm thickness range.
Mutual information (MI) was used to measure the degree of redundancy across image frames and was calibrated using rotation, pixel shuffling, and JPEG compression. The deviation of each frame from the mean, expressed as normalized MI, was then evaluated to quantify the redundancy that could potentially be harnessed for compression between frames. Compression ratios achieved on 9 randomly selected cytology slides (3 conventional, 3 ThinPrep, 3 SurePath) were then compared to standard JPEG and JPEG 2000 (denoted JP2) compression ratios to compare algorithm performance. A representative 1.5 x 1.5 mm region was selected to compute these ratios. Images from these regions were derived from JPEG QF=0.80 compressed files. HEVC video files were then converted back to individual frames to gauge image degradation. The structural similarity index (SSIM) was used as a metric to quantify image quality after video compression. Compression was applied in three quality groups: JPEG QF=0.60 (low), JPEG QF=0.70 (medium), and JPEG QF=0.80 (high); HEVC settings were selected so that the compressed image achieved the same SSIM as the corresponding JPEG compressed image. Mean compression ratios for each quality group were calculated from the same 9 slides with standard error. This allows compression performance to be evaluated in conjunction with image quality degradation, so that the 3 compression strategies could be compared fairly. Results: Normalized mutual information (NMI) was highest in the centralmost frames (NMI: 0.36-0.38, frames 4-7) and lowest in the outermost frames (NMI: 0.30 to <0.36, all other frames). Video compression reduced file size by a factor of 2.6-6.1 (median: 3.6) in comparison to standard JPEG compression and by a factor of 1-2.1 (median: 1.4) when compared to JP2 compression (n = 9). Perceptually, only minor impacts on image quality were observed following video compression [Figure 1].
These effects were comparable to JPEG artifacts widely considered acceptable in current practice. SSIM quantification showed that HEVC mean compression ratios were significantly greater (p<0.01, Wilcoxon signed-rank test) than both JPEG and JP2 values at all quality settings tested. Conclusion: Our findings suggest that video compression along focal planes, in addition to standard single-image compression, is a viable solution to the data storage burden associated with whole-slide imaging of cytology. HEVC compression resulted in over a three-fold reduction in file size compared to standard JPEG compression. Further improvement can likely be achieved by experimenting with free parameters of the HEVC algorithm that govern space constants, motion correction, and the unique color/spatial attributes of histology images. We plan to develop a compression algorithm designed specifically for the unique image attributes of cytology image z-stacks.
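The redundancy measurement can be sketched as a histogram-based mutual information estimate between two focal-plane frames — an illustrative implementation on synthetic arrays, not the authors' exact calibration procedure; pixel shuffling (one of the calibration controls mentioned above) destroys the cross-frame redundancy:

```python
import numpy as np

def mutual_information(im1, im2, bins=32):
    """Histogram-based MI (in bits) between two grayscale frames."""
    joint, _, _ = np.histogram2d(im1.ravel(), im2.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of frame 1
    py = pxy.sum(axis=0, keepdims=True)   # marginal of frame 2
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(128, 128)).astype(float)
noisy = frame + rng.normal(scale=40, size=frame.shape)   # mock adjacent plane
shuffled = rng.permutation(frame.ravel()).reshape(frame.shape)

mi_adjacent = mutual_information(frame, noisy)
mi_shuffled = mutual_information(frame, shuffled)  # redundancy destroyed
```

High MI between neighboring focal planes is precisely the redundancy a video codec can exploit across frames.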
Figure 1: Frame 6 of 11 (0 μm). (left) Original; (right) HEVC video compression at 130M



  1. Zarella MD, Bowman D, Aeffner F, Farahani N, Xthona A, Absar SF, et al. A practical guide to whole slide imaging: A white paper from the Digital Pathology Association. Arch Pathol Lab Med 2019;143:222-34.
  2. Sullivan GJ, Ohm JR, Han WJ, Wiegand T. Overview of the high efficiency video coding (HEVC) standard. IEEE Trans Circuits Syst Video Technol 2012;22:1649-68.

   Interpretable Machine Learning Systems to Automatically Assist the Dermatopathologist Workflow Top

Simon M. Thomas1,2, James Lefevre1, Glenn Baxter2, Nicholas A. Hamilton1

1Institute for Molecular Bioscience, University of Queensland, St. Lucia, Australia,2Department of Pathology, MyLab Pty. Ltd., Salisbury, Australia.

E-mail: [email protected]

Background: Deep Learning algorithms have been shown to outperform expert humans in classification problems, but remain highly criticised for being “black boxes”. To integrate this technology successfully, we need machine decision processes not just to correlate with human decisions, but to be implemented so that they generate full-context outputs. Doing so makes them more feasible to interrogate and explain. Skin cancer classification provides a unique opportunity for experimentation due to the reliable presentation of histology specimens on a slide, i.e. the useful context of stratified skin layers in both healthy and non-healthy tissue. Methods: Our approach is to train deep neural networks to perform dense classification tasks, assigning each pixel in an image to a meaningful class. By constraining the entire input domain, the network is forced to learn representations that are explicable to humans. These interpretable representations can then be used to perform an overall classification, as well as routine characterization tasks such as measuring surgical margin clearance. To test our hypothesis, we used 290 histological images of non-melanoma skin cancer: Basal Cell Carcinoma (BCC), Squamous Cell Carcinoma (SCC), and Intraepidermal Carcinoma (IEC). For each image, we hand-annotated pixels into 12 classes: Glands, Inflammation, Follicles, Hypodermis, Reticular Dermis, Papillary Dermis, Epidermis, Keratin, Background, BCC, SCC, and IEC. We then trained and fine-tuned a ResNet50 network with a decoder architecture to perform high resolution segmentation. This was done at several resolutions (downscale factors 2x, 5x and 10x). Results: Our best network achieved an overall pixel accuracy of 85.3%, producing high quality, interpretable segmentations. Using these outputs to perform a per-image classification (Healthy, BCC, SCC, IEC), we achieved an accuracy of 96.6% (no false negatives) and 99.1% for Cancer versus Non-Cancer.
We further demonstrate the feasibility of automatic surgical margin clearance measurement by using these segmentations to perform routine measurements. Conclusion: Our novel results bring us closer to the interpretability that is necessary for healthcare by providing a domain-aware system that performs full-context classification. At the same time, we demonstrate for the first time how these systems could be meaningfully integrated into the dermatopathologist workflow.
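The per-image classification step described above, reducing a dense segmentation to a single slide-level label, can be sketched as follows. The class names follow the abstract, but the class list's index order, the tumour-fraction decision rule, and the 1% threshold are illustrative assumptions rather than the authors' published method.

```python
import numpy as np

# Class list per the abstract; ordering is an assumption.
CLASSES = ["Background", "Keratin", "Epidermis", "Papillary Dermis",
           "Reticular Dermis", "Hypodermis", "Glands", "Follicles",
           "Inflammation", "BCC", "SCC", "IEC"]
TUMOUR = ("BCC", "SCC", "IEC")

def image_label(seg: np.ndarray, min_tumour_frac: float = 0.01) -> str:
    """Reduce a per-pixel segmentation (class indices) to one label:
    the dominant tumour class if it covers enough pixels, else Healthy."""
    counts = np.bincount(seg.ravel(), minlength=len(CLASSES))
    fracs = {c: counts[CLASSES.index(c)] / counts.sum() for c in TUMOUR}
    name, frac = max(fracs.items(), key=lambda kv: kv[1])
    return name if frac >= min_tumour_frac else "Healthy"

seg = np.zeros((100, 100), dtype=int)       # all Background
seg[40:60, 40:60] = CLASSES.index("BCC")    # a 4% BCC region
print(image_label(seg))                     # → BCC
```

Because the decision is made from the full segmentation map rather than a bare image-level score, each label can be traced back to the pixels that produced it.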

   Implementation of deep learning, HALO-AI, into routine clinic – tumor cell count study Top

Taro Sakamoto1, Tomoi Furukawa1, Kishio Kuroda1, Yoshiaki Zaizen1, Yukio Kashima2, Akira Yoshikawa3, Junya Fukuoka1

1Department of Pathology, Nagasaki University of Medical Sciences, Nagasaki,2Department of Pathology, Awaji Medical Center, Hyogo,3Department of Pathology, Kameda Medical Center, Chiba, Japan.

E-mail: [email protected]

Background: Image analysis using artificial intelligence (AI), especially deep learning, is gaining a great deal of attention as a supplementary tool for pathological diagnosis. We previously reported that tumor cell counts are more accurate when AI is implemented. Our aim was to develop a workflow to apply the AI platform to daily practice. Here, we prospectively investigated whether AI can improve pathologists' tasks. Methods: We prospectively analyzed 53 biopsy cases of lung adenocarcinoma. Before signing out cases, whole slide imaging (WSI) of each case was submitted to the AI analytical team, and the ratio of tumor cells to total cells was analyzed by HALO AI® (Indica Labs, Albuquerque) (Ratio-AI). During the sign-out session for those cases, attending pathologists estimated the ratio without seeing the segmentation data from HALO AI (Ratio-path). The pathologists then reviewed the AI segmentation and judged its quality at one of three levels: good, fair, or poor. The ratios given by HALO-AI were shown to the pathologists afterward, and the pathologists decided the final ratio (Ratio-final) [Figure 1]. We compared these three ratios using Spearman's rank correlation coefficient. Results: Of 53 cases, 26, 16, and 11 were judged as good, fair, and poor, respectively. The correlations with Ratio-final were ρ=0.8347 for Ratio-AI [Figure 2] and ρ=0.8061 for Ratio-path [Figure 3]. When split into good/fair and poor groups, the correlations were ρ=0.932 [Figure 2] and ρ=0.790 [Figure 3]. Conclusion: We successfully implemented a trained AI platform into routine practice, which improved the quality of pathological diagnosis. Judgment of AI output quality by pathologists is critical for clinical use.
Figure 1: Our AI workflow

Click here to view
Figure 2: Ratio-AI vs Ratio-Final

Click here to view
Figure 3: Ratio-Path vs Ratio-Final

Click here to view
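Spearman's rank correlation, used above to compare Ratio-AI, Ratio-path, and Ratio-final, is the Pearson correlation of the ranks. A minimal tie-free implementation follows; the example values are invented for illustration and are not study data.

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman's rank correlation for tie-free data:
    the Pearson correlation of the ranks."""
    def ranks(a):
        order = np.argsort(a)
        r = np.empty(len(a))
        r[order] = np.arange(1, len(a) + 1)   # rank 1 = smallest value
        return r
    return np.corrcoef(ranks(np.asarray(x)), ranks(np.asarray(y)))[0, 1]

# Invented example (percent tumor cells), not study data:
ratio_ai    = [10, 40, 80, 25, 60]
ratio_final = [12, 38, 75, 30, 55]   # same ordering, so rho = 1.0
print(round(spearman_rho(ratio_ai, ratio_final), 4))  # → 1.0
```

Rank correlation fits this study well: it measures whether AI and pathologist ratios order the cases the same way, without assuming the raw percentages agree on an absolute scale. (Tied ratios would need the usual midrank correction, omitted here.)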

   Lessons Learned from Validation of an Image Search Engine for Histopathology Top

Shivam Kalra1,2, Charles Choi2, Sultaan Shah2, Liron Pantanowitz3, H. R. Tizhoosh1,4

1Kimia Lab, University of Waterloo, Waterloo, Ontario, Canada,2Huron Digital Pathology, St. Jacobs, ON, Canada,3Department of Pathology, University of Pittsburgh Medical Center, PA, USA,4Vector Institute, Toronto, ON, Canada. E-mail: [email protected]

Background: Searching for similar images in large archives of histopathology has been the subject of investigation in recent years.[1],[2] Retrieving similar cases can give pathologists access to the evidence archived in previously diagnosed cases. This work reports the results of validating a search engine specialized for digital pathology. We represent whole slide images (WSIs) compactly by selecting a few patches/tiles and converting them into binary codes. Representing WSIs with a “mosaic of patches” and converting them into a small number of barcodes, called a “Bunch of Barcodes” (BoB), enabled us to search fast and store the data efficiently. We used 300 WSIs from the University of Pittsburgh Medical Center (UPMC) and 2,020 WSIs from The Cancer Genome Atlas Program (TCGA), provided by the National Cancer Institute. Together, the datasets amount to more than 4,000,000 patches of 1000x1000 pixels when dense sampling is employed for tissue clustering. We report promising classification accuracies for WSI search when enough cases were available, as well as correlation values between pathologist perception and the search results when patches were examined. Methods: The recent success of artificial intelligence has opened new horizons for diagnostic pathology.[3] In our previous work, we proposed a method for representing an entire WSI with a small set of patches, referred to as a mosaic.[4],[5],[6] The idea of “mosaicking” a tissue sample is crucial for the feasibility of image search. Subsequently, we utilized barcodes for image representation and characterization. A whole-slide image is indexed by converting its mosaic of patches to a “Bunch of Barcodes” (BoB). Employing binary information accelerates the search and eases the computation and storage load. Ours is a complete and functioning search engine for indexing WSIs, with a major emphasis on the needs and realistic infrastructure of laboratories and clinics. 
Results: To validate image search capabilities, we used two different datasets: (1) a private dataset of 300 H&E-stained WSIs across more than 80 different primary diagnoses from multiple organs, provided by the University of Pittsburgh Medical Center (UPMC), and (2) 2,020 WSIs taken from the NCI's TCGA repository of more than 33,000 WSIs. [Figure 1] shows the search results as a function of indexing percentage and the number of retrievals. For some primary sites (e.g., breast), high accuracy was easily achievable, whereas for others (e.g., thymus) it was not. [Figure 2] shows the relationships between different sites according to an analysis of the top search results. Conclusions: We validated an image search engine for digital pathology. The image search is generally based on a combination of supervised and unsupervised algorithms. Using pre-trained networks, deep features can characterize images or their regions of interest. We tested the search on more than 2,300 WSIs. Search queries returned accurate results when evaluated as a classification task (which is, of course, quite restrictive). We also examined the conformance of search results with the pathologists' evaluations.
Figure 1: Sample results: For both breast and thymus, we observed that retrieving more similar cases increases the likelihood of correct diagnosis through majority voting. In contrast, denser sampling of the tissue may or may not have a considerable impact on accuracy: whereas sampling did not play a role for breast, for thymus more sampling meant higher accuracy

Click here to view
Figure 2: The chord diagram of search relationships

Click here to view
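The core retrieval step in a barcode-based search such as the one described above is a nearest-neighbour lookup under Hamming distance. A minimal sketch follows, with a toy 4-bit index standing in for real BoB barcodes; the specific codes and distances are illustrative, not Yottixel's implementation.

```python
import numpy as np

def hamming_search(query, index, k=3):
    """Indices and distances of the k index barcodes nearest to the
    query under Hamming distance (count of differing bits)."""
    d = (index != query).sum(axis=1)
    nearest = np.argsort(d, kind="stable")[:k]
    return nearest, d[nearest]

# Toy 4-bit barcode index; real BoB codes would be much longer.
index = np.array([[0, 0, 0, 0],
                  [1, 1, 1, 1],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 0, 1, 1]])
ids, dists = hamming_search(np.array([0, 1, 0, 1]), index)
print(ids, dists)   # exact match (barcode 2) first, at distance 0
```

This is why binary codes ease the computation and storage load: each barcode is a bit string, distances are bit-count operations, and the k nearest mosaics can then be mapped back to their WSIs for majority voting.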


  1. Zheng L, Wetzel AW, Gilbertson J, Becich MJ. Design and analysis of a content-based pathology image retrieval system. IEEE Trans Inf Technol Biomed 2003;7:249-55.
  2. Mehta N, Alomari RS, Chaudhary V. Content based sub-image retrieval system for high resolution pathology images using salient interest points. Conf Proc IEEE Eng Med Biol Soc 2009;2009:3719-22.
  3. Tizhoosh HR, Pantanowitz L. Artificial intelligence and digital pathology: Challenges and opportunities. J Pathol Inform 2018;9:38.
  4. Kumar MD, Babaie M, Tizhoosh HR. Deep barcodes for fast retrieval of histopathology scans. In: 2018 International Joint Conference on Neural Networks (IJCNN). IEEE; p. 1-8.
  5. Tizhoosh HR, Zhu S, Lo H, Chaudhari V, Mehdi T. Minmax Radon barcodes for medical image retrieval. In: Advances in Visual Computing. Springer International Publishing; p. 617-27.
  6. Kalra S, Choi C, Shah S, Pantanowitz L, Tizhoosh HR. Yottixel – An image search engine for large archives of histopathology whole slide images. Submitted to Medical Image Analysis; August, 2019.

   Secure Cloud-Based Research Platform for Whole Slide Biomarker Analysis Top

Elizabeth Edwards1, Daniel Eversole1, Peter Miller1

1Research and Development, Akoya Biosciences, Marlborough, MA, USA. E-mail: [email protected]

Background: Spatial biomarker discovery and exploration tools for the assessment of multiplex immunofluorescence (mIF) tissue sections are often locally based, leading to bottlenecks in processing and difficulty in data sharing. Existing web browsing tools do not scale to the high-dimensional, high-complexity experiments used in biomarker discovery, and fast whole slide acquisition has only intensified these concerns. Even as cloud-based collaborative workflows come online, organizations must continue their work on ongoing and extensive historical LAN-locked projects; migration to the cloud must accommodate these without disruption. We demonstrate a solution that combines a LAN-based solution with a tightly integrated secure cloud hub, enabling users to operate in both environments as they transition to fully cloud-based workflows. Methods: Digital slide analysis requires data to be proximate to the processor for speed and low latency; simple cloud storage of imagery for LAN analysis is prohibitive due to long download times. To address these concerns, a unique hybrid solution was devised, combining a LAN-based Network Attached Storage (NAS) mirror (cache) with bulk cloud storage as follows:

  • A LAN-based transport app transitions data from the scanning hardware to the local mirror. This can occur at low-traffic time to minimize the local network impact
  • A gateway solution encrypts and stores the data in secure AWS S3 storage, preventing accidental local data loss.

Imagery is always close to processing (LAN or cloud-based); data and metadata are held in a HIPAA-compliant cloud. A study-based browser enables viewing of imagery and analysis results; administrators can invite collaborators across organizational and IT boundaries, with data access defined by the owning organization. Containerized cloud image processing ensures consistent calculations regardless of local IT reconfiguration and updates. Results: Continuing LAN-based Workflows with Infinite Storage: Proxima enables organizations with current LAN-based workflows to continue working as planned while gaining secure, recoverable, effectively unlimited cloud-based storage for their studies. The 16TB NAS serves two purposes:

  • A local mirror of the cloud data. Users are able to see all data stored in the cloud as if it were local, even when cloud storage exceeds the 16TB NAS limit. Active data remains cached to provide LAN-speed access
  • An uploader for new data. Data that arrives on the NAS is immediately encrypted and synced to the cloud.

Data arriving in cloud storage is catalogued by Proxima and stored in per-organization S3 buckets; this data can be retrieved rapidly in case of accidental deletion from the 16TB local cache. Encryption is managed per organization for increased security. Meanwhile, LAN-based workflows can continue using the NAS as a shared drive with local high-speed access, as shown in [Figure 1].
Figure 1: Architecture diagram showing LAN-based scanner, NAS, and end users with connection to cloud-based S3 storage, Proxima Server, and scalable back-end processing

Click here to view
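The NAS-to-cloud sync decision described above (upload anything new or changed on the local mirror) reduces to a manifest comparison. A minimal sketch using content hashes follows; the file names and manifest format are hypothetical, not Proxima's actual protocol.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Content hash used to detect new or changed files."""
    return hashlib.sha256(data).hexdigest()

def files_to_upload(local: dict, cloud: dict) -> list:
    """File names on the local NAS mirror whose hash is absent from,
    or different in, the cloud manifest: the set the uploader must
    encrypt and sync to S3."""
    return sorted(name for name, h in local.items() if cloud.get(name) != h)

# Hypothetical manifests (name -> content hash):
local = {"a.qptiff": sha256_hex(b"scan-a"),
         "b.qptiff": sha256_hex(b"scan-b-v2"),   # changed locally
         "c.qptiff": sha256_hex(b"scan-c")}      # new on the NAS
cloud = {"a.qptiff": sha256_hex(b"scan-a"),
         "b.qptiff": sha256_hex(b"scan-b-v1")}
print(files_to_upload(local, cloud))   # → ['b.qptiff', 'c.qptiff']
```

Comparing hashes rather than timestamps makes the sync set independent of clock skew between the scanner, the NAS, and the cloud; only files whose content actually differs are re-uploaded.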

Transitioning to a Cloud-based Workflow: Cloud-based workflows provide multiple benefits, such as collaboration across geographical and organizational boundaries, reproducible results in the face of changing IT environments, and scalable computing power without capital expense. Still, the transition from a LAN-based workflow to a cloud-based one may need to occur gradually due to existing protocols and the need for validation. Proxima enables organizations to continue existing workflows and transition to the cloud only when ready; cloud and LAN workflows can co-exist. Even during LAN workflows, Proxima organizations can take advantage of shared access for image viewing and review. Once data is in the cloud, organizations can enable users for viewing.

  • Organizations manage their cloud-based user pool; they can limit user access to within-organization only, or they can extend invitations to external collaborators
  • Data access is managed per-study; the study owner has the ability to invite users to view data. The ownership of the data is maintained by the organization
  • Users view whole slide imagery using their local browser with secure connections and authorization / authentication on a per-organization basis.

Additionally, owners can manage the life cycle of their studies: studies can be kept readily available in S3 Standard storage, placed into S3 Glacier storage when the data is not actively used but must be retained, or permanently deleted with certification according to data retention policies. See [Figure 1] for AWS bucket storage options. When ready, the organization can extend the viewing and collaboration experience to include image analysis.

  • Auto-scaled, cloud-based processing can rapidly analyze whole slide data
  • Individual analysis protocols are containerized for robust, repeatable results, regardless of the end-user's operating system
  • Results are both viewable and explorable in a whole slide context for all users with privileges.

Conclusions: Biomarker discovery exploration in both LAN-based and cloud workflows is achieved using this architecture, enabling studies at scale.

   Cloud-Based Open Source Digital Pathology Data Analysis Environment Top

Kent Johnson1, Peter Miller1, Carla Coltharp1

1R&D, Akoya Biosciences, Marlborough, MA, USA.

E-mail: [email protected]

Background: Multiplex immunofluorescence (mIF) produces a wealth of data in a high-dimensional space; phenotypes, cell densities, distances, clusters of cells, and phenotype relationships are all important. Digital pathology workflows are becoming widespread for mIF, and users want to process large studies and share datasets easily, but standards are not yet in place to facilitate this. We describe an open, extensible platform for sharing, analysis, and visualization. Furthermore, it integrates with open-source tools so researchers can write their own custom analyses if desired. Methods: The platform, under development, includes the following elements running as micro-services in a HIPAA-compliant cloud environment:

  • Whole slide images served via XYZ protocol to view samples at any spatial scale
  • Rendering control for selection of appearance
  • On-demand unmixing of multispectral imagery
  • Cell segmentation and phenotyping
  • Integration with R,[1] including leaflet,[2] Shiny,[3] and Akoya's phenoptr[4] spatial analysis package
  • All imagery and metadata are encrypted, with authorization via Auth0 tokens

Open source scripts and tools are used wherever possible.

Here, we demonstrate the use of this platform in the context of the Phenoptics™ workflow, which involved:

  • Staining formalin-fixed paraffin-embedded (FFPE) samples of primary tumors using Opal™ reagents to visualize cells positive for CD8, FoxP3, PD-1, PD-L1, CD68, and a tumor marker (PanCK or Sox10+S100 cocktail) on the same slide.
  • Multispectral scanning on Vectra® Polaris™ using MOTiF™ technology (0.5 μm/pixel resolution).
  • Analysis algorithm development using Phenochart™ and inForm® software to segment cells and identify their phenotypes.
  • Batch processing in the cloud to apply algorithms across the entire slide.
  • Aggregation and high-level visualization of cell phenotyping results with open-source R packages and scripts

Results: Improved digital pathology workflow with 3rd-party integration tools [Figure 1]. Conclusions: The open source digital pathology platform we are developing improves whole slide workflows by:
Figure 1: (a) Parallel processing of inForm cell segmentation and phenotyping algorithms in the cloud reduces analysis times > 5-fold; larger improvements are expected as resources are scaled up. (b) Authentication allows secure local access to cloud-based imagery with 3rd-party apps such as the slide viewer built with the R Shiny and leaflet packages shown in Figure 2. (c) Secure, local access to cloud-based imagery allows flexibility to create custom scripts for analysis and visualization of spatial interactions in the tumor microenvironment. An example implementation is shown in Figure 3: a slide and phenotype viewer, built as an open-source R package

Click here to view
Figure 2: Authenticated slide viewer with access to cloud-based imagery built in R

Click here to view
Figure 3: Cell phenotype maps of a lung cancer tissue overlaid onto cloud-based whole slide scan imagery. 7-color Opal™ panel: DAPI, PanCK (Opal 690), CD8 (Opal Polaris 480), PD-1 (Opal Polaris 780), PD-L1 (Opal 520), CD68 (Opal 620), FoxP3 (Opal 570). Top-left panel is the unmixed multispectral scan. Remaining panels show maps of cell density for different phenotypes based on marker expression and/or proximity measurements (listed above or below each panel). Bottom panels show zoomed-in views of the maps shown in the top panels

Click here to view

  • Enabling secure sharing of imagery and analysis results
  • Facilitating open-source script sharing for flexible data exploration
  • Reducing analysis time with cloud processing.


  1. R Core Team. R: A language and environment for statistical computing. Version 3.5.3. R Foundation for Statistical Computing, Vienna, Austria; 2019. Available from: [Last accessed on 2019 Mar 09].
  2. Cheng J, Karambelkar B, Xie Y. Leaflet: Create Interactive Web Maps with the JavaScript 'Leaflet' Library. R package version 2.0.2; Available from: [Last accessed on 2019 Mar 09].
  3. Chang W, Cheng J, Allaire JJ, Xie Y, McPherson J. Shiny: Web Application Framework for R. R package version 1.3.2.; Available from: [Last accessed on 2019 Mar 09].
  4. Johnson KS. Phenoptr: inForm Helper Functions. R package version 0.2.3. Akoya Biosciences; Available from: [Last accessed on 2019 Mar 09].

   Using Microsoft Power BI for Real-Time Analysis of a Distributed Whole Slide Scanning Operation Top

Robin L Dietz1, Matthew O'Leary2, Anthony Piccoli2, Jennifer Picarsic1, Douglas J Hartman1, Liron Pantanowitz1

1Department of Pathology, UPMC, Pittsburgh, PA,2Division of Information Services, UPMC, Pittsburgh, PA Email: [email protected]

Background: Digitizing thousands of slides in a pathology department for clinical use generates vast amounts of data from whole slide scanners. Quantifying these data to monitor the performance of different scanners at various locations was necessary at our institution to track volume and adjust staffing needs accordingly. The aim of this study was to perform regular data analytics on our scanning operation. Methods: Data from scanners populating an SQL database were imported into Microsoft PowerBI (Business Intelligence). PowerBI allows real-time analysis of scanner data and fast development of unique data queries, such as the number of scanned and unmatched slides (i.e., digital images not linked to the laboratory information system). The number of unmatched slides was found by querying the number of “successful” scans without an associated barcode read. Results: Over 16 months, 8 Leica AT2 whole slide scanners generated >120,000 digital slides [Figure 1]a. A query of unmatched slides revealed the scanners where this problem occurred and helped identify root causes [Figure 1]b. Further inquiry into the failure of scanners to read certain barcodes showed that immunohistochemistry slides with 1D barcodes and 2D barcode labels printed for outside consult cases had the highest rates of scanner barcode-reading failure [Figure 2]. Conclusions: Real-time analysis of our scanner-generated database was successfully implemented using Microsoft PowerBI. This has helped guide decisions concerning scanner utilization and scan technician staffing needs, and has quantified bottlenecks such as failed barcode reads in our high-volume scanner workflow. These findings prompted the adoption of new slide labels and barcode reader management in our department to better streamline our digital pathology operation.
Figure 1: (a) Screenshot of the PowerBI workspace showing the monthly scan totals. (b) Database query of the monthly totals of unmatched slides.

Click here to view
Figure 2: Comparison of the different types of barcodes used, with varying success. (a) Scanners could only be set to read 2D barcodes, resulting in failure to read 1D barcodes. (b) 2D labels are sometimes printed out of alignment, resulting in scanner failures. (c) This was also seen on smaller yellow labels used for consultation cases. (d) Etched labels were found to be the most successful

Click here to view
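The unmatched-slide query described in the Methods ("successful" scans without a barcode read) can be illustrated against a toy database. The table and column names below are assumptions for illustration; the abstract does not describe the actual scanner schema.

```python
import sqlite3

# Hypothetical scanner table; the real schema is not described.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE scans (
                   slide_id INTEGER PRIMARY KEY,
                   scanner  TEXT,
                   status   TEXT,
                   barcode  TEXT)""")
con.executemany("INSERT INTO scans VALUES (?, ?, ?, ?)", [
    (1, "AT2-01", "successful", "2D:12345"),
    (2, "AT2-01", "successful", None),     # unmatched: no barcode read
    (3, "AT2-02", "failed",     None),     # failed scan, not "unmatched"
    (4, "AT2-02", "successful", None)])    # unmatched

# Unmatched slides per the abstract: successful scans with no barcode.
unmatched = con.execute("""
    SELECT scanner, COUNT(*) FROM scans
    WHERE status = 'successful' AND barcode IS NULL
    GROUP BY scanner ORDER BY scanner""").fetchall()
print(unmatched)   # → [('AT2-01', 1), ('AT2-02', 1)]
```

Grouping by scanner is what surfaces the per-device pattern the abstract describes: a single query shows which scanners accumulate unmatched slides and therefore where barcode or label problems should be investigated.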





