Journal of Pathology Informatics


ORIGINAL ARTICLE
Year : 2020  |  Volume : 11  |  Issue : 1  |  Page : 5

Limited number of cases may yield generalizable models, a proof of concept in deep learning for colon histology


1 Department of Pathology and Laboratory Medicine, University of California, Sacramento, CA, USA
2 Division of Biostatistics, UC Davis Genome Center, Genome and Biomedical Sciences Facility, University of California, Davis, CA, USA

Correspondence Address:
Dr. Hooman H Rashidi
Department of Pathology and Laboratory Medicine, University of California, Davis 4400 V Street, Sacramento 95817, CA
USA
Dr. John Paul Graff
Department of Pathology and Laboratory Medicine, University of California, Davis 4400 V Street, Sacramento 95817, CA
USA

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/jpi.jpi_49_19


Background: Little is known about the minimum number of slides required to generate image datasets that yield generalizable machine-learning (ML) models. In addition, a common assumption within deep learning is that increasing the number of training images will always enhance accuracy, and that a model's initial validation accuracy correlates well with its generalizability. In this pilot study, we tested the above assumptions to gain a better understanding of such platforms, especially when data resources are limited.

Methods: Using 10 colon histology slides (5 carcinoma and 5 benign), we acquired 1000 partially overlapping images (Dataset A) that were then used to train and test three convolutional neural networks (CNNs), ResNet50, AlexNet, and SqueezeNet, on the simple task of classifying colon histopathology as benign or malignant. Different quantities of images (10–1000) from Dataset A were used to construct >200 unique CNN models whose performances were individually assessed. Each model was first assessed on 20% of Dataset A's images (excluded from the training phase) to obtain its initial validation accuracy (internal accuracy), and then on Dataset B (a very distinct secondary test set acquired from public-domain online sources) to obtain its generalization accuracy.

Results: All CNNs showed similar peak internal accuracies (>97%) on the Dataset A test set. Peak accuracies on the external novel test set (Dataset B), an assessment of the ability to generalize, showed marked variation (ResNet50: 98%; AlexNet: 92%; and SqueezeNet: 80%). The models with the highest accuracy were not generated using the largest training sets, and a model's internal accuracy did not always correlate with its generalization accuracy. These results were obtained using an optimized number of cases and controls.
Conclusions: Increasing the number of images in a training set does not always improve model accuracy, and significant numbers of cases may not always be needed for generalization, especially for simple tasks. Different CNNs reach peak accuracy with different training set sizes. Further studies are required to evaluate the above findings in more complex ML models prior to using such ancillary tools in clinical settings.
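The data-splitting scheme described in the Methods can be sketched as follows. This is a minimal sketch only: the nesting of the training subsets, the specific subset sizes, and the random seed are illustrative assumptions, and `fine_tune_cnn`/`evaluate` are hypothetical placeholders for the CNN training and evaluation steps, which the abstract does not specify.

```python
# Sketch of the experimental design: 1000 image tiles from Dataset A,
# 20% held out as the internal test set, and a family of models trained
# on progressively larger subsets of the remaining images.
import random

random.seed(0)
dataset_a = list(range(1000))        # stand-ins for the 1000 image tiles
random.shuffle(dataset_a)

internal_test = dataset_a[:200]      # 20% never seen during training
train_pool = dataset_a[200:]         # remaining 800 available for training

# Illustrative training-set sizes used to build the family of models
training_sizes = [10, 50, 100, 200, 400, 800]
training_subsets = {n: train_pool[:n] for n in training_sizes}

for n, subset in training_subsets.items():
    # No training image may leak into the internal test set
    assert not set(subset) & set(internal_test)
    # model = fine_tune_cnn(subset)              # ResNet50 / AlexNet / SqueezeNet
    # internal_acc = evaluate(model, internal_test)
    # external_acc = evaluate(model, dataset_b)  # Dataset B: independent external set
```

Comparing `internal_acc` with `external_acc` across the subset sizes is what lets the study test whether more training images, or higher internal accuracy, actually predict better generalization.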

