DeepCIN: Attention-based cervical histology image classification with sequential feature modeling for pathologist-level accuracy
Sudhir Sornapudi1, R Joe Stanley1, William V Stoecker2, Rodney Long3, Zhiyun Xue3, Rosemary Zuna4, Shellaine R Frazier5, Sameer Antani3
1 Department of Electrical and Computer Engineering, Missouri University of Science and Technology, Rolla, MO, USA
2 Stoecker and Associates, Rolla, MO, USA
3 Lister Hill National Center for Biomedical Communications, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
4 Department of Pathology, University of Oklahoma Health Sciences Center, Oklahoma City, OK, USA
5 Department of Surgical Pathology, University of Missouri Hospitals and Clinics, Columbia, MO, USA
Correspondence Address:
Dr. R Joe Stanley, Department of Electrical and Computer Engineering, Missouri University of Science and Technology, 127 Emerson Electric Co. Hall, 301 W. 16th Street, Rolla, MO, USA
Source of Support: None, Conflict of Interest: None
DOI: 10.4103/jpi.jpi_50_20
Background: Cervical cancer is one of the deadliest cancers affecting women globally. Cervical intraepithelial neoplasia (CIN) assessment using histopathological examination of cervical biopsy slides is subject to interobserver variability. Automated processing of digitized histopathology slides has the potential for more accurate classification of CIN grades, from normal to increasing grades of pre-malignancy: CIN1, CIN2, and CIN3.
Methodology: Cervical disease is generally understood to progress from the bottom (basement membrane) to the top of the epithelium. To model this relationship between disease severity and the spatial distribution of abnormalities, we propose a network pipeline, DeepCIN, that analyzes high-resolution epithelium images (manually extracted from whole-slide images) hierarchically by focusing on localized vertical regions and fusing this local information to determine the Normal/CIN classification. The pipeline contains two classifier networks: (1) a segment-level sequence generator, trained with weak supervision, that produces feature sequences from cross-sectional vertical segments so as to preserve the bottom-to-top feature relationships in the epithelium image data, and (2) an image-level classifier, an attention-based fusion network that predicts the final CIN grade by merging the vertical segment sequences.
Results: In addition to the CIN classification, the model reports the contribution of each vertical segment to the predicted CIN grade.
Conclusion: Experiments show that DeepCIN achieves pathologist-level CIN classification accuracy.
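To make the two-stage design described above concrete, the following is a minimal PyTorch sketch of the general idea: a segment-level encoder turns each vertical segment (an ordered bottom-to-top sequence of patch features) into a feature vector, and an attention-based fusion module weights and merges the segment features into an image-level CIN prediction while exposing per-segment contributions. The module names, feature dimensions, and the GRU/linear-attention choices here are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn


class SegmentEncoder(nn.Module):
    """Encodes one vertical segment (bottom-to-top patch sequence) into a feature vector."""

    def __init__(self, feat_dim: int = 256, hidden_dim: int = 128):
        super().__init__()
        # A recurrent layer preserves the bottom-to-top ordering of patch features.
        self.rnn = nn.GRU(feat_dim, hidden_dim, batch_first=True)

    def forward(self, patch_feats: torch.Tensor) -> torch.Tensor:
        # patch_feats: (batch, num_patches, feat_dim), ordered basement membrane -> top
        _, h_n = self.rnn(patch_feats)
        return h_n[-1]  # (batch, hidden_dim)


class AttentionFusion(nn.Module):
    """Weights segment features with learned attention and predicts the CIN grade."""

    def __init__(self, hidden_dim: int = 128, num_classes: int = 4):
        super().__init__()
        self.attn = nn.Linear(hidden_dim, 1)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, seg_feats: torch.Tensor):
        # seg_feats: (batch, num_segments, hidden_dim)
        scores = self.attn(seg_feats).squeeze(-1)            # (batch, num_segments)
        weights = torch.softmax(scores, dim=-1)              # per-segment contributions
        fused = (weights.unsqueeze(-1) * seg_feats).sum(1)   # (batch, hidden_dim)
        return self.classifier(fused), weights               # class logits + attention weights


if __name__ == "__main__":
    encoder, fusion = SegmentEncoder(), AttentionFusion()
    # Toy input: one image split into 10 vertical segments, each a sequence of 8 patch features.
    segments = torch.randn(10, 8, 256)
    seg_feats = encoder(segments).unsqueeze(0)   # (1, 10, 128)
    logits, weights = fusion(seg_feats)
    print(logits.shape, weights.shape)           # torch.Size([1, 4]) torch.Size([1, 10])
```

The returned attention weights correspond to the per-segment contributions mentioned in the Results, i.e., how strongly each vertical region influenced the predicted grade.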