Journal of Pathology Informatics

RESEARCH ARTICLE
Year: 2019  |  Volume: 10  |  Issue: 1  |  Page: 39

Whole-slide image focus quality: Automatic assessment and impact on AI cancer detection


Timo Kohlberger1, Yun Liu1, Melissa Moran1, Po-Hsuan Cameron Chen1, Trissia Brown2, Jason D Hipp3, Craig H Mermel1, Martin C Stumpe4 
1 Google Health, Palo Alto, CA, USA
2 Work done at Google Health via Advanced Clinical, Deerfield, IL, USA
3 Google Health, Palo Alto, CA; Current Affiliation: AstraZeneca, Gaithersburg, MD, USA
4 Google Health, Palo Alto, CA; Current Affiliation: Tempus Labs, Chicago, IL, USA

Correspondence Address:
Dr. Timo Kohlberger
Google LLC, 1600 Amphitheatre Parkway, Mountain View, CA
USA

Background: Digital pathology enables remote access and consults as well as powerful image analysis algorithms. However, the slide digitization process can introduce artifacts such as out-of-focus (OOF) regions. OOF is often detected only on careful review, potentially necessitating rescanning and causing workflow delays. Although scan-time operator screening for whole-slide OOF is feasible, manual screening for OOF affecting only parts of a slide is impractical.

Methods: We developed a convolutional neural network (ConvFocus) to exhaustively localize and quantify the severity of OOF regions on digitized slides. ConvFocus was developed using our refined semi-synthetic OOF data generation process and was evaluated using seven slides spanning three different tissue types and three different stain types, each of which was digitized using two different whole-slide scanner models. ConvFocus's predictions were compared with pathologist-annotated focus quality grades across 514 distinct regions representing 37,700 image patches of 35 μm × 35 μm, and with 21 digitized “z-stack” whole-slide images (WSIs) that contain known OOF patterns.

Results: When compared to pathologist-graded focus quality, ConvFocus achieved Spearman rank coefficients of 0.81 and 0.94 on the two scanners and reproduced the expected OOF patterns from z-stack scanning. We also evaluated the impact of OOF on the accuracy of a state-of-the-art metastatic breast cancer detector and observed a consistent decrease in performance with increasing OOF.

Conclusions: Comprehensive whole-slide OOF categorization could enable rescans before pathologist review, potentially reducing the impact of digitization focus issues on the clinical workflow. We show that the algorithm trained on our semi-synthetic OOF data generalizes well to real OOF regions across tissue types, stains, and scanners. Finally, quantitative OOF maps can flag regions that might otherwise be misclassified by image analysis algorithms, preventing OOF-induced errors.
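As an illustrative sketch only (not the authors' ConvFocus pipeline), the two ideas named in the abstract can be mimicked with standard tools: semi-synthetic OOF patches approximated by Gaussian defocus blur, and agreement with pathologist grades measured by a Spearman rank coefficient. The ordinal grading scale and the grade-to-blur mapping below are assumptions introduced for illustration.

```python
# Illustrative sketch only -- not the ConvFocus implementation.
# Assumptions: focus grades are ordinal integers (0 = in focus, higher = more OOF),
# and synthetic defocus is approximated by Gaussian blur with grade-dependent sigma.
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import spearmanr


def make_semisynthetic_oof(patch: np.ndarray, grade: int) -> np.ndarray:
    """Blur an in-focus RGB patch to simulate a given OOF severity grade."""
    sigma = 0.5 * grade  # hypothetical mapping from grade to blur strength
    # Blur spatial axes only; leave the color channel axis untouched.
    return gaussian_filter(patch, sigma=(sigma, sigma, 0))


def evaluate_focus_model(predicted_grades, pathologist_grades):
    """Spearman rank correlation between model and pathologist focus grades."""
    rho, p_value = spearmanr(predicted_grades, pathologist_grades)
    return rho, p_value


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    patch = rng.integers(0, 256, size=(128, 128, 3)).astype(np.float32)
    blurred = make_semisynthetic_oof(patch, grade=3)

    # Toy example: perfectly monotonic predictions yield rho = 1.0.
    rho, _ = evaluate_focus_model([0, 1, 2, 3], [0, 1, 2, 3])
    print(f"Spearman rho = {rho:.2f}")
```

In the paper's evaluation, the predicted grades would come from the trained network on real patches and the reference grades from pathologist annotations; the toy lists here merely show how the correlation is computed.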


How to cite this article:
Kohlberger T, Liu Y, Moran M, Chen PHC, Brown T, Hipp JD, Mermel CH, Stumpe MC. Whole-slide image focus quality: Automatic assessment and impact on AI cancer detection. J Pathol Inform 2019;10:39.


How to cite this URL:
Kohlberger T, Liu Y, Moran M, Chen PHC, Brown T, Hipp JD, Mermel CH, Stumpe MC. Whole-slide image focus quality: Automatic assessment and impact on AI cancer detection. J Pathol Inform [serial online] 2019 [cited 2020 Aug 10];10:39.
Available from: http://www.jpathinformatics.org/article.asp?issn=2153-3539;year=2019;volume=10;issue=1;spage=39;epage=39;aulast=Kohlberger;type=0