Fusheng Wang, Jun Kong, Jingjing Gao, Lee A.D. Cooper, Tahsin Kurc, Zhengwen Zhou, David Adler, Cristobal Vergara-Niedermayr, Bryan Katigbak, Daniel J Brat, Joel H Saltz
J Pathol Inform 2013, 4:5 (14 March 2013) DOI:10.4103/2153-3539.108543 PMID:23599905

Background: Algorithm evaluation provides a means to characterize variability across image analysis algorithms, validate algorithms by comparison with human annotations, combine results from multiple algorithms to improve performance, and facilitate algorithm sensitivity studies. The sizes of images and image analysis results in pathology image analysis pose significant challenges for algorithm evaluation. We present an efficient parallel spatial database approach to model, normalize, manage, and query large volumes of analytical image result data, providing an efficient platform for algorithm evaluation. Our experiments with a set of brain tumor images demonstrate the application, scalability, and effectiveness of the platform.

Context: The paper describes an approach and platform for evaluating pathology image analysis algorithms. The platform facilitates algorithm evaluation through a high-performance database built on the Pathology Analytic Imaging Standards (PAIS) data model.

Aims: (1) Develop a framework to support algorithm evaluation by modeling and managing analytical results and human annotations from pathology images; (2) create a robust data normalization tool for converting, validating, and fixing spatial data from algorithm results or human annotations; (3) develop a set of queries to support data sampling and result comparisons; (4) achieve high-performance computation through a parallel data management infrastructure with parallel data loading and spatial indexing optimizations.

Materials and Methods: We considered two scenarios for algorithm evaluation: (1) algorithm comparison, where multiple result sets from different methods are compared and consolidated; and (2) algorithm validation, where algorithm results are compared with human annotations. We developed a spatial normalization toolkit to validate and normalize spatial boundaries produced by image analysis algorithms or human annotations. The validated data were formatted according to the PAIS data model and loaded into a spatial database. To support efficient data loading, we implemented a parallel loading tool that exploits multi-core CPUs to accelerate data ingestion. The spatial database manages both geometric shapes and image features or classifications, and enables spatial sampling, result comparison, and result aggregation through expressive structured query language (SQL) queries with spatial extensions. To provide scalable and efficient query support, we employed a shared-nothing parallel database architecture, which distributes data homogeneously across multiple database partitions to exploit parallel computation power and implements spatial indexing to achieve high I/O throughput.

Results: Our work proposes a high-performance, parallel spatial database platform for algorithm validation and comparison. The platform was evaluated by storing, managing, and comparing analysis results from a set of brain tumor whole slide images. The tools we developed are open source and available for download.

Conclusions: Pathology image algorithm validation and comparison are essential to iterative algorithm development and refinement.
One critical component is support for queries involving spatial predicates and comparisons. In our work, we develop an efficient data model and parallel database approach to model, normalize, manage, and query large volumes of analytical image result data. Our experiments demonstrate that the data partitioning strategy and the grid-based indexing yield good data distribution across database nodes and reduce I/O overhead in spatial join queries through parallel retrieval of relevant data and quick subsetting of datasets. The set of tools in the framework provides a full pipeline to normalize, load, manage, and query analytical results for algorithm evaluation.
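The result comparisons above are expressed as spatial SQL queries. The sketch below is illustrative only: it assumes a PostGIS-style spatial dialect, and the table and column names (markup, boundary, run_id) are hypothetical rather than the PAIS schema used in the paper. It shows a spatial join that scores agreement between an algorithm's segmented boundaries and human annotations using the Jaccard (intersection-over-union) measure:

-- Pair each algorithm-produced boundary with every human annotation it
-- overlaps, and compute the Jaccard coefficient of the two polygons.
SELECT a.markup_id AS algo_markup,
       h.markup_id AS human_markup,
       ST_Area(ST_Intersection(a.boundary, h.boundary)) /
       ST_Area(ST_Union(a.boundary, h.boundary)) AS jaccard
FROM markup a
JOIN markup h
  ON ST_Intersects(a.boundary, h.boundary)    -- spatial join predicate
WHERE a.run_id = 'algorithm_v2'               -- hypothetical run labels
  AND h.run_id = 'human_consensus';

In such a query, a spatial index on the boundary column (for example, CREATE INDEX markup_boundary_idx ON markup USING GIST (boundary) in PostGIS) prunes non-overlapping candidate pairs before the join; the grid-based indexing in the platform described above plays the analogous role of quickly subsetting the data touched by spatial joins.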
Magdaleni Bellis, Shereen Metias, Christopher Naugler, Aaron Pollett, Serge Jothy, George M Yousef
J Pathol Inform 2013, 4:3 (14 March 2013) DOI:10.4103/2153-3539.108540 PMID:23599903

Digital pathology is a rapidly evolving niche in the world of pathology and is likely to increase in popularity as technology improves. We administered a questionnaire to pathologists and pathology residents across Canada to determine their current experiences with and attitudes toward digital pathology, to identify which modalities digital pathology is best suited for, and to assess the need for training in digital pathology among pathology residents and staff. An online survey consisting of 24 yes/no, multiple-choice, and free-text questions regarding digital pathology was sent by e-mail to all members of the Canadian Association of Pathologists and to pathology residents across Canada. Survey results showed that telepathology (TP) is used in approximately 43% of institutions, primarily for teaching purposes (65%), followed by operating room consults (46%). Seventy-one percent of respondents believe there is a need for TP in their practice, and 85% use digital images in their practice. The two most favored applications for digital pathology are teaching and consultation services, with the main advantage being easier access to cases. The main limitations of using digital pathology are cost and image/diagnostic quality. Sixty-two percent of respondents would attend training courses in pathology informatics, and 91% think informatics should be part of residency training. The results of the survey indicate that pathologists and residents across Canada see a need for TP and for the use of digital images in their daily practice. Integration of an informatics component into resident training programs and courses for staff pathologists would be welcomed.