Journal of Pathology Informatics




 
RESEARCH ARTICLE
J Pathol Inform 2011, 2:33

Computer-aided identification of prostatic adenocarcinoma: Segmentation of glandular structures


1 Department of Radiology, The University of Chicago, Chicago, IL 60637, USA
2 Department of Pathology, Northwestern University Feinberg School of Medicine, Chicago, IL 60611, USA
3 Pritzker School of Medicine, The University of Chicago, Chicago, IL 60637, USA
4 Department of Pathology, The University of Chicago, Chicago, IL 60637, USA

Date of Submission: 16-Dec-2010
Date of Acceptance: 24-Apr-2011
Date of Web Publication: 26-Jul-2011

Correspondence Address:
Yahui Peng
Department of Radiology, The University of Chicago, Chicago, IL 60637
USA

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/2153-3539.83193

   Abstract

Background: Identification of individual prostatic glandular structures is an important prerequisite to quantitative histological analysis of prostate cancer with the aid of a computer. We have developed a computer method to segment individual glandular units and to extract quantitative image features, for computer identification of prostatic adenocarcinoma. Methods: Two sets of digital histology images were used: database I (n = 57) for developing and testing the computer technique, and database II (n = 116) for independent validation. The segmentation technique was based on k-means clustering and a region-growing method. Computer segmentation results were evaluated subjectively and also compared quantitatively against manual gland outlines, using the Jaccard similarity measure. Quantitative features that were extracted from the computer segmentation results include average gland size, spatial gland density, and average gland circularity. Linear discriminant analysis (LDA) was used to combine quantitative image features. Classification performance was evaluated with receiver operating characteristic (ROC) analysis and the area under the ROC curve (AUC). Results: Jaccard similarity coefficients between computer segmentation and manual outlines of individual glands were between 0.63 and 0.72 for non-cancer glands and between 0.48 and 0.54 for malignant glands, comparable to interobserver agreements of 0.79 for non-cancer and 0.75 for malignant glands. The AUC value for the features of average gland size and gland density combined via LDA was 0.91 for database I and 0.96 for database II. Conclusions: Using a computer, we are able to delineate individual prostatic glands automatically and identify prostatic adenocarcinoma accurately, based on the quantitative image features extracted from computer-segmented glandular structures.

Keywords: Computer-aided classification, digital histology images, feature analysis, image segmentation, prostatic adenocarcinoma


How to cite this article:
Peng Y, Jiang Y, Eisengart L, Healy MA, Straus FH, Yang XJ. Computer-aided identification of prostatic adenocarcinoma: Segmentation of glandular structures. J Pathol Inform 2011;2:33



   Introduction


Prostate cancer is the most commonly diagnosed non-skin cancer and the second leading cause of cancer-related death among US men. [1] Transrectal ultrasound-guided core needle biopsy followed by histology is the clinical standard for prostate cancer diagnosis. Histological analysis, based on the pathologists' visual interpretation of the biopsy tissue, is regarded universally in medicine as the gold standard, but it is also subjective and therefore not immune to inter- and intra-pathologist variability. [2],[3],[4],[5]

Quantitative analysis with the aid of computational methods may help reduce the level of subjectivity. [6] Computer methods for automated prostate cancer detection and grading have been proposed based on texture image feature analysis, Fourier and wavelet analysis, graph theory, fractal analysis, and other analyses that are not gland-specific. [7],[8],[9],[10],[11],[12],[13],[14],[15] Promising results that are reported in these studies suggest that incorporating computational methods into histological analysis can be helpful to the clinical interpretation of prostatic tissue.

Prostatic glandular structures (or prostatic glands) include both acini and ducts, which are normally difficult to distinguish microscopically. Segmentation of individual glandular units is important because abnormal growth patterns of glands are the first signs that lead pathologists to suspect prostate cancer. [16] Small, crowded, and compact glands often indicate malignancy. [17] To measure these histological characteristics quantitatively, segmentation of individual glands is crucial. However, while segmentation of surrogate structures such as glandular lumina, with or without epithelial-cell cytoplasm, has been reported, [18],[19],[20] segmentation of the complete glandular unit has not been accomplished. The goal of this study is to segment the complete individual glandular units in digital histology images for the purpose of prostate cancer identification. We hypothesize that quantitative image features extracted from the gland segmentation results can be used to identify prostatic adenocarcinoma from non-cancer prostatic tissue.


   Methods


Image Databases and Image Acquisition

We acquired digital histology images from hematoxylin-eosin (HE) stained 5 μm sections of formalin-fixed and paraffin-embedded prostatic tissue. Images were taken from regions of interest (ROIs) containing either non-cancer glandular structures or adenocarcinoma with Gleason grade 3 patterns. Non-malignant abnormal conditions such as atrophy were not further differentiated. Tissue ROIs were free of severe artifacts caused by tissue preparation.

Two image databases were collected. Database I consisted of 57 digital color images (20 containing adenocarcinoma) from 15 prostatectomy sections of eight patients from Northwestern University. Database I was used to develop and evaluate image-processing techniques. Database II consisted of 116 digital images (44 containing adenocarcinoma) from 31 prostatectomy sections of 17 patients who were operated upon in 2001 or 2002 at the University of Chicago. Database II was used to validate the image-processing techniques.

For all cases in each database, a uropathologist reviewed the tissue sections retrospectively and marked ROIs for digital image acquisition. After the digital images were acquired, the same pathologist reviewed the digital images to confirm the diagnosis.

A Carl Zeiss AxioCam HRC charge-coupled device digital camera mounted on an Olympus BX41 microscope was used to acquire ROI digital histology images. Both the ocular and objective lenses were at 10× and the total magnification was 100×. Digital color images were saved with a matrix size of 650 × 515 pixels, which corresponded to a physical field of view of 1.152 × 0.912 mm². Illustrations of benign and malignant glandular structures are shown in [Figure 1].
Figure 1: Digital histology images of (a) benign glands and (b) malignant adenocarcinoma glands. L = glandular lumen; S = stroma; and E = prostatic epithelium



Image Artifact Correction

A flowchart of our image-segmentation technique is shown in [Figure 2], illustrated with an example image. Raw digital images contain artifacts of vignetting (non-uniform illumination) and color cast (a color tint that masks the entire image) caused by image acquisition. To remove these artifacts, we acquired an additional background-reference image under exactly the same acquisition condition as the raw image (same microscope magnification, illumination, and focus), but with a blank area of the slide (no tissue) in the microscope field of view. We then divided the raw image, pixel-by-pixel in each color channel, by the background-reference image. This method is similar to flat-field corrections of pixel-to-pixel variations in the sensitivity of digital detectors. [21]
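As an illustration of this correction, the following sketch divides a raw RGB image by the blank-slide background image, channel by channel. This is a minimal Python/NumPy example: the file names, the use of scikit-image for reading images, and the final rescaling into [0, 1] are our assumptions, not details specified in the paper.

```python
import numpy as np
from skimage import io, img_as_float  # assumed I/O library; any image reader would do

def flat_field_correct(raw_path, background_path, eps=1e-6):
    """Divide a raw RGB image, pixel by pixel in each color channel, by a
    blank-slide background image to reduce vignetting and color cast."""
    raw = img_as_float(io.imread(raw_path))            # H x W x 3, values in [0, 1]
    background = img_as_float(io.imread(background_path))
    corrected = raw / np.clip(background, eps, None)   # per-channel pixel-wise division
    # Rescaling into [0, 1] for display/saving is our assumption; the paper does
    # not specify how the quotient image was renormalized.
    return np.clip(corrected / corrected.max(), 0.0, 1.0)
```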
Figure 2: Flowchart of computer gland segmentation techniques



Segmentation of Four Tissue Components

Tissue component segmentation consisted of three steps: (1) after decomposing a corrected color image into the red, green, and blue color channels, we applied principal-component analysis to remove correlations between the three color-channel images; (2) we used a k-means clustering method to identify four types of tissue components: glandular lumina, stroma, epithelial-cell cytoplasm, and epithelial-cell nuclei; and (3) we applied mathematical morphological operations to remove small isolated regions and fill holes. We describe these steps in more detail as follows.

The red, green, and blue color channel images represent intensities of the respective color components in an image, and their spatially corresponding pixels are usually highly correlated. We transformed these images linearly to their principal components such that their spatially corresponding pixels became mutually uncorrelated. [22],[23] We then normalized the principal components such that their pixel variances became equal, and used the resulting normalized principal components in the k-means clustering analysis.
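A minimal sketch of this decorrelation and normalization step is shown below, assuming the corrected color image is handled as an H × W × 3 NumPy array; the eigen-decomposition route and the small regularization constant are implementation choices, not details from the paper.

```python
import numpy as np

def decorrelate_and_normalize(rgb):
    """Project the three color channels onto their principal components and
    scale each component to unit variance before clustering."""
    h, w, _ = rgb.shape
    x = rgb.reshape(-1, 3).astype(float)
    x -= x.mean(axis=0)                      # center each color channel
    cov = np.cov(x, rowvar=False)            # 3 x 3 channel covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)   # principal axes of the color distribution
    pcs = x @ eigvecs                        # decorrelated principal components
    pcs /= np.sqrt(eigvals + 1e-12)          # equalize per-component variance
    return pcs.reshape(h, w, 3)
```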

K-means clustering is an iterative method that partitions abstract data into a user-specified number of non-hierarchical clusters based on similarity (measured by distances between data points in the abstract feature space) such that each data point belongs to the nearest cluster. [24],[25] We applied the k-means clustering method to partition the image pixels (each with three color principal components) into four clusters (therefore, k = 4) that correspond to the four types of tissue components.

The initial cluster centroids were generated randomly and k-means clustering was applied 20 times independently to each image. The result with the smallest sum of within-cluster variances among the 20 runs was selected as the final result. Each cluster was then identified as a specific tissue component as follows: First, we identified the glandular lumina and cell nuclei clusters based on the observation that glandular lumina have the brightest pixel values in all three color channels, whereas cell nuclei have the darkest pixel values. Subsequently, we morphologically dilated the glandular lumina and identified the cluster that most overlapped the dilated glandular lumina as the epithelial-cell cytoplasm cluster, because the epithelial-cell cytoplasm was spatially closer to the glandular lumina than the stroma. Finally, the remaining cluster was identified as the stroma cluster.
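The sketch below illustrates this multi-start clustering and the cluster-labeling heuristics, using SciPy's k-means implementation. Using a single mean-brightness image as a proxy for "brightest/darkest in all three channels" and the 5-pixel dilation radius are our simplifications, not the authors' implementation.

```python
import numpy as np
from scipy.cluster.vq import kmeans2
from scipy.ndimage import binary_dilation

def cluster_tissue(pcs, brightness, n_runs=20, k=4):
    """Run k-means n_runs times with random initial centroids, keep the run
    with the smallest within-cluster sum of squares, and name the clusters."""
    h, w, _ = pcs.shape
    data = pcs.reshape(-1, 3)
    best_labels, best_inertia = None, np.inf
    for _ in range(n_runs):
        centroids, labels = kmeans2(data, k, minit='random')
        inertia = ((data - centroids[labels]) ** 2).sum()
        if inertia < best_inertia:
            best_inertia, best_labels = inertia, labels
    labels = best_labels.reshape(h, w)

    mean_brightness = [brightness[labels == c].mean() for c in range(k)]
    lumina = int(np.argmax(mean_brightness))     # brightest cluster
    nuclei = int(np.argmin(mean_brightness))     # darkest cluster
    remaining = [c for c in range(k) if c not in (lumina, nuclei)]

    # Cytoplasm: the remaining cluster that overlaps the dilated lumina the most.
    dilated_lumina = binary_dilation(labels == lumina, iterations=5)
    overlap = [(dilated_lumina & (labels == c)).sum() for c in remaining]
    cytoplasm = remaining[int(np.argmax(overlap))]
    stroma = [c for c in remaining if c != cytoplasm][0]
    return labels, {'lumina': lumina, 'nuclei': nuclei,
                    'cytoplasm': cytoplasm, 'stroma': stroma}
```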

Additional post-processing was necessary to refine the segmentation results. For the glandular lumina, a binary morphological opening operation with a seven-pixel-diameter circular kernel was applied once, to separate the lumina that were joined together mistakenly. Then, the isolated 'lumina' that were smaller than 50 pixels (~150 μm²) were removed. For the stroma, a binary morphological closing operation with a five-pixel-diameter circular kernel was applied once, to merge the incorrectly separated stromal regions, and holes less than 500 pixels (~1,500 μm²) in size were filled to integrate stromal cell nuclei into the surrounding stroma. Other cell nuclei that were not a part of the stroma and with a size greater than 10 pixels (~30 μm²) were identified as epithelial-cell nuclei. Finally, any pixel initially identified as epithelial-cell cytoplasm but later included in one of the other three tissue components as a result of post-processing was reclassified as not belonging to the epithelial-cell cytoplasm.
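These post-processing rules map directly onto standard binary morphology operations. The sketch below uses scikit-image with the kernel sizes and area thresholds quoted above; it assumes the inputs are per-cluster binary masks derived from the k-means result, and the helper name is ours.

```python
import numpy as np
from skimage.morphology import (binary_opening, binary_closing, disk,
                                remove_small_objects, remove_small_holes)

def refine_tissue_masks(lumina, stroma, nuclei):
    """Refine binary masks of glandular lumina, stroma, and cell nuclei."""
    # Separate mistakenly joined lumina (7-pixel-diameter kernel), then drop
    # isolated 'lumina' smaller than 50 pixels (~150 um^2).
    lumina = binary_opening(lumina, disk(3))
    lumina = remove_small_objects(lumina, min_size=50)
    # Merge incorrectly separated stroma (5-pixel-diameter kernel) and fill
    # holes under ~500 pixels to absorb stromal nuclei into the stroma.
    stroma = binary_closing(stroma, disk(2))
    stroma = remove_small_holes(stroma, area_threshold=500)
    # Nuclei outside the stroma and larger than 10 pixels are epithelial nuclei.
    epithelial_nuclei = remove_small_objects(nuclei & ~stroma, min_size=11)
    return lumina, stroma, epithelial_nuclei
```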

Identification of Individual Glandular Units

Subsequently, we segmented individual glandular units with a seeded region-growing method, using the glandular lumina and epithelial-cell cytoplasm combined as region-growing seeds (i.e., initial estimates of the glands). We removed from the region-growing seeds all pixels within a distance of three pixels of any epithelial-cell nucleus, to help reduce the occurrence of spatially proximate or touching glands becoming merged incorrectly. Also, lumen regions smaller than 50 pixels (~150 μm²) in size, which were not likely to be true lumina, were not used as seeds. The region-growing method was applied iteratively and simultaneously with a 3 × 3-pixel kernel to all region-growing seeds. Every candidate pixel was checked before inclusion, so that pixels belonging to the stroma, and pixels that would cause two or more glands to merge, were not added to any gland. The iterative process ended when all glands ceased to grow. We then filled any holes and removed any regions smaller than 1,000 pixels (~3,000 μm²) in size, and labeled each resulting region as an individual gland.
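A simplified sketch of this seeded region growing is given below. It assumes binary masks for the seeds (lumina plus epithelial-cell cytoplasm, with small lumina already removed), the stroma, and the epithelial-cell nuclei; the per-gland, ring-by-ring growth loop is our straightforward (and deliberately unoptimized) reading of the procedure, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import (label, binary_dilation, binary_fill_holes,
                           distance_transform_edt)

def grow_glands(seeds, stroma, nuclei, min_gland_px=1000, nuclei_margin=3):
    """Grow labeled seeds one 3 x 3 ring per iteration, never into stroma and
    never into or next to another gland, then fill holes and drop small regions."""
    kernel = np.ones((3, 3), bool)
    # Remove seed pixels within 3 pixels of any epithelial-cell nucleus.
    far_from_nuclei = distance_transform_edt(~nuclei) > nuclei_margin
    labels, n = label(seeds & far_from_nuclei)
    allowed = ~stroma
    changed = True
    while changed:                      # stop when no gland grows any further
        changed = False
        for g in range(1, n + 1):
            this = labels == g
            others = (labels > 0) & ~this
            # Candidate ring: 3 x 3 neighborhood of this gland, inside allowed
            # tissue, not claimed by and not adjacent to any other gland.
            ring = (binary_dilation(this, structure=kernel) & ~this & allowed
                    & ~binary_dilation(others, structure=kernel))
            if ring.any():
                labels[ring] = g
                changed = True
    # Fill holes and keep only regions of at least ~1,000 pixels (~3,000 um^2).
    out = np.zeros_like(labels)
    kept = 0
    for g in range(1, n + 1):
        region = binary_fill_holes(labels == g)
        if region.sum() >= min_gland_px:
            kept += 1
            out[region] = kept
    return out
```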

Evaluation of Segmentation of Glandular Units

We evaluated gland segmentation results in two experiments, one qualitative and one quantitative, using image database I (n = 57); image database II was reserved for validation. In the qualitative experiment, a pathologist and a researcher subjectively evaluated the computer segmentation results on a five-point scale: +2 (excellent), +1 (good), 0 (acceptable), -1 (fair), and -2 (poor). They also estimated in each image the numbers of false-positive glands (non-glandular regions that the computer incorrectly marked as glands) and false-negative glands (glands that the computer missed).

In the quantitative experiment, two researchers manually outlined individual glands and their results were compared with the computer segmentation results. When outlining glands, the researchers did not distinguish malignant from benign glands or consider whether cancer was present in an image (they were naïve observers who had not been trained to recognize cancerous glands). Researcher A outlined all individual glands twice independently and researcher B outlined individual glands once. Researcher B did not outline any partial gland truncated at the margins of an image.

We used the Jaccard similarity coefficient to measure the similarity in each image between two gland segmentation results. [26] The Jaccard similarity coefficient was defined as the ratio of the total area of intersection to the total area of union between two segmentation results (of the computer or a researcher). Both complete and partial glands (truncated at the image margin) were included, except that when researcher B's manual outlines (which did not include truncated partial glands) were used as one set of gland segmentation results, the Jaccard similarity coefficient was calculated only on the complete glands.
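Computed on pooled binary gland masks for an image, the coefficient reduces to a few lines, as in the minimal sketch below; handling researcher B's excluded partial glands would additionally require masking out glands that touch the image border.

```python
import numpy as np

def jaccard(mask_a, mask_b):
    """Jaccard similarity between two binary gland segmentations of one image:
    total intersection area divided by total union area."""
    mask_a = np.asarray(mask_a, bool)
    mask_b = np.asarray(mask_b, bool)
    union = (mask_a | mask_b).sum()
    return (mask_a & mask_b).sum() / union if union else 1.0
```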

Image Feature Extraction and Evaluation

From the computer gland-segmentation results we obtained quantitative measures of glandular size, circularity, and spatial density of gland distribution. These features were motivated by the experience that malignant glands tend to be small and crowded with compact shapes. Gland size was calculated from the number of pixels within a gland and expressed in the unit of square millimeters (mm²). The average gland size over an image was calculated. Glandular circularity was defined as P²/(4πA), where P was the perimeter and A was the area of the gland. The average glandular circularity over an image was calculated. Density of gland distribution was defined as the number of glands divided by the total area of an image.
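The three features can be read directly off a labeled gland image. The sketch below uses scikit-image region properties and converts pixel counts to mm² with the pixel size implied by the stated acquisition geometry (650 pixels spanning 1.152 mm); the helper name and the empty-image guard are ours.

```python
import numpy as np
from skimage.measure import regionprops

MM_PER_PIXEL = 1.152 / 650            # pixel pitch implied by the stated field of view
MM2_PER_PIXEL = MM_PER_PIXEL ** 2

def gland_features(gland_labels):
    """Average gland size (mm^2), gland density (glands per mm^2 of image),
    and average circularity P^2 / (4*pi*A) from a labeled gland image."""
    props = regionprops(gland_labels)
    image_area_mm2 = gland_labels.shape[0] * gland_labels.shape[1] * MM2_PER_PIXEL
    if not props:                      # no glands segmented in this image
        return {'mean_gland_size': 0.0, 'gland_density': 0.0, 'mean_circularity': np.nan}
    sizes_mm2 = [p.area * MM2_PER_PIXEL for p in props]
    circularity = [p.perimeter ** 2 / (4 * np.pi * p.area) for p in props]
    return {'mean_gland_size': float(np.mean(sizes_mm2)),
            'gland_density': len(props) / image_area_mm2,
            'mean_circularity': float(np.mean(circularity))}
```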

We evaluated the effectiveness of these features in distinguishing malignant from non-cancer glands using the receiver operating characteristic (ROC) curve, which is a plot of sensitivity versus 1 - specificity (or false-positive rate). [27],[28],[29] We obtained the maximum-likelihood estimate of ROC curves [30] and used the area under the ROC curve (AUC) [31] as a summary statistic of the ROC curve.
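For a quick check of a single feature, an empirical (nonparametric) ROC analysis along the following lines can stand in for the maximum-likelihood estimation used in the paper. The scikit-learn calls are our choice; note that the sign of a feature must be flipped when smaller values indicate malignancy, as with average gland size.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def empirical_roc(feature_values, is_cancer):
    """Empirical ROC curve and AUC for one per-image feature."""
    y = np.asarray(is_cancer, int)
    score = np.asarray(feature_values, float)
    fpr, tpr, _ = roc_curve(y, score)
    return fpr, tpr, roc_auc_score(y, score)

# Example: smaller average gland size suggests cancer, so negate the feature.
# fpr, tpr, auc = empirical_roc(-np.array(mean_gland_sizes), cancer_flags)
```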

Analysis of Combination of Image Features

We used Fisher linear discriminant analysis (LDA) [25],[32] to combine two of the features to distinguish malignant from non-cancer glands. We used the leave-one-out (LOO) cross-validation procedure, [33] in which, in any one partitioning of the image database, all images except one are used to train an LDA and the excluded image is used to test it; exhaustive re-partitioning of the image database ensures that each image is used once, and only once, as a test image. Test results from all images are then combined for ROC analysis.
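A compact sketch of this combination scheme is given below, using scikit-learn's LDA and leave-one-out utilities and an empirical AUC in place of the maximum-likelihood ROC estimate reported in the paper; the function name and the example call are ours.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut
from sklearn.metrics import roc_auc_score

def loo_lda_auc(features, is_cancer):
    """Leave-one-out cross-validated LDA scores for a feature combination,
    summarized with an empirical AUC."""
    X = np.asarray(features, float)          # shape (n_images, n_features)
    y = np.asarray(is_cancer, int)
    scores = np.empty(len(y))
    for train_idx, test_idx in LeaveOneOut().split(X):
        lda = LinearDiscriminantAnalysis().fit(X[train_idx], y[train_idx])
        scores[test_idx] = lda.decision_function(X[test_idx])
    return roc_auc_score(y, scores)

# Example with the two most effective features in the paper:
# auc = loo_lda_auc(np.column_stack([mean_gland_size, gland_density]), cancer_flags)
```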


   Results


Segmentation of Glandular Units

An example of computer segmentation and manual outlines of prostatic glandular units is shown in [Figure 3]. The computer correctly identifies a majority of glands, and several spatially proximate glands that touch each other are segmented correctly as individual glands. Partial glands at the margin of the image are segmented similarly to researcher A's outlines (researcher B did not outline partial glands). However, the boundaries of the computer-segmented glands are not as smooth as those of the manually outlined glands, and a few small glands are missed.
Figure 3: Example images of (upper left) original image (after artifact correction), (upper right) computer gland segmentation results, (lower left) researcher A's manual outlines of glands, and (lower right) researcher B's manual outlines of glands



Results of subjective evaluation of computer gland segmentation are shown in [Figure 4]. The average ratings on all images not containing adenocarcinoma are 0.6 ± 0.91 (mean ± standard deviation) and 1.4 ± 1.05, respectively, given by the pathologist and the researcher. The average ratings on all images containing adenocarcinoma are -1.2 ± 0.77 and -0.8 ± 0.94, respectively, given by the pathologist and the researcher. The average numbers of false-negative glands missed by the computer are estimated to be 3.1 ± 2.37 and 13.1 ± 9.99 in non-cancer and cancer images, respectively, given by the pathologist; and 4.5 ± 3.75 and 20.7 ± 12.57, respectively, given by the researcher. The average numbers of false-positive glands marked by the computer are estimated to be 2.9 ± 1.50 and 10.9 ± 12.07 in non-cancer and cancer images, respectively, given by the pathologist; and 4.1 ± 2.78 and 8.3 ± 6.04, respectively, given by the researcher. Despite apparent differences in the ratings given by the pathologist and the researcher, their evaluations are substantially concordant. In particular, their results agree that the computer segmented non-cancer glands more accurately than malignant glands.
Figure 4: Subjective and qualitative evaluation of computer segmentation results by (left) a pathologist and (right) a researcher. Shown are histograms of (top) overall accuracy, and (bottom) estimates of false-negative and false-positive glands in the computer segmentation results



Jaccard similarity coefficients for comparison of segmentation results of the researchers and the computer are shown in [Figure 5]. Intra-observer agreement (of researcher A) has the highest similarity: the average Jaccard coefficients are 0.87 ± 0.07 and 0.85 ± 0.06 for non-cancer and cancer images, respectively. The average Jaccard coefficients for interobserver agreement (between researchers A and B) are 0.79 ± 0.09 and 0.75 ± 0.09 for non-cancer and malignant images, respectively. Intra- and interobserver Jaccard coefficients are expected to be less than perfect (i.e., 1.0, which corresponds to identical gland outlines) because of subjectivity in the visual perception of the glands. The average Jaccard coefficients between the computer and researcher A are 0.63 ± 0.13 and 0.48 ± 0.09 for non-cancer and malignant images, respectively, and 0.72 ± 0.08 and 0.54 ± 0.12, respectively, between the computer and researcher B. These results show that interobserver agreement is slightly less than intraobserver agreement and that human-computer agreement is similar to interobserver agreement for non-cancer glands.
Figure 5: Comparison of Jaccard coefficients (left) between repeated gland identification by researcher A (intraobserver comparison) and between glands identified by researchers A and B (interobserver comparison); (middle) between repeated gland identification by researcher A (intraobserver comparison) and between glands identified by the computer and researcher B (human-computer comparison); and (right) between glands identified by the computer and both researchers (human-computer comparisons)



Image Feature Performance and Classification Results

The ROC curves of the three individual glandular features calculated from computer segmentation results, and of two of the three features combined, are shown in [Figure 6], on both databases I and II. For database I, the AUC values of the individual features of average gland size, gland density, and average gland circularity are 0.92 ± 0.04 (maximum-likelihood estimate ± standard error), 0.80 ± 0.06, and 0.51 ± 0.08, respectively. These results indicate that both average gland size and gland density are effective in classifying glands as cancerous or non-cancerous, and average gland circularity is not effective, probably because the computer does not segment malignant glands accurately enough.
Figure 6: Receiver operating characteristic (ROC) curves of the individual and combined glandular features calculated from computer outlines of individual prostatic glands in images of (left) databases I, and (right) II. Area under the ROC curve (AUC) can be interpreted as a summary index of classification performance. An AUC value of 0.5 indicates a 'random call,' whereas an AUC value of 1.0 indicates perfect separation of non-cancer and cancer glands



Combining average gland size and gland density with an LDA yielded an AUC value of 0.91 ± 0.05. For comparison, when the features were calculated from manual gland outlines, the AUC values for average gland size, gland density, average gland circularity, and average gland size and gland density combined were 0.99 ± 0.02, 0.96 ± 0.03, 0.93 ± 0.03, and 0.99 ± 0.01, respectively, from researcher A's outlines, and 0.995 ± 0.009, 0.96 ± 0.02, 0.96 ± 0.02, and 0.995 ± 0.009, respectively, from researcher B's outlines. Performance of the features calculated from computer segmentation results on image database II (n = 116) was similar to that on image database I. The AUC values were 0.97 ± 0.02, 0.94 ± 0.02, 0.66 ± 0.05, and 0.96 ± 0.02 for average gland size, gland density, average gland circularity, and average gland size and gland density combined, respectively.


   Discussion


We have developed an image-segmentation method to identify prostatic glandular units in HE images and extracted several quantitative glandular features useful for distinguishing prostate cancer from benign tissue. Unlike previous studies [18],[19],[20] in which glandular lumina (with or without partial epithelial-cell cytoplasm) were segmented as surrogate structures for the glandular units, we segmented the outlines of complete glands. We also evaluated our segmentation results both qualitatively and quantitatively, and this evaluation could serve as a basis for comparison with other segmentation methods in the future.

In this initial study of our computer segmentation technique, we used only ROI images acquired from prostatectomy specimens because of the abundance and intact morphology of both non-cancer and malignant glands. This abundance of complete glands was crucial to our development of the computer techniques. To be clinically useful, the computer techniques must also work well on images of biopsy specimens, in which the number of complete glands might be limited. The limited number of complete glands and the presence of partial glands in biopsy specimens would undoubtedly be more challenging for computer segmentation techniques. However, carefully developed computer techniques based on prostatectomy specimens could be expected to work well on biopsy specimens also, if the computer techniques successfully capture the essential features of the prostatic glandular structures. We plan to test our technique on biopsy specimens in a future study.

We included only images of Gleason grade 3 adenocarcinoma in this study and did not include images of Gleason grade 4 or 5 high-grade adenocarcinoma. Gleason grade 3 adenocarcinomas are much more common in prostatectomy specimens than high Gleason grade adenocarcinomas. Gleason grade 3 adenocarcinomas have well-defined glandular structures and they sometimes retain certain histological features similar to non-cancer glands. In contrast, high Gleason grade adenocarcinomas are composed of poorly-differentiated, ill-defined, and fused glands; complete and separate glandular units often cannot be found. Segmentation of glands in high-grade adenocarcinomas is therefore significantly more difficult, or even impossible, because of the disruption or complete loss of glandular formation. In this study, we have focused on the correct identification of low-grade prostatic adenocarcinoma. In future studies, we will investigate high-grade adenocarcinomas.

We collected two separate sets of images and used one (database I, n = 57) to develop and test the computer techniques and the other (database II, n = 116) for independent validation. These two sets of images were acquired from two different institutions and two different pathologists selected the ROIs of non-cancer and malignant glands. Therefore, the similar performance of the image features in these two sets of images suggests that our computer techniques are robust to variations in digital histology images, which are due to variations in tissue preparation and variations in ROI selection.

Our computer technique of glandular structure segmentation is based on k-means clustering and region growing. The k-means clustering algorithm is an unsupervised segmentation method, and its results depend solely on the image in question, and except for the value of k chosen a priori, are not influenced in any way by any other (e.g., training) images. Because of this feature, although color variations were large between our two image databases collected from two different institutions, the k-means clustering method produced good segmentation results in both sets of images. This consistency and robustness are an advantage of our computer technique especially because gland heterogeneity is common in both non-cancer and malignant prostatic tissue.

The region-growing method helped to delineate gland boundaries accurately. Being at the center of a glandular unit, the glandular lumen is often a good approximation of gland location and shape, and using the lumina as the initial seeds to region growing helped us to obtain accurate gland segmentation results. Furthermore, the rule of maintaining at least a minimum distance from epithelial-cell nuclei prevented spatially proximate glands from merging together and helped to delineate gland boundaries accurately, because epithelial-cell nuclei are often a good approximation of the gland margin, especially when the color contrast between stroma and glands is small.

Our results suggest that the Jaccard similarity coefficient is useful in comparing computer segmentation results against intra- and interobserver variation in the manual outlining of individual glandular units. However, a limitation of this similarity measure is that it is based on size (area) alone without taking into account the shape of the segmented glands. It also does not differentiate relatively minor errors in otherwise correctly identified glands from misses of entire glands or incorrect inclusion of non-glands. Although it is possible to modify the definition of the similarity coefficient to differentiate between these two types of errors, additional and arguably arbitrary thresholds are required to do so. On the other hand, our qualitative evaluation of the computer segmentation results complements the results of the Jaccard similarity coefficient because the pathologist and the researcher took into consideration, to some degree, the differences between these errors in their subjective evaluations of the computer segmentation results.

It is important to note that the intra- and interobserver similarity (or disagreement) in the manual outlining of individual glandular units that we present here is not a measure of intra- and interobserver variability in prostate cancer detection. The researchers who outlined the glands had not been trained to recognize prostate cancer and they treated all glandular units equally without any regard to whether they were cancerous, or whether cancer was present elsewhere in an image. It is unknown how variability in manual glandular unit outlining is related to variability in cancer detection, because pathologists do not outline glandular units explicitly when making a diagnosis. The purpose of this analysis was strictly to identify the similarities and differences between computer and human gland-segmentation results.

Our computer technique segments non-cancer glands more accurately than malignant glandular units. This is partly because the formation of glandular units is intact in non-cancer glands, but often deviates from the norm in malignant glands (e.g., with small glands). Our computer technique tends to miss small glands because of a size threshold that we imposed to reduce the number of false-positive glands identified by the computer. Because malignant glands tend to be small, they are affected disproportionately. However, even with these obvious limitations, our results show that image features extracted from the gland segmentation results can already be effective in classifying prostatic adenocarcinoma from non-cancer glands, which is ultimately the goal of computer segmentation of individual prostatic glandular units.

We analyzed the quantitative image features of average gland size, gland density, and average gland circularity to identify small, crowded, and compact glands that are hallmarks of prostatic adenocarcinoma. Both the average gland size and gland density were effective in identifying prostatic adenocarcinoma. The average gland circularity was not as effective as expected, probably because the computer did not segment malignant glands well enough to allow accurate calculation of the gland shape. These results suggest that quantitative image features based on computer-segmented glands could be used to classify Gleason grade 3 adenocarcinoma accurately from non-cancer glands.

The computer techniques were developed on digital ROI images selected by pathologists. To eliminate potential bias in the selection of ROIs and to be practical in future applications, the computer techniques need to be tested on whole-slide images. Although we have not yet had the opportunity to do so, we expect that, with refinement and improvement of the segmentation of malignant glands, the gland-segmentation technique developed in this study will also work well on whole-slide images. We plan to test this in future studies.

In summary, we report a computer image-analysis technique for automated segmentation of individual prostatic glandular units in digital images of HE sections of the prostate. Analysis of quantitative image features shows that average gland size and gland density are effective in differentiating prostatic adenocarcinoma from non-cancer glands. These results indicate that individual prostatic glandular units can be segmented automatically with the help of a computer, and that digital image features can be extracted from computer segmentation of prostatic glandular structures to identify prostatic adenocarcinoma accurately. With further development, our techniques could be useful for computer-aided histological analysis of prostate cancer.


   Sources of Support


This work was supported in part by the NIBIB of the NIH through grant R21 EB006466. Partial funding was also provided by the NIH through grants S10 RR021039 and P30 CA14599. The contents of this article are solely the responsibility of the authors and do not necessarily represent the official views of any of the supporting organizations.


   Acknowledgment


We thank Charles G. Giger for his assistance in the evaluation of the computer technique presented here.

 
   References

1. Jemal A, Siegel R, Ward E, Hao Y, Xu J, Thun MJ. Cancer statistics, 2009. CA Cancer J Clin 2009;59:225-49.
2. Allsbrook WC Jr, Mangold KA, Johnson MH, Lane RB, Lane CG, Amin MB, et al. Interobserver reproducibility of Gleason grading of prostatic carcinoma: Urologic pathologists. Hum Pathol 2001;32:74-80.
3. Allsbrook WC Jr, Mangold KA, Johnson MH, Lane RB, Lane CG, Epstein JI. Interobserver reproducibility of Gleason grading of prostatic carcinoma: General pathologist. Hum Pathol 2001;32:81-8.
4. Sooriakumaran P, Lovell DP, Henderson A, Denham P, Langley SE, Laing RW. Gleason scoring varies among pathologists and this affects clinical risk in patients with prostate cancer. Clin Oncol 2005;17:655-8.
5. Frable WJ. Surgical pathology-second reviews, institutional reviews, audits, and correlations: What's out there? Error or diagnostic variation? Arch Pathol Lab Med 2006;130:620-5.
6. Rojo MG, García GB, Mateos CP, García JG, Vicente MC. Critical comparison of 31 commercially available digital slide systems in pathology. Int J Surg Pathol 2006;14:285-305.
7. Stotzka R, Männer R, Bartels PH, Thompson D. A hybrid neural and statistical classifier system for histopathologic grading of prostatic lesions. Anal Quant Cytol Histol 1995;17:204-18.
8. Smith Y, Zajicek G, Werman M, Pizov G, Sherman Y. Similarity measurement method for the classification of architecturally differentiated images. Comput Biomed Res 1999;32:1-12.
9. Roula M, Diamond J, Bouridane A, Miller P, Amira A. A multispectral computer vision system for automatic grading of prostatic neoplasia. In: Proceedings of the 1st IEEE Int Symp Biomed Imaging; 2002 Jul 7-10; Washington, DC. IEEE; 2002. p. 193-6.
10. Jafari-Khouzani K, Soltanian-Zadeh H. Multiwavelet grading of pathological images of prostate. IEEE Trans Biomed Eng 2003;50:697-704.
11. Diamond J, Anderson NH, Bartels PH, Montironi R, Hamilton PW. The use of morphological characteristics and texture analysis in the identification of tissue composition in prostatic neoplasia. Hum Pathol 2004;35:1121-31.
12. Doyle S, Rodriguez C, Madabhushi A, Tomaszeweski J, Feldman M. Detecting prostatic adenocarcinoma from digitized histology using a multi-scale hierarchical classification approach. In: Proceedings of the 28th IEEE Eng Med Biol Soc Conf; 2006 Aug 30-Sep 3; New York, NY. IEEE; 2006. p. 4759-62.
13. Doyle S, Hwang M, Shah K, Madabhushi A, Feldman M, Tomaszeweski J. Automated grading of prostate cancer using architectural and textural image features. In: Proceedings of the 4th IEEE Int Symp Biomed Imaging; 2007 Apr 12-15; Arlington, VA. IEEE; 2007. p. 1284-7.
14. Tabesh A, Teverovskiy M, Pang HY, Kumar VP, Verbel D, Kotsianti A, et al. Multifeature prostate cancer diagnosis and Gleason grading of histological images. IEEE Trans Med Imaging 2007;26:1366-78.
15. Huang PW, Lee CH. Automatic classification for pathological prostate images based on fractal analysis. IEEE Trans Med Imaging 2009;28:1037-50.
16. Epstein JI, Yang XJ. Biopsy Interpretation of the Prostate. Philadelphia: Lippincott Williams and Wilkins; 2002.
17. Epstein JI. Diagnostic criteria of limited adenocarcinoma of the prostate on needle biopsy. Hum Pathol 1995;26:223-9.
18. Naik S, Doyle S, Agner S, Madabhushi A, Feldman M, Tomaszewski J. Automated gland and nuclei segmentation for grading of prostate and breast cancer histopathology. In: Proceedings of the 5th IEEE Int Symp Biomed Imaging; 2008 May 14-17; Paris, France. IEEE; 2008. p. 284-7.
19. Naik S, Doyle S, Feldman M, Tomaszewski J, Madabhushi A. Gland segmentation and computerized Gleason grading of prostate histology by integrating low-, high-level and domain specific information. In: Proceedings of the 2nd Workshop on Microscopic Image Analysis with Applications in Biology; 2007 Sep 21; Piscataway, NJ.
20. Farjam R, Soltanian-Zadeh H, Jafari-Khouzani K, Zoroofi RA. An image analysis approach for automatic malignancy determination of prostate pathological images. Cytometry B Clin Cytom 2007;72:227-40.
21. Yaffe MJ, Mainprize JG. Detectors for digital mammography. Technol Cancer Res Treat 2004;3:309-24.
22. Costa LD, Cesar RM. Shape Analysis and Classification: Theory and Practice. Boca Raton: CRC Press LLC; 2001.
23. Jolliffe I. Principal Component Analysis. New York: Springer-Verlag; 1986.
24. Xu R, Wunsch DC II. Clustering. Hoboken: John Wiley and Sons; 2009.
25. Duda RO, Hart PE, Stork DG. Pattern Classification. New York: John Wiley and Sons; 2001.
26. Tan PN, Steinbach M, Kumar V. Introduction to Data Mining. Boston: Pearson Addison Wesley; 2005.
27. Metz CE. Basic principles of ROC analysis. Semin Nucl Med 1978;8:283-98.
28. Metz CE. ROC methodology in radiologic imaging. Invest Radiol 1986;21:720-33.
29. Metz CE. Some practical issues of experimental design and data analysis in radiological ROC studies. Invest Radiol 1989;24:234-45.
30. Metz CE, Herman BA, Shen JH. Maximum likelihood estimation of receiver operating characteristic (ROC) curves from continuously-distributed data. Stat Med 1998;17:1033-53.
31. Hanley JA, McNeil BJ. The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology 1982;143:29-36.
32. Fukunaga K. Introduction to Statistical Pattern Recognition. San Diego: Academic Press; 1990.
33. Chan HP, Sahiner B, Wagner RF, Petrick N. Classifier design for computer-aided diagnosis: Effects of finite sample size on the mean performance of classical and neural network classifiers. Med Phys 1999;26:2654-68.







 

 