Journal of Pathology Informatics
SYMPOSIUM - ORIGINAL RESEARCH
J Pathol Inform 2011,  2:12

Learning histopathological patterns

Andreas Kårsnäs1, Anders L Dahl2, Rasmus Larsen2


1 Centre for Image Analysis, Uppsala University, Uppsala, Sweden
2 DTU Informatics, Lyngby, Denmark

Date of Submission20-Oct-2011
Date of Acceptance20-Oct-2011
Date of Web Publication19-Jan-2012

Correspondence Address:
Andreas Kårsnäs
Centre for Image Analysis, Uppsala University, Uppsala
Sweden

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/2153-3539.92033

Abstract

Aims: The aim was to demonstrate a method for automated image analysis of immunohistochemically stained tissue samples for extracting features that correlate with patient disease. We address the problem of quantifying tumor tissue and segmenting and counting cell nuclei. Materials and Methods: Our method utilizes a flexible segmentation method based on sparse coding trained from representative image samples. Nuclei counting is based on a nucleus model that takes size, shape, and nucleus probability into account. Nuclei clustering and overlaps are resolved using a gray-weighted distance transform. We obtain a probability measure for pixels belonging to a nucleus from our segmentation procedure. Experiments are carried out on two sets of immunohistochemically stained images - one set based on the estrogen receptor (ER) and the other on the antigen KI-67. For the nuclei separation we have selected 207 ER image samples from 58 tissue microarray cores corresponding to 58 patients, and 136 KI-67 image samples, also from 58 cores. The images are hand annotated by marking the center position of each nucleus. For the ER data we have a total of 1006 nuclei and for the KI-67 data we have 796 nuclei. Segmentation performance was evaluated in terms of missing nuclei, falsely detected nuclei, and multiple detections. The proposed method is compared to a state-of-the-art Bayesian classification approach. Statistical Analysis Used: The performance of the proposed method and a state-of-the-art algorithm, including variations thereof, is compared using the Wilcoxon rank-sum test. Results: For both the ER experiment and the KI-67 experiment the proposed method exhibits lower error rates than the state-of-the-art method. Total error rates were 4.8% and 7.7% in the two experiments, corresponding to an average of 0.23 and 0.45 errors per image, respectively. The Wilcoxon rank-sum tests show statistically significant improvements over the state-of-the-art method. Conclusions: We have demonstrated a method that obtains good performance compared to state-of-the-art nuclei separation. The segmentation procedure is simple and highly flexible, and we demonstrate how it, in addition to the nuclei separation, can perform precise segmentation of cancerous tissue. The complexity of the segmentation procedure is linear in the image size and the nuclei separation is linear in the number of nuclei. Additionally, the method can be parallelized to obtain high-speed computations.

Keywords: Computer-aided classification, digital histopathology images, flexible learning based segmentation, image segmentation


How to cite this article:
Kårsnäs A, Dahl AL, Larsen R. Learning histopathological patterns. J Pathol Inform 2011;2, Suppl S1:12



Introduction


Diagnosis from immunostained histological tissue biopsies plays a major role in cancer diagnosis. Histopathological images are obtained from digitized light microscopy of thin tissue slices colored using immunohistochemical staining techniques with different biomarkers. This results in images where tissue components are colored according to specific functionality. For example, proliferating cell nuclei are colored differently from other nuclei, cf. [Figure 1]. The fundamental problem in histopathological analysis is to infer information about diagnosis and to decide treatment.
Figure 1: Segmentation example of histopathological images. (a) The original microscopy image where proliferating nuclei appear brown and other nuclei are blue. Tumor tissue appears as darker regions. (b) Part of the original image (marked with red in the first image). (c) The probability map for segmenting nuclei, where bright color is high probability. (d) The segmentation result with a probability map for tumor tissue. Both segmentations are based on the same procedure, but with different training samples. (e) Our nuclei separation with the original images at the top and separated nuclei at the bottom



Diagnosis of cancer based on automated analysis of histopathology images relies on segmentation and extraction of quantitative features from structures such as cell nuclei, cell membranes, cytoplasm, or larger tissue parts containing clusters of cells. [1],[2] Both color and shape are used for quantifying structures, especially cell nuclei. Often a threshold in the color space is employed, followed by region processing, e.g., using an elliptical model [3],[4] or learning shape features. [5]

Manual analysis can be time consuming and biased; employing computers can improve objectivity and reduce labor. Additionally, features that are beyond the capabilities of manual analysis can be extracted. Obtaining these benefits requires robust and flexible automated segmentation and classification methods. One of the major difficulties when analyzing histopathological images is the biological variation of the tissue and the variation in staining. [6] It is important that segmentation algorithms are robust to these variations as well.

Our contribution includes applying a highly flexible segmentation and classification technique that is easily adapted to the varying appearance of histopathological images. Cell nuclei often appear clustered, resulting in under-segmentation. To overcome this, we propose a nuclei separation approach that utilizes the probability map obtained from our segmentation procedure.

The nuclei separation method is based on the commonly used combination of the watershed algorithm and the complement of the distance transform. [7] However, this often leads to over-segmentation, which is usually resolved with either region merging techniques [8],[9],[10] or marker-controlled watersheds. [4],[11],[12] We propose a method that combines these strategies to achieve a better separation. Our nuclei separation method is an extension of the method of Jung and Kim. [4]


Materials and Methods


Our approach for analyzing histopathological images is based on applying a segmentation procedure followed by an automatic nuclei separation. Here we will give a short description of the segmentation procedure and we refer to reference [13] for a detailed description.

Tissue Segmentation

The segmentation procedure is based on assigning a label probability to all pixels in the image, resulting in a label probability image. We obtain the label probability image by employing a dictionary of small image patches, which we denote the intensity dictionary. This intensity dictionary is coupled with a dictionary of small label patches - the so-called label dictionary. To encode an unknown image we start by performing a nearest neighbor search in the intensity dictionary. We then choose the corresponding label patch from the associated label dictionary and in this way build our label probability image. Based on the label probability image we can obtain a segmentation by choosing the most probable label in each pixel. The segmentation procedure is illustrated in [Figure 2].
Figure 2: Illustration of segmentation procedure. (a) Building the segmentation probability image. For the image patch marked with the red square in the top image of (a) we select the most similar dictionary element in (b). The associated label element in the bottom image of (b) is chosen and added to the segmentation probability image, which is shown at the bottom of (a). Bright color indicates high probability. The two layers of the label images show that there are two classes in this segmentation example: brown nuclei and background



We will now provide some details of how the segmentation is performed, how the dictionaries are trained, and how to choose good samples for building the dictionaries.

Intensity and Label Dictionaries

The two dictionaries that form the basis of our segmentation method consist of an intensity dictionary of small image patches, D ∈ R^(sl×m), and an associated label dictionary of label patches, L ∈ R^(sc×m), where m is the number of dictionary elements. The image patches are of size √s×√s×l, where s is the number of pixels in an image patch and l is the color depth of the image; in this work we use RGB with l=3. The label dictionary elements are of size √s×√s×c, where c is the number of labels, and each label pixel contains the probability of a given label. The image and label patches are concatenated to form vectors such that each column of D contains an image patch and each column of L contains a label patch. An illustration of the dictionaries and the segmentation procedure is shown in [Figure 2].

There is an element-wise association between the intensity and label dictionaries, such that each intensity dictionary element has an associated label dictionary element. We use this for inferring label probabilities for the image that we are segmenting. The segmentation is done by going through all pixels where a √s×√s×l image patch p can be extracted. We find the nearest neighbor among all intensity dictionary elements dj ∈ D, i.e., d* = arg min_{dj ∈ D} ‖p(x) − dj‖, where x denotes the spatial position of the image patch. From the label dictionary we pick the associated label element l* and add it to the label probability image that we are building, as illustrated in [Figure 2].

Label Probability Image

In the process of building the label probability image we add the label probability to each pixel in the area covered by the image patch. This is done for all pixels, so the patches overlap and each pixel gets a contribution from its neighborhood. The final probability is estimated as the average of these contributions, which results in a robust labeling. In addition, the patches will typically cover more than one class, so edges are handled gracefully. This is illustrated in the bottom part of [Figure 2].
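To make the encoding step concrete, the following sketch shows how a label probability image could be built from trained dictionaries; it is a minimal illustration in Python (NumPy/SciPy), not the authors' implementation, and the dictionaries are assumed to be given as matrices with one element per column.

    # Minimal sketch of the patch-based encoding described above.
    # D: intensity dictionary, L: label dictionary, one element per column.
    # k is the patch side length, so patch vectors have length k*k*3 (RGB).
    import numpy as np
    from scipy.spatial import cKDTree

    def encode_label_probabilities(image, D, L, k, n_labels):
        h, w, _ = image.shape
        prob = np.zeros((h, w, n_labels))
        counts = np.zeros((h, w, 1))
        tree = cKDTree(D.T)                      # nearest-neighbor index
        for y in range(h - k + 1):
            for x in range(w - k + 1):
                patch = image[y:y+k, x:x+k, :].ravel()
                _, j = tree.query(patch)         # nearest intensity element
                lbl = L[:, j].reshape(k, k, n_labels)
                prob[y:y+k, x:x+k, :] += lbl     # overlapping contributions
                counts[y:y+k, x:x+k, :] += 1
        return prob / counts                     # average per pixel

    # A hard segmentation is then prob.argmax(axis=2).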

Learning Dictionaries

The dictionary learning is based on a modified vector quantization approach. We want to choose an intensity dictionary D that models the data well, coupled with a discriminative label dictionary L. These dictionaries are obtained using an iterative clustering procedure, where the dictionary elements are built as weighted averages of sets of image patches.

The clustering procedure runs as follows: we select a training set of image patches with associated label patches, cf. [Figure 3]. From this set we select a subset of image patches as our initial intensity dictionary D and their associated label patches as the initial label dictionary L. We then iteratively update the dictionary elements using a weighted average of the nearest intensity patches in our training set. Patch weights are estimated according to the similarity between the ideal label element and the label patch. The ideal label element is a label element modified to have maximum discriminative power. If the ideal label element and the patch label are similar we assign a high weight, and if they are dissimilar the weight is low. In effect, this results in an intensity dictionary D with elements that are similar to the image and simultaneously a label dictionary L with high discriminative power. The precise details of the segmentation procedure are given in reference [13].
Figure 3: Illustration of building the dictionaries. (a) The training image at the top together with the manually annotated label image at the bottom. (b) A number of image patches have been extracted together with their associated label patches. (c) The image patches from (b) are grouped to form the intensity dictionary and the label dictionary. In this example there are two classes: brown nuclei and background

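A schematic version of one such clustering iteration is sketched below; the construction of the ideal label element and the weighting are simplified relative to reference [13] and should be read as an assumption-laden illustration rather than the exact procedure.

    # One weighted vector-quantization update (schematic).
    # X: training image patches (columns), Y: their label patches (columns).
    import numpy as np
    from scipy.spatial import cKDTree

    def update_dictionaries(D, L, X, Y, n_classes):
        assign = cKDTree(D.T).query(X.T)[1]      # nearest dictionary element
        for j in range(D.shape[1]):
            members = np.where(assign == j)[0]
            if members.size == 0:
                continue
            # Ideal label element: each pixel pushed to its dominant class.
            lbl = L[:, j].reshape(-1, n_classes)
            ideal = np.eye(n_classes)[lbl.argmax(axis=1)].ravel()
            # High weight when a patch's labels agree with the ideal label.
            w = np.exp(-np.sum((Y[:, members] - ideal[:, None])**2, axis=0))
            w /= w.sum()
            D[:, j] = X[:, members] @ w          # weighted average of patches
            L[:, j] = Y[:, members] @ w          # and of their label patches
        return D, L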


Training Data

A representative training set is necessary to build a good segmentation model. The training set should contain all relevant tissue types and structures. It is also important that the annotation covers the entire training image, because incorrectly labeled training data can weaken the segmentation. In general, however, we found our method to be very robust to noisy training data.

Nuclei Separation

Our images are acquired from very thin sections of the tissue sample. The sections are often thinner than the nuclei themselves, but despite these thin slices it is not uncommon for nuclei to overlap in the two-dimensional image. This often leads to merged nuclei regions in the segmentation output, as seen in [Figure 4]. Merging can also occur when nuclei are heavily clustered, as is often the case in tumor regions. To acquire a correct nuclei segmentation and count, merged nuclei therefore have to be separated.
Figure 4: Examples of images that are hard to separate: (left) nuclei overlapping due to occlusion and (right) nuclei clustering



One of the most widely used object separation techniques is the watershed algorithm. [7] The watershed algorithm operates on a marking function and a set of markers. A very common marking function is the complement of the distance transform, [7] using the regional minima of the inverse distance transform as markers. Using the watershed algorithm on the inverse distance transform is useful for separating touching objects that are approximately convex. However, due to small local variations in the distance transform, it often leads to over-segmentation, since there is seldom a one-to-one correspondence between the local minima and the nuclei.
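For reference, the standard construction reads roughly as follows with scikit-image; extracting markers from the regional maxima of the distance transform (equivalently, the regional minima of its complement) is the step that causes the over-segmentation discussed above.

    # Watershed on the complement of the distance transform (sketch).
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.feature import peak_local_max
    from skimage.segmentation import watershed

    def separate_binary(mask):
        dist = ndi.distance_transform_edt(mask)
        # Maxima of dist correspond to minima of the inverse transform.
        coords = peak_local_max(dist, labels=mask.astype(int),
                                footprint=np.ones((3, 3)))
        markers = np.zeros(mask.shape, dtype=int)
        markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
        return watershed(-dist, markers, mask=mask)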

To overcome this problem, two main strategies are found in the literature: region merging and marker-controlled watershed.

Region merging refers to techniques that merge over-segmented regions based on certain criteria. Common criteria are based on features such as border strength, size, or shape of the region. [8],[9],[10]

Marker-controlled watershed tries to optimize the regional minima so that each minimum corresponds to exactly one true region. This can be done by preprocessing the marking function, for example, using different morphological operations. [4],[11]

Our proposed method for separating merged nuclei combines region merging with a marker-controlled watershed and is based on the object separation algorithm presented by Jung and Kim. [4] This method uses the h-minima transform to remove local minima in the inverse distance transform before applying the watershed. However, as Jung and Kim point out, the empirical selection of the h-value often makes robust segmentation difficult. They therefore propose an adaptive h-value selection method that uses an optimal energy function, based on the hypothesis that a nucleus can be described by an ellipsoidal model. The h-minima transform, with h=1, and the watershed transform are applied repeatedly, and every time two regions merge, the energy function S is calculated. The final segmentation is chosen as the set of regions with the lowest S.

S is mainly based on the mean averaged fitting residual over all merged regions in a segmented object, where the averaged fitting residual r̄h, defined in Ref. [4], is calculated as

r̄h = (1/nh) Σi=1..nh r(Th,i(bh,i), Fh,i),

which is an average over the nh merged regions we get using the value h. Here r is the average distance between the affine transformation Th,i of the boundary points bh,i of region i and the best fitting ellipse Fh,i. Th,i maps the ellipsoidal model to the unit circle to compensate for nuclei having different sizes.

Apart from the averaged fitting residual, we have also introduced an area-based penalty function p̄h that adds a penalty to the segmentation distortion function if objects are very small or very large compared to the normal area of a nucleus. It is calculated as

p̄h = (1/nh) Σi p(ai), where p(ai) = 1 if ai > 2.5·as or ai < 0.6·as, and p(ai) = 0 otherwise,

where ai is the area of object i and as is the area of a standard nucleus. The values 2.5 and 0.6 above are empirically decided and as is estimated from the training data.

Finally, S is estimated as

S = r̄h + p̄h,

i.e., as the sum of the mean fitting residual and the area-based penalty.
S is calculated per object (single nucleus or merged nuclei) and therefore allows different h-values for different objects instead of using a global h-value. Algorithm 1 provides the pseudocode for our method.
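A compact way to express the loop is sketched below; it uses skimage's h_minima and watershed, applies a single global h per pass rather than the per-object bookkeeping of Algorithm 1, and takes the energy function S as a callable, so it is a simplified outline under those assumptions rather than a faithful transcription.

    # Adaptive h-minima separation (simplified outline of Algorithm 1).
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.morphology import h_minima
    from skimage.segmentation import watershed

    def adaptive_separation(mask, energy_S, h_max=20):
        dist = ndi.distance_transform_edt(mask)
        inv = dist.max() - dist                  # inverse distance transform
        best, best_S = None, np.inf
        for h in range(1, h_max + 1):
            markers = ndi.label(h_minima(inv, h))[0]
            labels = watershed(inv, markers, mask=mask)
            S = energy_S(labels)                 # residual + area penalty
            if S < best_S:
                best, best_S = labels, S
        return best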



The method proposed in reference [4] also introduces an over-segmentation criterion based on the outer angle of merged regions. The criterion rejects h-values that produce merged regions with too small an outer angle. The outer angle is the angle spanned by the circular sector, centered at the center of the best fitting ellipse, that covers the part of the border of the merged region which is also part of the border of the whole object.

While their over-segmentation criterion successfully identifies over-segmentation in many cases, it has problems in certain situations. One problem is that it rejects the h-value for the whole object, even though this h-value might be the best one for segmenting another part of the object. Another, more serious, problem arises for heavily clustered nuclei, where nuclei in the middle of the cluster have a very small or no outer angle. Using the outer angle criterion prevents these clusters from being separated, as it will keep rejecting h-values until the inner nucleus is merged with a neighbor.

Our proposed method instead uses region merging, where regions are merged using a size-based criterion. Before calculating the optimal energy function, we identify all regions that are very small. Each identified region is then merged with the neighboring region with which it shares the longest common border, and S is calculated on the merged regions. This way, over-segmentation is still identified, but no h-values are rejected and neither are nuclei in the middle of clusters.
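A sketch of this merging step is given below; the area threshold is an assumed parameter, since the exact cut-off for "very small" regions is not specified here.

    # Merge small regions into the neighbor sharing the longest border.
    import numpy as np
    from scipy import ndimage as ndi

    def merge_small_regions(labels, min_area):
        for r in np.unique(labels):
            if r == 0 or np.sum(labels == r) >= min_area:
                continue
            # Pixels just outside region r that belong to another region.
            ring = ndi.binary_dilation(labels == r) & (labels != r) & (labels > 0)
            neighbors, counts = np.unique(labels[ring], return_counts=True)
            if neighbors.size:                   # longest common border wins
                labels[labels == r] = neighbors[np.argmax(counts)]
        return labels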

Apart from adding merging of small regions, we also introduce a probability-weighted distance transform that replaces the binary distance transform that the watershed operates on. One major drawback of the binary distance transform is that it only considers the binary object. It is very good at separating objects with a clear concavity where the objects merge. However, when there is no such concavity, it will never separate the objects, no matter how clear the border between them is in other features, such as intensity. To overcome this problem, we combine the ordinary binary distance transform with the probability map from the segmentation. The idea is that pixels between nuclei should have a lower value in the probability map than pixels within the nuclei. Hence, by combining the distance transform with the probability map, we can suppress the values of the distance transform where the nucleus probability is low and keep the values where it is high. This way, the good properties of the distance transform are kept while the good properties of the probability map are added.

Combining gray values, or fuzzy values, with the distance transform is not a new idea. Several methods have been presented in the past, such as the gray-weighted distance transform (GWDT) [14] and the weighted distance transform on curved spaces (WDTOCS). [15] A comparison of the two methods [16] concludes that GWDT follows low gray values, whereas WDTOCS minimizes the changes in gray level values. In our data it makes more sense to follow low gray values, which we have also verified experimentally.
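One simple realization of the combination described above, assuming a multiplicative weighting (the exact weighting scheme is not reproduced here), is:

    # Probability-weighted distance transform (illustrative combination).
    from scipy import ndimage as ndi

    def weighted_distance_transform(mask, prob_map):
        dist = ndi.distance_transform_edt(mask)
        return dist * prob_map    # low nucleus probability suppresses dist

The weighted transform is then complemented and fed to the watershed in the same way as the binary transform above.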

Data and Results

Our analysis on segmenting positive nuclei is aimed at demonstrating the performance of our segmentation procedure and our nuclei separation approach. We validate our nuclei separation method by comparing segmentation results to hand annotated ground truth images. Segmentation of regions of cancerous tissue in whole slides is shown to illustrate the flexibility of our segmentation procedure.

A number of different ways to quantitatively measure the performance of our algorithm exist, such as the number of correctly segmented nuclei or the size and shape of the missed nuclei. [4] The ground truth of our images is an approximate manual annotation of the nucleus centers. The measure we have used is an error rate based on the sum of the number of annotations without a corresponding segmented object and the number of segmented objects without a corresponding annotation.
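In code, this measure amounts to a simple matching between annotated centers and labeled objects; the sketch below matches by point containment, which is an assumption of this illustration, and also counts the multiple-detection case described in the Results section.

    # Error count: unmatched annotations + unmatched objects + multiples.
    import numpy as np
    from collections import Counter

    def count_errors(labels, centers):
        hits = Counter()
        missing_object = 0
        for r, c in centers:
            obj = labels[r, c]
            if obj == 0:
                missing_object += 1              # annotation with no object
            else:
                hits[obj] += 1
        objects = set(np.unique(labels)) - {0}
        missing_annotation = len(objects - set(hits))
        multiple = sum(n - 1 for n in hits.values() if n > 1)
        return missing_object + missing_annotation + multiple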

Data

We have based our experiments on two sets of immunohistochemically stained images - one set based on the estrogen receptor (ER) and the other on the antigen KI-67. In the segmentation experiment we use the original images. For the nuclei separation we have selected 207 ER image samples from 58 tissue microarray cores corresponding to 58 patients, and 136 KI-67 image samples, also from 58 cores. Each image sample is about 100-200 pixels in width and height. The images are sampled to be challenging for the algorithm. We have hand annotated the images by marking the center position of each nucleus, and we do not include nuclei that touch the image border. For the ER data we have a total of 1006 nuclei and for the KI-67 data we have 796 nuclei. Data examples are shown in [Figure 5]. For all nuclei separation experiments we used 9×9 image patches in the segmentation algorithm.
Figure 5: Example of data for the nuclei separation experiment. The left three images are KI-67 immunostained and the right three are ER stained. The first image is the immunostained image, the second is the probability map - bright colors indicating high nuclei probability - and the third image is the nuclei separation with the ground truth marked with red points. Image patches (9 by 9 pixels) were used for the segmentation. Note that both examples have an error with one nucleus not being separated




Results


We have made a quantitative analysis of nuclei segmentation and separation. We have evaluated five combinations of methods on each of the two sets of images. The baseline method is a standard Bayesian classifier for segmentation based on the RGB color representation, followed by a standard watershed nuclei separation (VisioMorph, Visiopharm A/S, http://www.visiopharm.com/ ). In the rest of the experiments we have employed the segmentation procedure we propose. In the second experiment we also use the watershed, so the gain in performance from experiment #1 to #2 is caused solely by our segmentation procedure. The third experiment is the competing method of Jung and Kim, [4] and the last two are our suggestions - #4 with the standard distance transform and #5 with the weighted distance transform (GWDT).

The results from the nuclei segmentation and separation experiments are summarized in [Table 1], which shows error rates for nuclei across all images. The error rates are divided into three types of errors: missing object (no nucleus was segmented where there was supposed to be one), missing annotation (a nucleus was segmented where there was no real nucleus), and multiple annotations (multiple real nuclei in one segmented object, i.e., where separation failed). [Table 1] also shows the mean number of errors per image, regardless of error type.
Table 1: Results for our nuclei separation experiment



We have also made a statistical analysis of the number of errors per image, regardless of error type. This is shown in [Figure 7], a box-plot showing the median, the 25th and 75th percentiles, and outlier values, as well as the mean error per image from [Table 1]. The statistical analysis also includes a significance test, shown in [Figure 8], where we have used the Wilcoxon rank-sum test to assess the significance of the differences across experiments.
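The test itself is a one-line call in SciPy; errs_a and errs_b below are hypothetical per-image error counts for two methods.

    # Wilcoxon rank-sum test on per-image error counts (hypothetical data).
    from scipy.stats import ranksums

    errs_a = [0, 1, 0, 2, 0, 0, 1]   # method A, errors per image
    errs_b = [1, 2, 1, 3, 0, 2, 1]   # method B, errors per image
    stat, p = ranksums(errs_a, errs_b)
    print(f"rank-sum statistic {stat:.2f}, p = {p:.3f}")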

The proposed segmentation method has been evaluated qualitatively on tumor tissue and the results are shown in [Figure 6].
Figure 6: Segmentation of tumor tissue. (Top) microscopic cores and (bottom) segmentation results with white pixels classified as tumor tissue. We used 5 by 5 pixel image patches and trained a dictionary for each type of image - we used one dictionary for the first two images from the left, a second for the third image, and a third dictionary for the last two images



The results from the nuclei segmentation experiments are summarized in [Table 1], [Figure 7], and [Figure 8]. In both experiments our proposed method, evaluated in experiment #5, provides better results than the other methods. The Wilcoxon rank-sum test in [Figure 8] shows that it is significantly better than experiments #1, #2, and #3 for the ER image set, and experiments #1 and #2 for the KI-67 image set. However, the difference between the best performing methods is not statistically significant. From the box-plot in [Figure 7] we can see that the median of most experiments is 0 errors per image, but the values of the extreme outliers decrease for the better performing methods.
Figure 7: Box-plot of the number of errors per image. (Left) ER experiment, (right) KI-67 experiment. On each box, the central red mark is the median, the edges of the box are the 25th and 75th percentiles, the whiskers extend to the most extreme data points not considered outliers, and outliers are plotted individually as red points. Also shown is the mean as blue plus signs, which can also be seen numerically in Table 1

Figure 8: P-values from the Wilcoxon rank-sum test that tests significance for difference in error rates per image between the experiments. (Left) ER-experiment, (right) KI-67 experiment



The goal of this manuscript is to show how the proposed method can be used for segmenting different structures in histopathological images. We have focused on experiments where we detect positive nuclei for two different markers. However, for the method to be useful in histopathological applications, negative nuclei typically have to be segmented as well. We have not done any quantitative or qualitative experiments on segmenting both positive and negative nuclei, but the segmentation method supports two or more classes, and we have performed a few tests to show that this is possible. The output of one of the tests can be seen in [Figure 9]. A more thorough analysis of the method's ability to segment positive and negative nuclei is part of our future work.
Figure 9: Segmentation of positive and negative nuclei with positive nuclei labeled white and negative nuclei gray, (left) original images, (right) segmentation




Discussion


We have presented a method for analysis of immunohistochemically stained tissue samples based on a highly flexible segmentation procedure and a novel nuclei separation method. The nuclei separation method is based on shape, size, and nucleus probability. We have performed two experiments to validate our method - a tissue segmentation experiment that illustrates the flexibility of our segmentation procedure, and a nuclei separation experiment, where we compare the segmentation to hand annotated images with a total of 1802 nuclei. We obtain better performance on both binary segmentation and nuclei separation compared to the state of the art, although the improvement is not statistically significant.

We employ images based on two common markers within breast cancer diagnosis, one based on the estrogen receptor (ER) and one based on the KI-67 antigen. The ER images are relatively uniform in appearance, whereas the KI-67 images vary significantly, with the stained nuclei ranging from pale brown to dark brown. This is also the reason why the results of the ER experiment are better than those of the KI-67 experiment.

Our algorithm is based on learning the local appearance of the images that we are segmenting, and we use no prior assumption about the number of nuclei. As a result we obtain a flexible and robust model that works well independently of the density and distribution of stained nuclei. If, for example, there are no brown nuclei, it will simply return an empty segmentation. Furthermore, the algorithm can be trained to segment two or more tissue types, which is relevant, for example, for quantifying the ratio between positive and negative nuclei.

The performance gain of our nuclei separation method is, however, not statistically significant, although it is very close. Ideally, a larger dataset could show whether there really is a significant difference, but annotating data is time consuming and we settled for the 1802 hand annotated nuclei. A related problem is that there are, to our knowledge, no standard benchmark datasets available, which could otherwise provide a better comparison.

In addition to efforts in the academic sector, a number of commercial products exist that already claim some success in the area of cell nuclei segmentation. These include Aperio Genie™ ( http://www.aperio.com/ ), CRI inForm™ ( http://www.cri-inc.com/ ), Definiens Tissue Studio™ ( http://www.tissuestudio.com/ ), and Visiopharms VisioMorph™ ( http://www.visiopharm.com/ ).

The only one of these products we have access to is VisioMorph, and the method it uses closely matches the first experiment in our study: it is based on a Bayesian classifier and a watershed algorithm for separating nuclei. A version of our proposed method is implemented in TissueMorph™. We do not have access to the other commercial products, and therefore we have not been able to make a comparative study. Again, a standard benchmark with reported performance for these commercial products would be beneficial.

The purpose of automated analysis of immunostained histopathological images is to infer information about diagnosis, disease, and potential development of the disease and ultimately how the patient can be cured. Current methods often follow the methodology of the pathologist. Using automated techniques allows a much higher degree of detail in the analysis, and in addition a much larger volume of data can be analyzed. Based on our flexible segmentation and nuclei separation technique it is possible to extract much of this information in a precise and robust manner.


Conclusion


We have addressed the problem of segmenting and quantifying tissue in microscopic images of immunohistochemically stained tissue samples. We employ a recently published segmentation procedure coupled with a nuclei separation method based on the h-minima transform. We demonstrate our method on a data set with 1802 hand annotated nuclei, and obtain good performance compared to state of the art nuclei separation. Our segmentation procedure is simple, highly flexible, and we demonstrate how it, in addition to the nuclei separation, can perform precise segmentation of cancerous tissue. The complexity of the segmentation procedure is linear in the image size and the nuclei separation is linear in the number of nuclei. Additionally the method can be parallelized to obtain high-speed computations.


Acknowledgment


The work was partly financed by NordForsk, Visiopharm, and the Centre for Imaging Food Quality project, which is funded by the Danish Council for Strategic Research (contract no. 09067039) within the Program Commission on Health, Food and Welfare. Furthermore, we would like to thank Visiopharm A/S for making data available for our experiments.

 
References

1. Demir C, Yener B. Automated cancer diagnosis based on histopathological images: A systematic survey. Technical Report. New York: Rensselaer Polytechnic Institute; 2005.
2. Gurcan MN, Boucheron LE, Can A, Madabhushi A, Rajpoot NM, Yener B. Histopathological image analysis: A review. IEEE Rev Biomed Eng 2009;2:147-71.
3. Al-Kofahi Y, Lassoued W, Lee W, Roysam B. Improved automatic detection and segmentation of cell nuclei in histopathology images. IEEE Trans Biomed Eng 2010;57:841-52.
4. Jung C, Kim C. Segmenting clustered nuclei using h-minima transform-based marker extraction and contour parameterization. IEEE Trans Biomed Eng 2010;57:2600-4.
5. Arif M, Rajpoot N. Classification of potential nuclei in prostate histology images using shape manifold learning. In: International Conference on Machine Vision (ICMV); 2007.
6. Boucheron LE. Object- and spatial-level quantitative analysis of multispectral histopathology images for detection and characterization of cancer. PhD thesis. University of California, Santa Barbara; 2008.
7. Meyer F. Topographic distance and watershed lines. Signal Processing 1994;38:113-25.
8. Umesh Adiga PS, Chaudhuri BB. An efficient method based on watershed and rule-based merging for segmentation of 3-D histopathological images. Pattern Recognition 2001;34:1449-58.
9. Chen X, Zhou X, Wong ST. Automated segmentation, classification, and tracking of cancer cell nuclei in time-lapse microscopy. IEEE Trans Biomed Eng 2006;53:762-6.
10. Wählby C, Sintorn IM, Erlandsson F, Borgefors G, Bengtsson E. Combining intensity, edge and shape information for 2D and 3D segmentation of cell nuclei in tissue sections. J Microsc 2004;215:67-76.
11. Cheng J, Rajapakse JC. Segmentation of clustered nuclei with shape markers and marking function. IEEE Trans Biomed Eng 2009;56:741-8.
12. Veta M, Huisman A, Viergever MA, van Diest PJ, Pluim JP. Marker-controlled watershed segmentation of nuclei in H&E stained breast cancer biopsy images. In: 2011 IEEE International Symposium on Biomedical Imaging: From Nano to Macro, Chicago, IL, USA; 2011.
13. Dahl AL, Larsen R. Learning dictionaries of discriminative image patches. In: British Machine Vision Conference (BMVC); 2011.
14. Rutovitz D. Data structures for operations on digital images. In: Cheng GC, et al., editors. Pictorial Pattern Recognition; 1968.
15. Toivanen PJ. New geodesic distance transforms for gray-scale images. Pattern Recogn Lett 1996;17:437-50.
16. Fouard C, Gedda M. An objective comparison between gray weighted distance transforms and weighted distance transforms on curved spaces. In: Kuba A, Nyúl LG, Palágyi K, editors. Discrete Geometry for Computer Imagery. Berlin, Heidelberg: Springer; 2006.


