

RESEARCH ARTICLE 



J Pathol Inform 2019;10:30
Statistical analysis of survival models using feature quantification on prostate cancer histopathological images
Jian Ren^{1}, Eric A Singer^{2}, Evita Sadimin^{3}, David J Foran^{4}, Xin Qi^{4}
^{1} Department of Electrical and Computer Engineering, Rutgers University, Piscataway, NJ, USA ^{2} Department of Pathology and Laboratory Medicine, Section of Urologic Oncology; Center for Biomedical Imaging and Informatics, Rutgers Cancer Institute of New Jersey, New Brunswick, NJ, USA ^{3} Department of Pathology and Laboratory Medicine, Section of Urologic Oncology, Rutgers Cancer Institute of New Jersey, New Brunswick, NJ, USA ^{4} Center for Biomedical Imaging and Informatics, Rutgers Cancer Institute of New Jersey, New Brunswick, NJ, USA
Date of Submission: 06-Nov-2018
Date of Acceptance: 14-Jun-2019
Date of Web Publication: 27-Sep-2019
Correspondence Address: Dr. Xin Qi Center for Biomedical Imaging and Informatics, Rutgers Cancer Institute of New Jersey, New Brunswick, NJ USA
Source of Support: None, Conflict of Interest: None
DOI: 10.4103/jpi.jpi_85_18
Abstract   
Background: Grading of prostatic adenocarcinoma is based on the Gleason scoring system and the more recently established prognostic grade groups. Typically, prostate cancer grading is performed by pathologists based on the morphology of the tumor on hematoxylin and eosin (H and E) slides. In this study, we investigated histopathological image features with various survival models and attempted to study their correlations. Methods: Three texture methods (speeded-up robust features, histogram of oriented gradients, and local binary pattern) and two convolutional neural network (CNN)-based methods were applied to quantify histopathological image features. Five survival models were assessed on those image features in the context of other prostate clinical prognostic factors, including primary and secondary Gleason patterns, prostate-specific antigen levels, age, and clinical tumor stage. Results: Based on statistical comparisons among different image features with survival models, image features from the CNN-based method combined with a recurrent neural network, called CNN long short-term memory (CNN + LSTM), provided the highest hazard ratio of prostate cancer recurrence under Cox regression with an elastic net penalty. Conclusions: This approach outperformed the other image quantification methods listed above. Using this approach, patient outcomes were highly correlated with the histopathological image features of the tissue samples. In future studies, we plan to investigate the potential use of this approach for predicting recurrence in a wider range of cancer types.
Keywords: Histopathological image, image features, neural networks, prostate cancer, survival models
How to cite this article: Ren J, Singer EA, Sadimin E, Foran DJ, Qi X. Statistical analysis of survival models using feature quantification on prostate cancer histopathological images. J Pathol Inform 2019;10:30 
Introduction   
Survival analysis predicts patient outcomes and provides invaluable information for selecting treatment, and predicting prostate cancer survival outcomes is a significant challenge. Following radical prostatectomy, men must be closely monitored for evidence of recurrence, typically via prostate-specific antigen (PSA) blood tests. A detectable or rising PSA after surgery is evidence of biochemical recurrence, and the time from surgery to biochemical recurrence is the biochemical recurrence-free survival (bRFS). Multiple studies have examined predictors of bRFS using quantitative histopathological features with survival models.^{[1],[2],[3],[4]} Numerous prediction tools^{[5],[6],[7],[8],[9],[10],[11]} have utilized whole-slide images (WSIs) to assess prostate cancer recurrence and predict the likely outcomes of treatment. However, few of these studies simultaneously considered clinical factors (primary and secondary Gleason patterns, PSA value, age, and tumor stage) and tissue WSIs to correlate with recurrence under different survival models.
The Gleason scoring system for prostate cancer remains one of the best predictors of prostate cancer progression and recurrence,^{[12],[13],[14],[15]} despite limited interobserver reproducibility among pathologists.^{[16],[17],[18]} A more recently adopted grading system stratifies patients into five prognostic grade groups^{[19]} based on their Gleason patterns: Grade Group 1 (Gleason ≤ 3 + 3 = 6), Grade Group 2 (Gleason 3 + 4 = 7), Grade Group 3 (Gleason 4 + 3 = 7), Grade Group 4 (Gleason 4 + 4 = 8, 3 + 5 = 8, and 5 + 3 = 8), and Grade Group 5 (Gleason 4 + 5 = 9, 5 + 4 = 9, and 5 + 5 = 10). [Figure 1] shows an example gigapixel WSI with different Gleason patterns: the green-framed patch contains Gleason pattern 3; the blue-framed patch contains Gleason pattern 4; and the red-framed patch contains Gleason pattern 5. In this study, we conducted experiments on a public prostate cancer dataset using different feature quantification methods and recurrence analysis using different survival models. Histopathological image features were quantified through texture methods and neural network-based approaches. We focused on prostate cancer Grade Groups 1–4. The bRFS was used as the time to recurrence for prostate cancer progression analysis.  Figure 1: Example gigapixel whole-slide image with different Gleason patterns. The green-framed patch contains Gleason pattern 3; the blue-framed patch contains Gleason pattern 4; and the red-framed patch contains Gleason pattern 5
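The grade-group stratification above is a deterministic mapping from the primary and secondary Gleason patterns; a minimal sketch (the function name and language are ours, not the paper's):

```python
def grade_group(primary: int, secondary: int) -> int:
    """Map primary/secondary Gleason patterns to the five prognostic grade groups."""
    score = primary + secondary
    if score <= 6:
        return 1
    if score == 7:
        return 2 if primary == 3 else 3  # 3+4 -> Group 2, 4+3 -> Group 3
    if score == 8:
        return 4  # covers 4+4, 3+5, and 5+3
    return 5      # scores 9 and 10

print(grade_group(3, 4))  # 2
```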
Materials and Methods   
Materials
In this study, we used the prostate dataset from the Genomic Data Commons (GDC).^{[20]} The dataset included whole-slide histopathological images from patients and their corresponding clinical reports, including the primary and secondary Gleason patterns, patients' PSA values, age, and tumor stage. All the image data, annotations of Gleason score, and clinical information were publicly available.
We selected patients with low-risk (Gleason score 3 + 3), intermediate-risk (Gleason score 3 + 4 or 4 + 3), and high-risk prostate cancer (Gleason score 4 + 4) because those patient populations show a reasonable range of prognoses for our analysis. We excluded Grade Group 5 patients from this study due to the poor prognosis of their disease.^{[21]} Considering the high computational cost of gigapixel tissue WSIs, existing WSI classification and recurrence analysis approaches have focused on effectively utilizing patches cropped from regions of interest.^{[22],[23],[24],[25],[26],[27]} For image preparation, we adopted a two-step cropping-selecting process. First, original patches were automatically generated within each WSI at ×40 magnification with a patch size of 4096 × 4096. Second, the patches in which tissue accounted for at least 20% of the whole area were selected for our experiments. The number of WSIs and cropped patches under different Gleason scores is shown in [Table 1].  Table 1: The number of whole-slide images and their corresponding automatically selected patches under different Gleason scores, comprising Gleason patterns 3+3, 3+4, 4+3, and 4+4 (prostate prognostic Grade Groups 1–4)
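The patch-selection step can be sketched as follows; the simple white-background intensity threshold used here to detect tissue is our assumption, since the paper specifies only the 20% tissue-area criterion:

```python
import numpy as np

def select_patches(patches, tissue_fraction=0.20, background_level=220):
    """Keep patches whose tissue area covers at least `tissue_fraction` of the patch.

    A pixel counts as tissue when its mean intensity falls below
    `background_level` (a simple white-background threshold; the paper does
    not specify its exact tissue detector, so this is an assumption)."""
    kept = []
    for patch in patches:                      # each patch: H x W x 3 uint8 array
        gray = patch.mean(axis=2)              # crude grayscale conversion
        tissue = (gray < background_level).mean()
        if tissue >= tissue_fraction:
            kept.append(patch)
    return kept

# toy example: one mostly-white (background) patch, one stained patch
white = np.full((64, 64, 3), 255, dtype=np.uint8)
stained = np.full((64, 64, 3), 120, dtype=np.uint8)
print(len(select_patches([white, stained])))  # 1
```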
Methods
Initially, we utilized various quantification methods to extract image features from WSIs. Next, recurrence analysis was performed on the combination of image features and clinical factors utilizing different survival models, as shown in [Figure 2]. Hazard ratios under the different survival models were calculated to indicate the correlation between the image features (alone or in the context of clinical factors) and recurrence; the higher the hazard ratio, the higher the correlation.  Figure 2: Outline of image feature quantification from whole-slide images, assessed by various survival models
Image feature quantification
We adopted five approaches for feature quantification, including unsupervised and supervised methods. The unsupervised texture methods consisted of speeded-up robust features (SURF),^{[28]} histogram of oriented gradients (HOG),^{[29]} and local binary pattern (LBP).^{[30]} The two supervised methods are based on convolutional neural networks (CNNs). For the supervised methods, we randomly selected 20% of the cases as the testing set, 10% as the validation set, and the remainder as the training set.
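A case-level random split along these proportions might look like the sketch below (the seed and function name are illustrative; splitting at case level keeps all patches of a slide on one side of the split):

```python
import random

def split_cases(case_ids, test_frac=0.20, val_frac=0.10, seed=0):
    """Randomly split case identifiers into train/validation/test sets."""
    ids = list(case_ids)
    random.Random(seed).shuffle(ids)
    n_test = int(round(test_frac * len(ids)))
    n_val = int(round(val_frac * len(ids)))
    # first n_test ids -> test, next n_val -> validation, rest -> training
    return ids[n_test + n_val:], ids[n_test:n_test + n_val], ids[:n_test]

train, val, test = split_cases(range(100))
print(len(train), len(val), len(test))  # 70 10 20
```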
Texture features
We chose three texture methods for prostate cancer histopathological image analysis. They are rotation-, translation-, scale-, and intensity-invariant, which makes them suitable for describing the texture features within WSIs.
SURF^{[28]} is partly inspired by the scale-invariant feature transform (SIFT) descriptors. The standard version of SURF is several times faster than SIFT and more robust against different image transformations. The image is transformed into a multiresolution representation using the pyramid technique (a pyramidal Gaussian or Laplacian decomposition), producing copies of the original image at the same size but with reduced bandwidth. HOG^{[29]} counts occurrences of gradient orientations in a local region of an image. It is similar to edge-orientation histograms, SIFT descriptors, and shape contexts but differs in that it is computed on a dense grid of uniformly spaced cells and uses overlapping local contrast normalization for improved accuracy. LBP^{[30]} models local image features as texture spectrum units in a multiresolution grayscale mode. It is based on recognizing local binary unit patterns for any quantization of the angular space and spatial resolution.
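For illustration, the basic 8-neighbour LBP code (the simple, non rotation-invariant variant; the multiresolution version cited above extends this idea) can be computed as:

```python
def lbp_code(img, y, x):
    """8-neighbour local binary pattern code for pixel (y, x).

    Each neighbour whose value is >= the centre contributes one bit;
    this is the basic variant of LBP, shown for illustration only."""
    center = img[y][x]
    neighbours = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                  (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dy, dx) in enumerate(neighbours):
        if img[y + dy][x + dx] >= center:
            code |= 1 << bit
    return code

img = [[10, 10, 10],
       [10, 20, 10],
       [10, 10, 10]]
print(lbp_code(img, 1, 1))  # 0: no neighbour reaches the bright centre
```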
The image features for each patch were generated using a bag-of-words approach^{[31]} over the texture features of the different texture methods. By treating local image features as words, a bag of words is a sparse vector of occurrence counts (a histogram) over a vocabulary of local image features. The approach converts the vector-represented texture features into codewords, which together form a codebook. The texture features are mapped to codewords through a clustering process, and the image is then represented by the histogram of its codewords. Empirically, 100 cluster centers gave the best performance for the texture features. To obtain texture features for whole WSIs, we applied principal component analysis (PCA)^{[32]} to the image features of all patches within a WSI, owing to correlations among the patches.
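The codeword-assignment step of the bag-of-words pipeline can be sketched as below; the toy 2-codeword codebook stands in for the 100 k-means cluster centers used in the study:

```python
import math

def bow_histogram(features, codebook):
    """Map each local descriptor to its nearest codeword and return the
    normalised codeword histogram (the bag-of-words image feature)."""
    hist = [0] * len(codebook)
    for f in features:
        dists = [math.dist(f, c) for c in codebook]   # Euclidean distance
        hist[dists.index(min(dists))] += 1            # assign nearest codeword
    total = sum(hist)
    return [h / total for h in hist]

codebook = [[0.0, 0.0], [1.0, 1.0]]                   # 2 illustrative centres
feats = [[0.1, 0.0], [0.9, 1.1], [1.0, 0.8], [0.2, 0.1]]
print(bow_histogram(feats, codebook))  # [0.5, 0.5]
```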
Convolutional neural networkbased features
In recent years, with the advances of deep learning, studies using CNNs have demonstrated significant improvements in histopathological image classification^{[27],[33],[34],[35],[36]} and segmentation.^{[33],[34],[37],[38]} For WSIs, applications based on CNNs have been widely developed.^{[39],[40],[41]} In our study, we adopted two approaches to obtain CNN-based features. The first used a neural network to obtain image features for each patch; the features for WSIs were then obtained by applying PCA over all patches. The CNN employed in the study is shown in [Table 2]. The input to the network was the patches cropped from prostate pathological WSIs. The activations from the second-to-last layer were taken as the image features of the input samples. To train the network with patches, we assigned the Gleason pattern as the ground-truth annotation for each patch. The GDC WSIs had previously been graded with the primary and secondary patterns, as well as the final Gleason score.
To model variations among Gleason patterns within a WSI, we used a multitask architecture to enable the network to learn as much information about the Gleason patterns from the patches of a WSI as possible. During the training process, we assigned the primary pattern and the sum of the primary and secondary patterns (the Gleason score) as labels for each patch and used the following multitask loss function:

L = −(1/N) Σ_{i=1}^{N} [y_{i}^{(p)} · log ŷ_{i}^{(p)} + y_{i}^{(s)} · log ŷ_{i}^{(s)}]

where, for the i^{th} image within the batch of N images, y_{i}^{(p)} and y_{i}^{(s)} are the one-hot encodings of the Gleason grading for the primary pattern and the sum score, and ŷ_{i}^{(p)} and ŷ_{i}^{(s)} are the predicted gradings of the model. One-hot encoding is a process by which categorical variables are converted into a form that can be provided to a CNN for classification. The results suggested that using the primary Gleason pattern and the Gleason score together achieved the best estimate of the risk of recurrence by capturing the local and global image feature distributions more efficiently than using either one alone.
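A sketch of the multitask objective for a single patch, assuming equal weighting of the two cross-entropy terms (the paper does not state the relative weights):

```python
import math

def multitask_loss(primary_logp, score_logp, primary_label, score_label):
    """Sum of two cross-entropy terms: one over the primary Gleason pattern,
    one over the Gleason score. Equal weighting of the two tasks is our
    assumption for illustration."""
    return -(primary_logp[primary_label] + score_logp[score_label])

# toy log-probabilities over patterns {3, 4, 5} and scores {6, 7, 8}
p = [math.log(0.7), math.log(0.2), math.log(0.1)]
s = [math.log(0.1), math.log(0.8), math.log(0.1)]
loss = multitask_loss(p, s, primary_label=0, score_label=1)
print(round(loss, 4))  # -(ln 0.7 + ln 0.8) = 0.5798
```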
For the second approach, we treated the cropped patches from the WSI as an image sequence and used one type of recurrent neural network (RNN), called long short-term memory (LSTM), to explore the long-term dynamic information of the patch spatial sequence within the WSI. We denote this method CNN features with LSTM (CNN + LSTM). The LSTM can fully leverage the patch spatial sequence within a WSI to obtain representative features that model the global Gleason score of the WSI and the distribution of Gleason patterns across the WSI. Recently, the LSTM model has been successfully used in speech recognition,^{[42],[43]} language translation,^{[44]} image captioning,^{[45]} and video classification.^{[46]} Compared with traditional RNNs, the LSTM is more effective at long-range and short-term spatial sequence modeling. In general, given an input feature sequence (x_{1}, x_{2},…, x_{T}), the LSTM produces the output sequence (y_{1}, y_{2},…, y_{T}). The hidden layer of the LSTM is computed recursively from t = 1 to t = T with the following equations:

i_{t} = σ(W_{xi}x_{t} + W_{hi}h_{t − 1} + b_{i})
f_{t} = σ(W_{xf}x_{t} + W_{hf}h_{t − 1} + b_{f})
c_{t} = f_{t} ⊙ c_{t − 1} + i_{t} ⊙ tanh(W_{xc}x_{t} + W_{hc}h_{t − 1} + b_{c})
o_{t} = σ(W_{xo}x_{t} + W_{ho}h_{t − 1} + b_{o})
h_{t} = o_{t} ⊙ tanh(c_{t})
where x_{t} is the network activation of the t^{th} patch; h_{t} is the hidden vector; i_{t}, c_{t}, f_{t}, and o_{t} are, respectively, the activation vectors of the input gate, memory cell, forget gate, and output gate; the W terms denote the weight matrices connecting different units; the b terms denote the bias vectors; and σ is the logistic sigmoid function. From the above equations, we can see that the memory cell c_{t} in the LSTM has two inputs: the weighted sum of the current inputs and the previous memory cell unit c_{t − 1}, which enables the model to learn when to forget old information and when to take in new information. The output gate o_{t} controls the propagation of information to the following step.
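One recursive step of these gate equations might be implemented as follows; the packed weight-matrix layout over the concatenated input and hidden state is an implementation convenience, not something specified in the paper:

```python
import numpy as np

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM step over the gate equations: input, forget, and output
    gates plus the cell candidate, computed from packed pre-activations."""
    z = W @ np.concatenate([x, h_prev]) + b   # all four pre-activations at once
    n = len(h_prev)
    sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
    i, f, o = sigmoid(z[:n]), sigmoid(z[n:2 * n]), sigmoid(z[2 * n:3 * n])
    g = np.tanh(z[3 * n:])                    # cell candidate
    c = f * c_prev + i * g                    # forget old, admit new information
    h = o * np.tanh(c)                        # hidden state gated by output gate
    return h, c

rng = np.random.default_rng(0)
n_in, n_hid = 4, 3
W = rng.standard_normal((4 * n_hid, n_in + n_hid)) * 0.1
b = np.zeros(4 * n_hid)
h = c = np.zeros(n_hid)
for x in rng.standard_normal((5, n_in)):      # a length-5 "patch sequence"
    h, c = lstm_step(x, h, c, W, b)
print(h.shape)  # (3,)
```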
Since we utilized spatially encoded features from the CNN, the LSTM training over patches within WSIs was formed as a spatial sequence rather than a time sequence. As shown in [Figure 3], we used the image coordinates to indicate the location of each patch in the patch spatial sequence. In this way, we considered both the unique characteristics of each patch and the fine-grained variations between patches. For one prostate WSI, the patches were fed into the network to obtain the activations from the second-to-last layer. We then utilized a one-layer LSTM to recursively map the activations of each patch to a feature vector. An average pooling layer was applied on top of the network to obtain a single feature vector as the computational image features for the WSI. The number of hidden units for each LSTM is 1024. During the training process, we applied the multitask loss and assigned the primary pattern and the Gleason score as labels for the WSIs.  Figure 3: The multitask neural network architecture for computational image feature extraction from whole-slide images. The cropped patches are formed into a sequence by their image coordinates. The long short-term memory is built on top of the convolutional neural network for long-term spatial modeling of the activation sequence. An average pooling layer maps the activations into one feature vector
Survival models
To evaluate the performance of various survival models using the different image features quantified by the textural and CNN-based methods on patients with prostate cancer, we used the bRFS since initial treatment as the time-to-recurrence variable for the survival models. Using the survival models, we assessed the image features' relation to recurrence hazard risk scores in the context of other clinical prognostic factors, including the primary and the secondary Gleason patterns, PSA, age, and clinical tumor stage.
The hazard risk scores of image features in the context of clinical factors measure the prostate cancer recurrence risk ratio, as commonly used in time-to-event or survival analysis. The survival models evaluated in our study include the multivariate Cox proportional-hazards model,^{[47]} Cox regression with an elastic net penalty (COXEN),^{[48]} the parametric proportional-hazard model with exponential distance (PHEX),^{[49]} the parametric proportional-hazard model with log-normal distance (PHLogN),^{[49]} and the parametric proportional-hazard model with log-logistic distance (PHLogL).^{[49]}
For the high-dimensional data, univariate Cox regression was first applied to the computational image features; only those with Wald test P < 0.05 were selected, in conjunction with the clinical factors, as inputs to the survival models. The Cox proportional-hazards model is a popular regression model for the analysis of survival data. It is a semiparametric method for adjusting survival rate estimates to quantify the effect of predictor variables. In contrast with parametric models, it makes no assumptions about the shape of the so-called baseline hazard function; it represents the effects of explanatory variables as a multiplier of a common baseline hazard function H_{0}. Given the patients (t_{i}, l_{i}, X_{i}), where i = 1, 2, …, N, t_{i} is the recurrence time for individual i; l_{i} is the censoring label, which equals 1 if recurrence occurred at that time and 0 if the patient was censored; and X_{i} is the vector of covariates comprising the selected image features and clinical factors.
The hazard function combines the nonparametric baseline with a parametric function of the covariates:

H(t | X_{i}) = H_{0}(t) exp(Σ_{j=1}^{p} β_{j}x_{ij})

Here, x_{ij} is image feature j for patient i, where j = 1, 2, …, p, and β_{j} is the Cox regression coefficient for covariate j.

The hazard ratio is derived from exp(Σ_{j=1}^{p} β_{j}x_{ij}), representing the relative risk of instant failure for patients having the predictive values X_{i} compared to those having the baseline values. Here, d_{i} is the weighting parameter for each patient.
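The hazard-ratio computation reduces to exponentiating a linear predictor relative to the baseline; a minimal sketch with purely illustrative coefficients:

```python
import math

def hazard_ratio(beta, x, x_baseline=None):
    """Relative hazard exp(beta . (x - x_baseline)) of a patient versus the
    baseline (all-zero covariates by default), per the Cox model."""
    if x_baseline is None:
        x_baseline = [0.0] * len(x)
    lp = sum(b * (xi - x0) for b, xi, x0 in zip(beta, x, x_baseline))
    return math.exp(lp)

# invented coefficients for [primary pattern, PSA (scaled), image feature]
beta = [0.7, 0.3, 0.5]
print(round(hazard_ratio(beta, [1.0, 0.0, 0.0]), 3))  # exp(0.7) = 2.014
```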
For COXEN, the elastic net penalty, a mixture of the L1 (lasso) and L2 (ridge regression) penalties, is given by

P_{α}(β) = α Σ_{j=1}^{p} |β_{j}| + ((1 − α)/2) Σ_{j=1}^{p} β_{j}^{2}

where α ∈ [0, 1] is the mixing ratio between the L1 and L2 terms.
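The penalty itself is straightforward to compute; a sketch with an illustrative mixing ratio α:

```python
def elastic_net_penalty(beta, alpha=0.5):
    """Elastic net penalty: alpha * L1 + ((1 - alpha)/2) * L2.
    alpha = 1 gives a pure lasso penalty, alpha = 0 a pure ridge penalty."""
    l1 = sum(abs(b) for b in beta)
    l2 = sum(b * b for b in beta)
    return alpha * l1 + (1.0 - alpha) / 2.0 * l2

print(elastic_net_penalty([1.0, -2.0], alpha=0.5))  # 0.5*3 + 0.25*5 = 2.75
```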
Based on the assumption that the effect of the covariates is to increase or decrease the hazard by a proportionate amount at all durations, the parametric proportional-hazard model is a location-scale model for an arbitrary transform of the time variable t_{i}, leading to an accelerated failure time model with different penalty distance functions. The distance functions we used for the parametric proportional-hazard models are the exponential (PHEX), log-normal (PHLogN), and log-logistic (PHLogL) distances.
The fit of the survival models to the different image features was quantified by the Akaike information criterion (AIC):^{[50]}

AIC = −2 log(likelihood) + 2K

where the likelihood is a measure of model fitness and K represents the number of model parameters. The smaller the AIC value, the better the goodness of fit of the survival model.
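A worked AIC comparison between two hypothetical fits (the log-likelihoods and parameter counts are invented for illustration):

```python
def aic(log_likelihood, n_params):
    """Akaike information criterion: smaller values indicate a better fit
    after penalising the number of model parameters K."""
    return -2.0 * log_likelihood + 2.0 * n_params

# a simpler model versus one with better likelihood but more parameters
print(aic(-120.0, 5))   # 250.0
print(aic(-118.0, 12))  # 260.0: worse despite the higher likelihood
```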
Experimental Results   
In this section, we describe experiments on the public prostate cancer dataset to statistically compare various survival models using the different histopathological image feature quantification methods.
Implementation details
For the CNN-based approaches to extracting image features, we first used the patches to train the CNN with the multitask loss. Each patch was resized to 256 × 256 and assigned two labels according to the Gleason grading of its WSI: one being the primary pattern and the other being the Gleason score. The CNN was trained with minibatch stochastic gradient descent. The momentum was 0.9, and the weight decay was 5 × 10^{−5}. The initial learning rate was 10^{−3}, annealed by a factor of 0.1 after 10^{4} iterations. To train the LSTM, we set the same momentum, weight decay, and initial learning rate; the learning rate was annealed by 0.1 after 2 × 10^{3} iterations. The implementation is based on the Caffe toolbox.^{[51]}
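The annealing described above corresponds to a step-wise learning-rate policy (as in Caffe's "step" schedule); a minimal sketch with the CNN's values:

```python
def learning_rate(iteration, base_lr=1e-3, gamma=0.1, step=10_000):
    """Step schedule: the rate starts at base_lr and is multiplied by
    gamma every `step` iterations, matching the CNN settings in the text."""
    return base_lr * gamma ** (iteration // step)

print(learning_rate(0))                      # 0.001
print(round(learning_rate(10_000), 6))       # 0.0001 after the first step
```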
Comparison of image features
First, using only information derived from the tissue specimens, namely, the clinical Gleason primary and secondary patterns and the image features quantified by the various image methods, the Cox hazard ratios are shown in [Table 3]. The CNN achieved better results than the texture methods, including SURF,^{[28]} HOG,^{[29]} and LBP.^{[30]} Using the CNN with LSTM to model the spatial relation of patches achieved the highest Cox hazard ratio, indicating the best correlation with prostate cancer patients' recurrence data. Moreover, the image features obtained from the texture-based methods and CNN approaches achieved higher Cox hazard ratios than utilizing the primary and secondary patterns alone.  Table 3: The Cox hazard ratios of using only the clinical Gleason primary and secondary patterns and image features from different image analysis methods
Second, PSA levels, age, and clinical tumor stage were included in the Cox survival model, in addition to the image features and the primary and secondary Gleason patterns. The results of combining clinical factors and image features are shown in [Table 4], demonstrating that the image features generated by the CNN-based approaches were more representative than the texture features, as indicated by their higher hazard ratios. Those features were also more representative than the clinical prognostic factors. We additionally calculated the AIC values, shown in [Table 4]; a smaller AIC value indicates a better goodness of fit of the survival model. CNN + LSTM achieved the best fit under the Cox regression model compared to the other image feature quantification methods.  Table 4: The Cox hazard ratios and Akaike information criteria of using clinical factors, including Gleason primary and secondary patterns, patients' prostate-specific antigen, age, and clinical tumor stages, and image features from different image analysis methods
Finally, without any image features, we show the Cox hazard ratios of the clinical factors alone in [Table 5]. From the results of [Table 3], [Table 4], and [Table 5], we can see that the primary Gleason pattern has higher Cox hazard ratios than the other clinical factors, which is consistent with its high predictive power for prostate cancer.^{[1],[4]}
Ablation study on training strategies
Furthermore, considering the multiple Gleason patterns within WSIs, we designed two training strategies for the CNN-based approaches. The first was to use the multitask loss to learn both the primary Gleason pattern and the sum of the primary and secondary patterns (the Gleason score). The second was to use the primary Gleason pattern or the Gleason score alone to learn the patterns within the patches or WSIs.
The performance of the two CNN-based approaches on patient recurrence analysis was compared using the different training strategies. The results are shown in [Table 6]. The multitask architecture achieved better correlation with patients' recurrence than training with the primary Gleason pattern or the Gleason score alone, as shown by its much higher recurrence hazard ratios and lower AIC values. This is because the primary Gleason pattern and the Gleason score together reflect the local and global image features in the WSIs better than either label alone.  Table 6: The Cox hazard ratios and Akaike information criteria of convolutional neural network-based approaches on patients' progression analysis using three different training strategies
Comparison of survival models
In this section, we performed statistical analysis on various survival models, including COXEN,^{[48]} PHEX,^{[50]} PHLogN,^{[50]} and PHLogL,^{[50]} using prostate images with Gleason scores 6–8 and clinical factors. The Cox proportional-hazards model does not require an assumption about the particular survival distribution of the patients' survival data; its only assumption is proportional hazards. Unlike the Cox proportional-hazards model, parametric models with different penalty distance functions (such as exponential, log-normal, and log-logistic) need the hazard functions to be specified.^{[52],[53]} Studies have indicated that under certain circumstances, such as a strong effect or strong time trend in covariates, or follow-up depending on covariates, parametric models are good alternatives to the Cox regression model.^{[53]}
We assessed the different survival models and show the hazard ratios of the image features and patients' clinical prognostic factors in [Table 7]. Based on these results, first, the image features quantified from WSIs outperformed the other clinical factors across all texture and CNN-based approaches. Second, the CNN-based approaches achieved better correlation with patients' recurrence, with higher hazard ratios than the texture methods under all survival models. Third, comparing with [Table 4], COXEN achieved the lowest AIC value with the image features obtained from CNN + LSTM, suggesting that this model is more suitable than the other survival models for recurrence analysis of prostate cancer patients with low, intermediate, and high risk.  Table 7: The Cox hazard ratios and Akaike information criteria of different survival models using texture methods and convolutional neural network-based approaches
Discussion and Conclusions   
In this paper, we presented three unsupervised texture methods (SURF, HOG, and LBP) and two supervised CNN-based methods to quantify features from histopathological images. Five survival models were assessed on those image features along with prostate cancer clinical prognostic factors, including the primary and the secondary Gleason patterns, PSA, age, and clinical tumor stage, to perform bRFS analyses.
Based on the statistical comparisons among the different image feature quantification methods and survival models, CNN + LSTM provided the highest hazard ratio of prostate cancer recurrence under COXEN; this combination outperformed the other image quantification methods paired with the other survival models. With our approach, patient outcomes were better correlated with their histopathological image features. Due to the limited size of the public prostate dataset, the results achieved in our experiments are preliminary. To further validate the generalizability of our approach, extensive experiments on additional prostate images from local institutions are needed.
In the future, besides using tissue WSIs for patients' bRFS analysis, we will investigate integrating patients' genomic information with tissue histopathology images as a means of providing additional predictive power. Doing so would provide a more quantitative and accurate clinical decision-making support system for patients with prostate cancer.
Financial support and sponsorship
This research was funded, in part, by grants from NIH contracts 4R01LM009239-08, 4R01CA161375-05, 1UG3CA225021-01, and P30CA072720.
Conflicts of interest
Dr. Singer is the principal investigator on an investigatorinitiated clinical trial that is funded by Astellas/Medivation (NCT02885649) (http://cinj.org/clinicaltrials/index&?show=trial&p=081604). The other authors declare that they have no competing interests.
References   
1.  Madabhushi A, Agner S, Basavanhally A, Doyle S, Lee G. Computeraided prognosis: Predicting patient and disease outcome via quantitative fusion of multiscale, multimodal data. Comput Med Imaging Graph 2011;35:50614. 
2.  Lee G, Singanamalli A, Wang H, Feldman MD, Master SR, Shih NN, et al. Supervised multiview canonical correlation analysis (sMVCCA): Integrating histologic and proteomic features for predicting recurrent prostate cancer. IEEE Trans Med Imaging 2015;34:28497. 
3.  Lee G, Veltri RW, Zhu G, Ali S, Epstein JI, Madabhushi A. Nuclear shape and architecture in benign fields predict biochemical recurrence in prostate cancer patients following radical prostatectomy: Preliminary findings. Eur Urol Focus 2017;3:45766. 
4.  Leo P, Lee G, Shih NN, Elliott R, Feldman MD, Madabhushi A. Evaluating stability of histomorphometric features across scanner and staining variations: Prostate cancer diagnosis from whole slide images. J Med Imaging (Bellingham) 2016;3:047502. 
5.  Kattan MW, Eastham JA, Stapleton AM, Wheeler TM, Scardino PT. A preoperative nomogram for disease recurrence following radical prostatectomy for prostate cancer. J Natl Cancer Inst 1998;90:76671. 
6.  Steyerberg EW, Vickers AJ, Cook NR, Gerds T, Gonen M, Obuchowski N, et al. Assessing the performance of prediction models: A framework for traditional and novel measures. Epidemiology 2010;21:12838. 
7.  Hull GW, Rabbani F, Abbas F, Wheeler TM, Kattan MW, Scardino PT. Cancer control with radical prostatectomy alone in 1,000 consecutive patients. J Urol 2002;167:52834. 
8.  Kattan MW, Wheeler TM, Scardino PT. Postoperative nomogram for disease recurrence after radical prostatectomy for prostate cancer. J Clin Oncol 1999;17:1499507. 
9.  Cooperberg MR, Broering JM, Carroll PR. Time trends and local variation in primary treatment of localized prostate cancer. J Clin Oncol 2010;28:111723. 
10.  Ren J, Sadimin ET, Wang D, Epstein JI, Foran DJ, Qi X. Computer aided analysis of prostate histopathology images Gleason grading especially for Gleason score 7. Conf Proc IEEE Eng Med Biol Soc 2015;2015:30136. 
11.  Ren J, Sadimin E, Foran DJ, Qi X. Computer aided analysis of prostate histopathology images to support a refined Gleason grading system. Proc SPIE Int Soc Opt Eng 2017. pii: 101331V. 
12.  Egevad L, Granfors T, Karlberg L, Bergh A, Stattin P. Prognostic value of the Gleason score in prostate cancer. BJU Int 2002;89:538-42. 
13.  Gleason DF, Mellinger GT. Prediction of prognosis for prostatic adenocarcinoma by combined histological grading and clinical staging. J Urol 1974;111:58-64. 
14.  Epstein JI, Partin AW, Sauvageot J, Walsh PC. Prediction of progression following radical prostatectomy. A multivariate analysis of 721 men with long-term follow-up. Am J Surg Pathol 1996;20:286-92. 
15.  Billis A, Guimaraes MS, Freitas LL, Meirelles L, Magna LA, Ferreira U. The impact of the 2005 International Society of Urological Pathology consensus conference on standard Gleason grading of prostatic carcinoma in needle biopsies. J Urol 2008;180:548-52. 
16.  Allsbrook WC Jr., Mangold KA, Johnson MH, Lane RB, Lane CG, Amin MB, et al. Interobserver reproducibility of Gleason grading of prostatic carcinoma: Urologic pathologists. Hum Pathol 2001;32:74-80. 
17.  Allsbrook WC Jr., Mangold KA, Johnson MH, Lane RB, Lane CG, Epstein JI. Interobserver reproducibility of Gleason grading of prostatic carcinoma: General pathologist. Hum Pathol 2001;32:81-8. 
18.  Glaessgen A, Hamberg H, Pihl CG, Sundelin B, Nilsson B, Egevad L. Interobserver reproducibility of modified Gleason score in radical prostatectomy specimens. Virchows Arch 2004;445:17-21. 
19.  Pierorazio PM, Walsh PC, Partin AW, Epstein JI. Prognostic Gleason grade grouping: Data based on the modified Gleason scoring system. BJU Int 2013;111:753-60. 
20.  Kandoth C, McLellan MD, Vandin F, Ye K, Niu B, Lu C, et al. Mutational landscape and significance across 12 major cancer types. Nature 2013;502:333-9. 
21.  Makino T, Miwa S, Koshida K. Impact of Gleason pattern 5 on outcomes of patients with prostate cancer and iodine-125 prostate brachytherapy. Prostate Int 2016;4:152-5. 
22.  Wang H, Xing F, Su H, Stromberg A, Yang L. Novel image markers for non-small cell lung cancer classification and survival prediction. BMC Bioinformatics 2014;15:310. 
23.  Yao J, Wang S, Zhu X, Huang J. Imaging biomarker discovery for lung cancer survival prediction. In: Ourselin S, Joskowicz L, Sabuncu M, Unal G, Wells W, editors. Medical Image Computing and Computer-Assisted Intervention – MICCAI 2016. Lecture Notes in Computer Science, vol. 9901. Cham: Springer; 2016. p. 649-57. 
24.  Yu KH, Zhang C, Berry GJ, Altman RB, Ré C, Rubin DL, et al. Predicting non-small cell lung cancer prognosis by fully automated microscopic pathology image features. Nat Commun 2016;7:12474. 
25.  Zhu X, Yao J, Huang J. Deep convolutional neural network for survival analysis with pathological images. In: IEEE International Conference on Bioinformatics and Biomedicine (BIBM); 2016. p. 544-7. 
26.  Zhu X, Yao J, Luo X, Xiao G, Xie Y, Gazdar A, et al. Lung cancer survival prediction from pathological images and genetic data – An integration study. In: IEEE 13th International Symposium on Biomedical Imaging; 2016. p. 1173-6. 
27.  Hou L, Samaras D, Kurc TM, Gao Y, Davis JE, Saltz JH, et al. Patch-based convolutional neural network for whole slide tissue image classification. Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit 2016;2016:2424-33. 
28.  Bay H, Ess A, Tuytelaars T, Van Gool L. Speeded-up robust features (SURF). Comput Vis Image Underst 2008;110:346-59. 
29.  Dalal N, Triggs B. Histograms of oriented gradients for human detection. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR); 2005. p. 886-93. 
30.  Ojala T, Pietikainen M, Maenpaa T. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans Pattern Anal Mach Intell 2002;24:971-87. 
31.  Fei-Fei L, Perona P. A Bayesian hierarchical model for learning natural scene categories. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR); 2005. p. 524-31. 
32.  Wold S, Esbensen K, Geladi P. Principal component analysis. Chemometr Intell Lab Syst 1987;2:37-52. 
33.  Ing N, Ma Z, Li J, Salemi H, Arnold C, Knudsen BS, et al. Semantic segmentation for prostate cancer grading by convolutional neural networks. In: Proceedings of SPIE Medical Imaging: Digital Pathology; 2018. p. 105811B. 
34.  Xu J, Luo X, Wang G, Gilmore H, Madabhushi A. A deep convolutional neural network for segmenting and classifying epithelial and stromal regions in histopathological images. Neurocomputing 2016;191:214-23. 
35.  Hou L, Singh K, Samaras D, Kurc TM, Gao Y, Seidman RJ, et al. Automatic histopathology image analysis with CNNs. In: Proceedings of the 2016 New York Scientific Data Summit (NYSDS). New York; 2016. p. 1-6. 
36.  Cruz-Roa A, Basavanhally A, Gonzalez F, Gilmore H, Feldman M, Ganesan S, et al. Automatic detection of invasive ductal carcinoma in whole slide images with convolutional neural networks. In: Proceedings of SPIE Medical Imaging. International Society for Optics and Photonics; 2014. p. 904103-1. 
37.  Pan X, Li L, Yang H, Liu Z, Zhao L, Fan Y. Accurate segmentation of nuclei in pathological images via sparse reconstruction and deep convolutional networks. Neurocomputing 2017;229:88-99. 
38.  Naik S, Doyle S, Agner S, Madabhushi A, Feldman M, Tomaszewski J. Automated gland and nuclei segmentation for grading of prostate and breast cancer histopathology. In: 5th IEEE International Symposium on Biomedical Imaging (ISBI): From Nano to Macro. Paris, France; 2008. p. 284-7. 
39.  Kothari S, Phan JH, Stokes TH, Wang MD. Pathology imaging informatics for quantitative analysis of whole-slide images. J Am Med Inform Assoc 2013;20:1099-108. 
40.  Roullier V, Lézoray O, Ta VT, Elmoataz A. Multi-resolution graph-based analysis of histopathological whole slide images: Application to mitotic cell extraction and visualization. Comput Med Imaging Graph 2011;35:603-15. 
41.  Toth R, Shih N, Tomaszewski JE, Feldman MD, Kutter O, Yu DN, et al. Histostitcher: An informatics software platform for reconstructing whole-mount prostate histology using the extensible imaging platform framework. J Pathol Inform 2014;5:8. 
42.  Graves A, Mohamed AR, Hinton GE. Speech recognition with deep recurrent neural networks. In: 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP); 2013. p. 6645-9. 
43.  Graves A, Jaitly N. Towards end-to-end speech recognition with recurrent neural networks. In: Proceedings of the 31st International Conference on Machine Learning (ICML-14); 2014. p. 1764-72. 
44.  Sutskever I, Vinyals O, Le QV. Sequence to sequence learning with neural networks. In: Advances in Neural Information Processing Systems 27 (NIPS); 2014. p. 3104-12. 
45.  Donahue J, Hendricks LA, Rohrbach M, Venugopalan S, Guadarrama S, Saenko K, et al. Long-term recurrent convolutional networks for visual recognition and description. IEEE Trans Pattern Anal Mach Intell 2017;39:677-91. 
46.  Wu Z, Wang X, Jiang YG, Ye H, Xue X. Modeling spatial-temporal clues in a hybrid deep learning framework for video classification. In: Proceedings of the 23rd ACM International Conference on Multimedia. Brisbane, Australia; 2015. p. 461-70. 
47.  Therneau TM, Grambsch PM. Modeling Survival Data: Extending the Cox Model. Chapter: Estimating the survival and hazard functions. New York: Springer-Verlag; 2000. p. 7-39. 
48.  Yang Y, Zou H. A cocktail algorithm for solving the elastic net penalized Cox's regression in high dimensions. Stat Interface 2012;6:167-73. 
49.  Kalbfleisch JD, Prentice RL. The Statistical Analysis of Failure Time Data. Chapter: Relative risk (Cox) regression model. John Wiley & Sons; 2011. p. 95. 
50.  Moghimi-Dehkordi B, Safaee A, Pourhoseingholi MA, Fatemi R, Tabeie Z, Zali MR, et al. Statistical comparison of survival models for analysis of cancer data. Asian Pac J Cancer Prev 2008;9:417-20. 
51.  Jia Y, Shelhamer E, Donahue J, Karayev S, Long J, Girshick R, et al. Caffe: Convolutional architecture for fast feature embedding. In: Proceedings of the 22nd ACM International Conference on Multimedia. Orlando, Florida; 2014. p. 675-8. 
52.  Cleves M, Gould W, Gutierrez RG, Marchenko Y. An Introduction to Survival Analysis Using Stata. 3rd ed. StataCorp LP; 2010. 
53.  Cox DR, Oakes D. Analysis of Survival Data. London: Chapman & Hall; 1984. 
[Figure 1], [Figure 2], [Figure 3]
[Table 1], [Table 2], [Table 3], [Table 4], [Table 5], [Table 6], [Table 7]
