Journal of Pathology Informatics
ORIGINAL ARTICLE
J Pathol Inform 2017,  8:13

Performance of a web-based method for generating synoptic reports


1 Google, NY, USA
2 Northwestern University, Evanston, IL, USA
3 Department of Cancer Services, Baptist Hospital and Baptist Health of South Florida Healthcare System, Miami, FL, USA
4 Department of Information Technology, Baptist Hospital and Baptist Health of South Florida Healthcare System, Miami, FL, USA
5 Department of Pathology, Baptist Hospital and Baptist Health of South Florida Healthcare System, Miami, FL, USA
6 Department of Pathology, Baptist Hospital and Baptist Health of South Florida Healthcare System; Department of Pathology, Baptist Hospital, Miami, FL, USA

Date of Submission: 30-Nov-2016
Date of Acceptance: 22-Jan-2017
Date of Web Publication: 10-Mar-2017

Correspondence Address:
Andrew A Renshaw
Department of Pathology, Baptist Hospital, 8900 N Kendall Dr, Miami, FL 33176
USA

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/jpi.jpi_91_16

Abstract

Context: The College of American Pathologists (CAP) requires synoptic reporting of all tumor excisions. Objective: To compare the performance of different methods of generating synoptic reports. Methods: Completeness, amendment rates, rate of timely ordering of ancillary studies (KRAS in T4/N1 colon carcinoma), and structured data file extraction were compared for four different synoptic report generating methods. Results: Use of the printed tumor protocols directly from the CAP website had the lowest completeness (84%) and highest amendment (1.8%) rates. Reformatting these protocols was associated with higher completeness (94%, P < 0.001) and reduced amendment (1%, P = 0.20) rates. Extraction into a structured data file was successful 93% of the time. Word-based macros improved completeness (98% vs. 94%, P < 0.001) but not amendment rates (1.5%). KRAS was ordered before sign out 89% of the time. In contrast, a web-based product with a reminder flag when items were missing, an embedded flag for data extraction, and a reminder to order KRAS when appropriate resulted in improved completeness (100%, P = 0.005), amendment rates (0.3%, P = 0.03), KRAS ordering before sign out (100%, P = 0.23), and structured data extraction (100%, P < 0.001) without reducing the speed (P = 0.34) or accuracy (P = 1.00) of data extraction by the reader. Conclusion: Completeness, amendment rates, ancillary test ordering rates, and data extraction rates vary significantly with the method used to construct the synoptic report. A web-based method compares favorably with all other methods examined and does not reduce reader usability.

Keywords: Accuracy, anatomic pathology, cancer, College of American Pathologists, computer, diagnosis, error, internet, quality assurance, surgical pathology, templates, web


How to cite this article:
Renshaw MA, Renshaw SA, Mena-Allauca M, Carrion PP, Mei X, Narciandi A, Gould EW, Renshaw AA. Performance of a web-based method for generating synoptic reports. J Pathol Inform 2017;8:13



Introduction


Checklists in surgical pathology designed to generate a “synoptic” section of the report have been associated with an improvement in the completeness of the surgical pathology report,[1],[2],[3],[4],[5],[6],[7],[8],[9],[10] though completeness usually does not exceed 90%.[1],[2],[3],[4],[11],[12],[13],[14],[15],[16] There are standards for the content used in the creation of these checklists [17],[18],[19],[20],[21] as well as for their formatting.[22] More recent studies have shown that specific formatting changes to the College of American Pathologists (CAP) cancer protocols [23] significantly improve the completeness of the report,[6] that the clerical error rate (as measured by amendments) of synoptic reports is associated with the number of required data elements (RDEs),[24] and that specific formats of the report itself are associated with user preference [25] and with the speed of data extraction by users.[26],[27] The CAP currently requires a “synoptic” report format, as well as specific required elements within the synoptic report, for all primary resections of specific diagnoses but has no requirement concerning how that synoptic report is generated.[28] Written paper protocols and an electronic product that works through a pathologist's laboratory information system (electronic cancer checklist [eCC]) are available from the CAP.[23] In addition, a web-based method has also been described.[29] Nevertheless, data concerning the impact of these different methods of generating a synoptic report on the quality of the synoptic report are limited.

In addition, tools have been developed to create “structured data” from these synoptic reports for registries and other interested parties.[5],[30],[31],[32],[33],[34],[35] This “structured data” consists of having the data elements as discrete elements in a true database (e.g., SQL), which is most commonly achieved by creating a data file (e.g., a comma-separated value, tab-delimited, or Excel file) with discrete data elements rather than a free-text narrative file such as a Word document (the format of most surgical pathology reports). Synoptic reporting can be implemented with a range of features, from simply including a table within the free-text narrative report to fully structured reporting with binding to external terminologies and databases such as SNOMED.[10],[36] However, data comparing the completeness or accuracy of the subsequently extracted and created structured database for any implementation of synoptic reporting are quite limited. To assess this, we compared four different types of synoptic report generating methods, including a web-based method, across a wide range of different quality measures.


Methods


From 2004 to 2016, four different methods were used to generate synoptic reports in our hospital [Table 1]. All reports were dictated by the pathologists, typed by secretaries, and subsequently edited by the pathologists before sign out. Initially, pathologists used the checklists directly from the CAP website (CAP format protocol). Next, these checklists were edited as detailed previously (removal of all optional elements, numbering of all elements, and consistent formatting) (edited CAP protocol).[6] Subsequently, Word-based macros were made for the secretaries that included all RDE headers and into which the secretaries only had to type the response for each header. Finally, a web-based product was built and placed on our hospital system intranet [Figure 1]. This consisted of a web page connected to a JSON file that contained all the elements required by the CAP for creating a synoptic report. The product listed all RDE headers and allowed the pathologist or secretary to simply select from a list of the most common responses or to enter a free-text response if the desired response was not there. In all cases, the pathologist dictated the response to select while looking at the website, and the secretary subsequently went through the website and selected the items. The website then generated a rich-text, table-based synoptic report that was then “cut and pasted” directly into our surgical pathology report (PowerPath) [Figure 2].
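The article does not reproduce the protocol file or the page code; purely as an illustration of the kind of JSON-driven protocol definition described above, a file of RDE headers with their most common responses might look something like the sketch below (all field names are hypothetical and not taken from the authors' actual product).

# Hypothetical sketch of a protocol definition of the kind described above.
# Each required data element (RDE) lists its most common responses; any
# element may also accept a free-text answer.  Field names are illustrative only.
import json

colon_protocol = {
    "name": "Colon carcinoma",
    "rdes": [
        {"header": "Specimen", "responses": ["Right colon", "Left colon", "Rectum"]},
        {"header": "Tumor Size", "responses": []},  # free text only
        {"header": "Pathologic Stage (pT)",
         "responses": ["pT1", "pT2", "pT3", "pT4a", "pT4b"],
         "reminder": "Order KRAS if T4 or N1"},  # shown on the page, not in the report
    ],
}

# The web page would load a definition like this and present one
# single-answer question per RDE.
with open("colon.json", "w") as fh:
    json.dump(colon_protocol, fh, indent=2)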
Table 1: Performance of four different synoptic report generating methods
Figure 1: Screen shot of the web based protocol for colon carcinoma
Figure 2: Screen shot of the synoptic report generated by the web based protocol for colon carcinoma


The web page was designed by two of the authors (MAR, SAR) so that only data elements required by either the CAP or our clinicians were included. Each question had to be answered, and the pathologist could select only one answer (no multi-answer questions), although they were allowed to enter free text if they could not find the response they wished for, and “Not applicable” was the appropriate response in some cases. The default response for each question was a flag that read “YOU MISSED THIS ITEM,” which the report displayed automatically if any item was left unanswered. In this way, if a pathologist skipped an item, the secretary could leave this text in place and still generate a report, and the pathologist could see and fix this response during the editing phase. There were also notes reminding the pathologists which ancillary studies needed to be ordered when associated with an RDE in the synoptic report. These notes appeared on the web page that the pathologists used to create the synoptic report but did not appear on the final synoptic report itself. For the purposes of this study, we tracked how often the pathologist ordered KRAS studies on all colon carcinoma cases that were either stage T4 or N1 (as per the desires of our clinicians) before and after sign out, as a measure of whether they remembered to order this ancillary study or had to go back and order it later. Finally, at the beginning of each RDE in the report, the web page placed a pipe “|” specifically designed to improve free text extraction [Figure 3]. As a result, all RDEs could easily be found using a free text search because they all began with the pipe “|” and were separated from the response by a colon “:”. Multiline responses had no effect on this since the information was in a rich text format table that placed all information related to the data element before any information related to the response.
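As a rough sketch only (not the authors' page code, which generated a rich-text table rather than plain lines), the two rules described above (every unanswered item defaults to the missed-item flag; every emitted data element begins with a pipe and is separated from its response by a colon) could be expressed as follows.

MISSING_FLAG = "YOU MISSED THIS ITEM"

def render_synoptic_lines(rde_headers, answers):
    """Render one pipe-prefixed 'header: response' line per required data element.

    Any element without an answer falls back to the missed-item flag, so a
    report can still be generated and the gap is obvious during editing.
    """
    lines = []
    for header in rde_headers:
        response = answers.get(header) or MISSING_FLAG
        lines.append(f"|{header}: {response}")
    return "\n".join(lines)

# "Tumor Size" was skipped by the pathologist, so it is flagged in the output.
print(render_synoptic_lines(
    ["Specimen", "Tumor Size"],
    {"Specimen": "Right colon"},
))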
Figure 3: Example of a report used to test the speed and accuracy of information retrieval containing the embedded flag (pipe)


All cases with a tumor summary were reviewed by the tumor registry for completeness. Data on completeness were collected by the staff of the tumor registry and reported back to the pathologists on a monthly basis. The definition of completeness was all items listed as required by the CAP during the appropriate time period. As those requirements changed over time, so did the definition of completeness. However, during this study, all methods were updated in a timely manner, and hence no incomplete cases were identified because of a new requirement that was not yet listed in the method. However, during the 1st month of the web-based report, a single data item (specimen integrity) was left off the web-based protocol for endometrium. As a result, this item was missing from every single case during that month. This omission was subsequently fixed, and these cases were not counted as incomplete in our results for the web-based option, since the pathologists did correctly include all items that were actually present in the web-based protocol during this time.

All amendments on cases with a tumor summary were reviewed. Only amendments that related to the tumor summary and were not based on additional clinical or pathologic information were included. Specifically, we were trying to track clerical errors such as spelling mistakes and features that were supposed to match but did not (e.g., the stage did not match the reported extent).

Structured data were extracted from the synoptic report section of all surgical pathology reports using free text searches and regular expressions. Before the inclusion of the pipe “|”, items were identified using a set of standard regular expression searches looking for the text of the RDE. Only breast carcinoma cases were extracted since the search required the use of specific headers to identify information. With the use of the pipe “|”, searches were based on identifying the pipes and the corresponding colons (“:”), and all synoptic reports were included since the data were extracted regardless of which headers were included. The data were subsequently collected in a tab-delimited (TXT) file, and these results were reviewed and compared. All data were stored as strings. Numeric data were extracted from these strings in a subset of cases, but this conversion was not included as part of the definition of a complete report. Successful data extraction consisted of the inclusion of the correct data item and response in the TXT file based on manual comparison with the original report.
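The extraction code itself is not given in the article; a minimal, hypothetical version of the pipe-and-colon search described above, using a single generic regular expression and writing the results to a tab-delimited file, might look like this.

import csv
import re

# One generic pattern: a line beginning with the embedded pipe, then the RDE
# header, a colon, and the response.  This regex is an illustration only.
RDE_PATTERN = re.compile(r"^\|(?P<header>[^:]+):\s*(?P<response>.+)$", re.MULTILINE)

def extract_rdes(report_text):
    """Return (header, response) string pairs for every pipe-flagged data element."""
    return [(m.group("header").strip(), m.group("response").strip())
            for m in RDE_PATTERN.finditer(report_text)]

report = "|Specimen: Right colon\n|Tumor Size: 4.5 cm\n|Pathologic Stage (pT): pT3"

# All values are kept as strings and written to a tab-delimited file.
with open("extracted.txt", "w", newline="") as fh:
    writer = csv.writer(fh, delimiter="\t")
    writer.writerow(["header", "response"])
    writer.writerows(extract_rdes(report))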

To test whether our embedded flag affected the end user's ability to quickly and accurately extract information from a surgical pathology report, a computer-based quiz was designed. The methods were similar to those described previously.[26] Specifically, the participant is shown a specific phrase that may or may not be in a synoptic report. When the user presses “Enter,” the synoptic report appears on the screen, and the timer starts. The user then examines the report to determine whether the phrase is or is not present. If it is present the user types the number “2,” if it is not they type “1,” and then press “Enter.” The timer stops when “Enter” is pressed. The program automatically records the time and whether the answer was correct, and these data are then transferred to a comma-separated values (CSV) file for further analysis.
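The quiz program itself is not provided; a console-based sketch of the sequence described above (phrase shown, Enter reveals the report and starts the timer, a 1 or 2 answer stops it, and the result is appended to a CSV file) could look like the following, with all names and the example report hypothetical.

import csv
import time

def run_question(phrase, report_text, phrase_present, writer):
    """Show a phrase, then time how long the reader takes to decide whether it is in the report."""
    input(f'Phrase: "{phrase}". Press Enter to display the report and start the timer.')
    start = time.monotonic()
    print(report_text)
    answer = input('Type 2 if the phrase is present, 1 if it is not, then press Enter: ')
    elapsed = time.monotonic() - start
    correct = (answer.strip() == "2") == phrase_present
    writer.writerow([phrase, f"{elapsed:.2f}", correct])

with open("quiz_results.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["phrase", "seconds", "correct"])
    run_question("Lymphovascular invasion: Present",
                 "|Lymphovascular invasion: Not identified",
                 phrase_present=False, writer=writer)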

We constructed our synoptic report for this test from an invasive breast carcinoma using the tumor protocol from the CAP. All elements were identical except for changes in the reporting of focality, nipple involvement, skin involvement, and lymphovascular invasion. The formats were tested in a quiz that contained 32 questions in total, half of which had the pipe on the left side of the report [Figure 3] and half of which did not [Figure 4]. All questions were presented in random order. At the end of the quiz, each participant was asked whether they felt the pipe interfered with their reading of the report.
Figure 4: Example of a report used to test the speed and accuracy of information retrieval without the embedded flag (pipe)


Twenty-six participants completed the quiz. They were all nonpathologists and included six cancer registrars, 15 medical personnel (4 MDs, 11 non-MDs), and five nonmedical personnel (administrative assistants and other professionals). We specifically excluded pathologists from this testing, since we wanted to measure the performance of users other than pathologists. To allow comparison between these users, times were normalized to the mean of the standard format for each user. As a result, the format without the pipe served as the control with a normalized time of one, and the time for the pipe format was expressed relative to that time. These results are reported as mean ± standard deviation (unitless, since they are normalized).
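As a brief sketch of the normalization described above, each reader's times on the pipe format are divided by that reader's mean time on the control (no-pipe) format; the numbers below are placeholders, not study data.

from statistics import mean, stdev

def normalize(pipe_times, control_times):
    """Normalize one reader's pipe-format times to that reader's mean control time."""
    baseline = mean(control_times)
    return [t / baseline for t in pipe_times]

# Hypothetical times in seconds for a single reader, for illustration only.
normalized = normalize(pipe_times=[3.1, 2.8, 3.4], control_times=[3.0, 3.2, 2.9])
print(f"{mean(normalized):.2f} ± {stdev(normalized):.2f}")  # mean ± SD, unitless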

Statistical analysis was performed using a Fisher's exact test for categorical data and a Student's t-test for continuous data. The significance threshold was set at 0.05.
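Equivalent tests could be run with SciPy as sketched below: Fisher's exact test on a 2 × 2 table for the categorical rates and a two-sample Student's t-test for the normalized times. The counts and times shown are placeholders, not the study data.

from scipy import stats

# Fisher's exact test on a 2x2 table of complete vs. incomplete reports
# (placeholder counts, not the study data).
odds_ratio, p_categorical = stats.fisher_exact([[94, 6], [84, 16]])

# Two-sample Student's t-test on normalized reading times (placeholder values).
t_stat, p_continuous = stats.ttest_ind([0.9, 1.1, 0.95, 1.05], [1.0, 1.0, 0.98, 1.02])

print(p_categorical < 0.05, p_continuous < 0.05)  # significance threshold of 0.05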


Results


The results are summarized in [Table 1].

Use of the printed tumor protocols directly from the CAP website had the lowest completeness (84%) and highest amendment (1.8%) rates. Reformatting these protocols was associated with higher completeness (94%, P < 0.001) and reduced amendment (1%, P = 0.20) rates. Extraction into a structured data file was successful 93% of the time. Extraction failed most often because the headers for the presence of ductal carcinoma in situ (DCIS) and for the margin status of DCIS were not consistently unique (as copied directly from the CAP protocol, the indication that one of these elements refers to margin status is located well away from the actual DCIS line). Word-based macros improved completeness (98% vs. 94%, P < 0.001) but not amendment rates (1.5%). KRAS was ordered before sign out 89% of the time. In contrast, a web-based product with a reminder flag when items were missing, an embedded flag for data extraction, and a reminder to order KRAS when appropriate resulted in improved completeness (100%, P = 0.005), amendment rates (0.3%, P = 0.03), KRAS ordering before sign out (100%, P = 0.23), and structured data extraction (100%, P < 0.001). The contents of the data extraction were compared to those of the original report on a subset of cases (100) and found to be 100% accurate.

There were a total of 832 responses for the quiz, half with the pipe and half without. The speed with which readers were able to answer the questions was not significantly different with the pipe than without it (0.98 ± 0.66 vs. 1.00 ± 0.49, P = 0.65), and both formats were 100% accurate (P = 1.00). All readers agreed that after initially being shown the pipe, they ignored it when answering the questions in the test.


Discussion


The results of this study consist of a range of quality performance measures comparing a variety of different methods of generating synoptic reports. The goal of these quality performance measures is to ensure that a complete and accurate report is generated that can easily be converted into a structured data file and does not interfere with the ability of the reader to accurately and rapidly extract information from the report. Several key features appear to improve performance. First, consistent formatting of the checklists makes it easier to generate a complete report. The completeness rates for all of the methods examined except the CAP format protocols, that is, for all of the methods with a consistent format, are well above 90%, a level of performance most studies have failed to achieve.[1],[2],[3],[4],[11],[12],[13],[14] Second, selection of responses from a list rather than free text typing reduces errors (amendments). Third, reminders or warning flags that identify missing items or additional studies that need to be performed improve the completeness of the report, and when these reminders are appropriately designed they may reduce the need for force functions to ensure completeness. Finally, embedding a flag into the synoptic report within the surgical pathology report itself made free text extraction of data elements from the surgical pathology report into a structured data file easy and highly accurate without impacting the reader's ability to extract information from the report.

The creation of Word macros improved completeness, but not to 100%. On review, it appeared that the secretaries were not sure what to do when a pathologist skipped an item in the macro during dictation. In some cases, they assumed that it was skipped on purpose and deleted that entry from the report. Additional training may be of value in further improving the completeness of reports created with macros.

However, there was no reduction in the amendment rate with the use of Word macros. On review, we noted that the majority of amendments were in the response rather than the header part of the report, and these macros did not change how this section of the report was created; secretaries still had to type in the response even when using a macro. In contrast, the web-based option reduced the number of responses that had to be typed, and this may have led to a reduction in the amendment rate. While it is possible to create drop-down boxes in Word that might have the same result, we were unable to make these compatible with our current laboratory information system.

In contrast, the web-based protocol was associated with a very high level of accuracy and completeness, including completeness of ordering additional studies, a measure that has rarely been examined in previous studies. We believe there are several reasons for this method's superior performance. First, it was simply easier for both the pathologists and the secretaries to use than the other methods (improved “usability”). For both the pathologists and the secretaries, the web-based method is set up with very simple rules so that each question must be answered and always requires a single response. There is very little thinking involved concerning optional questions or multiple-select items that might distract the user from the task at hand. In addition, when reading from a computer (typically but not exclusively Internet Explorer in this study), the browser scrolled through the questions in a much more uniform and consistent fashion than Microsoft Word scrolled through the Word-based templates. In Word, the checklist often scrolled too slowly or jumped from page to page, making it more likely for the pathologist to miss an item and possibly distracting them from dictating the correct response. For the secretaries, there was significantly less free text entry, it was more difficult for them to delete an item, and the flag made it clear that they were supposed to flag a missing item rather than simply delete it. In addition, unlike Word macros, it was also possible to embed instructions and reminders for the pathologists in the website without these appearing on the subsequent report. Our data clearly show that these reminders improved the consistent ordering of appropriate ancillary studies, without the need for force functions.

This study shows that a structured data file (“structured data”) can be extracted from free text synoptic reports and used to create a structured data set in a conventional database, but the success depends on the way those reports are structured. “Structured data” is a broad and generic term, and there are many types of “structured data,” not all of which may be of value in creating a structured data file or data in a true database system. Indeed, a synoptic report is required by the CAP to have data in a particular structure (“structured data”) arranged in tabular form, but this structure is of very little value for extracting a structured data file (e.g., CSV, TXT, or Excel files) for entry into a true database, which is really what most authors mean when they say “structured data” (although limited data suggest that this tabular structure likely improves the speed with which readers can extract information [26]). The success of our free text searches was highly dependent on embedding an alternative “structure” into the synoptic report (beyond that stipulated by the CAP) in the surgical pathology report itself, although these “structured” data are not the same as a data file. When we did not embed this structure (straight free text searching using regular expressions), the algorithm to extract the data was complex and completely type specific, since we had to search on the specific headers themselves. As a result, we were only able to develop an algorithm that worked for one tumor site (breast), and this method was only modestly reliable. This result is in line with the experience of others.[37],[38],[39],[40] In contrast, with our embedded structure of pipes, a single simple algorithm allowed extraction of 100% of the data from all tumor sites. Thus, we believe that embedding an appropriate structure into the synoptic report can significantly improve the success of extracting a structured data file from that report using free text searches. We have also shown that the embedded structure used in the synoptic report does not interfere with the reader's ability to extract information.

In addition, the structure that we employ (the pipe “|”) to extract a structured data file using text-based searches is not dependent on the tabular format that is required by the CAP. One could easily list one or more items on a line, or even embed these pipes within narrative text, and our system would still be able to extract the data into a structured data file. We believe it can also easily be extended to data within an addendum, for example, ancillary studies that may take some time to come back. In addition, any cases with amendments can be re-extracted and a corrected structured data file created. While other methods may be able to re-extract data from an amended report in which the synoptic report itself is edited and fixed, most other reporting methods can only extract data from the synoptic report area and cannot extract any data from other areas of the report (addendums, amendments where the new information is in a separate amended area, etc.). Finally, as we have shown, the use of a pipe as we describe it is in no way limited to the website we use, and any interested laboratory could use this technique regardless of how their synoptic reports are generated.

While we compared the performance of four different methods in this study, these are not all the methods that are currently available. CAP also offers the eCC product through a variety of different laboratory information system vendors, but this product was not available to us to compare. We would imagine that in some ways the performance of this product would be similar to that of our web-based product. Since the eCC typically contains a force function to ensure that all elements are included, one would expect the reports that are generated to be complete by necessity. In addition, the eCC by design generates a structured data report from the case just as our web-based product does. However, since our method ensures completeness and allowed extraction of virtually 100% of the data, it would appear difficult for the eCC to show an advantage in either of these two features.

Nevertheless, there are important differences between the web-based product we used in this study and the eCC. Stylistically, the two programs are very different. The web page is designed to include only required questions (including questions required by our clinicians), so all questions must be answered and all questions are single-question, single-answer questions. As such, it is as simple a synoptic report as we can make and requires as few “clicks” as possible on the part of the pathologist. In addition, we allow free text answers to every single question (although the majority of responses were not free text), allowing the user a very wide range of answers. The downside to this is that the final report may include more questions answered “Not Applicable” than other types of programs produce, though there is some evidence that this may, in fact, make the reports easier to read.[27] In contrast, the eCC is built on logic provided by the CAP. It contains single-answer questions, multi-answer questions, conditional questions (all of which require multi-question answers: the user needs to answer more than one question to complete a single question that appears in the report), and optional questions and answers. As such, there are fewer “Not Applicable” responses in the final report, but it takes more questions and more “clicks” on the part of the pathologist to create it. In addition, the responses to some questions are restricted to specific types (distances must be a number, regardless of the complexity of a particular case). As such, if a pathologist is uncomfortable answering a question in the way that the program is structured, the only real choice is to state that the answer cannot be determined. One suspects that different pathologists may prefer different formats. Additional testing of different formats to determine which ones are preferred by the end users may be appropriate.

Perhaps more importantly, the eCC generates its structured data at the time the data are entered, whereas our web-based product extracts the data from the report itself after it is signed out. As a result, while we can be sure that the data we extract exactly match the data in the final surgical pathology report (and we have confirmed this in a subset of our cases), the eCC may not always be able to do this (since the surgical report may be altered after the structured data are extracted), depending on the exact design used by an individual vendor. The CAP laboratory accreditation program already has standards to ensure the accuracy of all electronic interfaces. Whether confirming the accuracy of the data extracted by these programs would be an appropriate topic for standards in this program may be worth considering.

In addition, the eCC can only generate data from the synoptic report itself, and it cannot extract data from addendums or any other field of the report. The only way to extract such data for any purpose (research, tumor registries, etc.) is to include it in the synoptic report itself. However, we already know that error rates as measured by amendments are associated with the number of RDEs in a synoptic report.[24] We also know that the amount of text in the report is associated with the speed with which readers of the report can extract information.[26] Thus, this practice may both reduce the accuracy of the synoptic report and reduce its utility for conveying information to the clinician. In contrast, our web-based product can extract data from any section of the report, including notes, addendums, or even the gross description, as long as our embedded structure is present. As a result, it is possible to have multiple “synoptic reports” within a single surgical pathology report, each designed for the needs of a different set of users.

On the other hand, the eCC may have additional functionality that our website does not yet offer. It is possible that the eCC, in addition to extracting the data, provides additional tools to allow those data to be linked to other data sets such as SNOMED codes. Such functionality, if desired, is currently not offered on our website.

The use of our website led to the timely ordering of the most appropriate ancillary studies every time, before the cases were signed out. In addition, while not the subject of our study, during this study the ancillary studies that our clinicians wanted for a number of different tumors changed several times as they responded to the very rapidly changing world of “precision” medicine. Given this rapid pace of change, it is becoming harder and harder for pathologists and the laboratories in which they work to ensure that the most up-to-date and appropriate tests are being ordered in every case. We have found the ability to embed notes reminding the pathologists which tests to order in which situation invaluable to the success of our practice. Since the website was designed so that changes could easily be made to accommodate these differences in practice, we were able to make these changes for all pathologists at all sites in a matter of minutes from a single source, without going through our laboratory information system vendor and without having to make changes to multiple different source documents that pathologists and secretaries were using. Such flexibility may be of value as pathologists try to keep pace with the rapidly evolving field of “precision” medicine.

There are several limitations to this study. First, we used the judgment of the tumor registry staff as the gold standard for completeness. In a few cases, there were disagreements about whether a report was complete or not. Furthermore, this study took place in a busy community hospital with general pathologists. It is possible that different results could be obtained by specialists or in practices that are less busy. Perhaps most importantly, we cannot exclude that at least some of the improvement in performance is related to increasing use and experience over time and not entirely related to the methods used for synoptic report generation. However, the sudden improvement to essentially perfect performance with the web-based product, after more than a decade of creating synoptic reports by other methods without such success, suggests that increased practice and experience are unlikely to be the sole cause of this improvement.


Conclusion


Completeness, amendment rates, ancillary test ordering rates, and data extraction rates vary significantly with the method used to construct the synoptic report. Our web-based protocol appears to be very competitive with most other synoptic report generating methods.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.

 
References

1. Messenger DE, McLeod RS, Kirsch R. What impact has the introduction of a synoptic report for rectal cancer had on reporting outcomes for specialist gastrointestinal and nongastrointestinal pathologists? Arch Pathol Lab Med 2011;135:1471-5.
2. Zarbo RJ. Interinstitutional assessment of colorectal carcinoma surgical pathology report adequacy. A College of American Pathologists Q-Probes study of practice patterns from 532 laboratories and 15,940 reports. Arch Pathol Lab Med 1992;116:1113-9.
3. Idowu MO, Bekeris LG, Raab S, Ruby SG, Nakhleh RE. Adequacy of surgical pathology reporting of cancer: A College of American Pathologists Q-Probes study of 86 institutions. Arch Pathol Lab Med 2010;134:969-74.
4. Gephardt GN, Baker PB. Lung carcinoma surgical pathology report adequacy: A College of American Pathologists Q-Probes study of over 8300 cases from 464 institutions. Arch Pathol Lab Med 1996;120:922-7.
5. Hassell LA, Parwani AV, Weiss L, Jones MA, Ye J. Challenges and opportunities in the adoption of College of American Pathologists checklists in electronic format: Perspectives and experience of Reporting Pathology Protocols Project (RPP2) participant laboratories. Arch Pathol Lab Med 2010;134:1152-9.
6. Renshaw SA, Mena-Allauca M, Touriz M, Renshaw A, Gould EW. The impact of template format on the completeness of surgical pathology reports. Arch Pathol Lab Med 2014;138:121-4.
7. Baretton GB, Tannapfel A, Schmitt W. Standardized and structured histopathological evaluation of colorectal polyps: A practical checklist against the background of the new WHO classification. Pathologe 2011;32:289-96.
8. Casati B, Bjugn R. Structured electronic template for histopathology reporting on colorectal carcinoma resections: Five-year follow-up shows sustainable long-term quality improvement. Arch Pathol Lab Med 2012;136:652-6.
9. Daniel C, Macary F, Rojo MG, Klossa J, Laurinavicius A, Beckwith BA, et al. Recent advances in standards for Collaborative Digital Anatomic Pathology. Diagn Pathol 2011;6 Suppl 1:S17.
10. Srigley JR, McGowan T, Maclean A, Raby M, Ross J, Kramer S, et al. Standardized synoptic cancer pathology reporting: A population-based approach. J Surg Oncol 2009;99:517-24.
11. Cross SS, Feeley KM, Angel CA. The effect of four interventions on the informational content of histopathology reports of resected colorectal carcinomas. J Clin Pathol 1998;51:481-2.
12. Hammond EH, Flinner RL. Clinically relevant breast cancer reporting: Using process measures to improve anatomic pathology reporting. Arch Pathol Lab Med 1997;121:1171-5.
13. Gill AJ, Johns AL, Eckstein R, Samra JS, Kaufman A, Chang DK, et al. Synoptic reporting improves histopathological assessment of pancreatic resection specimens. Pathology 2009;41:161-7.
14. Karim RZ, van den Berg KS, Colman MH, McCarthy SW, Thompson JF, Scolyer RA. The advantage of using a synoptic pathology report format for cutaneous melanoma. Histopathology 2008;52:130-8.
15. Lam E, Vy N, Bajdik C, Strugnell SS, Walker B, Wiseman SM. Synoptic pathology reporting for thyroid cancer: A review and institutional experience. Expert Rev Anticancer Ther 2013;13:1073-9.
16. Kang HP, Devine LJ, Piccoli AL, Seethala RR, Amin W, Parwani AV. Usefulness of a synoptic data tool for reporting of head and neck neoplasms based on the College of American Pathologists cancer checklists. Am J Clin Pathol 2009;132:521-30.
17. McCluggage WG, Colgan T, Duggan M, Hacker NF, Mulvany N, Otis C, et al. Data set for reporting of endometrial carcinomas: Recommendations from the International Collaboration on Cancer Reporting (ICCR) between United Kingdom, United States, Canada, and Australasia. Int J Gynecol Pathol 2013;32:45-65.
18. Merlin T, Weston A, Tooher R. Extending an evidence hierarchy to include topics other than treatment: Revising the Australian 'levels of evidence'. BMC Med Res Methodol 2009;9:34.
19. Scolyer RA, Judge MJ, Evans A, Frishberg DP, Prieto VG, Thompson JF, et al. Data set for pathology reporting of cutaneous invasive melanoma: Recommendations from the International Collaboration on Cancer Reporting (ICCR). Am J Surg Pathol 2013;37:1797-814.
20. Jones KD, Churg A, Henderson DW, Hwang DM, Ma Wyatt J, Nicholson AG, et al. Data set for reporting of lung carcinomas: Recommendations from the International Collaboration on Cancer Reporting. Arch Pathol Lab Med 2013;137:1054-62.
21. Kench JG, Delahunt B, Griffiths DF, Humphrey PA, McGowan T, Trpkov K, et al. Dataset for reporting of prostate carcinoma in radical prostatectomy specimens: Recommendations from the International Collaboration on Cancer Reporting. Histopathology 2013;62:203-18.
22. Valenstein PN. Formatting pathology reports: Applying four design principles to improve communication and patient safety. Arch Pathol Lab Med 2008;132:84-94.
23. College of American Pathologists. Cancer Case Summaries. Available from: http://www.cap.org/. [Last accessed on 2016 Nov 15].
24. Renshaw AA, Gould EW. The cost of synoptic reporting. Arch Pathol Lab Med 2017;141:15-6.
25. Strickland-Marmol LB, Muro-Cacho CA, Barnett SD, Banas MR, Foulis PR. College of American Pathologists cancer protocols: Optimizing format for accuracy and efficiency. Arch Pathol Lab Med 2016;140:578-87.
26. Renshaw AA, Gould EW. Comparison of accuracy and speed of information identification by nonpathologists in synoptic reports with different formats. Arch Pathol Lab Med 2017; [In press].
27. Renshaw AA, Mena-Allauca M, Gould EW. Reporting Gleason grade/score in synoptic reports of radical prostatectomies. J Pathol Inform 2016;7:54.
28. College of American Pathologists. Inspection Checklist in Anatomic Pathology; 2016. Available from: http://www.cap. [Last accessed on 2016 Nov 01].
29. Baskovich BW, Allan RW. Web-based synoptic reporting for cancer checklists. J Pathol Inform 2011;2:16.
30. de Baca ME, Madden JF, Kennedy M. Electronic pathology reporting: Digitizing the College of American Pathologists cancer checklists. Arch Pathol Lab Med 2010;134:663-4.
31. Washington MK, Baker TP, Simpson J. Checklists, protocols, and the "gold standard" approach. Arch Pathol Lab Med 2014;138:159-60.
32. Simpson RW, Berman MA, Foulis PR, Divaris DX, Birdsong GG, Mirza J, et al. Cancer biomarkers: The role of structured data reporting. Arch Pathol Lab Med 2015;139:587-93.
33. Bjugn R, Casati B, Haugland HK. Structured electronic health records. Tidsskr Nor Laegeforen 2014;134:431-3.
34. Hassell L, Aldinger W, Moody C, Winters S, Gerlach K, Schwenn M, et al. Electronic capture and communication of synoptic cancer data elements from pathology reports: Results of the Reporting Pathology Protocols 2 (RPP2) project. J Registry Manag 2009;36:117-24.
35. Mohanty SK, Piccoli AL, Devine LJ, Patel AA, William GC, Winters SB, et al. Synoptic tool for reporting of hematological and lymphoid neoplasms based on World Health Organization classification and College of American Pathologists checklist. BMC Cancer 2007;7:144.
36. Ellis DW, Srigley J. Does standardised structured reporting contribute to quality in diagnostic pathology? The importance of evidence-based datasets. Virchows Arch 2016;468:51-9.
37. Buckley JM, Coopey SB, Sharko J, Polubriaginof F, Drohan B, Belli AK, et al. The feasibility of using natural language processing to extract clinical information from breast pathology reports. J Pathol Inform 2012;3:23.
38. Burger G, Abu-Hanna A, de Keizer N, Cornet R. Natural language processing in pathology: A scoping review. J Clin Pathol 2016;69:949-55.
39. Ye JJ. Pathology report data extraction from relational database using R, with extraction from reports on melanoma of skin as an example. J Pathol Inform 2016;7:44.
40. Boag A. Extraction and analysis of discrete synoptic pathology report data using R. J Pathol Inform 2015;6:62.

