J Pathol Inform 2012;3:5
How useful are delta checks in the 21st century? A stochastic-dynamic model of specimen mix-up and detection
Katie Ovens1, Christopher Naugler2
1 Bachelor of Health Sciences Program, Faculty of Medicine, Room G503, O'Brien Centre for the BHSc, 3330 Hospital Drive N.W, Calgary, Alberta T2N 4N1, Canada
2 Department of Pathology and Laboratory Medicine, University of Calgary and Calgary Laboratory Services C414, Diagnostic and Scientific Centre 9, 3535 Research Road NW, Calgary, Canada
Date of Submission: 02-Nov-2011
Date of Acceptance: 22-Nov-2011
Date of Web Publication: 29-Feb-2012
© 2012 Ovens et al; This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract
Introduction: Delta checks use two specimen test results taken in succession in order to detect test result changes greater than expected physiological variation. One of the most common and serious errors detected by delta checks is specimen mix-up errors. The positive and negative predictive values of delta checks for detecting specimen mix-up errors, however, are largely unknown. Materials and Methods: We addressed this question by first constructing a stochastic dynamic model using repeat test values for five analytes from approximately 8000 inpatients in Calgary, Alberta, Canada. The analytes examined were sodium, potassium, chloride, bicarbonate, and creatinine. The model simulated specimen mix-up errors by randomly switching a set number of pairs of second test results. Sensitivities and specificities were then calculated for each analyte for six combinations of delta check equations and cut-off values from the published literature. Results: Delta check specificities obtained from this model ranged from 50% to 99%; however, the sensitivities were generally below 20%, with the exception of creatinine, for which the best performing delta check had a sensitivity of 82.8%. Within a plausible incidence range of specimen mix-ups, the positive predictive values of even the best performing delta check equation and analyte became negligible. Conclusion: This finding casts doubt on the ongoing clinical utility of delta checks in the setting of low rates of specimen mix-ups.
Keywords: Computer model, delta check, laboratory error
How to cite this article: Ovens K, Naugler C. How useful are delta checks in the 21st century? A stochastic-dynamic model of specimen mix-up and detection. J Pathol Inform 2012;3:5
Introduction
Laboratory error remains a serious problem, with specimen mix-up errors constituting one of the most serious preanalytic errors. In an attempt to detect specimen mix-up errors, it is common practice for laboratories to use delta check algorithms. Delta checks compare the current test result with a previous result for the same test obtained from the same patient within a short period of time (here, within 96 hours). If the change in the value of the analyte exceeds an expected physiological range, the result is flagged as a possible error. Different delta check algorithms employ various equations, including absolute differences in analyte levels, percentage changes, and rate changes (see the Results section). The calculation of these values is generally automated within analyzer software or Laboratory Information Systems.
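The three equation types described above can be sketched in a few lines of Python. The analyte values and cut-offs below are hypothetical illustrations chosen for this example only; they are not the cut-off values evaluated in this study:

```python
def absolute_delta(current, previous):
    """Absolute difference between successive results."""
    return abs(current - previous)

def percent_delta(current, previous):
    """Percentage change relative to the previous result."""
    return abs(current - previous) / abs(previous) * 100.0

def rate_delta(current, previous, hours_elapsed):
    """Rate of change per hour, normalising for the interval between draws."""
    return abs(current - previous) / hours_elapsed

# Flag a result when the delta exceeds an expected physiological cut-off.
# Example: potassium (mmol/L); the cut-offs are illustrative only.
prev_k, curr_k, hours = 4.1, 5.6, 12.0
flags = {
    "absolute": absolute_delta(curr_k, prev_k) > 1.0,    # > 1.0 mmol/L
    "percent": percent_delta(curr_k, prev_k) > 25.0,     # > 25%
    "rate": rate_delta(curr_k, prev_k, hours) > 0.1,     # > 0.1 mmol/L/h
}
print(flags)
```

In practice such comparisons run automatically in the analyzer software or LIS whenever a second result arrives within the delta check window.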
A variety of delta check algorithms have been described; however, a brief perusal of the references to this paper will show that much of the work on delta checks was performed several decades ago. Modern technologies such as bar-coding and automated specimen processing have undoubtedly decreased the incidence of specimen mix-up errors, but the potential effect of this on the positive and negative predictive values of delta checks has not been explored.
In order to determine these positive and negative predictive values, two pieces of information are necessary. First, we must know the sensitivity and specificity of the various delta checks for detecting specimen mix-ups. Second, we need to know the expected incidence of specimen mix-ups in the population studied. The specificity of delta checks could be estimated in part from data on the follow-up of positive delta checks; previous work has suggested false-positive rates as high as 70%. The determination of the sensitivity of delta checks, however, is much more difficult, with little information available. The reason for this, of course, is that other mechanisms to detect these mix-up errors are lacking, and so there is no practical way to identify false-negative delta checks. The only viable method to estimate the sensitivity of delta checks is to employ a modeling approach which intentionally introduces errors into a database of repeat measurements and then tests the ability of delta checks to detect these errors.
In this paper we follow this modeling approach by first obtaining a data set of repeated measurements of actual patient results and then using a simple computer program to introduce mix-up errors into this dataset (by switching pairs of second results). Because these introduced errors are known, we can then apply delta check equations to determine both sensitivity and specificity for each equation and each analyte. Finally, using literature values for the range of actual specimen mix-up errors, we can estimate positive and negative predictive values.
Materials and Methods
Acquisition of Patient Data
Following ethics approval by the University of Calgary Conjoint Health Research Ethics Board, data were obtained from the Laboratory Information System (LIS) of Calgary Laboratory Services. Calgary Laboratory Services is the sole provider of laboratory services to the 1.4 million residents of Calgary, Alberta, Canada and the surrounding area, performing approximately 15.8 million chemistry tests per year. We queried the LIS for patient test results for potassium, sodium, chloride, bicarbonate, and creatinine, performed within the previous 12 months on hospitalized patients where two measurements of the same analyte were available within a 96-hour period. Test results were then anonymized by removing all identifying information.
Between 8135 and 8432 pairs of test results were obtained for each analyte considered. These results were entered into an Excel database along with the time interval between the two tests. A stochastic dynamic model was then written in Visual Basic and run as a macro within the spreadsheet. This program simulated specimen mix-up errors by switching a subset of the pairs of second results. The resulting "error rate" could be changed by the operator and was set at 1% for the subsequent modeling, a number chosen to be higher than any reasonable estimate of real specimen mix-up errors already present in the patient data. The data were then run through a series of delta check algorithms in an attempt to detect the errors that had been introduced. The results of each algorithm were presented in truth table format to calculate sensitivity and specificity. Because errors were introduced at random, we were concerned that certain iterations of the model might have introduced simulated mix-up errors that were more or less amenable to detection by certain delta checks. Therefore, each configuration of the model was run 10 times and the mean values and ranges were reported. A copy of the Visual Basic program (without patient data) is available by emailing the corresponding author.
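The original model is a Visual Basic macro; purely for illustration, the core procedure (switching the second results of a random subset of pairs, then tallying a truth table for a flagging rule) might be sketched in Python as follows. The flag function passed to `truth_table` is a stand-in for any of the published delta check equations, not the equations actually used in the study:

```python
import random

def simulate_mixups(pairs, error_rate=0.01, rng=None):
    """Swap the second results of a random (even-sized) subset of pairs,
    returning the corrupted pairs and the indices that were switched."""
    rng = rng or random.Random(0)
    pairs = [list(p) for p in pairs]
    n_swap = 2 * max(1, int(len(pairs) * error_rate) // 2)
    idx = rng.sample(range(len(pairs)), n_swap)
    # Exchange second results pairwise, so each switched pair now holds
    # another patient's result, as in the model.
    for a, b in zip(idx[::2], idx[1::2]):
        pairs[a][1], pairs[b][1] = pairs[b][1], pairs[a][1]
    return pairs, set(idx)

def truth_table(pairs, switched, flag):
    """Tally a delta check flag(prev, curr) against the known errors."""
    tp = fp = tn = fn = 0
    for i, (prev, curr) in enumerate(pairs):
        flagged, is_error = flag(prev, curr), i in switched
        tp += flagged and is_error
        fp += flagged and not is_error
        tn += not flagged and not is_error
        fn += not flagged and is_error
    return tp / (tp + fn), tn / (tn + fp)  # sensitivity, specificity
```

A run of this sketch with real paired results would reproduce one iteration of the model; repeating it 10 times with fresh random seeds and averaging corresponds to the replication described above.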
Results
Six combinations of delta check equations and cut-off values were obtained from the published literature and tested with our model. Mean sensitivities and specificities from 10 replicated model runs are given in [Table 1]. The specificities of individual delta checks ranged from 50% to 99%, with most equations giving results above 90%. However, the sensitivities were generally much lower, with most values falling below 20%. The exception to this was creatinine, for which the best performing delta check had a sensitivity of 82.8%.
Table 1: Comparison of the sensitivity and specificity of four delta check equations (two with two different cut-offs)
As [Table 1] also shows, the optimal delta check equations tended to be different for each analyte. No delta check equation provided both the best sensitivity and specificity for an individual analyte.
The positive and negative predictive values of delta checks may be of greater practical interest than sensitivities and specificities. Positive and negative predictive values are contingent upon the prevalence of the condition in the population (here, the rate of specimen mix-ups). We do not know the true rate of mix-up errors in modern laboratories, but the rate is thought to be less than 1 in 1000. [Table 2] shows that as the rate of mix-ups drops below 1 in 1000, the associated positive predictive value of even the best performing delta check becomes negligible. A positive predictive value of 58% with a 1% specimen mix-up rate is comparable to the positive predictive value of 50% estimated in an older study also using a 1% mix-up rate.
Table 2: Example positive predictive values and negative predictive values of the highest sensitivity (82.8%) and highest specificity (99.4%) delta checks
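The predictive values in [Table 2] follow directly from Bayes' rule applied to the sensitivity, specificity, and assumed mix-up rate. As a sketch, using the best observed sensitivity (82.8%) and specificity (99.4%) together, as in [Table 2]:

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Positive and negative predictive values via Bayes' rule."""
    tp = sensitivity * prevalence            # true positives
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    tn = specificity * (1 - prevalence)      # true negatives
    fn = (1 - sensitivity) * prevalence      # false negatives
    return tp / (tp + fp), tn / (tn + fn)

# Best-performing delta check: sensitivity 82.8%, specificity 99.4%.
for mixup_rate in (0.01, 0.001, 0.0001):
    ppv, npv = predictive_values(0.828, 0.994, mixup_rate)
    print(f"mix-up rate {mixup_rate:.2%}: PPV {ppv:.1%}, NPV {npv:.2%}")
```

At a 1% mix-up rate this yields the 58% positive predictive value quoted above, and the value falls rapidly as the mix-up rate drops toward modern levels, while the negative predictive value stays close to 100% simply because mix-ups are rare.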
Discussion
Delta checks are commonly used in a laboratory setting to detect specimen mix-up errors. This makes it important to determine the most effective delta check equations for detecting these important errors. In this paper, we used a stochastic dynamic modeling approach to provide estimates of delta check sensitivities and specificities for five common analytes. We found that, with the exception of creatinine, sensitivities tended to be low. An obvious weakness in our approach was that we tested only single analytes and did not use delta checks from multiple analytes in combination. The number of possible combinations of equations and cut-off values is very large in a multivariate approach, and therefore it would not be practical to test all possible combinations. However, preliminary analyses showed that combinations of delta checks produced only marginal increases above the sensitivity of creatinine alone. Additionally, it should be noted that our analysis refers only to the ability of delta checks to detect specimen mix-up errors and not the ability to detect other preanalytical or analytical errors.
We also ran the model with variations in cut-off levels for each analyte. Examination of the resulting receiver operating characteristic curves showed that the literature cut-off values we used in the model were close to the optimum cut-off values for each analyte.
The finding of superior sensitivities for creatinine compared to the other analytes examined could be explained by the fact that electrolytes are actively regulated within a narrow physiologic range but creatinine, as a metabolic by-product dependent on both muscle mass and renal function, would be expected to show greater interindividual variation. Similarly, the tight homeostatic control of electrolytes within an individual could explain the high specificities observed when specimen mix-up errors occur.
Based on plausible specimen mix-up rates obtained from the published literature, our model suggests that the positive predictive values of delta checks are currently very low. As modern techniques such as specimen bar-coding further reduce the rate of mix-up errors, the very low positive predictive values estimated by our model must be weighed against the effort involved in investigating false-positive delta checks to determine whether this quality assurance strategy warrants ongoing use by clinical laboratories.
Conclusion
Our model suggests that delta checks have widely varying sensitivities depending on the particular equation and analyte examined. In all cases the positive predictive values are anticipated to be very low and will decrease as modern technologies further reduce the incidence of specimen mix-up errors.
Competing Interests
The authors declare that they have no competing interests.
References
1. Kazmierczak SC. Laboratory quality control: Using patient data to assess analytical performance. Clin Chem Lab Med 2003;41:617-27.
2. Ladenson JH. Patients as their own controls: Use of the computer to identify "laboratory error". Clin Chem 1975;21:1648-53.
3. Iizuka Y, Kume H, Kitamura M. Multivariate delta check method for detecting specimen mix-up. Clin Chem 1982;28:2244-8.
4. Wheeler L, Sheiner L. A clinical evaluation of various delta check methods. Clin Chem 1981;27:5-9.
5. Whitehurst P, Di Silvio TV, Boyadjian G. Evaluation of discrepancies in patients' results - an aspect of computer-assisted quality control. Clin Chem 1975;21:87-92.
6. Nosanchuk JS, Gottmann AW. CUMs and delta checks. A systematic approach to quality control. Am J Clin Pathol 1974;62:707-12.
7. Lacher D, Connelly D. Rate and delta checks compared for selected chemistry tests. Clin Chem 1988;34:1966-70.
8. Sheiner L, Wheeler L, Moore J. The performance of delta check methods. Clin Chem 1979;25:2034-7.
9. Wagar EA, Tamashiro L, Yasin B, Hilborne L, Bruckner DA. Patient safety in the clinical laboratory: A longitudinal analysis of specimen identification errors. Arch Pathol Lab Med 2006;130:1662-8.
10. Murphy MF, Stearn BE, Dzik WH. Current performance of patient sample collection in the UK. Transfus Med 2004;14:113-21.
11. Rheem I, Lee KN. The multi-item univariate delta check method: A new approach. Stud Health Technol Inform 1998;52:859-63.