Introduction
It is widely acknowledged that the purpose of diagnostic testing is to reduce diagnostic uncertainty (e.g., by 0% if the test is useless, or by up to 100% when the test is perfect)1. However, the current metrics of diagnostic performance [i.e., sensitivity (S), specificity (C), positive and negative likelihood ratios (LR+, LR-), diagnostic odds ratio (DOR), and area under the curve (AUC)] do not directly quantify the amount by which diagnostic uncertainty is reduced. Despite lacking this crucial clinical property, these "traditional" diagnostic metrics remain the preferred evidence-based medicine (EBM) measures of diagnostic test performance2,3.
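For illustration, the sketch below computes these traditional metrics from a hypothetical 2x2 contingency table; the counts are invented for the example, and AUC is omitted because it requires test results at multiple thresholds.

```python
# Hypothetical 2x2 contingency table (counts chosen only for illustration).
tp, fn = 90, 10   # diseased patients: true positives, false negatives
fp, tn = 20, 80   # healthy patients: false positives, true negatives

sensitivity = tp / (tp + fn)              # S = P(T+ | D+) = 0.90
specificity = tn / (tn + fp)              # C = P(T- | D-) = 0.80
lr_pos = sensitivity / (1 - specificity)  # LR+ = 4.5
lr_neg = (1 - sensitivity) / specificity  # LR- = 0.125
dor = lr_pos / lr_neg                     # DOR = 36.0

print(f"S={sensitivity:.2f}  C={specificity:.2f}  "
      f"LR+={lr_pos:.2f}  LR-={lr_neg:.3f}  DOR={dor:.1f}")
```

Note that none of these numbers, on its own, states how much the test reduces a clinician's uncertainty about the disease state.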
Meanwhile, there is a long tradition of quantifying diagnostic test performance in the field of information theory4. Although the problems of medical diagnostic testing are conceptually similar to those faced in communication and information theory, for some reason the field of EBM diagnostics has not embraced the measures typically used in information theory.
One such measure, mutual information (MI)5, which evaluates the association between two random variables, has been considered the best metric for quantifying diagnostic uncertainty and therefore test performance6. It has been used in a number of medical studies to characterize the relationship between test results and disease states7-14. Yet it has been surprisingly absent from the EBM literature.
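To make this concrete, the following sketch computes MI (in bits) between disease state D and a binary test result T from their joint distribution; the prevalence, sensitivity, and specificity values are hypothetical, chosen only for illustration.

```python
import numpy as np

def mutual_information(joint):
    """Mutual information I(D;T) in bits from a joint probability table
    over disease state D (rows) and test result T (columns)."""
    joint = np.asarray(joint, dtype=float)
    joint = joint / joint.sum()               # normalize to probabilities
    p_d = joint.sum(axis=1, keepdims=True)    # marginal P(D)
    p_t = joint.sum(axis=0, keepdims=True)    # marginal P(T)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = joint * np.log2(joint / (p_d * p_t))
    return np.nansum(terms)                   # treat 0*log(0) as 0

# Hypothetical test: prevalence 0.30, sensitivity 0.90, specificity 0.80.
prev, sens, spec = 0.30, 0.90, 0.80
joint = [[prev * sens,             prev * (1 - sens)],        # D+ row: T+, T-
         [(1 - prev) * (1 - spec), (1 - prev) * spec]]        # D- row: T+, T-
print(f"I(D;T) = {mutual_information(joint):.3f} bits")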
The most significant properties that establish MI as superior to traditional measures of diagnostic performance can be summarized as follows:
In this paper, we promote the notion that MI is a better measure for the evaluation of diagnostic performance8, on both theoretical and practical grounds. We extend the current work by explaining how MI can be meta-analyzed, and we provide two illustrative examples of diagnostic test meta-analysis using MI.
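As a preview of what meta-analyzing MI can involve, the sketch below pools per-study MI estimates by generic fixed-effect inverse-variance weighting; the study values and their variances are invented, and this generic scheme is an illustrative assumption, not necessarily the pooling procedure developed later in the paper.

```python
import numpy as np

# Generic fixed-effect, inverse-variance pooling of per-study MI estimates.
# The MI values and sampling variances below are invented for illustration.
mi = np.array([0.21, 0.35, 0.28])        # per-study MI estimates (bits)
var = np.array([0.004, 0.009, 0.006])    # per-study sampling variances

w = 1.0 / var                            # inverse-variance weights
mi_pooled = np.sum(w * mi) / np.sum(w)   # pooled MI estimate
se_pooled = np.sqrt(1.0 / np.sum(w))     # standard error of pooled estimate

print(f"pooled MI = {mi_pooled:.3f} bits "
      f"(95% CI ± {1.96 * se_pooled:.3f})")
```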