Abstract
Pooled (or group) testing has been widely used for the surveillance of infectious diseases of low prevalence. The potential benefits of pooled testing include savings in testing time and cost, fewer false-positive tests, and the ability to estimate models or make predictions from limited observed information (e.g., only initial pooled responses). However, realizing these benefits often depends critically on the pool size used. Statistical methods introduced in the literature for optimal pool size determination have been developed mainly to accommodate simpler pooling protocols or perfect diagnostic assays. In this article, we study these issues with the goal of presenting a general optimization technique. We evaluate the efficiency of the estimators of disease prevalence (i.e., the proportion of diseased individuals in a population) while accounting for testing costs. We then determine the optimal pool size by minimizing measures of optimality, such as screening efficiency and estimation efficiency. Our findings are illustrated using data from an ongoing screening application at the Louisiana Department of Health. We show that when a pooling application is properly designed, substantial advantages can be realized. We provide a package and a software application to facilitate the implementation of our optimization techniques. Supplementary materials accompanying this paper appear online.
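To make the idea of choosing a pool size by minimizing an efficiency measure concrete, the sketch below minimizes the expected number of tests per individual under classical two-stage (Dorfman) pooling with a perfect assay. This is a simplified, illustrative criterion, not the paper's general optimization technique; the function names and the search bound `k_max` are assumptions for illustration.

```python
# Illustrative sketch: pick a pool size k that minimizes the expected
# number of tests per individual under two-stage (Dorfman) pooling,
# assuming a perfect assay and prevalence p. For a pool of size k,
#   E[tests per person] = 1/k + 1 - (1 - p)**k,
# i.e., one pooled test per k people plus retests of positive pools.

def expected_tests_per_person(p: float, k: int) -> float:
    """Expected tests per individual for pool size k at prevalence p."""
    return 1.0 / k + 1.0 - (1.0 - p) ** k

def optimal_pool_size(p: float, k_max: int = 100) -> int:
    """Grid search over pool sizes 2..k_max for the minimizer."""
    return min(range(2, k_max + 1),
               key=lambda k: expected_tests_per_person(p, k))

# Example: at 1% prevalence, the minimizing pool size is 11.
print(optimal_pool_size(0.01))  # -> 11
```

In practice, as the abstract notes, the optimal pool size also depends on assay error rates, the pooling protocol, and estimation goals, so richer criteria than this textbook formula are needed.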
Publisher
Springer Science and Business Media LLC