BinomCI.Rd
Compute confidence intervals for binomial proportions according to a number of commonly proposed methods.
BinomCI(x, n, conf.level = 0.95, sides = c("two.sided", "left", "right"),
method = c("wilson", "wald", "waldcc", "agresti-coull", "jeffreys",
"modified wilson", "wilsoncc","modified jeffreys",
"clopper-pearson", "arcsine", "logit", "witting", "pratt",
"midp", "lik", "blaker"),
rand = 123, tol = 1e-05, std_est = TRUE)
x: number of successes.
n: number of trials.
conf.level: confidence level, defaults to 0.95.
sides: a character string specifying the side of the confidence interval; must be one of "two.sided" (default), "left" or "right". You can specify just the initial letter. "left" would be analogous to a hypothesis of "greater" in a t.test.
method: character string specifying which method to use; this can be one out of: "wald", "wilson" (default), "wilsoncc", "agresti-coull", "jeffreys", "modified wilson", "modified jeffreys", "clopper-pearson", "arcsine", "logit", "witting", "pratt", "midp", "lik" and "blaker". Abbreviation of method is accepted. See details.
rand: seed for the random number generator; see details.
tol: tolerance for method "blaker".
std_est: logical, specifying whether the standard point estimator for the proportion x/n should be returned (TRUE, default) or the method-specific, internally used alternative point estimate (FALSE).
All arguments are recycled.
The Wald interval is obtained by inverting the acceptance region of the Wald large-sample normal test.
The Wald with continuity correction interval is obtained by adding the term 1/(2*n) to the Wald interval.
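As an illustration, the Wald and Wald-cc limits can be computed by hand. This is a sketch in base R, not the package's internal code; the values x = 37, n = 43 are taken from the example section below.

```r
# Hand-rolled Wald interval (illustrative sketch, not DescTools internals)
x <- 37; n <- 43
p_hat <- x / n
z <- qnorm(1 - 0.05 / 2)                 # two-sided 95% normal quantile
se <- sqrt(p_hat * (1 - p_hat) / n)      # large-sample standard error
wald <- c(lwr = p_hat - z * se, upr = p_hat + z * se)
round(wald, 7)    # ~ 0.7568980 0.9640322, the "wald" row in the examples
# waldcc simply widens both limits by the continuity term 1/(2*n)
waldcc <- wald + c(-1, 1) / (2 * n)
round(waldcc, 7)  # ~ 0.7452701 0.9756601, the "waldcc" row
```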
The Wilson interval, which is the default method here, was introduced by Wilson (1927) and is the inversion of the CLT approximation to the family of equal-tail tests of p = p0. The Wilson interval is recommended by Agresti and Coull (1998) as well as by Brown et al (2001). It is also returned as conf.int by the function prop.test with the correct option set to FALSE.
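For illustration, the Wilson limits can be reproduced from the standard closed-form score expression. This is a base-R sketch, not the package's internal code.

```r
# Wilson score interval from its closed form (illustrative sketch)
x <- 37; n <- 43
p_hat <- x / n
z <- qnorm(0.975)
centre <- (p_hat + z^2 / (2 * n)) / (1 + z^2 / n)
halfw  <- z * sqrt(p_hat * (1 - p_hat) / n + z^2 / (4 * n^2)) / (1 + z^2 / n)
c(lwr = centre - halfw, upr = centre + halfw)
# ~ 0.7273641 0.9344428, the same limits as prop.test(37, 43, correct = FALSE)
```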
The Wilson cc interval is a modification of the Wilson interval that adds a continuity correction term. It is returned as conf.int by the function prop.test with the correct option set to TRUE.
The modified Wilson interval is a modification of the Wilson interval for x close to 0 or n as proposed by Brown et al (2001).
The Agresti-Coull interval was proposed by Agresti and Coull (1998) and is a slight modification of the Wilson interval. The Agresti-Coull intervals are never shorter than the Wilson intervals; cf. Brown et al (2001). The internally used point estimator p-tilde is returned as an attribute.
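A sketch of the Agresti-Coull construction in its usual textbook form (not the package internals): the point estimate is shifted towards 0.5 before a Wald-type interval is applied. The values x = 81, n = 263 match the last example below.

```r
# Agresti-Coull: Wald-type interval around the shifted estimate p-tilde
x <- 81; n <- 263
z <- qnorm(0.975)
n_tilde <- n + z^2
p_tilde <- (x + z^2 / 2) / n_tilde        # the internally used point estimate
halfw <- z * sqrt(p_tilde * (1 - p_tilde) / n_tilde)
c(est = p_tilde, lwr = p_tilde - halfw, upr = p_tilde + halfw)
# p_tilde ~ 0.3107490, cf. the std_est = FALSE row in the last example
```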
The Jeffreys interval is an implementation of the equal-tailed Jeffreys prior interval as given in Brown et al (2001).
The modified Jeffreys interval is a modification of the Jeffreys interval for x == 0 | x == 1 and x == n-1 | x == n as proposed by Brown et al (2001).
The Clopper-Pearson interval is based on quantiles of the corresponding beta distributions. It is sometimes also called the "exact" interval.
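The beta-quantile form can be sketched directly with qbeta() (this is the standard construction; the limits agree with binom.test, as the example section below also shows):

```r
# Clopper-Pearson limits from beta quantiles (illustrative sketch)
x <- 42; n <- 43; alpha <- 0.05
lwr <- if (x == 0) 0 else qbeta(alpha / 2, x, n - x + 1)
upr <- if (x == n) 1 else qbeta(1 - alpha / 2, x + 1, n - x)
c(lwr = lwr, upr = upr)
# ~ 0.8771095 0.9994114, identical to binom.test(42, 43)$conf.int
```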
The arcsine interval is based on the variance-stabilizing transformation for the binomial distribution.
The logit interval is obtained by inverting the Wald-type interval for the log odds.
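A sketch of this construction under the usual delta-method standard error (not the package's internal code): a Wald interval on the log-odds scale, back-transformed with plogis().

```r
# Logit interval: Wald limits for log(p/(1-p)), then inverse-logit transform
x <- 37; n <- 43
p_hat <- x / n
z <- qnorm(0.975)
lambda <- log(p_hat / (1 - p_hat))        # log odds
se_lambda <- sqrt(1 / x + 1 / (n - x))    # delta-method standard error
plogis(lambda + c(-1, 1) * z * se_lambda)
# ~ 0.7224337 0.9359412, the "logit" row in the example section
```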
The Witting interval (cf. Example 2.106 in Witting (1985)) uses randomization to obtain uniformly optimal lower and upper confidence bounds (cf. Theorem 2.105 in Witting (1985)) for binomial proportions.
The Pratt interval is obtained by an extremely accurate normal approximation (Pratt 1968).
The mid-p approach is used to reduce the conservatism of the Clopper-Pearson interval, which is known to be very pronounced. The method accumulates the tail areas. The lower bound \(p_l\) is found as the solution to the equation $$\frac{1}{2} f(x;n,p_l) + (1-F(x;n,p_l)) = \frac{\alpha}{2}$$ where \(f(x;n,p)\) denotes the probability mass function (pmf) and \(F(x;n,p)\) the (cumulative) distribution function of the binomial distribution with size \(n\) and proportion \(p\), evaluated at \(x\). The upper bound \(p_u\) is found as the solution to the equation $$\frac{1}{2} f(x;n,p_u) + F(x-1;n,p_u) = \frac{\alpha}{2}$$ If x = 0, the lower bound is 0; if x = n, the upper bound is 1.
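The two defining equations can be solved numerically, e.g. with uniroot(). This is a sketch of the idea, not the package's solver.

```r
# Mid-p limits by root-finding on the two tail-area equations
x <- 37; n <- 43; alpha <- 0.05
f_lwr <- function(p) 0.5 * dbinom(x, n, p) +
  pbinom(x, n, p, lower.tail = FALSE) - alpha / 2
f_upr <- function(p) 0.5 * dbinom(x, n, p) + pbinom(x - 1, n, p) - alpha / 2
lwr <- uniroot(f_lwr, c(1e-12, 1 - 1e-12), tol = 1e-10)$root
upr <- uniroot(f_upr, c(1e-12, 1 - 1e-12), tol = 1e-10)$root
c(lwr = lwr, upr = upr)
# ~ 0.7321815 0.9414281, the "midp" row in the example section
```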
The Likelihood-based approach is said to be theoretically appealing. Confidence intervals are based on profiling the binomial deviance in the neighbourhood of the MLE.
For the Blaker method refer to Blaker (2000).
For more details we refer to Brown et al (2001) as well as Witting (1985).
Some of the methods can produce intervals that violate the [0, 1] boundaries, i.e. negative lower limits or upper limits greater than 1. Such values are truncated so as not to exceed the valid range [0, 1].
So which interval should be used? The Wald interval often has inadequate coverage, particularly for small n and values of p close to 0 or 1. Conversely, the Clopper-Pearson exact method is very conservative and tends to produce wider intervals than necessary. Brown et al (2001) recommend the Wilson or Jeffreys methods for small n, and the Agresti-Coull, Wilson, or Jeffreys methods for larger n, as providing more reliable coverage than the alternatives.
For the methods "wilson", "wilsoncc", "modified wilson", "agresti-coull" and "arcsine", the internally used alternative point estimator for the proportion can be returned by setting std_est = FALSE. This point estimate is typically shifted slightly towards 0.5 compared to the standard estimator. See the literature for more details.
A vector with 3 elements: the point estimate, the lower confidence limit and the upper confidence limit. If any argument has length greater than 1, a 3-column matrix is returned.
This function was originally based on binomCI() from the SLmisc package. In the meantime, the code has been updated on several occasions and has undergone numerous extensions and bug fixes.
Agresti A. and Coull B.A. (1998) Approximate is better than "exact" for interval estimation of binomial proportions. American Statistician, 52, pp. 119-126.
Brown L.D., Cai T.T. and DasGupta A. (2001) Interval estimation for a binomial proportion. Statistical Science, 16(2), pp. 101-133.
Witting H. (1985) Mathematische Statistik I. Stuttgart: Teubner.
Pratt J.W. (1968) A normal approximation for binomial, F, beta, and other common, related tail probabilities. Journal of the American Statistical Association, 63, pp. 1457-1483.
Wilcox R.R. (2005) Introduction to Robust Estimation and Hypothesis Testing. Elsevier Academic Press.
Newcombe R.G. (1998) Two-sided confidence intervals for the single proportion: comparison of seven methods. Statistics in Medicine, 17, pp. 857-872. https://pubmed.ncbi.nlm.nih.gov/16206245/
Blaker H. (2000) Confidence curves and improved exact confidence intervals for discrete distributions. Canadian Journal of Statistics, 28(4), pp. 783-798.
binom.test, binconf, MultinomCI, BinomDiffCI, BinomRatioCI
BinomCI(x=37, n=43,
method=eval(formals(BinomCI)$method)) # return all methods
#> est lwr.ci upr.ci
#> wilson 0.8604651 0.7273641 0.9344428
#> wald 0.8604651 0.7568980 0.9640322
#> waldcc 0.8604651 0.7452701 0.9756601
#> agresti-coull 0.8604651 0.7235600 0.9382469
#> jeffreys 0.8604651 0.7348110 0.9395927
#> modified wilson 0.8604651 0.7273641 0.9344428
#> wilsoncc 0.8604651 0.7137335 0.9419725
#> modified jeffreys 0.8604651 0.7348110 0.9395927
#> clopper-pearson 0.8604651 0.7206752 0.9470234
#> arcsine 0.8604651 0.7346862 0.9424696
#> logit 0.8604651 0.7224337 0.9359412
#> witting 0.8604651 0.7493378 0.9273288
#> pratt 0.8604651 0.7661306 0.9472522
#> midp 0.8604651 0.7321815 0.9414281
#> lik 0.8604651 0.7372546 0.9420472
#> blaker 0.8604651 0.7255152 0.9374534
prop.test(x=37, n=43, correct=FALSE) # same as method wilson
#>
#> 1-sample proportions test without continuity correction
#>
#> data: 37 out of 43, null probability 0.5
#> X-squared = 22.349, df = 1, p-value = 0.000002274
#> alternative hypothesis: true p is not equal to 0.5
#> 95 percent confidence interval:
#> 0.7273641 0.9344428
#> sample estimates:
#> p
#> 0.8604651
#>
prop.test(x=37, n=43, correct=TRUE) # same as method wilsoncc
#>
#> 1-sample proportions test with continuity correction
#>
#> data: 37 out of 43, null probability 0.5
#> X-squared = 20.93, df = 1, p-value = 0.000004763
#> alternative hypothesis: true p is not equal to 0.5
#> 95 percent confidence interval:
#> 0.7137335 0.9419725
#> sample estimates:
#> p
#> 0.8604651
#>
# the confidence interval computed by binom.test
# corresponds to the Clopper-Pearson interval
BinomCI(x=42, n=43, method="clopper-pearson")
#> est lwr.ci upr.ci
#> [1,] 0.9767442 0.8771095 0.9994114
binom.test(x=42, n=43)$conf.int
#> [1] 0.8771095 0.9994114
#> attr(,"conf.level")
#> [1] 0.95
# all arguments are being recycled:
BinomCI(x=c(42, 35, 23, 22), n=43, method="wilson")
#> est lwr.ci upr.ci
#> x.1 0.9767442 0.8794101 0.9958829
#> x.2 0.8139535 0.6738300 0.9025825
#> x.3 0.5348837 0.3891564 0.6748894
#> x.4 0.5116279 0.3675231 0.6538255
BinomCI(x=c(42, 35, 23, 22), n=c(50, 60, 70, 80), method="jeffreys")
#> est lwr.ci upr.ci
#> x.1:n.1 0.8400000 0.7206737 0.9213325
#> x.2:n.2 0.5833333 0.4571040 0.7017365
#> x.3:n.3 0.3285714 0.2272016 0.4437899
#> x.4:n.4 0.2750000 0.1863875 0.3795587
# example Table I in Newcombe (1998)
meths <- c("wald", "waldcc", "wilson", "wilsoncc",
"clopper-pearson","midp", "lik")
round(cbind(
BinomCI(81, 263, m=meths)[, -1],
BinomCI(15, 148, m=meths)[, -1],
BinomCI(0, 20, m=meths)[, -1],
BinomCI(1, 29, m=meths)[, -1]), 4)
#> lwr.ci upr.ci lwr.ci upr.ci lwr.ci upr.ci lwr.ci upr.ci
#> wald 0.2522 0.3638 0.0527 0.1500 0 0.0000 0.0000 0.1009
#> waldcc 0.2503 0.3657 0.0494 0.1534 0 0.0250 0.0000 0.1181
#> wilson 0.2553 0.3662 0.0624 0.1605 0 0.1611 0.0061 0.1718
#> wilsoncc 0.2535 0.3682 0.0598 0.1644 0 0.2005 0.0018 0.1963
#> clopper-pearson 0.2527 0.3676 0.0578 0.1617 0 0.1684 0.0009 0.1776
#> midp 0.2544 0.3658 0.0601 0.1581 0 0.1391 0.0017 0.1585
#> lik 0.2543 0.3655 0.0596 0.1567 0 0.0916 0.0020 0.1432
# returning p.tilde for agresti-coull ci
BinomCI(x=81, n=263, meth="agresti-coull", std_est = c(TRUE, FALSE))
#> est lwr.ci upr.ci
#> TRUE 0.3079848 0.2552207 0.3662774
#> FALSE 0.3107490 0.2552207 0.3662774