
Unbiased bootstrap error estimation for linear discriminant analysis

Abstract

Convex bootstrap error estimation is a popular tool for classifier error estimation in gene expression studies. A basic question is how to determine the weight for the convex combination between the basic bootstrap estimator and the resubstitution estimator such that the resulting estimator is unbiased at finite sample sizes. The well-known 0.632 bootstrap error estimator uses asymptotic arguments to propose a fixed 0.632 weight, whereas the more recent 0.632+ bootstrap error estimator attempts to set the weight adaptively. In this paper, we study the finite sample problem in the case of linear discriminant analysis under Gaussian populations. We derive exact expressions for the weight that guarantee unbiasedness of the convex bootstrap error estimator in the univariate and multivariate cases, without making asymptotic simplifications. Using exact computation in the univariate case and an accurate approximation in the multivariate case, we obtain the required weight and show that it can deviate significantly from the constant 0.632 weight, depending on the sample size and Bayes error for the problem. The methodology is illustrated by application on data from a well-known cancer classification study.

1 Introduction

The bootstrap method [1]–[7] has been used in a wide range of statistical problems. The asymptotic behavior of bootstrap has been studied [8]–[11], while small-sample properties have been studied under simplifying assumptions, such as considering the estimator based on all possible bootstrap samples (the ‘complete’ bootstrap) [12]–[14]. The small-sample properties of the usual bootstrap are not well understood, in particular when it comes to estimating the error rates of classification rules [15],[16].

There has been, on the other hand, interest in the application of bootstrap to error estimation in classification problems and, in particular, gene expression classification studies [17]–[20]. Of particular interest is the issue of classifier error estimation [21],[22]. Bootstrap methods have generally been shown to outperform more traditional error estimation techniques, such as resubstitution and cross-validation, in terms of root-mean-square (RMS) error [4],[5],[7],[23]–[35]. Bootstrap error estimation is typically performed via a convex combination of the (generally) pessimistic basic bootstrap estimator, known as the zero bootstrap, and the (generally) optimistic resubstitution estimator. A basic problem is how to choose the weight that yields an unbiased estimator.

The problem of unbiased convex error estimation was previously considered in [36]–[38] for a convex combination of resubstitution and cross-validation estimators, and in [4],[7],[23] for a combination between resubstitution and the basic bootstrap estimator. In the former case, a fixed suboptimal weight of 0.5 was proposed in [36],[38], while an asymptotic analysis to find the optimal weight was provided in [37]. In the latter case, our case of interest, a fixed suboptimal weight of 0.632 was proposed in [4], leading to the well-known 0.632 bootstrap estimator, while in [7], a suboptimal weight is computed by means of a sample-based procedure, which attempts to counterbalance the effect of overfitting on the bias, leading to the so-called 0.632+ bootstrap error estimator; the problem of finding the optimal weight for finite sample cases was addressed via a numerical approach in [23].

Here, we determine the optimal weight for finite sample cases analytically, in the case of linear discriminant analysis under Gaussian populations. In the univariate case, no other assumptions are made. In the multivariate case, it is assumed that the populations are homoskedastic and that the common covariance matrix is known and used in the discriminant. In either case, no simplifications are introduced to the bootstrap error estimator; it is the usual one, based on a finite number of random bootstrap samples.

The analysis in this paper follows in the steps of previous papers that have provided analytical representations for the moments of error-estimator distributions [39],[40]. In the univariate case, exact expressions are given for the expectation of the zero bootstrap error estimator, in the general heteroskedastic (general-variance) Gaussian case. By using similar expressions for the expected true and resubstitution error [39], this allows the exact calculation of the required weight. In the multivariate case, the expectation of the zero bootstrap error estimator is expressed as a probability involving the ratio of two noncentral chi-square variables, in the homoskedastic Gaussian case, assuming that the true common covariance matrix is used in the discriminant. The resulting expression is exact but necessitates approximation for its numerical computation. This is done in this paper via the Imhof-Pearson three-moment method, which is accurate in small-sample cases [41]. Use of similar expressions for the expected true and resubstitution error [40] then allows the exact calculation of the required weight.

In the homoskedastic case, the required weight for unbiasedness is shown to be a function only of the Bayes error and sample size. Accordingly, plots and tables of the required weight for varying values of Bayes error and sample size are presented; if the Bayes error can be estimated for a problem, this provides a way to obtain the optimal weight to use. In the univariate case, it was observed that as the sample size increases, the optimal weight settles on an asymptotic value of around 0.675, thus slightly over the heuristic value 0.632; by contrast, in the multivariate case (d=2), the asymptotic value appears to be strongly dependent on the Bayes error, being as a rule significantly smaller than 0.632, except for very small Bayes error.

This paper is organized as follows. The ‘Bootstrap classification’ section defines linear discriminant analysis as well as its application under bootstrap sampling. The ‘Bootstrap error estimation’ section reviews convex bootstrap error estimation. The ‘Unbiased bootstrap error estimation’ section contains the main theoretical results in the paper, providing the analytical expressions for the computation of the required convex bootstrap weight in the univariate and multivariate cases. The ‘Gene expression classification example’ section contains a demonstration of the usage of the optimal weight in bootstrap error estimation using data from the breast cancer classification study in [42],[43]. Lastly, the ‘Conclusions’ section contains a summary and concluding remarks.

All the proofs are presented in the Appendix.

2 Bootstrap classification

Classification involves a predictor vector X ∈ R^d, also known as a feature vector, which represents an individual from one of two populations Π0 and Π1 (we consider here only this binary classification problem). The classification problem is to assign X correctly to its population of origin. The populations are coded into a discrete label Y ∈ {0,1}. Therefore, given a feature vector X, classification attempts to predict the corresponding value of the label Y. We assume that there is a joint feature-label distribution F_XY for the pair (X,Y) characterizing the classification problem. In particular, it determines the probabilities c0 = P(X ∈ Π0) = P(Y = 0) and c1 = P(X ∈ Π1) = P(Y = 1), which are called the prior probabilities.

Given a fixed sample size n, the sample data is an i.i.d. sample S_n = {(X1,Y1),…,(Xn,Yn)} from F_XY. The population-specific sample sizes are given by n0 = Σ_{i=1}^n I_{Yi=0} and n1 = Σ_{i=1}^n I_{Yi=1} = n − n0, which are random variables, with n0 ∼ Binomial(n, c0) and n1 ∼ Binomial(n, c1). When we need to emphasize that n0 and n1 are random variables, we will use capital letters N0 and N1, respectively. This sampling design, which is the most commonly found one in contemporary pattern recognition, is known as mixture sampling [44].

A classification rule Ψ n is used to map the training data S n into a designed classifier ψ n =Ψ n (S n ), where ψ n is a function taking on values in the set {0,1}, such that X is assigned to population Π0 or Π1 according to whether ψ n (X)=0 or 1, respectively. The classification error rate ε n of classifier ψ n is the probability that the assignment is erroneous:

$$ \varepsilon_n = c_0\,P(\psi_n(X)=1 \mid Y=0) + c_1\,P(\psi_n(X)=0 \mid Y=1) \;\overset{\text{def}}{=}\; c_0\,\varepsilon_n^0 + c_1\,\varepsilon_n^1, $$
(1)

where (X,Y) is an independent test point and ε_n^i = P(ψ_n(X) = 1 − i | Y = i) is the error rate specific to population Π_i, for i = 0,1. Since the training set S_n is random, ε_n is a random variable, with expected classification error rate E[ε_n]; this gives the average performance over all possible training sets S_n, for fixed sample size n.

Linear discriminant analysis (LDA) employs Anderson’s W discriminant [45], which is defined as follows:

$$ W(X) = \left(X - \frac{\hat{\mu}_0 + \hat{\mu}_1}{2}\right)^{T} \Sigma^{-1} \left(\hat{\mu}_0 - \hat{\mu}_1\right) $$
(2)

where

$$ \hat{\mu}_0 = \frac{1}{n_0}\sum_{i=1}^{n} X_i\, I_{Y_i=0} \quad\text{and}\quad \hat{\mu}_1 = \frac{1}{n_1}\sum_{i=1}^{n} X_i\, I_{Y_i=1} $$
(3)

are the sample means relative to each population, and Σ is a matrix, which can be either (1) the true common covariance matrix of the populations, assuming it is known (this is the approach followed, for example, in [39],[40],[46]), or (2) the sample covariance matrix based on the pooled sample S n , which leads to the general LDA case. In this paper, we will assume case (1) throughout.

The corresponding LDA classifier is given by

$$ \psi_n(X) = \begin{cases} 1, & \text{if } W(X) < 0 \\ 0, & \text{if } W(X) \geq 0, \end{cases} $$
(4)

that is, the sign of W(X) determines the classification of X.
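To make the construction concrete, the following minimal sketch (our illustration, not code from the paper) trains the LDA rule of (2) to (4) with a known covariance matrix Σ, as assumed in case (1); all names are illustrative.

```python
# Minimal sketch of the LDA rule in (2)-(4) with known covariance Sigma (case (1) above).
import numpy as np

def lda_rule(X, y, Sigma):
    """Fit Anderson's W discriminant on (X, y) and return the classifier psi."""
    mu0 = X[y == 0].mean(axis=0)          # sample mean of class 0, Eq. (3)
    mu1 = X[y == 1].mean(axis=0)          # sample mean of class 1
    Sigma_inv = np.linalg.inv(Sigma)

    def psi(x):
        # W(x) = (x - (mu0 + mu1)/2)^T Sigma^{-1} (mu0 - mu1)
        W = (x - (mu0 + mu1) / 2) @ Sigma_inv @ (mu0 - mu1)
        return 1 if W < 0 else 0          # Eq. (4): label 1 iff W(x) < 0

    return psi
```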

A bootstrap sample S_n^* contains n instances drawn uniformly, with replacement, from S_n. Hence, some of the instances in S_n may appear multiple times in S_n^*, whereas others may not appear at all. Let C be a vector of size n, where the i-th component C(i) equals the number of appearances in S_n^* of the i-th instance in S_n. The vector C will be referred to as a bootstrap vector.

For a given S_n, the vector C uniquely determines a bootstrap sample, which we denote by S_n^C. Note that the original sample itself is included: if C = (1,…,1), denoted 1_n, then S_n^C = S_n, since each original instance appears once in the bootstrap sample. Note also that the number of distinct bootstrap samples, i.e., values for C, is equal to $\binom{2n-1}{n}$; even for small n, this is a large number. For example, the total number of possible bootstrap samples of size n=20 is larger than 6.8×10^10.

The vector C has a multinomial distribution with parameters (n,1/n,…,1/n),

$$ P\left(C = (i_1,\ldots,i_n)\right) = \frac{1}{n^n}\,\frac{n!}{i_1!\cdots i_n!}, \qquad i_1+\cdots+i_n = n. $$
(5)
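As an illustration of this sampling scheme (a sketch under the notation above, not part of the paper), a bootstrap vector C and the corresponding bootstrap sample indices can be drawn as follows.

```python
# Sketch: draw a bootstrap vector C ~ Multinomial(n, (1/n, ..., 1/n)) as in (5)
# and form the index multiset of the bootstrap sample S_n^C.
import numpy as np
from scipy.special import comb

rng = np.random.default_rng(0)
n = 20
C = rng.multinomial(n, [1.0 / n] * n)      # C[i] = multiplicity of instance i
boot_idx = np.repeat(np.arange(n), C)      # indices making up S_n^C
print(comb(2 * n - 1, n, exact=True))      # number of distinct bootstrap samples, C(2n-1, n) ~ 6.9e10
```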

Starting from a classification rule Ψ_n, one may design a classifier ψ_n^C = Ψ_n(S_n^C) on a bootstrap training set S_n^C. Its classification error ε_n^C is given as in (1), namely, ε_n^C = c0 ε_n^{C,0} + c1 ε_n^{C,1}, where ε_n^{C,i} = P(ψ_n^C(X) = 1 − i | Y = i) is the error rate specific to population Π_i, for i = 0,1. In this paper, we apply this scheme to the LDA classification rule defined previously. Notice the distinction between a bootstrap LDA classifier and a ‘bagged’ (bootstrap-aggregated) LDA classifier [47],[48]; these correspond to distinct classification rules. The bootstrap LDA classifier is employed here as an auxiliary tool to analyze the problem of unbiased bootstrap error estimation for the plain LDA classifier.

3 Bootstrap error estimation

Since the feature-label distribution is typically unknown, the classification error rate ε n has to be estimated by a sample-based statistic ε ̂ n , commonly referred to as an error estimator. Data in practice are often limited, and the training sample S n has to be used for both designing the classifier ψ n and as the basis for the error estimator ε ̂ n . The simplest and fastest way to estimate the error of a designed classifier ψ n is to compute its error on the sample data itself:

$$ \hat{\varepsilon}_n^{\,r} = \frac{1}{n}\sum_{i=1}^{n}\left[ I_{\psi_n(X_i)=1}\, I_{Y_i=0} + I_{\psi_n(X_i)=0}\, I_{Y_i=1}\right]. $$
(6)

This resubstitution estimator, or apparent error, is often optimistically biased, that is, it is often the case that Bias(ε̂_n^r) = E[ε̂_n^r] − E[ε_n] < 0, though this is not always so. The bias tends to worsen with more complex classification rules [49].
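A minimal sketch of (6), assuming a trained classifier psi such as the LDA sketch above and NumPy arrays X, y:

```python
# Sketch of the resubstitution estimator (6): error of psi_n on its own training data.
import numpy as np

def resubstitution_error(psi, X, y):
    preds = np.array([psi(x) for x in X])
    return float(np.mean(preds != y))
```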

The basic bootstrap error estimator is the zero bootstrap error estimator [4], which is introduced next. Given the training data S n , B bootstrap samples are randomly drawn from it. Denote the corresponding (random) bootstrap vectors by {C1,…,C B }. The zero bootstrap error estimator is defined as the average error committed by the B bootstrap classifiers on sample points that do not appear in the bootstrap samples:

$$ \hat{\varepsilon}_n^{\,\text{boot}} = \frac{1}{B}\sum_{i=1}^{B} \frac{1}{n(C_i)} \sum_{j:\,C_i(j)=0} \left[ I_{\psi_n^{C_i}(X_j)=1}\, I_{Y_j=0} + I_{\psi_n^{C_i}(X_j)=0}\, I_{Y_j=1}\right], $$
(7)

where n(C) is the number of zeros in C.
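The following sketch implements (7) under the assumption that a training routine train_rule(X, y), such as the LDA sketch above, is available; bootstrap draws that leave out no points or that miss a class are simply skipped, a practical choice not discussed in the text.

```python
# Sketch of the zero bootstrap estimator (7). X, y are NumPy arrays; train_rule is an
# assumed helper standing for the classification rule Psi_n.
import numpy as np

def zero_bootstrap_error(train_rule, X, y, B=100, rng=None):
    rng = rng or np.random.default_rng()
    n = len(y)
    errors = []
    for _ in range(B):
        C = rng.multinomial(n, [1.0 / n] * n)      # bootstrap vector C_i
        held_out = np.where(C == 0)[0]             # points with C_i(j) = 0
        idx = np.repeat(np.arange(n), C)           # bootstrap sample S_n^C
        if held_out.size == 0 or len(set(y[idx])) < 2:
            continue                               # skip degenerate draws
        psi_C = train_rule(X[idx], y[idx])
        preds = np.array([psi_C(x) for x in X[held_out]])
        errors.append(np.mean(preds != y[held_out]))
    return float(np.mean(errors))
```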

The bootstrap zero estimator tends to be pessimistically biased, since the number of distinct training instances available for designing the classifier is on average (1 − e^{-1})n ≈ 0.632n < n. Pessimistic bias in an error estimator can be mitigated by forming a convex combination with an optimistic error estimator [23]. In the case of bootstrap error estimation, the standard approach is to form a convex combination of the zero bootstrap with resubstitution,

$$ \hat{\varepsilon}_n^{\,\text{conv}} = (1-w)\,\hat{\varepsilon}_n^{\,r} + w\,\hat{\varepsilon}_n^{\,\text{boot}}. $$
(8)

Selecting the appropriate weight w = w* leads to an unbiased error estimator, E[ε̂_n^conv] = E[ε_n].

In [4], the weight w is heuristically set to w=0.632 to reflect the average ratio of original training instances that appear in a bootstrap sample. This is known as the .632 bootstrap estimator

$$ \hat{\varepsilon}_n^{\,b632} = (1-0.632)\,\hat{\varepsilon}_n^{\,r} + 0.632\,\hat{\varepsilon}_n^{\,\text{boot}}, $$
(9)

which has been heavily employed in the machine learning field.
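In code, the convex combination (8), and hence the .632 estimator (9), is a one-liner; this sketch assumes the resubstitution and zero-bootstrap routines sketched above.

```python
# Sketch of the convex bootstrap estimator (8); w = 0.632 gives the .632 estimator (9).
def convex_bootstrap_error(eps_resub, eps_boot, w=0.632):
    return (1.0 - w) * eps_resub + w * eps_boot
```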

4 Unbiased bootstrap error estimation

The 0.632 bootstrap error estimator reviewed in the previous section is not guaranteed to be unbiased. In this section, we will examine the necessary conditions for setting the weight w = w* in (8) to achieve unbiasedness. We will then particularize the analysis to the Gaussian linear discriminant case, where exact expressions for w* will be derived, both in the univariate and multivariate cases.

The bias of the convex estimator in (8) is given by

$$ E\left[\hat{\varepsilon}_n^{\,\text{conv}} - \varepsilon_n\right] = (1-w)\,E\left[\hat{\varepsilon}_n^{\,r}\right] + w\,E\left[\hat{\varepsilon}_n^{\,\text{boot}}\right] - E\left[\varepsilon_n\right]. $$
(10)

Setting this to zero yields the exact weight

$$ w^{*} = \frac{E\left[\hat{\varepsilon}_n^{\,r}\right] - E\left[\varepsilon_n\right]}{E\left[\hat{\varepsilon}_n^{\,r}\right] - E\left[\hat{\varepsilon}_n^{\,\text{boot}}\right]} $$
(11)

that produces an unbiased error estimator.

Now, applying expectation on both sides of (7) produces

$$ E\left[\hat{\varepsilon}_n^{\,\text{boot}}\right] = \sum_{C} E\left[\varepsilon_n^{C} \mid C\right] p(C), $$
(12)

where p(C) is given by (5) and the sum is taken over all possible values of C (an efficient procedure for listing all multinomial vectors is provided by the NEXCOM routine given in [50], Chapter 5). Equations (11) and (12) allow the computation of the weight w* given the knowledge of E[ε_n], E[ε̂_n^r], and E[ε_n^C | C]. We will present next exact formulas for these expectations in the case of the LDA classification rule under Gaussian populations.
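The following sketch shows how (11) and (12) fit together, assuming the conditional expectation E[ε_n^C | C] is available as a function cond_error(C) (closed-form expressions for it, and for E[ε_n] and E[ε̂_n^r], are derived below for Gaussian LDA); the enumeration of all bootstrap vectors plays the role of the NEXCOM routine cited above.

```python
# Sketch of Eqs. (11)-(12). cond_error(C) is an assumed routine returning E[eps_n^C | C].
from math import factorial

def multinomial_prob(C, n):
    """p(C) in Eq. (5)."""
    p = factorial(n)
    for c in C:
        p //= factorial(c)
    return p / float(n) ** n

def expected_zero_bootstrap(all_C, cond_error, n):
    """E[eps_hat^boot] = sum_C E[eps_n^C | C] p(C),  Eq. (12)."""
    return sum(cond_error(C) * multinomial_prob(C, n) for C in all_C)

def unbiased_weight(E_resub, E_true, E_boot):
    """w* = (E[eps_hat^r] - E[eps_n]) / (E[eps_hat^r] - E[eps_hat^boot]),  Eq. (11)."""
    return (E_resub - E_true) / (E_resub - E_boot)
```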

4.1 Univariate case

In the univariate case, the common variance term cancels and the W statistic and LDA classifier become greatly simplified, with

$$ \psi_n(X) = \begin{cases} 1, & \text{if } \left(X - \frac{\hat{\mu}_0 + \hat{\mu}_1}{2}\right)(\hat{\mu}_0 - \hat{\mu}_1) < 0 \\ 0, & \text{otherwise}. \end{cases} $$
(13)

The following functions will be useful. Let Φ(u) = P(Z ≤ u) and Φ(u,v;ρ) = P((Z1,Z2) ≤ (u,v)), where Z is a zero-mean, unit-variance Gaussian random variable, and Z1, Z2 are zero-mean, unit-variance random variables that are jointly Gaussian distributed, with correlation coefficient ρ.

Assume that population Π_i is distributed as N(μ_i, σ_i²), for i = 0,1, where σ0 ≠ σ1 in general.

Under these conditions, John obtained in [39] an exact expression for the expectation of the true classification error for fixed sample sizes n0 and n1 (this is known as separate sampling [44]). John’s result can be written as follows:

$$ E\left[\varepsilon_n^0 \mid N_0 = n_0\right] = \Phi(a, b; \rho_e) + \Phi(-a, -b; \rho_e), $$
(14)

where

$$ a = \frac{\mu_1 - \mu_0}{\sqrt{\frac{\sigma_0^2}{n_0} + \frac{\sigma_1^2}{n_1}}}, \qquad b = \frac{\mu_0 - \mu_1}{\sqrt{\left(4 + \frac{1}{n_0}\right)\sigma_0^2 + \frac{\sigma_1^2}{n_1}}}, \qquad \rho_e = \frac{\frac{\sigma_0^2}{n_0} - \frac{\sigma_1^2}{n_1}}{\sqrt{\frac{\sigma_0^2}{n_0} + \frac{\sigma_1^2}{n_1}}\,\sqrt{\left(4 + \frac{1}{n_0}\right)\sigma_0^2 + \frac{\sigma_1^2}{n_1}}}. $$
(15)

The corresponding result for E[ε_n^1 | N_0 = n_0] is obtained by simply interchanging all indices 0 and 1 in the previous expressions. The expected error rate can then be found by using conditioning and Equation (1):

$$ E[\varepsilon_n] = \sum_{n_0=0}^{n} E\left[\varepsilon_n \mid N_0 = n_0\right] P(N_0 = n_0) = \sum_{n_0=0}^{n} \left\{ c_0\,E\left[\varepsilon_n^0 \mid N_0 = n_0\right] + c_1\,E\left[\varepsilon_n^1 \mid N_0 = n_0\right]\right\} P(N_0 = n_0), $$
(16)

where

$$ P(N_0 = n_0) = \binom{n}{n_0}\, c_0^{\,n_0}\, c_1^{\,n_1}. $$
(17)
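As a sketch (not code from the paper), the expectations in (14) to (17) can be evaluated directly with the bivariate Gaussian CDF; samples with an empty class are skipped here, a simplification of the sum in (16) that the text does not discuss explicitly.

```python
# Sketch of the exact expected true error in (14)-(17); variable names mirror the text.
import numpy as np
from scipy.stats import binom, multivariate_normal

def Phi2(u, v, rho):
    """Phi(u, v; rho): standard bivariate normal CDF with correlation rho."""
    return multivariate_normal.cdf([u, v], mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])

def true_error_class0(mu0, mu1, var0, var1, n0, n1):
    """E[eps_n^0 | N0 = n0] from (14)-(15); swap the classes for eps_n^1."""
    a = (mu1 - mu0) / np.sqrt(var0 / n0 + var1 / n1)
    b = (mu0 - mu1) / np.sqrt((4 + 1 / n0) * var0 + var1 / n1)
    rho_e = (var0 / n0 - var1 / n1) / np.sqrt(
        (var0 / n0 + var1 / n1) * ((4 + 1 / n0) * var0 + var1 / n1))
    return Phi2(a, b, rho_e) + Phi2(-a, -b, rho_e)

def expected_true_error(mu0, mu1, var0, var1, n, c0):
    """E[eps_n] from (16)-(17), averaging over N0 ~ Binomial(n, c0)."""
    total = 0.0
    for n0 in range(1, n):                 # n0 = 0 or n would leave a class empty
        n1 = n - n0
        p = binom.pmf(n0, n, c0)
        total += p * (c0 * true_error_class0(mu0, mu1, var0, var1, n0, n1)
                      + (1 - c0) * true_error_class0(mu1, mu0, var1, var0, n1, n0))
    return total
```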

As for resubstitution, Hills provided in [51] exact expressions for the expected error for fixed n0 and n1. However, his expression applies only to the case σ0=σ1. Theorem 3 in [52] provides a generalization of this result to the case of populations of unequal variances. First, note that

$$ \hat{\varepsilon}_n^{\,r} = \frac{n_0}{n}\,\hat{\varepsilon}_n^{\,r,0} + \frac{n_1}{n}\,\hat{\varepsilon}_n^{\,r,1}, $$
(18)

where

$$ \hat{\varepsilon}_n^{\,r,0} = \frac{1}{n_0}\sum_{i=1}^{n} I_{\psi_n(X_i)=1}\, I_{Y_i=0} \quad\text{and}\quad \hat{\varepsilon}_n^{\,r,1} = \frac{1}{n_1}\sum_{i=1}^{n} I_{\psi_n(X_i)=0}\, I_{Y_i=1} $$
(19)

are the apparent error rates specific to class 0 and 1, respectively. The result in [52] can be written as

$$ E\left[\hat{\varepsilon}_n^{\,r,0} \mid N_0 = n_0\right] = \Phi(c, d; \rho_r) + \Phi(-c, -d; \rho_r), $$
(20)

where

$$ c = \frac{\mu_1 - \mu_0}{\sqrt{\frac{\sigma_0^2}{n_0} + \frac{\sigma_1^2}{n_1}}}, \qquad d = \frac{\mu_0 - \mu_1}{\sqrt{\left(4 - \frac{3}{n_0}\right)\sigma_0^2 + \frac{\sigma_1^2}{n_1}}}, \qquad \rho_r = -\frac{\frac{\sigma_0^2}{n_0} + \frac{\sigma_1^2}{n_1}}{\sqrt{\frac{\sigma_0^2}{n_0} + \frac{\sigma_1^2}{n_1}}\,\sqrt{\left(4 - \frac{3}{n_0}\right)\sigma_0^2 + \frac{\sigma_1^2}{n_1}}}. $$
(21)

The corresponding result for E[ε̂_n^{r,1} | N_0 = n_0] is obtained by interchanging all indices 0 and 1. The expected resubstitution error rate can then be found by using conditioning and Equation (18):

$$ E\left[\hat{\varepsilon}_n^{\,r}\right] = \sum_{n_0=0}^{n} E\left[\hat{\varepsilon}_n^{\,r} \mid N_0 = n_0\right] P(N_0 = n_0) = \sum_{n_0=0}^{n} \left\{ \frac{n_0}{n}\,E\left[\hat{\varepsilon}_n^{\,r,0} \mid N_0 = n_0\right] + \frac{n_1}{n}\,E\left[\hat{\varepsilon}_n^{\,r,1} \mid N_0 = n_0\right]\right\} P(N_0 = n_0). $$
(22)

Finally, let us consider the expected bootstrap error. Given C, the bootstrap LDA classifier is obtained by replacing μ̂_i by μ̂_i^C, i = 0,1, in (13):

$$ \psi_n^C(X) = \begin{cases} 1, & \text{if } \left(X - \frac{\hat{\mu}_0^C + \hat{\mu}_1^C}{2}\right)\left(\hat{\mu}_0^C - \hat{\mu}_1^C\right) < 0 \\ 0, & \text{otherwise}, \end{cases} $$
(23)

where

$$ \hat{\mu}_0^C = \frac{\sum_{i=1}^{n} C(i)\,X_i\,I_{Y_i=0}}{\sum_{i=1}^{n} C(i)\,I_{Y_i=0}} \quad\text{and}\quad \hat{\mu}_1^C = \frac{\sum_{i=1}^{n} C(i)\,X_i\,I_{Y_i=1}}{\sum_{i=1}^{n} C(i)\,I_{Y_i=1}} $$
(24)

are bootstrap sample means.

Now, note that with N0 = n0 fixed, the training data labels Y_i, i = 1,…,n, are no longer random. Since all classification rules of interest are invariant to reordering of the training data, we can, without loss of generality, reorder the sample points so that Y_i = 0 for i = 1,…,n0, and Y_i = 1 for i = n0+1,…,n. Let the same reordering be applied to a given bootstrap vector C. The next theorem extends John’s result to the classification error of the bootstrapped LDA classification rule defined by (23).

Theorem 1.

Assume that population Π_i is distributed as N(μ_i, σ_i²), for i = 0,1. Then the expected error rate of the bootstrap LDA classification rule defined by (23) is given by:

$$ E\left[\varepsilon_n^{C,0} \mid N_0 = n_0, C\right] = \Phi(e, f; \rho_c) + \Phi(-e, -f; \rho_c), $$
(25)

where

$$ e = \frac{\mu_1 - \mu_0}{\sqrt{s_0\sigma_0^2 + s_1\sigma_1^2}}, \qquad f = \frac{\mu_0 - \mu_1}{\sqrt{(4 + s_0)\sigma_0^2 + s_1\sigma_1^2}}, \qquad \rho_c = \frac{s_0\sigma_0^2 - s_1\sigma_1^2}{\sqrt{(4 + s_0)\sigma_0^2 + s_1\sigma_1^2}\,\sqrt{s_0\sigma_0^2 + s_1\sigma_1^2}}, $$
(26)

with

$$ s_0 = \frac{\sum_{i=1}^{n_0} C(i)^2}{\left(\sum_{i=1}^{n_0} C(i)\right)^2} \quad\text{and}\quad s_1 = \frac{\sum_{i=1}^{n_1} C(n_0+i)^2}{\left(\sum_{i=1}^{n_1} C(n_0+i)\right)^2}. $$
(27)

The corresponding result for E[ε_n^{C,1} | N_0 = n_0, C] is obtained by interchanging all indices 0 and 1.

Proof. See the Appendix.

It is easy to check that the result in Theorem 1 reduces to the one in (14) and (15) when C = 1_n. Following (16), we can then write

$$ E\left[\varepsilon_n^{C} \mid C\right] = \sum_{n_0=0}^{n} E\left[\varepsilon_n^{C} \mid N_0 = n_0, C\right] P(N_0 = n_0) = \sum_{n_0=0}^{n} \left\{ c_0\,E\left[\varepsilon_n^{C,0} \mid N_0 = n_0, C\right] + c_1\,E\left[\varepsilon_n^{C,1} \mid N_0 = n_0, C\right]\right\} P(N_0 = n_0). $$
(28)

The expected bootstrap error rate E[ ε ̂ n boot ] can now be computed via (12).

The weight w* for unbiased bootstrap error estimation can now be computed exactly by means of Equations (11), (12), (14) to (17), (20) to (22), and (25) to (28).
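A sketch of how Theorem 1 is evaluated for a given bootstrap vector C (reordered so that the first n0 entries correspond to class 0) is given below; variable names follow (26) and (27), and the helper Phi2 repeats the bivariate CDF used in the earlier sketch.

```python
# Sketch: evaluate Theorem 1 for a bootstrap vector C whose first n0 entries are class 0.
# Assumes the bootstrap sample contains points from both classes.
import numpy as np
from scipy.stats import multivariate_normal

def Phi2(u, v, rho):
    return multivariate_normal.cdf([u, v], mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])

def boot_error_class0(C, n0, mu0, mu1, var0, var1):
    """E[eps_n^{C,0} | N0 = n0, C] from (25)-(27); interchange the classes for the class-1 term."""
    C = np.asarray(C, dtype=float)
    s0 = np.sum(C[:n0] ** 2) / np.sum(C[:n0]) ** 2        # Eq. (27)
    s1 = np.sum(C[n0:] ** 2) / np.sum(C[n0:]) ** 2
    e = (mu1 - mu0) / np.sqrt(s0 * var0 + s1 * var1)
    f = (mu0 - mu1) / np.sqrt((4 + s0) * var0 + s1 * var1)
    rho_c = (s0 * var0 - s1 * var1) / np.sqrt(
        (s0 * var0 + s1 * var1) * ((4 + s0) * var0 + s1 * var1))
    return Phi2(e, f, rho_c) + Phi2(-e, -f, rho_c)
```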

In the special case σ0 = σ1 = σ (homoskedasticity), it follows easily from the previous expressions that E[ε_n], E[ε̂_n^r], and E[ε̂_n^boot] depend only on the sample size n and on the Mahalanobis distance between the populations δ = |μ1 − μ0|/σ, and therefore so does the weight w*, through (11). Since the optimal (Bayes) classification error in this case is ε = Φ(−δ/2), there is a one-to-one correspondence between the Bayes error and the Mahalanobis distance. Therefore, in the homoskedastic case, the weight w* is a function only of the Bayes error ε and the sample size n.

Figure 1 and Table 1 display the value of w* in the homoskedastic case, for several sample sizes and Bayes errors. In order to extend the plots up to n = 200, it is necessary to approximate E[ε̂_n^boot] in (12) by a Monte Carlo procedure; this is done by generating M = 100×n² independent random vectors {C_i | i = 1,…,M} and letting E[ε̂_n^boot] ≈ (1/M) Σ_{i=1}^M E[ε_n^{C_i} | C_i]. We find that this value of M is large enough to obtain an accurate approximation. All other quantities are computed exactly, as described previously. One can see in Figure 1a that w* varies wildly and can be very far from the heuristic 0.632 weight; however, as the sample size increases, w* appears to settle around an asymptotic fixed value. This asymptotic value is approximately 0.675, thus slightly larger than 0.632. In addition, Figure 1b shows that convergence to the asymptotic value is faster for smaller Bayes errors. These facts help explain the good performance of the original convex 0.632 bootstrap error estimator with moderate sample sizes and small Bayes errors.
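The Monte Carlo approximation described above can be sketched as follows; cond_error_given_C(C) is an assumed wrapper around Theorem 1 and Eq. (28), not a routine from the paper.

```python
# Sketch of the Monte Carlo approximation of E[eps_hat^boot]:
# average E[eps_n^C | C] over M random bootstrap vectors.
import numpy as np

def mc_expected_zero_bootstrap(cond_error_given_C, n, M=None, rng=None):
    """E[eps_hat^boot] ~ (1/M) sum_i E[eps_n^{C_i} | C_i], with C_i ~ Multinomial(n, 1/n)."""
    rng = rng or np.random.default_rng()
    M = M or 100 * n ** 2                                  # M = 100 n^2, as in the text
    draws = rng.multinomial(n, [1.0 / n] * n, size=M)
    return float(np.mean([cond_error_given_C(C) for C in draws]))
```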

Figure 1. Univariate case: required weight w* for unbiased convex bootstrap estimation, plotted against (a) sample size and (b) Bayes error.

Table 1 Univariate case: required weight w* for unbiased convex bootstrap estimation

4.2 Multivariate case

Assume that population Π i is distributed as a multivariate Gaussian N(μ i ,Σ), for i=0,1. Under these conditions, John obtained in [39] an exact expression for the expectation of the error of the LDA classification rule, defined by (2) to (4), for the case where N0=n0 is fixed. This result is stated by Moran in [40] as follows:

$$ E\left[\varepsilon_n^0 \mid N_0 = n_0\right] = P\left(\frac{W_1}{W_2} > \frac{1-\rho_e}{1+\rho_e}\right), $$
(29)

where W1 and W2 are independently distributed as noncentral chi-square variables with d degrees of freedom (d being the dimensionality) and noncentrality parameters λ1 and λ2, with

$$ \lambda_1 = \frac{n_0 n_1}{2(1+\rho_e)}\left(\frac{1}{\sqrt{n_0+n_1}} - \frac{1}{\sqrt{n_0+n_1+4 n_0 n_1}}\right)^2 \delta^2, \qquad \lambda_2 = \frac{n_0 n_1}{2(1-\rho_e)}\left(\frac{1}{\sqrt{n_0+n_1}} + \frac{1}{\sqrt{n_0+n_1+4 n_0 n_1}}\right)^2 \delta^2, \qquad \rho_e = \frac{n_1 - n_0}{\sqrt{(n_0+n_1)(n_0+n_1+4 n_0 n_1)}}, $$
(30)

where δ² = (μ1 − μ0)^T Σ^{-1} (μ1 − μ0) is the squared Mahalanobis distance between the populations. The corresponding result for E[ε_n^1 | N_0 = n_0] is obtained by interchanging n0 and n1. The expected true error rate can then be found by using (16).

Moran also provided the following expression for the expectation of the resubstitution error estimator in the multivariate case, for fixed N0=n0[40]:

$$ E\left[\hat{\varepsilon}_n^{\,r,0} \mid N_0 = n_0\right] = P\left(\frac{W_3}{W_4} > \frac{1-\rho_r}{1+\rho_r}\right), $$
(31)

where W3 and W4 are independently distributed as noncentral chi-square variables with d degrees of freedom and noncentrality parameters λ3 and λ4, with

$$ \lambda_3 = \frac{n_0 n_1}{2(1+\rho_r)}\left(\frac{1}{\sqrt{n_0+n_1}} - \frac{1}{\sqrt{n_0-3 n_1+4 n_0 n_1}}\right)^2 \delta^2, \qquad \lambda_4 = \frac{n_0 n_1}{2(1-\rho_r)}\left(\frac{1}{\sqrt{n_0+n_1}} + \frac{1}{\sqrt{n_0-3 n_1+4 n_0 n_1}}\right)^2 \delta^2, \qquad \rho_r = -\sqrt{\frac{n_0+n_1}{n_0-3 n_1+4 n_0 n_1}}. $$
(32)

The corresponding result for E[ ε ̂ n r , 1 ] is obtained by interchanging n0 and n1. The expected resubstitution error rate can then be found by using (22).

The bootstrap LDA classifier in the multivariate case is given by

$$ \psi_n^C(X) = \begin{cases} 1, & \text{if } \left(X - \frac{\hat{\mu}_0^C + \hat{\mu}_1^C}{2}\right)^{T} \Sigma^{-1}\left(\hat{\mu}_0^C - \hat{\mu}_1^C\right) < 0 \\ 0, & \text{otherwise}, \end{cases} $$
(33)

where μ ̂ 0 C and μ ̂ 1 C are defined in (24). The next theorem generalizes John’s result for the multivariate classification error to the case of the bootstrapped LDA classification rule.

Theorem 2.

Assume that population Π i is distributed as N(μ i ,Σ), for i=0,1. Then, the expected error rate of the bootstrap LDA classification rule defined by (33) is given by

$$ E\left[\varepsilon_n^{C,0} \mid N_0 = n_0, C\right] = P\left(\frac{W_5}{W_6} > \frac{1-\rho_c}{1+\rho_c}\right), $$
(34)

where W5 and W6 are independently distributed as noncentral chi-square variables with d degrees of freedom and noncentrality parameters λ5 and λ6, with

$$ \lambda_5 = \frac{1}{2(1+\rho_c)}\left(\frac{1}{\sqrt{s_0+s_1}} - \frac{1}{\sqrt{s_0+s_1+4}}\right)^2 \delta^2, \qquad \lambda_6 = \frac{1}{2(1-\rho_c)}\left(\frac{1}{\sqrt{s_0+s_1}} + \frac{1}{\sqrt{s_0+s_1+4}}\right)^2 \delta^2, \qquad \rho_c = \frac{s_0 - s_1}{\sqrt{(s_0+s_1)(s_0+s_1+4)}}, $$
(35)

where s0 and s1 are defined in (27). The corresponding result for E[ε_n^{C,1} | N_0 = n_0, C] is obtained by interchanging s0 and s1.

Proof. See the Appendix.

It is easy to check that the result in Theorem 2 reduces to the one in (29) and (30) when C = 1_n.

As in the univariate case, Theorem 2 can be used in conjunction with Equations (12) and (28) to compute E[ ε ̂ n boot ].

The weight w* for unbiased bootstrap error estimation can now be computed exactly by means of Equations (11), (12), (16) to (17), (22), (28), (29) to (32), and (34) to (35).

An issue that arises in the multivariate case is the computation of the probabilities in (29), (31), and (34). This computation is very difficult since it involves the ratio of noncentral chi-square random variables, which has a doubly noncentral F distribution. Computation of this distribution is a hard problem. Moran proposes in [40] a complex procedure, based on work by Price [53], to compute this probability, which only applies to even dimensionality d. We employ a simpler procedure, namely, the Imhof-Pearson three-moment method, which is applicable to even and odd dimensionality [41]. This consists of approximating a noncentral χ d 2 (λ) random variable with a central χ h 2 random variable, by equating the first three moments of their distributions. This approach was also employed in [52], where it was found to be very accurate. To fix ideas, we consider (29). The Imhof-Pearson three-moment approximation is given by

$$ E\left[\varepsilon_n^0\right] = P\left(\frac{W_1}{W_2} > \frac{1-\rho_e}{1+\rho_e}\right) \approx P\left(\chi_h^2 > y\right), $$
(36)

where χ_h² is a central chi-square random variable with h degrees of freedom, with

$$ h = \frac{c_2^3}{c_3^2}, \qquad y = h - c_1\sqrt{\frac{h}{c_2}}, $$
(37)

and

$$ c_i = (1+\rho_e)^i\,(d + i\lambda_1) + (-1)^i\,(1-\rho_e)^i\,(d + i\lambda_2), \qquad i = 1, 2, 3. $$
(38)

The approximation is valid only for c3 > 0 [41]. If c3 < 0, one uses the approximation

$$ E\left[\varepsilon_n^0\right] = P\left(\frac{W_1}{W_2} > \frac{1-\rho_e}{1+\rho_e}\right) \approx P\left(\chi_h^2 < y\right), $$
(39)

where h and y are as in (37), and

$$ c_i = (-1)^i\,(1+\rho_e)^i\,(d + i\lambda_1) + (1-\rho_e)^i\,(d + i\lambda_2), \qquad i = 1, 2, 3. $$
(40)

The same approximation method applies to (31) and (34) by substituting the appropriate values.
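A sketch of the Imhof-Pearson three-moment computation in (36) to (40) is given below: given the dimensionality d, the two noncentrality parameters, and the correlation, it returns the approximate probability appearing in (29), (31), or (34). The function name and branching structure are our own; the formulas follow the equations above.

```python
# Sketch of the Imhof-Pearson three-moment approximation for
# P(W1/W2 > (1 - rho)/(1 + rho)), with W1, W2 ~ noncentral chi-square(d; lam1, lam2).
import numpy as np
from scipy.stats import chi2

def imhof_pearson_ratio_prob(d, lam1, lam2, rho):
    def coeffs(sign):
        # c_i from (38) (sign = +1) or (40) (sign = -1), for i = 1, 2, 3
        return [sign ** i * (1 + rho) ** i * (d + i * lam1)
                + (-sign) ** i * (1 - rho) ** i * (d + i * lam2) for i in (1, 2, 3)]

    c1, c2, c3 = coeffs(+1)
    if c3 <= 0:                            # switch to the c3 < 0 form, Eqs. (39)-(40)
        c1, c2, c3 = coeffs(-1)
        h = c2 ** 3 / c3 ** 2
        y = h - c1 * np.sqrt(h / c2)
        return chi2.cdf(y, h)              # P(chi2_h < y)
    h = c2 ** 3 / c3 ** 2                  # Eq. (37)
    y = h - c1 * np.sqrt(h / c2)
    return chi2.sf(y, h)                   # P(chi2_h > y)
```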

As in the univariate case, the assumption of a common covariance matrix Σ makes the expectations E[ε_n], E[ε̂_n^r], and E[ε̂_n^boot], and thus also the weight w*, functions only of n and δ. Since ε = Φ(−δ/2), this means that the weight w* is a function only of the Bayes error ε and the sample size n.

Figure 2 and Table 2 display the value of w* computed with the previous expressions in this section, for several sample sizes and Bayes errors. As in the univariate case, E[ε̂_n^boot] in (12) is approximated by a Monte Carlo procedure, with the same number M = 100×n² of MC vectors. All other quantities are computed exactly, as described previously, save for the Imhof-Pearson approximation. We can see in Figure 2 that there is considerable variation in the value of w*, which can be far from the heuristic 0.632 weight; however, as the sample size increases, w* appears to settle around an asymptotic fixed value. In contrast to the univariate case, these asymptotic values here appear to be strongly dependent on the Bayes error and are significantly smaller than the heuristic 0.632 except for very small Bayes errors. As in the univariate case, convergence to the apparent asymptotic value is faster for smaller Bayes errors. These facts again help explain the good performance of the original convex 0.632 bootstrap error estimator for moderate sample sizes and small Bayes errors.

Figure 2. Bivariate case: required weight w* for unbiased convex bootstrap estimation, plotted against (a) sample size and (b) Bayes error.

Table 2 Bivariate case: required weight w* for unbiased convex bootstrap estimation

5 Gene expression classification example

Here we demonstrate the application of the previous theory in comparing the performance of the bootstrap error estimator using the optimal weight versus the use of the fixed w=0.632 weight, using gene expression data from the well-known breast cancer classification study in [42], which analyzed expression profiles from 295 tumor specimens, divided into N0=115 specimens belonging to the ‘good-prognosis’ population (class 1 here) and N1=180 specimens belonging to the ‘poor-prognosis’ population (class 0).

Our experiment was set up in the following way. We selected two genes among the previously published 70-gene prognosis profile [43]. These genes were selected for their approximate homoskedastic Gaussian distributions (see Figure 3). Since the real prior probabilities c0 and c1 for the good- and poor-prognosis populations are unknown, we assumed three different scenarios corresponding to c0=1/3, c0=1/2, and c0=2/3 and downsampled randomly one or the other set of specimens to obtain new sample sizes (90,180), (115,115), and (115,68), respectively, so as to reflect the assumed prior probabilities. In each of the three cases, we then drew 2,000 random samples of size n=30 from the pooled data, computed for each the true error, resubstitution, basic bootstrap, and convex bootstrap error rates. Bias and root-mean-square (RMS) error for each estimator were estimated by averaging over the 2,000 repetitions. We considered both the fixed 0.632 weight and the optimal weight prescribed by our analysis. For the latter, we estimated for each value of c0 the Bayes error using the full data set and read off Table 2 the optimal weight corresponding to the estimated Bayes error and sample size n=30. The results are displayed in Table 3. Despite the approximate nature of the results, given that the simulated training samples are not independent from each other, we can see that the bias and RMS were always smaller for the estimator using the optimal weight than using the fixed 0.632 weight (all bootstrap estimators vastly outperforming resubstitution).

Figure 3. Data used in the gene expression experiment. The plot shows the optimal (linear) classifier superimposed on the sample for the genes OXCT and WISP1, from the breast cancer study in [42]. We can see that both populations are approximately Gaussian with equal dispersion. Bad prognosis = red; good prognosis = blue.

Table 3 Bias and RMS of estimators considered in the experiment with expression data from genes ‘OXCT’ and ‘WISP1’

6 Conclusions

Exact expressions were derived for the required weight for unbiased convex bootstrap error estimation in the finite sample case, for linear discriminant analysis of Gaussian populations. The results not only provide the practitioner with a recommendation of what weight to use given the sample size and problem difficulty, but also offer insight into the choice of the 0.632 weight for the classic 0.632 bootstrap error estimator. It was observed that the required weight for unbiasedness can deviate significantly from the 0.632 weight, particularly in the multivariate case, where the required weight for unbiasedness appears to settle on an asymptotic value that is strongly dependent on the Bayes error, being as a rule smaller than 0.632. The results were illustrated by application to gene expression data from a well-known breast cancer study.

7 Appendix

Proof of Theorem 1

Following the same technique used in [40], we write

$$ E\left[\varepsilon_n^{C,0} \mid C\right] = P\left(\psi_n^C(X) = 1 \mid X \in \Pi_0, C\right) = P\left(\hat{\mu}_1^C > \hat{\mu}_0^C,\; X > \frac{\hat{\mu}_0^C + \hat{\mu}_1^C}{2} \,\Big|\, X \in \Pi_0, C\right) + P\left(\hat{\mu}_1^C \leq \hat{\mu}_0^C,\; X \leq \frac{\hat{\mu}_0^C + \hat{\mu}_1^C}{2} \,\Big|\, X \in \Pi_0, C\right) = P\left(UV > 0 \mid X \in \Pi_0, C\right), $$
(41)

where U = μ̂_1^C − μ̂_0^C and V = X − (μ̂_0^C + μ̂_1^C)/2. From (24), it is clear that, given C, μ̂_0^C and μ̂_1^C are independent Gaussian random variables, such that μ̂_i^C ∼ N(μ_i, s_i σ_i²), for i = 0,1, where s0 and s1 are defined in (27). It follows that U and V are jointly Gaussian random variables, with the following parameters:

$$ E[U \mid X \in \Pi_0, C] = \mu_1 - \mu_0, \qquad \mathrm{Var}(U \mid X \in \Pi_0, C) = s_0\sigma_0^2 + s_1\sigma_1^2, $$
$$ E[V \mid X \in \Pi_0, C] = \frac{\mu_0 - \mu_1}{2}, \qquad \mathrm{Var}(V \mid X \in \Pi_0, C) = \left(1 + \frac{s_0}{4}\right)\sigma_0^2 + \frac{s_1}{4}\sigma_1^2, $$
$$ \mathrm{Cov}(U, V \mid X \in \Pi_0, C) = \frac{s_0\sigma_0^2 - s_1\sigma_1^2}{2}. $$
(42)

The result then follows after some algebraic manipulation. By symmetry, to obtain E[ε_n^{C,1} | C], one needs only to interchange all indices 0 and 1. □

Proof of Theorem 2

Following the same technique used in [32], we write

$$ E\left[\varepsilon_n^{C,0} \mid C\right] = P\left(\psi_n^C(X) = 1 \mid X \in \Pi_0, C\right) = P\left(\left(\hat{\mu}_1^C - \hat{\mu}_0^C\right)^T \Sigma^{-1}\left(X - \frac{\hat{\mu}_0^C + \hat{\mu}_1^C}{2}\right) > 0 \,\Big|\, X \in \Pi_0, C\right) = P\left(U^T V > 0 \mid X \in \Pi_0, C\right) = P\left((U+V)^T(U+V) - (U-V)^T(U-V) > 0 \mid X \in \Pi_0, C\right) = P\left(\frac{(U+V)^T(U+V)}{(U-V)^T(U-V)} > 1 \,\Big|\, X \in \Pi_0, C\right), $$
(43)

where $U = (s_0+s_1)^{-\frac{1}{2}}\,\Sigma^{-\frac{1}{2}}\left(\hat{\mu}_1^C - \hat{\mu}_0^C\right)$ and $V = 2\,(s_0+s_1+4)^{-\frac{1}{2}}\,\Sigma^{-\frac{1}{2}}\left(X - \frac{\hat{\mu}_0^C + \hat{\mu}_1^C}{2}\right)$. It can be readily checked that U+V and U−V are independent Gaussian random vectors, such that

$$ E\left[U+V \mid X \in \Pi_0, C\right] = \left[(s_0+s_1)^{-\frac{1}{2}} - (s_0+s_1+4)^{-\frac{1}{2}}\right] \Sigma^{-1/2}(\mu_1 - \mu_0), $$
$$ E\left[U-V \mid X \in \Pi_0, C\right] = \left[(s_0+s_1)^{-\frac{1}{2}} + (s_0+s_1+4)^{-\frac{1}{2}}\right] \Sigma^{-1/2}(\mu_1 - \mu_0), $$
$$ \Sigma_{U+V \mid X \in \Pi_0, C} = 2(1+\rho_c)\,I, \qquad \Sigma_{U-V \mid X \in \Pi_0, C} = 2(1-\rho_c)\,I, $$
(44)

where ρ_c is defined as in (35) and I denotes the identity matrix of dimension d. It follows that

$$ W_5 = \frac{1}{2(1+\rho_c)}\,(U+V)^T(U+V), \qquad W_6 = \frac{1}{2(1-\rho_c)}\,(U-V)^T(U-V) $$
(45)

are independent noncentral chi-squared random variables with d degrees of freedom and noncentrality parameters λ5 and λ6 defined in (35). The result then follows from (43). Following along the same lines, one can show that E[ε_n^{C,1} | C] is obtained by interchanging s0 and s1 in the result for E[ε_n^{C,0} | C] (the details are omitted for brevity). □

References

1. Efron B: Bootstrap methods: another look at the jackknife. Ann. Stat. 1979, 7(1):1-26.
2. Efron B: Computers and the theory of statistics: thinking the unthinkable. SIAM Rev. 1979, 21(4):460-480.
3. Efron B: Nonparametric standard errors and confidence intervals. Can. J. Stat. 1981, 9(2):139-158. doi:10.2307/3314608
4. Efron B: Estimating the error rate of a prediction rule: improvement on cross-validation. J. Am. Stat. Assoc. 1983, 78(382):316-331. doi:10.2307/2288636
5. Efron B, Gong G: A leisurely look at the bootstrap, the jackknife, and cross-validation. Am. Stat. 1983, 37(1):36-48. doi:10.2307/2685844
6. Efron B, Tibshirani R: An Introduction to the Bootstrap. Chapman & Hall, New York; 1993.
7. Efron B, Tibshirani R: Improvements on cross-validation: the .632+ bootstrap method. J. Am. Stat. Assoc. 1997, 92(438):548-560. doi:10.2307/2965703
8. Singh K: On the asymptotic accuracy of Efron’s bootstrap. Ann. Stat. 1981, 9:1187-1195. doi:10.1214/aos/1176345636
9. Bickel P, Freedman D: Some asymptotic theory for the bootstrap. Ann. Stat. 1981, 9:1196-1217. doi:10.1214/aos/1176345637
10. Beran R: Estimated sampling distributions: the bootstrap and competitors. Ann. Stat. 1982, 10(1):212-225.
11. Hall P: The Bootstrap and Edgeworth Expansion. Springer, New York; 1992.
12. Scholz F: The Bootstrap Small Sample Properties. University of Washington, Seattle; 2007.
13. Porter P, Rao S, Ku J-Y, Poirot R, Dakins M: Small sample properties of nonparametric bootstrap t confidence intervals. J. Air Waste Manag. Assoc. 1997, 47(11):1197-1203. doi:10.1080/10473289.1997.10464062
14. Chan K, Lee S: An exact iterated bootstrap algorithm for small-sample bias reduction. Comput. Stat. Data Anal. 2001, 36(1):1-13. doi:10.1016/S0167-9473(00)00029-3
15. Young G: Bootstrap: more than a stab in the dark? With discussion and a rejoinder by the author. Stat. Sci. 1994, 9(3):382-415. doi:10.1214/ss/1177010383
16. Shao J, Tu D: The Jackknife and Bootstrap. Springer, New York; 1995.
17. Pils D, Tong D, Hager G, Obermayr E, Aust S, Heinze G, Kohl M, Schuster E, Wolf A, Sehouli J, Braicu I, Vergote I, Van Gorp T, Mahner S, Concin N, Speiser P, Zeillinger R: A combined blood based gene expression and plasma protein abundance signature for diagnosis of epithelial ovarian cancer - a study of the OVCAD consortium. BMC Cancer 2013, 13:178. doi:10.1186/1471-2407-13-178
18. Paul S, Maji P: muHEM for identification of differentially expressed miRNAs using hypercuboid equivalence partition matrix. BMC Bioinformatics 2013, 14:266. doi:10.1186/1471-2105-14-266
19. Student S, Fujarewicz K: Stable feature selection and classification algorithms for multiclass microarray data. Biol. Direct 2012, 7:33. doi:10.1186/1745-6150-7-33
20. Hwang T, Sun CH, Yun T, Yi GS: FiGS: a filter-based gene selection workbench for microarray data. BMC Bioinformatics 2010, 11:50. doi:10.1186/1471-2105-11-50
21. McLachlan G: Discriminant Analysis and Statistical Pattern Recognition. Wiley, New York; 1992.
22. Devroye L, Gyorfi L, Lugosi G: A Probabilistic Theory of Pattern Recognition. Springer, New York; 1996.
23. Sima C, Dougherty E: Optimal convex error estimators for classification. Pattern Recognit. 2006, 39(6):1763-1780. doi:10.1016/j.patcog.2006.03.020
24. Chernick M, Murthy V, Nealy C: Application of bootstrap and other resampling techniques: evaluation of classifier performance. Pattern Recognit. Lett. 1985, 3(3):167-178.
25. Fukunaga K, Hayes R: Estimation of classifier performance. IEEE Trans. Pattern Anal. Mach. Intell. 1989, 11(10):1087-1101. doi:10.1109/34.42839
26. McLachlan G: Error rate estimation in discriminant analysis: recent advances. Adv. Multivariate Stat. Anal. 1987, 233-252.
27. Davison A, Hall P: On the bias and variability of bootstrap and cross-validation estimates of error rate in discrimination problems. Biometrika 1992, 79(2):279-284.
28. Chernick M: Bootstrap Methods: A Guide for Practitioners and Researchers (Wiley Series in Probability and Statistics), 2nd ed. Wiley-Interscience, Hoboken; 2007.
29. Chatterjee S, Chatterjee S: Estimation of misclassification probabilities by bootstrap methods. Commun. Stat. Simul. Comput. 1983, 12:645-656.
30. Jain A, Dubes R, Chen C: Bootstrap techniques for error estimation. IEEE Trans. Pattern Anal. Mach. Intell. 1987, 9(5):628-633. doi:10.1109/TPAMI.1987.4767957
31. Raudys S: On the accuracy of a bootstrap estimate of the classification error. In Proceedings of the Ninth International Joint Conference on Pattern Recognition, Rome, 14-17 Nov 1988; 1230-1232.
32. Braga-Neto U, Dougherty E: Bolstered error estimation. Pattern Recognit. 2004, 37(6):1267-1281.
33. Braga-Neto U, Hashimoto R, Dougherty E, Nguyen D, Carroll R: Is cross-validation better than re-substitution for ranking genes? Bioinformatics 2004, 20(2):253-258.
34. Braga-Neto U, Dougherty E: Is cross-validation valid for small-sample microarray classification? Bioinformatics 2004, 20(3):374-380.
35. Kohavi R: A study of cross-validation and bootstrap for accuracy estimation and model selection. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI); 1995:1137-1145.
36. Toussaint G: An efficient method for estimating the probability of misclassification applied to a problem in medical diagnosis. Comput. Biol. Med. 1975, 4:269. doi:10.1016/0010-4825(75)90038-4
37. McLachlan G: A note on the choice of a weighting function to give an efficient method for estimating the probability of misclassification. Pattern Recognit. 1977, 9(2):147-149. doi:10.1016/0031-3203(77)90012-7
38. Raudys S, Jain A: Small sample size effects in statistical pattern recognition: recommendations for practitioners. IEEE Trans. Pattern Anal. Mach. Intell. 1991, 13(3):4-37. doi:10.1109/34.75512
39. John S: Errors in discrimination. Ann. Math. Stat. 1961, 32(4):1125-1144.
40. Moran M: On the expectation of errors of allocation associated with a linear discriminant function. Biometrika 1975, 62(1):141-148.
41. Imhof J: Computing the distribution of quadratic forms in normal variables. Biometrika 1961, 48(3/4):419-426. doi:10.2307/2332763
42. van de Vijver MJ, He YD, van’t Veer LJ, Dai H, Hart AAM, Voskuil DW, Schreiber GJ, Peterse JL, Roberts C, Marton MJ, Parrish M, Astma D, Witteveen A, Glas A, Delahaye L, van der Velde T, Bartelink H, Rodenhuis S, Rutgers ET, Friend SH, Bernards R: A gene-expression signature as a predictor of survival in breast cancer. N. Engl. J. Med. 2002, 347(25):1999-2009. doi:10.1056/NEJMoa021967
43. van’t Veer LJ, Dai H, van de Vijver MJ, He YD, Hart AAM, Mao M, Peterse HL, van der Kooy K, Marton MJ, Witteveen AT, Schreiber GJ, Kerkhoven RM, Roberts C, Linsley PS, Bernards R, Friend SH: Gene expression profiling predicts clinical outcome of breast cancer. Nature 2002, 415:530-536. doi:10.1038/415530a
44. Braga-Neto UM, Zollanvari A, Dougherty ER: Cross-validation under separate sampling: strong bias and how to correct it. Bioinformatics 2014. doi:10.1093/bioinformatics/btu527
45. Anderson T: Classification by multivariate analysis. Psychometrika 1951, 16:31-50. doi:10.1007/BF02313425
46. Raudys S: Comparison of the estimates of the probability of misclassification. In Proc. 4th Int. Conf. Pattern Recognition, Kyoto, Japan; 1978:280-282.
47. Breiman L: Bagging predictors. Mach. Learn. 1996, 24(2):123-140.
48. Vu T, Braga-Neto U: Is bagging effective in the classification of small-sample genomic and proteomic data? EURASIP J. Bioinformatics Syst. Biol. 2009, 2009: Article ID 158368. doi:10.1155/2009/158368
49. Vapnik V: Statistical Learning Theory. Wiley, New York; 1998.
50. Nijenhuis A, Wilf H: Combinatorial Algorithms, 2nd ed. Academic Press, New York; 1978.
51. Hills M: Allocation rules and their error rates. J. R. Stat. Soc. Series B (Methodological) 1966, 28(1):1-31.
52. Zollanvari A, Braga-Neto U, Dougherty E: On the sampling distribution of resubstitution and leave-one-out error estimators for linear classifiers. Pattern Recognit. 2009, 42(11):2705-2723. doi:10.1016/j.patcog.2009.05.003
53. Price R: Some non-central F-distributions expressed in closed form. Biometrika 1964, 51:107-122. doi:10.1093/biomet/51.1-2.107


Acknowledgements

The authors acknowledge the support of the National Science Foundation, through NSF awards CCF-0845407 (Braga-Neto) and CCF-0634794 (Dougherty).

Author information

Correspondence to Ulisses M Braga-Neto.


Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

TV proved Theorems 1 and 2. TV and SC conducted numerical experiments to compute Figures 1 and 2 and Tables 1 and 2. SC conducted the numerical experiments with gene expression data. UMB conceived the study and wrote the first draft of the manuscript. ERD contributed ideas on convex error estimation and revised the manuscript. All authors read and approved the final manuscript.



Cite this article

Vu, T., Sima, C., Braga-Neto, U.M. et al. Unbiased bootstrap error estimation for linear discriminant analysis. J Bioinform Sys Biology 2014, 15 (2014). https://doi.org/10.1186/s13637-014-0015-0
