Automated Analysis of Experiments using Hierarchical Garrote
Abstract
In this work, we propose an automatic method for the analysis of experiments that incorporates hierarchical relationships between the experimental variables. We use a modified version of the nonnegative garrote method for variable selection which can incorporate hierarchical relationships. The nonnegative garrote method requires a good initial estimate of the regression parameters for it to work well. To obtain the initial estimate, we use generalized ridge regression with the ridge parameters estimated from a Gaussian process prior placed on the underlying input-output relationship. The proposed method, called HiGarrote, is fast, easy to use, and requires no manual tuning. Analyses of several real experiments are presented to demonstrate its benefits over existing methods.
Keywords: Gaussian process; Generalized ridge regression; Nonnegative garrote; Variable selection.
1 Introduction
Traditionally, the modeling and analysis of experimental data are done using analysis of variance (ANOVA) and regression techniques (Box et al., 1978; Montgomery, 2017; Wu and Hamada, 2021). Regression is more general and the preferred method because some of the factors in the experiments can be continuous, and the regression methodology allows us to fit the response surface using smooth polynomial models. When some factors are categorical, they can also be incorporated into the regression methodology by introducing dummy variables. Because “orthogonality” is present in many experimental designs such as full factorial designs, the regression analysis of experimental data is much simpler than that of observational data. The $t$-statistics do not change when an effect is added or removed and therefore, the significant effects can be identified with ease. However, this simplicity has more or less disappeared as experimental design methods evolved over time and became more complex. Fractional factorial designs and nonregular designs have become more common in practice, where the effects can be aliased, which necessitates more sophisticated regression techniques for the analysis of experiments.
There are certain characteristics of experiments that warrant a different treatment of regression analysis compared to that commonly used in observational studies. Experiments are usually expensive and therefore, the data are smaller in size. However, experiments are conducted in controlled environments, and therefore, higher-order effects such as interactions and nonlinear effects can be entertained in the modeling. The need to model high-order effects with small data immediately introduces certain challenges in the regression analysis. First, the total number of effects can be much higher than the number of experimental runs. Second, experimenters over time have gained some understanding of the importance of various effects, which is formulated into certain hierarchical principles such as effect hierarchy and effect heredity (Wu and Hamada, 2021). The effect hierarchy principle states that lower-order effects are more important than higher-order effects. Under the strong heredity principle, an interaction is considered active only if both of its parent effects are active, whereas under weak heredity, only one of its parent effects needs to be active. These principles are supported by the findings of Li et al. (2006) and Ockuly et al. (2017), who provide comprehensive meta-analyses of two-level and response surface experiments, respectively. These principles need to be used intelligently to identify the significant effects from the small experimental data.
Analysis of regular designs can be done by first deriving all the aliasing relationships, estimating the aliased effects using regression, and identifying the significant aliased effects using $t$-statistics or half-normal plots and Lenth's method (Lenth, 1989). Then the effect hierarchy and heredity principles are utilized to break the aliases and identify the significant effects. This approach cannot be used for nonregular designs because of their complex aliasing relationships. Hamada and Wu (1992) proposed several forward selection strategies that incorporate the hierarchy and heredity principles. However, these strategies do not explore the complete space of models and therefore, can miss important effects. To search the model space more thoroughly, Chipman et al. (1997) proposed a Bayesian variable selection method that integrates the stochastic search variable selection (SSVS) algorithm of George and McCulloch (1993) with the hierarchical priors described in Chipman (1996). One major issue with the method is the challenge of specifying priors, which contain several tuning parameters. In addition, the approach remains computationally intensive because the SSVS algorithm usually requires a large number of Markov chain Monte Carlo (MCMC) samples to attain convergence.
Several extensions of regularization-based variable selection methods that preserve effect heredity have been proposed. Yuan et al. (2007) presented a fast algorithm by modifying the least angle regression (LARS) technique (Efron et al., 2004). However, their method is not convenient when one group of factors follows strong heredity and another follows weak heredity. To provide a more flexible approach, Yuan et al. (2009) incorporated heredity constraints into the nonnegative garrote (NG) method of Breiman (1995). However, NG requires least squares estimates of the parameters to use as initial estimates, which cannot be obtained in fractionated experiments because the number of parameters to estimate exceeds the number of runs. Extensions of LASSO (Tibshirani, 1996) can be found in Choi et al. (2010), Bien et al. (2013), and Hao et al. (2018). However, LASSO is not consistent in terms of estimation and variable selection, a property called the oracle property in Fan and Li (2001). Vazquez et al. (2021) built a best-subset selection algorithm that can accommodate any user-specified restriction, including heredity constraints. Due to the nature of best-subset selection, the algorithm necessitates searching for the optimal model across all possible model sizes, making it computationally demanding. More recently, Singh and Stufken (2024) proposed an aggregated version of the Gauss-Dantzig Selector (Candes and Tao, 2007). However, their method is limited to two-level designs only.
The main aim of this article is to develop an automatic method to analyze experimental data that requires little or no manual tuning. To achieve this goal, we will extend the nonnegative garrote method proposed in Yuan et al. (2009) and make it suitable for the analysis of experiments. We chose the nonnegative garrote because it possesses the oracle property (Zou, 2006) and can be automatically tuned (Xiong, 2010). The major hurdle in using the nonnegative garrote is the non-existence of least squares estimates. Yuan and Lin (2007) have shown that ridge regression (Hoerl and Kennard, 1970) can be used to obtain initial estimates when the least squares estimates do not exist. However, in this article, we will show that the usual ridge regression estimates do not perform well in the analysis of experiments because they do not obey the hierarchical relationships. A better initial estimate can be obtained using generalized ridge regression, which has a separate ridge parameter for each effect that can be chosen to reflect the hierarchical relationships. However, there are too many ridge parameters, which makes their specification challenging. By exploiting the connection between generalized ridge regression and Bayesian regression, we will use the functionally induced priors proposed in Joseph (2006) and Joseph and Delaney (2007) to obtain estimates of the ridge parameters that respect the hierarchy and heredity principles. Integrating this with the nonnegative garrote gives us the proposed method, which we call Hierarchical Garrote or, in short, HiGarrote.
The article is organized as follows. In Section 2, we first review the nonnegative garrote method and its limitations. We then develop our proposed method in Section 3. Several examples are provided in Section 4 to illustrate the advantages of the proposed method. We conclude the article with some remarks in Section 5.
2 Nonnegative Garrote and Its Limitations
Suppose that there are $p$ factors $x_1, \ldots, x_p$. Denote the data as $\{(\boldsymbol{x}_i, y_i)\}_{i=1}^{n}$, where $\boldsymbol{x}_i = (x_{i1}, \ldots, x_{ip})'$ and $y_i$ is the response, for $i = 1, \ldots, n$. Assume that

$$y_i = g(\boldsymbol{x}_i) + \epsilon_i, \quad i = 1, \ldots, n, \tag{1}$$

where $\epsilon_i$ stands for the random error resulting from uncontrollable factors and measurement error. Throughout the paper, we center all variables so that the observed mean is 0 and scale all predictors so that their sample standard deviations are the same.
2.1 Original Nonnegative Garrote
Consider approximating the latent function $g(\cdot)$ by a main-effects-only linear model,

$$g(\boldsymbol{x}) \approx \sum_{j=1}^{p} \beta_j x_j.$$

Let $\boldsymbol{X}$ be the $n \times p$ model matrix with columns $\boldsymbol{x}_{\cdot 1}, \ldots, \boldsymbol{x}_{\cdot p}$ and $\boldsymbol{y} = (y_1, \ldots, y_n)'$. The least squares estimator is $\hat{\boldsymbol{\beta}}^{LS} = (\boldsymbol{X}'\boldsymbol{X})^{-1}\boldsymbol{X}'\boldsymbol{y}$. The original nonnegative garrote (NG) (Breiman, 1995) works by scaling the least squares estimate with shrinkage factors $\theta_j \ge 0$ controlled by a tuning parameter $s$. The NG estimate is defined as $\hat{\beta}_j^{NG} = \theta_j \hat{\beta}_j^{LS}$, $j = 1, \ldots, p$. For an $s > 0$, the shrinkage factors are obtained by solving a quadratic programming problem:

$$\min_{\boldsymbol{\theta}} \; \frac{1}{2} \left\| \boldsymbol{y} - \sum_{j=1}^{p} \theta_j \hat{\beta}_j^{LS} \boldsymbol{x}_{\cdot j} \right\|^2 \quad \text{subject to} \quad \theta_j \ge 0, \; j = 1, \ldots, p, \quad \sum_{j=1}^{p} \theta_j \le s. \tag{2}$$

By selecting $s$ suitably, some elements of $\boldsymbol{\theta}$ can be exactly zero. Since $\hat{\beta}_j^{LS} \neq 0$ with a probability of 1, $x_j$ is considered active if and only if $\theta_j > 0$. In other words, $\theta_j > 0$ serves as an indicator of including $x_j$ in the selected model.
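The optimization in (2) can be sketched as follows: a minimal Python illustration on synthetic data, using scipy's SLSQP solver as a stand-in for a dedicated quadratic-programming routine (the data, coefficients, and budget $s$ here are all illustrative, not from the paper).

```python
import numpy as np
from scipy.optimize import minimize

def nonnegative_garrote(X, y, s):
    """Original NG: scale least squares estimates by shrinkage
    factors theta >= 0 with sum(theta) <= s, as in (2)."""
    beta_ls = np.linalg.lstsq(X, y, rcond=None)[0]
    Z = X * beta_ls  # column j of Z is beta_ls[j] * x_j
    rss = lambda theta: 0.5 * np.sum((y - Z @ theta) ** 2)
    cons = [{"type": "ineq", "fun": lambda t: s - t.sum()}]  # sum(theta) <= s
    bounds = [(0.0, None)] * X.shape[1]                      # theta_j >= 0
    theta0 = np.full(X.shape[1], s / X.shape[1])
    theta = minimize(rss, theta0, method="SLSQP",
                     bounds=bounds, constraints=cons).x
    return theta * beta_ls, theta

# synthetic example: only the first two of five predictors matter
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5))
y = 3.0 * X[:, 0] + 1.5 * X[:, 1] + 0.1 * rng.standard_normal(50)
beta_ng, theta = nonnegative_garrote(X, y, s=2.0)
```

With the budget $s = 2$, the shrinkage factors concentrate on the two truly active predictors while the irrelevant ones are driven toward zero.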
2.2 Nonnegative Garrote with Effect Heredity
In several problems, main-effects models are inadequate to approximate the underlying function $g(\cdot)$ in (1). To improve the approximation, we can include the quadratic effects and two-factor interactions:

$$g(\boldsymbol{x}) \approx \sum_{j=1}^{p} \beta_j x_j + \sum_{j=1}^{p} \beta_{jj} x_j^2 + \sum_{j < k} \beta_{jk} x_j x_k.$$

The shrinkage factors in the nonnegative garrote are obtained by

$$\min_{\boldsymbol{\theta}} \; \frac{1}{2} \left\| \boldsymbol{y} - \sum_{j \le k} \theta_{jk} \hat{\beta}_{jk} \boldsymbol{x}_{\cdot jk} - \sum_{j} \theta_{j} \hat{\beta}_{j} \boldsymbol{x}_{\cdot j} \right\|^2 \quad \text{subject to} \quad \theta_j, \theta_{jk} \ge 0, \quad \sum_{j} \theta_j + \sum_{j \le k} \theta_{jk} \le s, \tag{3}$$

where $\boldsymbol{x}_{\cdot jk}$ denotes the column of products $x_{ij} x_{ik}$ (with $\boldsymbol{x}_{\cdot jj}$ the column of $x_{ij}^2$).
In this optimization, it is possible to select a model that includes an interaction term while excluding both of its main effects, that is, $\theta_{jk} > 0$ while $\theta_j = \theta_k = 0$. To preserve the effect heredity principle in the nonnegative garrote, Yuan et al. (2009) proposed adding constraints that enforce strong or weak heredity among the effects.
Under the strong heredity principle, we must have $\theta_j > 0$ and $\theta_k > 0$ if $\theta_{jk} > 0$. This can be enforced by requiring $\theta_{jk} \le \min(\theta_j, \theta_k)$, which is equivalent to two convex constraints

$$\theta_{jk} \le \theta_j \quad \text{and} \quad \theta_{jk} \le \theta_k. \tag{4}$$

Thus, if $\theta_{jk} > 0$, (4) will guarantee that $\theta_j > 0$ and $\theta_k > 0$, ensuring that $x_j$ and $x_k$ are part of the chosen model.
Under the weak heredity principle, we must have at least one of $\theta_j$ or $\theta_k$ to be larger than 0 if $\theta_{jk} > 0$. This can be enforced by requiring $\theta_{jk} \le \max(\theta_j, \theta_k)$. However, this constraint is nonconvex, and the optimization in (3) cannot be solved using quadratic programming. To simplify the optimization, Yuan et al. (2009) proposed a convex relaxation of the constraint:

$$\theta_{jk} \le \theta_j + \theta_k. \tag{5}$$

Thus, if $\theta_{jk} > 0$, (5) will ensure that at least one of $\theta_j > 0$ or $\theta_k > 0$ holds, thereby guaranteeing that at least one of $x_j$ or $x_k$ is included in the selected model.
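As a sketch, the two kinds of heredity constraints can be written as linear inequalities $g(\boldsymbol{\theta}) \ge 0$ in the form scipy's constrained solvers accept; the index arguments below are illustrative positions of $\theta_j$, $\theta_k$, and $\theta_{jk}$ in the parameter vector.

```python
import numpy as np

def strong_heredity_cons(j, k, jk):
    """Strong heredity as in (4): theta_jk <= theta_j and
    theta_jk <= theta_k, written as two inequalities g(theta) >= 0."""
    return [
        {"type": "ineq", "fun": lambda t, j=j, jk=jk: t[j] - t[jk]},
        {"type": "ineq", "fun": lambda t, k=k, jk=jk: t[k] - t[jk]},
    ]

def weak_heredity_cons(j, k, jk):
    """Convex relaxation of weak heredity as in (5):
    theta_jk <= theta_j + theta_k."""
    return [
        {"type": "ineq",
         "fun": lambda t, j=j, k=k, jk=jk: t[j] + t[k] - t[jk]},
    ]

# theta = (theta_j, theta_k, theta_jk): the interaction factor exceeds
# either parent alone but not their sum
theta = np.array([0.5, 0.3, 0.6])
strong = strong_heredity_cons(0, 1, 2)
weak = weak_heredity_cons(0, 1, 2)
```

With $\boldsymbol{\theta} = (0.5, 0.3, 0.6)$, the strong constraint (4) is violated (the first inequality evaluates to $-0.1$) while the weak relaxation (5) holds ($0.2 \ge 0$), illustrating how the relaxation enlarges the feasible region.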
2.3 Nonnegative Garrote in Experimental Analysis
It is more common in the analysis of experiments (Wu and Hamada, 2021) to approximate $g(\cdot)$ using a linear model with both main effects (linear and quadratic) and their cross product terms (two-factor interactions):

$$g(\boldsymbol{x}) \approx \sum_{j=1}^{p} \beta_j x_j + \sum_{j=1}^{p} \beta_{jj} x_j^2 + \sum_{j<k} \left( \beta_{jk} x_j x_k + \beta_{jjk} x_j^2 x_k + \beta_{jkk} x_j x_k^2 + \beta_{jjkk} x_j^2 x_k^2 \right), \tag{6}$$

where $q$ denotes the total number of terms in the model. For an experiment, the data size $n$ is typically much smaller than the number of parameters $q$, making it impossible to obtain the least squares estimate required in the original nonnegative garrote. Therefore, Yuan and Lin (2006) suggested using ridge regression to obtain an initial estimate in NG.
Let $\boldsymbol{X}$ be the $n \times q$ model matrix. The ridge regression estimate is given by

$$\hat{\boldsymbol{\beta}}^{R} = (\boldsymbol{X}'\boldsymbol{X} + \lambda \boldsymbol{I})^{-1} \boldsymbol{X}'\boldsymbol{y}, \tag{7}$$
where $\lambda > 0$ is a tuning parameter. Then, the NG estimate is defined as $\hat{\beta}_j^{NG} = \theta_j \hat{\beta}_j^{R}$, where the shrinkage factors are obtained as

$$\min_{\boldsymbol{\theta}} \; \frac{1}{2} \left\| \boldsymbol{y} - \sum_{j=1}^{q} \theta_j \hat{\beta}_j^{R} \boldsymbol{x}_{\cdot j} \right\|^2 \quad \text{subject to} \quad \theta_j \ge 0, \; j = 1, \ldots, q, \quad \sum_{j=1}^{q} \theta_j \le s. \tag{8}$$
To tune the hyperparameter $s$, we employ the generalized cross-validation (GCV) criterion introduced by Golub et al. (1979), which was extended by Xiong (2010) to handle NG with the ridge estimate. The GCV criterion is defined as

$$GCV(s) = \frac{\|\boldsymbol{y} - \hat{\boldsymbol{y}}(s)\|^2 / n}{\{1 - df(s)/n\}^2}, \tag{9}$$

where $df(s)$ is the degrees of freedom of the NG fit $\hat{\boldsymbol{y}}(s)$. The degrees of freedom is computed as

$$df(s) = \sum_{j=1}^{q} \theta_j(s) \, m_{jj}, \tag{10}$$

where $m_{jj}$ is the $(j,j)$th entry of the matrix $\boldsymbol{M} = (\boldsymbol{X}'\boldsymbol{X} + \lambda \boldsymbol{I})^{-1} \boldsymbol{X}'\boldsymbol{X}$. After constructing the solution path across a series of $s$ values, the optimal $s$ is selected by minimizing the GCV criterion.
The upper bound of $s$ in the search is chosen to ensure that the number of selected terms is not too high compared to the degrees of freedom in the data.
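The recipe above can be sketched end-to-end in Python. This is a minimal illustration on synthetic data, not the authors' implementation: it computes the ridge estimate (7), solves (8) over a small grid of $s$ with scipy's SLSQP, and picks $s$ by the GCV criterion (9). The degrees-of-freedom formula used here, $df(s) = \sum_j \theta_j(s) m_{jj}$ with $\boldsymbol{M} = (\boldsymbol{X}'\boldsymbol{X} + \lambda\boldsymbol{I})^{-1}\boldsymbol{X}'\boldsymbol{X}$, is an assumption standing in for the exact formula of Xiong (2010).

```python
import numpy as np
from scipy.optimize import minimize

def ridge(X, y, lam):
    """Ridge estimate (7)."""
    q = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(q), X.T @ y)

def ng_path_gcv(X, y, lam, s_grid):
    """NG with ridge initial estimate, s chosen by GCV (9)."""
    n, q = X.shape
    beta_r = ridge(X, y, lam)
    Z = X * beta_r                                   # scaled columns
    M = np.linalg.solve(X.T @ X + lam * np.eye(q), X.T @ X)
    best = (np.inf, None, None)
    for s in s_grid:
        rss = lambda t: 0.5 * np.sum((y - Z @ t) ** 2)
        cons = [{"type": "ineq", "fun": lambda t, s=s: s - t.sum()}]
        theta = minimize(rss, np.full(q, s / q), method="SLSQP",
                         bounds=[(0.0, None)] * q, constraints=cons).x
        df = float(theta @ np.diag(M))               # assumed df stand-in
        gcv = (np.sum((y - Z @ theta) ** 2) / n) / (1.0 - df / n) ** 2
        if gcv < best[0]:
            best = (gcv, s, theta * beta_r)
    return best

rng = np.random.default_rng(1)
X = rng.standard_normal((30, 6))
y = 2.0 * X[:, 0] - 1.0 * X[:, 3] + 0.1 * rng.standard_normal(30)
gcv, s_opt, beta_ng = ng_path_gcv(X, y, lam=1.0, s_grid=[0.5, 1, 2, 3])
```

The GCV-selected model retains the two truly active terms with large coefficients and shrinks the irrelevant ones toward zero.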
However, we will show that the ridge regression estimate is inadequate in experimental data analysis. Consider a toy example with the 12-run Plackett-Burman (PB) design. The response is generated by the true model $y = 20A + 10AB + 5AC + \epsilon$. See Table 1 for the design matrix and data. We use the R package glmnet to perform the ridge regression and apply the weak heredity constraint when solving the quadratic programming problem (8). This approach identifies only the main effect $A$ and misses the two interactions $AB$ and $AC$ that are present in the true model. The issue arises from the underestimation of the initial estimate $\hat{\boldsymbol{\beta}}^{R}$. Due to the presence of numerous independent and irrelevant effects in the model matrix, we found that $\lambda$ is often overestimated, masking the effects of important terms. In the next section, we will propose an improved initial estimate for NG to overcome the foregoing issue.
Run | A | B | C | D | E | F | G | H | I | J | K | $y$
1 | 1 | 1 | -1 | 1 | 1 | 1 | -1 | -1 | -1 | 1 | -1 | 25 |
2 | 1 | -1 | 1 | 1 | 1 | 1 | -1 | -1 | 1 | 1 | -1 | 15 |
3 | -1 | 1 | 1 | 1 | 1 | -1 | -1 | -1 | 1 | 1 | 1 | -35 |
4 | 1 | 1 | 1 | -1 | -1 | -1 | -1 | 1 | 1 | 1 | -1 | 35 |
5 | 1 | 1 | -1 | 1 | -1 | 1 | -1 | 1 | -1 | -1 | 1 | 25 |
6 | 1 | -1 | -1 | 1 | 1 | -1 | 1 | -1 | 1 | 1 | 1 | 5 |
7 | -1 | -1 | -1 | 1 | -1 | 1 | -1 | 1 | 1 | 1 | 1 | -5 |
8 | -1 | 1 | -1 | 1 | -1 | 1 | 1 | 1 | 1 | 1 | -1 | -15 |
9 | -1 | -1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | -1 | -1 | -25 |
10 | 1 | -1 | 1 | 1 | -1 | -1 | -1 | 1 | 1 | -1 | -1 | 15 |
11 | -1 | 1 | 1 | 1 | 1 | -1 | 1 | -1 | -1 | 1 | 1 | -35 |
12 | -1 | -1 | -1 | -1 | -1 | -1 | -1 | -1 | -1 | -1 | -1 | -5 |
3 Methodology
The ridge regression estimate in (7) is the solution to the following optimization problem:

$$\min_{\boldsymbol{\beta}} \; \|\boldsymbol{y} - \boldsymbol{X}\boldsymbol{\beta}\|^2 + \lambda \sum_{j=1}^{q} \beta_j^2. \tag{11}$$
Clearly ridge regression gives equal importance to all the effects, which is not good because the effects are likely to follow the hierarchy and heredity principles. One approach to overcome this issue is to use generalized ridge regression (Hoerl and Kennard, 1970):

$$\min_{\boldsymbol{\beta}} \; \|\boldsymbol{y} - \boldsymbol{X}\boldsymbol{\beta}\|^2 + \sum_{j=1}^{q} \lambda_j \beta_j^2, \tag{12}$$
where the $\lambda_j$'s can be chosen to obey the hierarchical relationships. Although generalized ridge regression is known to perform better than ridge regression in high-dimensional problems (Ishwaran and Rao, 2014), it becomes an arduous task to tune the $q$ ridge parameters. Moreover, it is also not clear how to enforce the hierarchical relationships on the ridge parameters. We will propose an idea to do this.
First note that the generalized ridge regression solution can be viewed as the posterior mean of a Bayesian linear regression with prior

$$\boldsymbol{\beta} \sim \mathcal{N}(\boldsymbol{0}, \boldsymbol{\Omega}), \tag{13}$$
where $\boldsymbol{\Omega}$ is a diagonal matrix with $\Omega_{jj} = \sigma^2/\lambda_j$ and $\sigma^2$ is the error variance. Thus, we only need to determine how to specify the matrix $\boldsymbol{\Omega}$ in (13). Joseph (2006) has proposed an idea to do this by first postulating a Gaussian process (GP) prior (Santner et al., 2018) on the underlying function $g(\cdot)$ in (1) and then inducing a prior for the linear model parameters. It is shown in Joseph (2006) and Joseph and Delaney (2007) that such priors satisfy effect hierarchy and heredity principles under a product correlation structure. Moreover, the prior usually contains only a small number of unknown parameters, on the order of the number of factors $p$, which is much less than $q$, and there are well-developed methods to estimate them. This is explained in the next section.
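The connection between generalized ridge regression and the Bayesian posterior mean can be checked numerically. A small sketch (synthetic data) verifying that the solution of (12), with $\lambda_j = \sigma^2/\Omega_{jj}$, equals the posterior mean under the prior (13):

```python
import numpy as np

rng = np.random.default_rng(2)
n, q = 20, 8
X = rng.standard_normal((n, q))
y = rng.standard_normal(n)
sigma2 = 0.5
omega = rng.uniform(0.2, 2.0, q)   # prior variances Omega_jj (illustrative)
lam = sigma2 / omega               # implied ridge parameters lambda_j

# generalized ridge solution: (X'X + diag(lambda))^{-1} X'y
beta_gr = np.linalg.solve(X.T @ X + np.diag(lam), X.T @ y)

# Bayesian posterior mean: Omega X' (X Omega X' + sigma2 I)^{-1} y
Omega = np.diag(omega)
beta_bayes = Omega @ X.T @ np.linalg.solve(
    X @ Omega @ X.T + sigma2 * np.eye(n), y)
```

The two expressions agree to numerical precision, which is the Woodbury-identity form of the equivalence used throughout this section.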
3.1 Functionally Induced Prior
Let the $l$th factor have $m_l$ levels in the experiment, $l = 1, \ldots, p$. Consider the model in (6). Although it has only the main effects (linear and quadratic) and their cross product terms (two-factor interactions, 2fi), it is easier to construct the prior for all possible effects. Thus, we approximate $g(\cdot)$ by a linear model including main effects and all the interactions, i.e.,

$$g(\boldsymbol{x}) = \beta_0 + \sum_{l=1}^{p} \sum_{\nu=1}^{m_l - 1} \beta_{l}^{(\nu)} u_{l}^{(\nu)}(x_l) + \sum_{l < l'} \sum_{\nu, \nu'} \beta_{ll'}^{(\nu \nu')} u_{l}^{(\nu)}(x_l) \, u_{l'}^{(\nu')}(x_{l'}) + \cdots, \tag{14}$$

where $u_l^{(\nu)}(\cdot)$ denotes the $\nu$th dummy variable of factor $l$ and the expansion continues up to the $p$-factor interactions. To avoid confusion, we let $\boldsymbol{\beta}$ denote the full vector of coefficients in (14).
The main effects of a factor with $m$ levels can be described by $m - 1$ dummy variables, and the interactions can be expressed as the products of these dummy variables. For example, consider an experiment with two factors each at three levels. Let $u_1$ and $u_2$ be the main effects of the first factor and $v_1$ and $v_2$ be the main effects of the second factor. The interaction terms are $u_1 v_1$, $u_1 v_2$, $u_2 v_1$, and $u_2 v_2$. Many popular coding systems can be used to define dummy variables, such as treatment coding, Helmert coding, and orthogonal polynomial coding (Wu and Hamada, 2021; Harville, 1998).
The idea of functionally induced priors (Joseph, 2006) is to postulate a GP prior for $g(\cdot)$ and use that to induce the prior for $\boldsymbol{\beta}$. Thus, assume

$$g(\cdot) \sim GP\left(\mu, \tau^2 \psi(\cdot, \cdot)\right), \tag{15}$$

where $\tau^2 \psi(\cdot, \cdot)$ is the covariance function of the Gaussian process defined as $\operatorname{cov}\{g(\boldsymbol{x}_i), g(\boldsymbol{x}_j)\} = \tau^2 \psi(\boldsymbol{x}_i, \boldsymbol{x}_j)$. Thus, we have $\boldsymbol{g} = \boldsymbol{U} \boldsymbol{\beta}$, where $\boldsymbol{g}$ is the vector of function values for the full factorial design and $\boldsymbol{U}$ the model matrix corresponding to (14). Note that $\boldsymbol{g}$ has a multivariate normal distribution with mean $\mu \boldsymbol{1}$ and covariance $\tau^2 \boldsymbol{\Psi}$, where $\boldsymbol{\Psi}$ is the corresponding correlation matrix. Since $\boldsymbol{\beta} = \boldsymbol{U}^{-1} \boldsymbol{g}$, we obtain

$$\boldsymbol{\beta} \sim \mathcal{N}\left( \mu \boldsymbol{U}^{-1} \boldsymbol{1}, \; \tau^2 \boldsymbol{U}^{-1} \boldsymbol{\Psi} (\boldsymbol{U}^{-1})' \right). \tag{16}$$
Assume that the correlation function has a product correlation structure, i.e.,

$$\psi(\boldsymbol{x}_i, \boldsymbol{x}_j) = \prod_{l=1}^{p} \psi_l(x_{il}, x_{jl}).$$

Under the product correlation structure, Joseph and Delaney (2007) have shown that

$$\boldsymbol{U}^{-1} \boldsymbol{\Psi} (\boldsymbol{U}^{-1})' = \bigotimes_{l=1}^{p} \boldsymbol{U}_l^{-1} \boldsymbol{\Psi}_l (\boldsymbol{U}_l^{-1})', \tag{17}$$

where $\otimes$ denotes the Kronecker product, $\boldsymbol{U}_l$ is the model matrix for factor $l$, and $\boldsymbol{\Psi}_l$ the corresponding correlation matrix. For example, suppose that factor $l$ is a three-level factor with levels 1, 2, and 3. The model matrix using orthogonal polynomial coding is

$$\boldsymbol{U}_l = \begin{pmatrix} 1 & -1 & 1 \\ 1 & 0 & -2 \\ 1 & 1 & 1 \end{pmatrix}$$

(up to column scaling), and the correlation matrix is

$$\boldsymbol{\Psi}_l = \begin{pmatrix} 1 & \psi_l(1,2) & \psi_l(1,3) \\ \psi_l(2,1) & 1 & \psi_l(2,3) \\ \psi_l(3,1) & \psi_l(3,2) & 1 \end{pmatrix}.$$
The benefit of this structure is that we can select the most appropriate coding system and correlation function for each factor according to our modeling requirements. Due to its popularity, we use the Gaussian correlation function for each factor (Santner et al., 2018). Let $\rho_l \in (0, 1)$ denote the correlation parameter of factor $l$. Then, the correlation function can be written as

$$\psi_l(x_{il}, x_{jl}) = \rho_l^{d_l^2(x_{il}, x_{jl})}, \tag{18}$$

where $d_l(\cdot, \cdot)$ is a distance between the levels of factor $l$.
Although the form of the correlation function is the same for all factors, the definition of the distance $d_l$ should depend on the type of factor.
For quantitative factors, we define $d_l(x_{il}, x_{jl}) = |x_{il} - x_{jl}|$ for two runs $i$ and $j$, with the levels coded to be equally spaced. Suppose that there are $p$ quantitative three-level factors and their model matrices are encoded using orthogonal polynomial coding. Then, (16) simplifies to (Joseph and Delaney, 2007)

$$\operatorname{var}(\beta_u) = \tau^2 \prod_{l=1}^{p} r_{l1}^{u_{l1}} \, r_{l2}^{u_{l2}}, \tag{19}$$

where $r_{l1}$ and $r_{l2}$ are functions of $\rho_l$ given in Joseph and Delaney (2007), $u_{l1} = 1$ if the effect $\beta_u$ includes the linear effect of factor $l$ and 0 otherwise, and $u_{l2} = 1$ if $\beta_u$ includes the quadratic effect of factor $l$ and 0 otherwise. Although the prior for $\boldsymbol{\beta}$ does include covariance terms, we do not need them to construct the diagonal matrix $\boldsymbol{\Omega}$ in (13).
Since $r_{l1}, r_{l2} < 1$, the variance of an interaction effect will be smaller than that of its parent effects. In other words, the lower-order effects are likely to be more important than the higher-order effects, and thus the priors satisfy the effect hierarchy principle. Moreover, since the variance of an interaction effect is proportional to the product of the variances of its parent effects, the priors also satisfy the effect heredity principle.
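These properties can be verified numerically using (17). A small sketch for two two-level factors (the $\pm 1$ coding and the $\rho$ values are illustrative): the diagonal of the Kronecker product gives the induced prior variances, and the interaction's variance is the product of the per-factor main-effect terms, hence smaller than either parent's variance.

```python
import numpy as np

def induced_cov(U, Psi):
    """Per-factor block U_l^{-1} Psi_l (U_l^{-1})' from (17)."""
    Uinv = np.linalg.inv(U)
    return Uinv @ Psi @ Uinv.T

# two-level factor: columns (intercept, main effect), levels -1 and +1
U = np.array([[1.0, -1.0],
              [1.0,  1.0]])
rho1, rho2 = 0.5, 0.7
V1 = induced_cov(U, np.array([[1.0, rho1], [rho1, 1.0]]))
V2 = induced_cov(U, np.array([[1.0, rho2], [rho2, 1.0]]))

# Kronecker product: diagonal ordered as
# (intercept, main of factor 2, main of factor 1, interaction)
V = np.kron(V1, V2)
var = np.diag(V)
var_main1 = V1[1, 1] * V2[0, 0]
var_main2 = V1[0, 0] * V2[1, 1]
var_int = var[3]
```

Here each $V_l$ comes out diagonal with entries $(1+\rho_l)/2$ and $(1-\rho_l)/2$, so the interaction variance $V_1[1,1] \cdot V_2[1,1]$ is smaller than either main-effect variance whenever $\rho_l > 0$, which is the hierarchy property stated above.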
For qualitative factors where the relative distances between levels cannot be quantified, Joseph and Delaney (2007) define $d_l(x_{il}, x_{jl}) = 0$ if $x_{il} = x_{jl}$ and 1 otherwise, i.e., the Hamming distance between the levels. However, this definition will assign equal importance to all the main effects of $x_l$ even if not all of them are active. Since the main effects are represented by dummy variables, we define a relative distance $d_{l\nu}$ for each dummy variable of $x_l$, where the subscript $\nu$ stands for the $\nu$th dummy variable. Then, according to (18), the form of the corresponding correlation function would be

$$\psi_l(x_{il}, x_{jl}) = \prod_{\nu=1}^{m_l - 1} \rho_{l\nu}^{d_{l\nu}^2(x_{il}, x_{jl})}.$$
Suppose that there are $p$ qualitative three-level factors and their model matrices are encoded according to Helmert coding. Then, as shown in the Appendix, the marginals of the prior in (16) will become

$$\operatorname{var}(\beta_u) = \tau^2 \prod_{l=1}^{p} r_{l1}^{u_{l1}} \, r_{l2}^{u_{l2}}, \tag{20}$$

where $r_{l1}$ and $r_{l2}$ are now functions of $\rho_{l1}$ and $\rho_{l2}$, $u_{l1} = 1$ if $\beta_u$ includes the first main effect of factor $l$ and 0 otherwise, and $u_{l2} = 1$ if $\beta_u$ includes the second main effect of factor $l$ and 0 otherwise. As in the case of quantitative factors, effect hierarchy and heredity are built-in in these priors.
3.2 Initial Estimate
Our aim is to obtain an initial estimate of $\boldsymbol{\beta}$ from $\boldsymbol{y} = \boldsymbol{X}\boldsymbol{\beta} + \boldsymbol{\epsilon}$, where $\boldsymbol{\epsilon} \sim \mathcal{N}(\boldsymbol{0}, \sigma^2 \boldsymbol{I})$ and $\boldsymbol{\beta} \sim \mathcal{N}(\boldsymbol{0}, \tau^2 \boldsymbol{R})$. Here $\boldsymbol{R}$ is a diagonal matrix, which is constructed as in (19) and (20). The posterior mean of $\boldsymbol{\beta}$ is given by

$$\hat{\boldsymbol{\beta}} = \tau^2 \boldsymbol{R} \boldsymbol{X}' \left( \tau^2 \boldsymbol{X} \boldsymbol{R} \boldsymbol{X}' + \sigma^2 \boldsymbol{I} \right)^{-1} \boldsymbol{y}. \tag{21}$$
To obtain the initial estimate of $\boldsymbol{\beta}$ using (21), several parameters need to be specified, which include $\sigma^2$, $\tau^2$, and the correlation parameters $\boldsymbol{\rho} = (\rho_1, \ldots, \rho_p)$ that define $\boldsymbol{R}$, where each $\rho_l$ can be a vector. A good estimate of $\sigma^2$ can be obtained if the experiment contains replicates. Suppose that there are $r$ replicates in each run and let $s_i^2$ be the sample variance for the $i$th run, $i = 1, \ldots, n$. Then $\hat{\sigma}^2 = \sum_{i=1}^{n} s_i^2 / n$ can be a good estimate of $\sigma^2$. However, if the experiment is unreplicated, there will be no information to estimate $\sigma^2$. Instead of assuming that there is no noise, we assume that the noise would contribute at least a small fraction of the variation in the response to prevent overfitting. Therefore, we reparameterize (21) to

$$\hat{\boldsymbol{\beta}} = \boldsymbol{R} \boldsymbol{X}' \left( \boldsymbol{X} \boldsymbol{R} \boldsymbol{X}' + \eta \boldsymbol{I} \right)^{-1} \boldsymbol{y}, \tag{22}$$

where $\eta = \sigma^2 / \tau^2$. This form can also be used for replicated experiments as long as we replace $\boldsymbol{y}$ with the vector of sample means and divide $\sigma^2$ by $r$.
Joseph and Delaney (2007) have shown that if we use orthogonal coding for each factor, the scale of the induced prior is determined by $s(\boldsymbol{\Psi}_l)$, the sum of all elements in $\boldsymbol{\Psi}_l$, which can be absorbed into $\tau^2$. Therefore, we only need to focus on the specifications of $\eta$ and $\boldsymbol{\rho}$. These hyperparameters can be estimated by minimizing the negative log-likelihood of $\boldsymbol{y}$ (Santner et al., 2018). Thus,
$$(\hat{\eta}, \hat{\boldsymbol{\rho}}) = \arg\min_{\eta, \boldsymbol{\rho}} \; \log \det \left( \boldsymbol{\Psi}_n + \eta \boldsymbol{I} \right) + n \log \left\{ \boldsymbol{y}' \left( \boldsymbol{\Psi}_n + \eta \boldsymbol{I} \right)^{-1} \boldsymbol{y} \right\}, \tag{23}$$

where $\boldsymbol{\Psi}_n$ is the correlation matrix of the $n$ runs.
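As a sketch, the likelihood can be evaluated in its unprofiled Gaussian form; here we assume $\boldsymbol{y} \sim \mathcal{N}(\boldsymbol{0}, \tau^2(\boldsymbol{\Psi}_n + \eta \boldsymbol{I}))$ after centering, with a synthetic correlation matrix, and check the value against scipy's multivariate normal density.

```python
import numpy as np
from scipy.stats import multivariate_normal

def neg_log_lik(y, Psi_n, eta, tau2):
    """-log N(y | 0, tau2 * (Psi_n + eta * I))."""
    n = len(y)
    K = tau2 * (Psi_n + eta * np.eye(n))
    _, logdet = np.linalg.slogdet(K)        # stable log-determinant
    quad = y @ np.linalg.solve(K, y)        # y' K^{-1} y
    return 0.5 * (n * np.log(2 * np.pi) + logdet + quad)

# synthetic correlation matrix from a Gaussian correlation function
x = np.linspace(0, 1, 6)
Psi_n = np.exp(-5.0 * (x[:, None] - x[None, :]) ** 2)
y = np.sin(2 * np.pi * x)

nll = neg_log_lik(y, Psi_n, eta=0.1, tau2=1.5)
ref = -multivariate_normal(mean=np.zeros(6),
                           cov=1.5 * (Psi_n + 0.1 * np.eye(6))).logpdf(y)
```

A function of this form is what the optimizer of the next paragraph would minimize over $(\eta, \boldsymbol{\rho})$; the nugget term $\eta \boldsymbol{I}$ also keeps the matrix well conditioned.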
Different from the usual applications of GP modeling, we are not only interested in accurate prediction of the output but also in the selection of important effects. Therefore, it is very important for us to obtain the global optimum of (23). This is not easy because when $n$ is small, several models can fit the data well and therefore, the likelihood can be multi-modal with several local optima. While numerous global optimization algorithms are available for this purpose, we need one that requires only a few function evaluations and is robust to the choice of starting points. Therefore, we adopt a gradient-based multi-start global optimization strategy.
For numerical stability, we put mild bound constraints on the hyperparameters $\rho_l$, for all $l$, and $\eta$. First, we generate a space-filling design with multiple starting points in the feasible region. Here, we use the MaxPro design of Joseph et al. (2015). Second, for each starting point, we search for the local optimum using a local optimization algorithm. Here we use the Method of Moving Asymptotes of Svanberg (2002). Among those local optima, the one with the lowest negative log-likelihood value is used as the estimate of the hyperparameters. We choose a derivative-based approach not only because gradients reduce the number of function evaluations but also because it gives more robust results than derivative-free algorithms.
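A minimal sketch of the multi-start strategy, with stand-ins for components we do not reproduce here: a regular grid instead of a MaxPro design and scipy's L-BFGS-B instead of the Method of Moving Asymptotes, applied to a multi-modal toy objective.

```python
import itertools
import numpy as np
from scipy.optimize import minimize

def multistart_minimize(fun, bounds, n_per_dim=7):
    """Gradient-based multi-start search: local L-BFGS-B runs from a
    grid of starting points; return the best local optimum found."""
    axes = [np.linspace(lo, hi, n_per_dim) for lo, hi in bounds]
    best = None
    for x0 in itertools.product(*axes):
        res = minimize(fun, np.array(x0), method="L-BFGS-B", bounds=bounds)
        if best is None or res.fun < best.fun:
            best = res
    return best

# multi-modal test objective: global minimum 0 at the origin,
# local minima near x1 = k * pi / 5
f = lambda x: x[0] ** 2 + x[1] ** 2 + np.sin(5 * x[0]) ** 2
res = multistart_minimize(f, [(-2.0, 2.0), (-2.0, 2.0)])
```

A single local run started in the wrong basin would stop at a local optimum with objective value around 0.39; the multi-start wrapper recovers the global minimum at the origin.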
3.3 HiGarrote
Once the hyperparameters are estimated from (23), the shrinkage factors and the regression coefficients can be obtained as in (8) using the initial estimates from (22). The tuning parameter $s$ in (8) can be selected as described in Section 2.3. A solution path across a range of $s$ is first constructed. The optimal $s$ is then selected by minimizing the GCV criterion (9), where the degrees of freedom is calculated as in (10) with the ridge estimate replaced by the initial estimate from (22).
The upper bound of $s$ used in the optimization is based on the findings of Box and Meyer (1986) and Wu and Hamada (2021, p. 424), which indicate that the proportion of active effects (with the intercept term included) typically ranges between 0.13 and 0.27 of the design's run size.
For replicated experiments where a reliable estimate of $\sigma^2$ can be obtained, the shrinkage factors can be directly computed by solving (Xiong, 2010)

$$\min_{\boldsymbol{\theta}} \; \frac{1}{2} \left\| \boldsymbol{y} - \sum_{j=1}^{q} \theta_j \hat{\beta}_j \boldsymbol{x}_{\cdot j} \right\|^2 + \hat{\sigma}^2 \sum_{j=1}^{q} \theta_j \quad \text{subject to} \quad \theta_j \ge 0, \; j = 1, \ldots, q. \tag{24}$$
Algorithm 1 summarizes the proposed Hierarchical Garrote (HiGarrote) method. Most of the computational overhead of the method is in step 1 of the algorithm, i.e., the optimization in (23).
3.4 Simulation
To demonstrate the effectiveness of HiGarrote, a simulation study is conducted using the toy example described in Section 2.3, based on a 12-run PB design. The response is generated from the model $y = 20A + 10AB + 5AC + \epsilon$.
The simulation is repeated 100 times, with new data generated each time. For each simulated dataset, both HiGarrote and the nonnegative garrote (NG) with ridge estimate (Yuan et al.,, 2009) are fitted using weak heredity.
Figure 1 presents the boxplots of the estimates of selected effects using NG with ridge estimate and HiGarrote.
As shown in Figure 1, the NG with ridge estimate consistently identifies only the main effect A and occasionally AB. However, the estimate for A shows large variability, and the estimate for AB remains close to 0. Importantly, the crucial interaction AC is never selected, indicating its limitation in identifying weaker yet significant interactions. In contrast, HiGarrote successfully identifies the important effects A, AB, and AC, with estimates close to their true values of 20, 10, and 5. While HiGarrote occasionally selects additional effects such as B, C, and BC, their estimates are generally very small (near 0) with low variability.
4 Real Data Examples
In this section we evaluate HiGarrote’s performance compared to several existing methods using different experimental designs such as regular, nonregular, and screening designs. The following methods are chosen for comparison.
stepH: This is the stepwise variable selection strategy proposed in Hamada and Wu, (1992).
BayesH: This is the Bayesian variable selection technique proposed in Chipman et al., (1997), which incorporates heredity principles in the SSVS algorithm.
hierNet: This is the LASSO-based method proposed in Bien et al., (2013), which is implemented in the R package hierNet.
RAMP: This is the LASSO-based method proposed in Hao et al., (2018), which is implemented in the R package RAMP.
GDSARM: This is the Gauss-Danzig selector aggregation proposed in Singh and Stufken, (2024), which is implemented in the R package GDSARM.
To make the exposition simple, all the methods are implemented using the weak heredity principle. We are not aware of any software packages that implement stepH and BayesH. Therefore, comparisons with these two methods are limited to the cases available in the literature. The two methods hierNet and RAMP are developed for general-purpose regression problems and therefore, are not suitable for the analysis of certain experimental designs such as mixed-level designs. Therefore, their results are reported only when we are able to implement them in the example. GDSARM can be applied only to two-level designs, so it is implemented only in the examples in Sections 4.1.1 and 4.2.1. On the other hand, HiGarrote is general and flexible, and can be used with any type of experimental design.
4.1 Regular Designs
4.1.1 Two-Level Regular Design
Consider the experiment reported by Raghavarao and Altan (2003), where the nine factors are labeled A to J, excluding I. See Table S1 in the supplementary materials for details of the design matrix and data. Using half-normal plots and Lenth's method, the aliased effects AH, J, E, G, and CH have the smallest p-values, although none of them are significant at the 0.05 level. Due to the properties of the design matrix, all the two-factor interactions (2fi) are fully aliased with either main effects or other 2fis. By ignoring three- and higher-order interactions, we can establish aliasing relationships among these five effects.
By applying the effect hierarchy principle, we can select J, E, and G from the first three aliasing relationships. Now using effect heredity, the last two aliasing relationships reduce to $EJ = DG$ and $GJ = DE$. These effects cannot be disentangled using heredity alone because G, J, and E are all active. Therefore, Raghavarao and Altan (2003) conducted a heuristic analysis, and ultimately concluded that the effects J, E, G, EJ, and GJ are active. Their findings are supported by scientists, who believed that the main effects of E, G, and J, as well as their potential interactions, are likely to be active.
Now consider HiGarrote. As summarized in Table 2, HiGarrote automatically identifies the five active effects without requiring prior knowledge of the aliasing relationships or domain expertise from scientists. This is no accident, and the reason behind its selection can be explained as follows. Because its parent effects E and J are active, the prior variance of EJ is much larger than that of DG. The observed effect of an aliased pair is divided among the two aliased effects proportional to their prior variances (Joseph, 2006). Therefore, the initial estimate of EJ is much larger than that of DG, which leads to the automatic selection of EJ by the nonnegative garrote instead of DG. The same thing happens between GJ and DE. Although HiGarrote selects three additional effects, H, HJ, and B, their estimates are relatively low compared to the five active effects. In terms of computational time tested on a laptop equipped with an Apple M2 chip, HiGarrote takes 0.54 seconds to analyze the experiment. For ease of comparison, the $R^2$ values for all the methods are calculated based on least squares. We also applied hierNet, RAMP, and GDSARM to this experiment. However, none of these methods identified any effects.
Method | Model | $R^2$
---|---|---|
Raghavarao and Altan (2003) | J, E, G, EJ, GJ | 70%
HiGarrote | J, E, G, EJ, GJ, H, HJ, B | 89%
4.1.2 Mixed-Level Regular Design
Phadke (1986) described a 32-run experiment aimed at increasing the lifespan of router bits used in a routing process to cut printed wiring boards from a panel. The experiment contains seven two-level factors and two four-level qualitative factors, which are denoted by A through J with the exclusion of I. See Table S2 and Table S3 in the supplementary materials for details of the factors, design matrix, and data. For the two four-level qualitative factors D and E, we adopt a convenient coding scheme offered by Wu and Hamada (2021), which generates effects that can be interpreted as the average difference between pairs of levels. The main effects of factor D are labeled as $D_1$, $D_2$, and $D_3$, and those of factor E as $E_1$, $E_2$, and $E_3$.
This is a regular design and therefore, one can write down all the 15 aliasing relationships. Wu and Hamada (2021) used effect hierarchy and heredity principles to identify five significant effects. Note that the last aliased pair could not be disentangled because both candidate effects obey heredity. However, using the physical understanding of the routing process, they argued that one of the aliased effects is unlikely to be significant because the four spindles of the routing machine are synchronized, and they therefore chose the other.
On the other hand, as summarized in Table 3, HiGarrote automatically identifies the five effects without using any physical understanding of the routing process. In addition, it also selects a few more effects. In terms of computational time, HiGarrote takes 6.38 seconds to analyze the experiment. As mentioned before, since hierNet and RAMP do not generate interactions that are suitable for this experiment, their performance is not reported here.
Method | Model | $R^2$
---|---|---|
Wu and Hamada (2021) | the five effects identified using heredity and physical knowledge | 62%
HiGarrote | the same five effects plus a few additional effects | 90%
4.2 Nonregular Designs
4.2.1 Two-Level Nonregular Design
Hunter et al. (1982) used a 12-run Plackett-Burman design to investigate the effects of seven two-level factors on the fatigue life of weld-repaired castings. The seven factors are denoted by capital letters A through G. The details of the factors, experimental design, and data are given in Table S4 and Table S5 of the supplementary materials. According to the analysis conducted by Hunter et al. (1982), the main effects of factors D and F were found to be significant, but the $R^2$ of the model is only 59%. In a very influential work, Hamada and Wu (1992) reanalyzed the experiment using an iterative stepwise regression strategy that incorporates effect heredity (stepH). They found that by just adding the two-factor interaction FG to the model, the $R^2$ increases to 92%. They also found that if we remove the main effect D from the model, the $R^2$ only decreases to 89%, which implies that the effect of D is negligible. Thus, they claimed that the effects F and FG are active.
For hierNet, where the model is selected by leave-one-out cross-validation, the two important effects F and FG can be successfully identified; however, it also includes too many insignificant effects in the model. On the other hand, we found that RAMP performs well in this experiment, selecting the same model as the one from stepH. However, an issue with RAMP is that its algorithm includes quadratic effects such as $A^2$ and $B^2$. Since there are only two levels in the experiment, these quadratic effects reduce to the intercept and will not be selected, though they add unnecessary computation. GDSARM, using the settings provided in Singh and Stufken (2024), successfully identified the effects F and FG. However, the algorithm includes several hyperparameters that may require manual tuning for optimal performance.
HiGarrote identifies five effects: F, FG, D, G, DG, with an R² of 96%. Compared to the model selected by stepH, the inclusion of the effects D, G, and DG increases the R² by only 7%. Although HiGarrote selects three insignificant effects, their estimates are small compared to those of the important effects. The comparisons are summarized in Table 4, with the numbers in parentheses indicating the effect estimates. The computational time is also reported in the table. HiGarrote is the second slowest among the methods considered in this example, but 0.26 seconds will not be a computational burden for the analyst!
Method | Model | R² | Runtime |
---|---|---|---|
Hunter et al., (1982) | D, F | 59% | – |
stepH | F, FG | 89% | .13 sec |
hierNet | , , , , , , , , , , , , | 100% | .08 sec |
RAMP | F, FG | 89% | .06 sec |
GDSARM | F, FG | 89% | .33 sec |
HiGarrote | F, FG, D, G, DG | 96% | .26 sec |
4.2.2 Mixed-Level Nonregular Design
Hamada and Wu, (1992) analyzed an 18-run experiment designed to study blood glucose readings of a clinical testing device. The experiment contains one two-level factor and seven three-level quantitative factors, denoted by A through H. The details of the factors, design matrix, and data are given in Table S6 and Table S7 of the supplementary materials. Since the experiment contains three-level quantitative factors, we can entertain quadratic terms and their interactions. The interactions include four components, i.e., linear-by-linear, linear-by-quadratic, quadratic-by-linear, and quadratic-by-quadratic. So, we consider the search for active effects among 15 main effects (one linear effect for the two-level factor, plus a linear and a quadratic effect for each of the seven three-level factors) and 98 interactions. In the following comparison, the subscripts l and q indicate the linear and quadratic terms, respectively.
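The candidate-set sizes quoted above (15 main effects and 98 interactions) can be verified by direct enumeration; the effect labels below are illustrative:

```python
from itertools import combinations

# Factor A is two-level (a single linear effect); B..H are three-level
# quantitative factors, each with a linear (l) and a quadratic (q) effect.
effects = {"A": ["A"]}
effects.update({f: [f + "l", f + "q"] for f in "BCDEFGH"})

main_effects = [e for terms in effects.values() for e in terms]

# Interactions cross the component effects of two distinct factors,
# giving the l-by-l, l-by-q, q-by-l, and q-by-q components where available.
interactions = [u + ":" + v
                for f, g in combinations(effects, 2)
                for u in effects[f] for v in effects[g]]

print(len(main_effects), len(interactions))  # → 15 98
```

The 98 breaks down as 7 x 2 = 14 interactions involving the two-level factor A and C(7, 2) x 4 = 84 among the seven three-level factors.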
Analysis carried out by Hamada and Wu, (1992) using stepH chooses the effects , , and . The model achieves an R² of only 68%. On the other hand, Chipman et al., (1997) using BayesH identified , , , and , with an R² of 86%. We observed that removing from the model only marginally reduces the R² to 85%, suggesting that the effect of is negligible.
It is not ideal to use hierNet and RAMP to analyze this experiment because they construct interactions by crossing input variables. If we treat factors A to H as input variables, the interactions between linear and quadratic effects would not be entertained. On the other hand, if we treat all linear and quadratic effects as input variables, interactions between effects of the same factor, such as , would be included, potentially leading to the selection of cubic effects. Although treating all linear and quadratic effects as input variables may initially seem reasonable, our analysis revealed that neither hierNet nor RAMP selected any effects under this setting. This suggests that both methods are unsuitable for analyzing mixed-level designs.
For HiGarrote, we found that it identifies more significant effects than the existing strategies. The model selected by HiGarrote contains eight effects, including , , , , , , , , with an R² of 97%. In the selected model, all the effects except and are significant. In terms of computational time, it takes 1.40 seconds for HiGarrote to analyze the experiment. The results are summarized in Table 5.
Method | Model | R² |
---|---|---|
stepH | , , | 68% |
BayesH | , , , | 86% |
HiGarrote | , , , , , , , , | 97% |
4.3 Screening Designs
4.3.1 Definitive Screening Design
Jones and Lanzerath, (2021) described a 21-run experiment aimed at creating a special resin for car vents designed to prevent moisture accumulation within a car’s housing. The experiment is carried out using a definitive screening design (DSD) with nine continuous factors, labeled A through J (excluding I). Details of the factors and the data are provided in Table S8 and Table S9 of the supplementary materials. The response is taken as the logarithm of Impurity.
According to the analysis carried out by Jones and Lanzerath, (2021), the linear effects A and F and their interaction AF are identified as significant, with an R² of 93%. In addition to these three effects, HiGarrote identifies five more, four of which are significant. The R² increases to 99%. In terms of computational time, it takes 1.24 seconds for HiGarrote to analyze the experiment. Since both hierNet and RAMP can handle the full quadratic model, their performance can be examined here. It turns out that hierNet does not perform well in this experiment because too many effects are selected: in total, its model contains 16 effects, comprising 7 linear effects and 9 interactions. On the other hand, RAMP identifies only F. Since F is highly important, RAMP’s model is good, with an R² of 88%. The results are summarized in Table 6. To keep the table concise, the model selected by hierNet is not reported.
Method | Model | R² |
---|---|---|
Jones and Lanzerath, (2021) | A, F, AF | 93% |
RAMP | F | 88% |
HiGarrote | , , , , , , , , | 99% |
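The difficulty of this screening problem can be seen by enumerating the full quadratic candidate set for the nine continuous factors, a quick count sketch (labels follow the paper's A-through-J-excluding-I convention):

```python
from itertools import combinations

factors = list("ABCDEFGHJ")  # nine continuous factors; I is excluded
linear = factors
quadratic = [f + "^2" for f in factors]
two_fi = [a + b for a, b in combinations(factors, 2)]  # two-factor interactions

n_candidates = len(linear) + len(quadratic) + len(two_fi)
print(n_candidates)  # → 54
```

So the full quadratic model has 9 + 9 + 36 = 54 candidate effects to be screened with only 21 runs, which is why a selection method that over-includes (like hierNet here) or relies on a single dominant effect (like RAMP here) can struggle.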
4.3.2 Supersaturated Design
Lin, (1993) employed a half fraction of a 28-run Plackett-Burman design to analyze an experiment aimed at developing an epoxy adhesive system, as described by Williams, (1968). The resulting design matrix contains 14 runs and 24 factors. Since factors 13 and 16 were assigned to the same column in the original design matrix, only factor 13 is listed here. The design matrix and data are given in Table S10 of the supplementary materials. Since supersaturated designs are generally used for factor screening, we focus only on the main effects. Therefore, we exclude hierNet, RAMP, and GDSARM in this example, as these methods are designed to identify models that encompass quadratic effects and two-factor interactions. Although HiGarrote is also intended for models with hierarchical effects, we apply it in this example to show its generality. The results are summarized in Table 7. We can see that HiGarrote identifies the same set of factors as Lin, (1993) and Chipman et al., (1997). In terms of computational time, HiGarrote takes about 5.80 seconds to perform the analysis, most of which is spent on optimizing the 24 hyperparameters of the correlation matrix and the noise ratio.
Method | Model | R² |
---|---|---|
Lin, (1993) | , , , , | 97% |
BayesH | , , , | 95% |
HiGarrote | , , , , | 97% |
5 Conclusion
Regression analysis of experimental data requires special care for two main reasons: (i) the data size is small, and (ii) there is a need to estimate higher-order effects that satisfy the effect hierarchy and heredity principles. We have proposed a modified version of the nonnegative garrote method, called HiGarrote, that incorporates the hierarchical relationships among the effects. A major innovation in this new technique is a carefully devised approach to obtain an initial estimate for the nonnegative garrote. For this purpose, we used generalized ridge regression with the ridge parameters chosen based on a Gaussian process prior on the underlying function. The main computational cost of our method comes from the fitting of the Gaussian process, which increases at the rate of O(n³), where n is the size of the data. However, since n is small in experiments, this is not of much concern.
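The two-stage idea can be illustrated with a generic sketch: a generalized ridge fit supplies the initial estimates, which then rescale the model matrix for a nonnegative garrote solved by projected gradient descent. Everything here (the random design, true coefficients, a single common ridge parameter, and the penalty mu) is hypothetical; in HiGarrote the ridge parameters are derived from the Gaussian process prior rather than fixed by hand:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical small experiment: 12 runs, 8 candidate effects, 2 truly active.
n, p = 12, 8
X = rng.choice([-1.0, 1.0], size=(n, p))
beta_true = np.array([3.0, 0, 0, 2.0, 0, 0, 0, 0])
y = X @ beta_true + rng.normal(0, 0.5, n)

# Step 1: initial estimate via generalized ridge regression,
# beta0 = (X'X + Lambda)^{-1} X'y, with an illustrative common ridge parameter.
lam = np.full(p, 1.0)
beta0 = np.linalg.solve(X.T @ X + np.diag(lam), X.T @ y)

# Step 2: nonnegative garrote. Scale each column by its initial estimate and
# find nonnegative shrinkage factors c minimizing
#   ||y - Z c||^2 + mu * sum(c),   c >= 0,
# by projected gradient descent (step size 1/L for the gradient's Lipschitz L).
Z = X * beta0
mu = 2.0
c = np.ones(p)
step = 0.5 / np.linalg.norm(Z.T @ Z, 2)
for _ in range(5000):
    grad = 2 * Z.T @ (Z @ c - y) + mu
    c = np.maximum(c - step * grad, 0.0)

beta_garrote = c * beta0
print(np.round(beta_garrote, 2))
```

Columns whose initial estimates are small carry little signal after rescaling, so the penalty drives their shrinkage factors to zero; this is how the garrote performs variable selection and why a good initial estimate is critical.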
We have applied our method to many different kinds of experiments reported in the literature and found that it works well. Its most attractive feature is that it is very general and requires no manual tuning. Thus, it can be used to automate the analysis of experiments. We plan to provide an implementation in an R package. Some practitioners may still prefer to analyze their experiments manually, but even then, HiGarrote could be tried so that the accuracy of their analyses can be quickly verified.
Another novelty of the proposed method is that it makes a connection to Gaussian process modeling, which is widely applied in the analysis of computer experiments. Practitioners have so far stayed away from Gaussian process regression in physical experiments because they felt that such a complex nonparametric regression model is unnecessary for the small, noise-corrupted data from experiments. Nevertheless, we found Gaussian process modeling to be useful, at least for obtaining a good initial estimate of the linear regression parameters. One may then ask why not simply use Gaussian process regression in the analysis of physical experiments instead of sticking with linear regression. The reason is that linear regression does a better job of identifying significant effects that are interpretable, whereas variable selection with Gaussian process regression is not trivial (Linkletter et al., 2006). However, some recent advances in global sensitivity analysis (Huang and Joseph, 2024) offer promising alternatives in this direction, which need more investigation.
Appendix: Proof of (20)
Suppose that there are qualitative three-level factors and their model matrices are encoded according to Helmert coding. Then, the model matrix for factor is
Given that each factor is represented by two dummy variables, and based on the model matrix, these dummy variables are expressed as and . Then, the two correlation matrices are
According to the product correlation structure, the correlation matrix of factor is
where is the Hadamard product. Then,
Now using (17), the marginal distributions of ’s can be obtained from (16). This gives (20).
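A small numerical companion to the construction above, assuming the standard (unnormalized) Helmert contrasts for a three-level factor and an illustrative Gaussian correlation for each dummy variable; the exact scaling and correlation form depend on the elided equations (16) and (17) and may differ in the paper:

```python
import numpy as np

# Standard Helmert coding for one three-level factor: the three levels map to
# two dummy variables u1 and u2 via the rows of H (scaling is illustrative).
H = np.array([[-1.0, -1.0],
              [ 1.0, -1.0],
              [ 0.0,  2.0]])
u1, u2 = H[:, 0], H[:, 1]

def corr(u, theta):
    """Illustrative Gaussian correlation between levels of one dummy variable."""
    d = u[:, None] - u[None, :]
    return np.exp(-theta * d ** 2)

R1, R2 = corr(u1, 0.5), corr(u2, 0.5)
# Product correlation structure: the factor's correlation matrix is the
# Hadamard (elementwise) product of the per-dummy-variable correlations.
R = R1 * R2
print(np.round(R, 3))
```

The resulting R is a valid 3 x 3 correlation matrix (unit diagonal, symmetric), consistent with taking the Hadamard product of the two correlation matrices in the proof.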
Supplementary Materials
The file contains the details of the factors and their levels, design matrix, and data for the examples in Section 4.
Disclosure of Interest
No potential conflict of interest was reported by the authors.
Data Availability Statement
The data are provided in the supplementary materials.
Funding
This research is supported by U.S. National Science Foundation grant DMS-2310637.
References
- Bien et al., (2013) Bien, J., Taylor, J., and Tibshirani, R. (2013). A Lasso for Hierarchical Interactions. Annals of Statistics, 41(3):1111–1141.
- Box and Meyer, (1986) Box, G. and Meyer, D. (1986). An Analysis for Unreplicated Experiments. Technometrics, 28:11–18.
- Box et al., (1978) Box, G. E., Hunter, S., and Hunter, W. (1978). Statistics for Experimenters. John Wiley & Sons, New York.
- Breiman, (1995) Breiman, L. (1995). Better Subset Regression Using the Nonnegative Garrote. Technometrics, 37(4):373–384.
- Candes and Tao, (2007) Candes, E. and Tao, T. (2007). The Dantzig Selector: Statistical Estimation When p Is Much Larger Than n. The Annals of Statistics, 35(6):2313–2351.
- Chipman, (1996) Chipman, H. (1996). Bayesian Variable Selection With Related Predictors. Canadian Journal of Statistics, 24(1):17–36.
- Chipman et al., (1997) Chipman, H., Hamada, M., and Wu, C. F. J. (1997). A Bayesian Variable-Selection Approach for Analyzing Designed Experiments With Complex Aliasing. Technometrics, 39(4):372–381.
- Choi et al., (2010) Choi, N. H., Li, W., and Zhu, J. (2010). Variable Selection With the Strong Heredity Constraint and Its Oracle Property. Journal of the American Statistical Association, 105(489):354–364.
- Efron et al., (2004) Efron, B., Hastie, T., Johnstone, I., and Tibshirani, R. (2004). Least Angle Regression. The Annals of Statistics, 32(2):407–499.
- Fan and Li, (2001) Fan, J. and Li, R. (2001). Variable Selection via Nonconcave Penalized Likelihood and Its Oracle Properties. Journal of the American Statistical Association, 96(456):1348–1360.
- George and McCulloch, (1993) George, E. I. and McCulloch, R. E. (1993). Variable Selection via Gibbs Sampling. Journal of the American Statistical Association, 88(423):881–889.
- Golub et al., (1979) Golub, G. H., Heath, M., and Wahba, G. (1979). Generalized Cross-Validation as a Method for Choosing a Good Ridge Parameter. Technometrics, 21(2):215–223.
- Hamada and Wu, (1992) Hamada, M. and Wu, C. F. J. (1992). Analysis of Designed Experiments With Complex Aliasing. Journal of Quality Technology, 24(3):130–137.
- Hao et al., (2018) Hao, N., Feng, Y., and Zhang, H. H. (2018). Model Selection for High-Dimensional Quadratic Regression via Regularization. Journal of the American Statistical Association, 113(522):615–625.
- Harville, (1998) Harville, D. A. (1998). Matrix Algebra From a Statistician’s Perspective. Springer-Verlag, New York.
- Hoerl and Kennard, (1970) Hoerl, A. E. and Kennard, R. W. (1970). Ridge Regression: Biased Estimation for Nonorthogonal Problems. Technometrics, 12(1):55–67.
- Huang and Joseph, (2024) Huang, C. and Joseph, V. R. (2024). Factor Importance Ranking and Selection Using Total Indices. arXiv preprint arXiv:2401.00800.
- Hunter et al., (1982) Hunter, G. B., Hodi, F. S., and Eagar, T. W. (1982). High Cycle Fatigue of Weld Repaired Cast Ti-6Al-4V. Metallurgical Transactions A, 13(9):1589–1594.
- Ishwaran and Rao, (2014) Ishwaran, H. and Rao, J. S. (2014). Geometry and Properties of Generalized Ridge Regression in High Dimensions. Contemp. Math, 622:81–93.
- Jones and Lanzerath, (2021) Jones, B. and Lanzerath, M. (2021). A Novel Application of a Definitive Screening Design: A Case Study. Quality Engineering, 33(3):563–569.
- Joseph, (2006) Joseph, V. R. (2006). A Bayesian Approach to the Design and Analysis of Fractionated Experiments. Technometrics, 48(2):219–229.
- Joseph and Delaney, (2007) Joseph, V. R. and Delaney, J. D. (2007). Functionally Induced Priors for the Analysis of Experiments. Technometrics, 49(1):1–11.
- Joseph et al., (2015) Joseph, V. R., Gul, E., and Ba, S. (2015). Maximum Projection Designs for Computer Experiments. Biometrika, 102(2):371–380.
- Lenth, (1989) Lenth, R. V. (1989). Quick and Easy Analysis of Unreplicated Factorials. Technometrics, 31(4):469–473.
- Li et al., (2006) Li, X., Sudarsanam, N., and Frey, D. D. (2006). Regularities in Data From Factorial Experiments. Complexity, 11(5):32–45.
- Lin, (1993) Lin, D. K. J. (1993). A New Class of Supersaturated Designs. Technometrics, 35(1):28–31.
- Linkletter et al., (2006) Linkletter, C., Bingham, D., Hengartner, N., Higdon, D., and Ye, K. Q. (2006). Variable Selection for Gaussian Process Models in Computer Experiments. Technometrics, 48(4):478–490.
- Montgomery, (2017) Montgomery, D. C. (2017). Design and Analysis of Experiments. John Wiley & Sons.
- Ockuly et al., (2017) Ockuly, R. A., Weese, M. L., Smucker, B. J., Edwards, D. J., and Chang, L. (2017). Response Surface Experiments: A Meta-Analysis. Chemometrics and Intelligent Laboratory Systems, 164:64–75.
- Phadke, (1986) Phadke, M. S. (1986). Design Optimization Case Studies. AT&T Technical Journal, 65(2):51–68.
- Raghavarao and Altan, (2003) Raghavarao, D. and Altan, S. (2003). A Heuristic Analysis of Highly Fractionated Factorial Experiments. Metrika, 58:185–191.
- Santner et al., (2018) Santner, T. J., Williams, B. J., and Notz, W. I. (2018). The Design and Analysis of Computer Experiments. Springer, New York.
- Singh and Stufken, (2024) Singh, R. and Stufken, J. (2024). Factor Selection in Screening Experiments by Aggregation Over Random Models. Computational Statistics & Data Analysis, 194:107940.
- Svanberg, (2002) Svanberg, K. (2002). A Class of Globally Convergent Optimization Methods Based on Conservative Convex Separable Approximations. SIAM Journal on Optimization, 12(2):555–573.
- Tibshirani, (1996) Tibshirani, R. (1996). Regression Shrinkage and Selection via the Lasso. Journal of the Royal Statistical Society Series B: Statistical Methodology, 58(1):267–288.
- Vazquez et al., (2021) Vazquez, A. R., Schoen, E. D., and Goos, P. (2021). A Mixed Integer Optimization Approach for Model Selection in Screening Experiments. Journal of Quality Technology, 53(3):243–266.
- Williams, (1968) Williams, K. R. (1968). Designed Experiments. Rubber Age, 100:65–71.
- Wu and Hamada, (2021) Wu, C. F. J. and Hamada, M. S. (2021). Experiments: Planning, Analysis, and Optimization, 3rd Edition. Wiley.
- Xiong, (2010) Xiong, S. (2010). Some Notes on the Nonnegative Garrote. Technometrics, 52(3):349–361.
- Yuan et al., (2007) Yuan, M., Joseph, V. R., and Lin, Y. (2007). An Efficient Variable Selection Approach for Analyzing Designed Experiments. Technometrics, 49(4):430–439.
- Yuan et al., (2009) Yuan, M., Joseph, V. R., and Zou, H. (2009). Structured Variable Selection and Estimation. The Annals of Applied Statistics, 3(4):1738–1757.
- Yuan and Lin, (2006) Yuan, M. and Lin, Y. (2006). Model Selection and Estimation in Regression with Grouped Variables. Journal of the Royal Statistical Society Series B: Statistical Methodology, 68(1):49–67.
- Yuan and Lin, (2007) Yuan, M. and Lin, Y. (2007). On the Non-Negative Garrotte Estimator. Journal of the Royal Statistical Society Series B: Statistical Methodology, 69(2):143–161.
- Zou, (2006) Zou, H. (2006). The Adaptive Lasso and Its Oracle Properties. Journal of the American Statistical Association, 101(476):1418–1429.