# Dominance Analysis

Dominance analysis is a method for comparing the relative importance of predictors in multiple regression models, including ordinary least squares, generalized linear models, hierarchical linear models, beta regression and dynamic linear models. The main principles and methods of dominance analysis were introduced by Budescu (1993) and Azen & Budescu (2003) for ordinary least squares regression. Extensions to multivariate regression, logistic regression and hierarchical linear models were later described by Azen & Budescu (2006), Azen & Traxel (2009) and Luo & Azen (2013), respectively.

The dominanceanalysis package implements dominance analysis (Budescu, 1993; Azen & Budescu, 2003, 2006; Azen & Traxel, 2009; Luo & Azen, 2013) for multiple regression models: ordinary least squares, generalized linear models and hierarchical linear models.

Features:

• Provides complete, conditional and general dominance analysis for `lm` (univariate and multivariate), `lmer` and `glm` (family = binomial) models.
• Covariance / correlation matrices can be used as input for OLS dominance analysis, via the `lmWithCov()` and `mlmWithCov()` methods, respectively.
• Multiple criteria can be used as fit indices, which is especially useful for HLM.

# Examples

## Linear regression

We can apply dominance analysis directly to the data using `lm()` (see Azen and Budescu, 2003).

The attitude dataset comprises six predictors of the overall rating given by 35 clerical employees of a large financial organization: complaints, privileges, learning, raises, critical and advancement. The `dominanceAnalysis()` method retrieves all necessary information directly from the lm model.
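A minimal sketch of this workflow (the object names are illustrative):

```r
library(dominanceanalysis)

# Fit the full OLS model: rating predicted by all six variables
lm.attitude <- lm(rating ~ ., data = attitude)

# Run dominance analysis on the fitted model
da.attitude <- dominanceAnalysis(lm.attitude)

# Complete, conditional and general dominance results
print(da.attitude)
summary(da.attitude)
```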

Using the `print()` method on the dominanceAnalysis object, we can see that complaints completely dominates all other predictors, followed by learning (lrnn). The remaining four variables (prvl, rass, crtc, advn) do not show a consistent pattern for complete and conditional dominance.

The `print()` method uses `abbreviate()`, so that complex models can be inspected at a glance.

The `summary()` method provides the average contribution of each variable, which defines general dominance. It also shows the complete dominance analysis matrix, which presents the fit differences between all levels.

To evaluate the robustness of our results, we can use bootstrap analysis (Azen and Budescu, 2006).

We applied a bootstrap analysis using the `bootDominanceAnalysis()` method with R² as the fit index and 100 replications. For precise results, at least 1000 replications are recommended.
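A sketch of the bootstrap step (object names and the seed are illustrative):

```r
library(dominanceanalysis)

lm.attitude <- lm(rating ~ ., data = attitude)

# Bootstrap the dominance analysis; R is the number of resamples.
# 100 is used here for speed; at least 1000 is recommended in practice.
set.seed(1234)
bda.attitude <- bootDominanceAnalysis(lm.attitude, R = 100)
summary(bda.attitude)
```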

The `summary()` method presents the results of the bootstrap analysis. Dij shows the original dominance value, mDij the mean of Dij over the bootstrap samples, and SE.Dij its standard error. Pij is the proportion of bootstrap samples in which i dominates j, Pji the proportion in which j dominates i, and Pnoij the proportion in which no dominance can be asserted. Rep is the proportion of samples in which the original dominance result is replicated.

We can see that the complete dominance of complaints over all other variables is fairly robust (Dij almost equal to mDij, with a small SE), in contrast to learning (Dij differs from mDij, with a larger SE).

Another way to perform dominance analysis is from a correlation or covariance matrix. As an example, we use the ability.cov matrix, which relates five specific skills that might explain general intelligence (general). The largest average contribution belongs to the predictor reading (0.152). Nevertheless, the `summary()` output at level 1 shows that picture (0.125) dominates reading (0.077) in the 'vocab' submodel.
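A sketch of the matrix-based workflow, assuming the covariance matrix is first converted to a correlation matrix (object names are illustrative):

```r
library(dominanceanalysis)

# ability.cov ships with R; convert its covariance matrix to correlations
r.ability <- cov2cor(ability.cov$cov)

# lmWithCov() builds an OLS model directly from a correlation/covariance matrix
lm.ability <- lmWithCov(general ~ picture + blocks + maze + reading + vocab,
                        r.ability)

da.ability <- dominanceAnalysis(lm.ability)
summary(da.ability)
```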

## Hierarchical Linear Models

For hierarchical linear models fitted with lme4, you must provide a null model (see Luo and Azen, 2013).

As an example, we use the npk dataset, which records a classical N, P, K (nitrogen, phosphate, potassium) factorial experiment on the growth of peas, conducted in 6 blocks.
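A sketch of the HLM workflow (object names are illustrative):

```r
library(lme4)
library(dominanceanalysis)

# Full model: yield predicted by N, P and K, with a random intercept per block
lmer.npk  <- lmer(yield ~ N + P + K + (1 | block), data = npk)

# Null model: random intercept only, used as the baseline for the fit indices
lmer.null <- lmer(yield ~ 1 + (1 | block), data = npk)

da.npk <- dominanceAnalysis(lmer.npk, null.model = lmer.null)
print(da.npk)
```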

Using the `print()` method, we can see that the random effects are modeled as a constant (1 | block).

The fit indices used in the analysis were rb.r2.1 (R&B R₁²: Level-1 variance explained by predictors), rb.r2.2 (R&B R₂²: Level-2 variance explained by predictors), sb.r2.1 (S&B R₁²: proportional reduction in error predicting scores at Level-1) and sb.r2.2 (S&B R₂²: proportional reduction in error predicting cluster means at Level-2). Using the rb.r2.1 and sb.r2.1 indices, which capture the influence of predictors on Level-1 variance, nitrogen clearly dominates both potassium and phosphate, and potassium dominates phosphate.

## Logistic regression

Dominance analysis can be used in logistic regression (see Azen and Traxel, 2009).

As an example, we use the esoph dataset, which contains data from a case-control study of (o)esophageal cancer in Ille-et-Vilaine, France.
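A sketch of the logistic-regression workflow, modeling the case/control counts with a binomial GLM (object names are illustrative):

```r
library(dominanceanalysis)

# Predictors: age group, alcohol consumption and tobacco consumption
glm.esoph <- glm(cbind(ncases, ncontrols) ~ agegp + alcgp + tobgp,
                 data = esoph, family = binomial())

da.esoph <- dominanceAnalysis(glm.esoph)
print(da.esoph)
```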

Looking at the standard `summary()` output for the glm model, we can see that the linear effect of each variable is significant (p < 0.05 for agegp.L, alcgp.L and tobgp.L), as is the quadratic effect of age (p < 0.05 for agegp.Q). Even so, it is hard to identify which variable is most important for predicting esophageal cancer.

We performed dominance analysis on this dataset and the results are shown below. The fit indices were r2.m (McFadden's measure), r2.cs (Cox and Snell's measure), r2.n (Nagelkerke's measure) and r2.e (Estrella's measure). For all fit indices, we can conclude that age and alcohol completely dominate tobacco, while age shows general dominance over both alcohol and tobacco.

Then, we performed a bootstrap analysis. Using McFadden's measure (r2.m), the bootstrap dominance of age over tobacco, and of alcohol over tobacco, has a standard error (SE.Dij) of 0 and reproducibility (Rep) equal to 1, so these results are fairly robust at all levels. The dominance of age over alcohol is not easily reproduced and requires more research.

## Set of predictors

Budescu (1993) shows that dominance analysis can also be applied to groups, or sets, of inseparable predictors.
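A sketch of this approach, assuming the `terms` parameter of `dominanceAnalysis()` accepts formula-style strings that group predictors into sets (the particular groupings here are illustrative):

```r
library(dominanceanalysis)

lm.attitude <- lm(rating ~ ., data = attitude)

# Each string defines one set of predictors that enters or
# leaves the submodels as a single, inseparable unit
da.sets <- dominanceAnalysis(lm.attitude,
                             terms = c("complaints",
                                       "privileges+learning",
                                       "raises+critical+advancement"))
print(da.sets)
```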

## Installation

You can install the development version of dominanceanalysis from GitHub with:
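A sketch of both install routes (the GitHub repository path is assumed):

```r
# Stable version from CRAN:
install.packages("dominanceanalysis")

# Development version from GitHub (repository path assumed):
# install.packages("devtools")
devtools::install_github("clbustos/dominanceanalysis")
```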

## Authors

• Claudio Bustos Navarrete: Creator and maintainer
• Filipa Coutinho Soares: Documentation and testing

## References

• Budescu, D. V. (1993). Dominance analysis: A new approach to the problem of relative importance of predictors in multiple regression. Psychological Bulletin, 114(3), 542-551. https://doi.org/10.1037/0033-2909.114.3.542

• Azen, R., & Budescu, D. V. (2003). The dominance analysis approach for comparing predictors in multiple regression. Psychological Methods, 8(2), 129-148. https://doi.org/10.1037/1082-989X.8.2.129

• Azen, R., & Budescu, D. V. (2006). Comparing Predictors in Multivariate Regression Models: An Extension of Dominance Analysis. Journal of Educational and Behavioral Statistics, 31(2), 157-180. https://doi.org/10.3102/10769986031002157

• Azen, R., & Traxel, N. (2009). Using Dominance Analysis to Determine Predictor Importance in Logistic Regression. Journal of Educational and Behavioral Statistics, 34(3), 319-347. https://doi.org/10.3102/1076998609332754

• Luo, W., & Azen, R. (2013). Determining Predictor Importance in Hierarchical Linear Models Using Dominance Analysis. Journal of Educational and Behavioral Statistics, 38(1), 3-31. https://doi.org/10.3102/1076998612458319

# dominanceanalysis 1.0.0

• First official version. Code coverage of 99% and complete documentation of all methods.
• Added vignette "Exploring predictors' importance in binomial logistic regressions", by Filipa Coutinho Soares.
• Added dominanceMatrix() as a generic for matrix, data.frame and dominanceAnalysis methods
• New retrieval methods for dominanceAnalysis object: getFits(), averageContribution() and contributionByLevel()
• Removed support for nlme (it never worked well)

# dominanceanalysis 0.1.2

• Added parameter `terms` on dominanceAnalysis, to manually define sets of variables
• Updated documentation, thanks to Filipa Coutinho Soares
• Tests for all relevant functions

# dominanceanalysis 0.1.1

• Allowed multivariate regression with covariance matrix
• Added documentation for multivariate methods
• Added bootstrap analysis for average contribution
• Resolved bug on bootstrap analysis
• Resolved bug on glm.da.fit

# dominanceanalysis 0.1.0

• Complete support for OLS, logistic regression and HLM
• Support for bootstrap analysis
