Approximate Bayesian Computation via Random Forests
Performs Approximate Bayesian Computation (ABC) model choice and parameter inference via random forests.
Pudlo P., Marin J.-M., Estoup A., Cornuet J.-M., Gautier M. and Robert C. P. (2016)
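The workflow pairs a reference table of simulated summary statistics with a random forest. A minimal sketch of the model-choice step, assuming the abcrf()/predict() formula interface described in the package documentation (the reference table below is simulated for illustration, and argument names are assumptions); parameter inference follows the same pattern via the package's regression counterpart:
  library(abcrf)
  set.seed(1)
  # hypothetical reference table: a model index plus simulated summary statistics
  reftable <- data.frame(modindex = factor(sample(1:3, 1000, replace = TRUE)),
                         matrix(rnorm(1000 * 10), ncol = 10))
  mc <- abcrf(modindex ~ ., data = reftable, ntree = 500)  # model-choice forest
  obs <- reftable[1, -1]                                   # stand-in "observed" summaries
  predict(mc, obs, training = reftable)                    # posterior model probabilities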
Stepwise Predictive Variable Selection for Random Forest
An introduction to several novel predictive variable selection methods for random forests. They are based on various variable importance measures, including averaged variable importance (AVI) and knowledge-informed AVI (KIAVI and KIAVI2), combined with predictive accuracy in stepwise algorithms. For details of the variable selection methods, please see: Li, J., Siwabessy, J., Huang, Z. and Nichol, S. (2019)
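The stepwise idea can be illustrated generically with the 'randomForest' package (a concept sketch, not this package's own functions): rank predictors by importance averaged over repeated fits, drop the least important one, and stop when out-of-bag error starts to rise.
  library(randomForest)
  set.seed(1)
  dat <- data.frame(y = rnorm(200), matrix(rnorm(200 * 8), ncol = 8))
  preds <- setdiff(names(dat), "y")
  repeat {
    # averaged variable importance (AVI): mean importance over several forests
    avi <- rowMeans(sapply(1:5, function(i)
      importance(randomForest(reformulate(preds, "y"), data = dat))[, 1]))
    rf_full <- randomForest(reformulate(preds, "y"), data = dat)
    cand <- setdiff(preds, names(which.min(avi)))
    if (length(cand) < 2) break
    rf_red <- randomForest(reformulate(cand, "y"), data = dat)
    # keep the reduced predictor set only if OOB error does not increase
    if (tail(rf_red$mse, 1) > tail(rf_full$mse, 1)) break
    preds <- cand
  }
  preds  # retained predictors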
A Toolbox for Conditional Inference Trees and Random Forests
Additions to the 'party' and 'partykit' packages: tools for the interpretation of forests (surrogate trees, prototypes, etc.), feature selection (see Gregorutti et al. (2017)
Variable Importance Measures for Multivariate Random Forests
Calculates two sets of post-hoc variable importance measures for multivariate random forests. The first set of variable importance measures is given by the sum of mean split improvements for splits defined by feature j, measured on user-defined examples (i.e., training or testing samples). The second set of importance measures is calculated on a per-outcome-variable basis as the sum of mean absolute differences of node values for each split defined by feature j, measured on user-defined examples (i.e., training or testing samples). The user can optionally threshold both sets of importance measures to include only splits that are statistically significant as measured using an F-test.
A Unified Framework for Random Forest Prediction Error Estimation
Estimates the conditional error distributions of random forest predictions and common parameters of those distributions, including conditional misclassification rates, conditional mean squared prediction errors, conditional biases, and conditional quantiles, by out-of-bag weighting of out-of-bag prediction errors as proposed by Lu and Hardin (2021). This package is compatible with several existing packages that implement random forests in R.
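A minimal usage sketch, assuming the quantForestError() interface described by Lu and Hardin; the argument names are best-effort guesses, and the forest is grown with keep.inbag = TRUE so that out-of-bag information is available:
  library(randomForest)
  library(forestError)
  set.seed(1)
  train <- data.frame(y = rnorm(300), x1 = rnorm(300), x2 = rnorm(300))
  test  <- data.frame(x1 = rnorm(20), x2 = rnorm(20))
  rf <- randomForest(y ~ x1 + x2, data = train, keep.inbag = TRUE)
  # conditional MSPE, bias and prediction-interval estimates for each test point
  err <- quantForestError(rf, X.train = train[, c("x1", "x2")],
                          X.test = test, Y.train = train$y)
  str(err)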
Significance Level for Random Forest Impurity Importance Scores
Sets a significance level for Random Forest MDI (Mean Decrease in Impurity, Gini or sum of squares) variable importance scores, using an empirical Bayes approach. See Dunne et al. (2022)
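The scores being thresholded are the usual impurity importances from a fitted forest, for example via the 'randomForest' package (the empirical Bayes significance step itself is supplied by this package and is not reproduced here):
  library(randomForest)
  set.seed(1)
  dat <- data.frame(y = factor(sample(0:1, 200, replace = TRUE)),
                    matrix(rnorm(200 * 20), ncol = 20))
  rf <- randomForest(y ~ ., data = dat)
  mdi <- importance(rf, type = 2)  # Mean Decrease in Impurity (Gini) scores
  head(sort(mdi[, 1], decreasing = TRUE))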
ROSE Random Forests for Robust Semiparametric Efficient Estimation
ROSE (RObust Semiparametric Efficient) random forests for robust semiparametric efficient estimation in partially parametric models (containing generalised partially linear models). Details can be found in the paper by Young and Shah (2024)
Bootstrap Stacking of Random Forest Models for Heterogeneous Data
Generates a set of linearly stacked Random Forest models using bootstrap sampling and produces predictions from them. Individual datasets may be heterogeneous (not all samples have full sets of features). Contains support for parallelization, but the user should register their cores before running. This is an extension of the method found in Matlock (2018)
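The underlying idea can be sketched generically with the 'randomForest' package (a concept illustration, not this package's interface, and it ignores the heterogeneous-feature case): fit forests on bootstrap resamples and combine their predictions with a linear stacking layer.
  library(randomForest)
  set.seed(1)
  dat <- data.frame(y = rnorm(300), x1 = rnorm(300), x2 = rnorm(300))
  forests <- lapply(1:5, function(b) {
    idx <- sample(nrow(dat), replace = TRUE)            # bootstrap resample
    randomForest(y ~ x1 + x2, data = dat[idx, ])
  })
  base_pred <- sapply(forests, predict, newdata = dat)  # base-learner predictions
  stack_fit <- lm(dat$y ~ base_pred)                    # linear stacking layer
  head(fitted(stack_fit))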
Explaining and Visualizing Random Forests in Terms of Variable Importance
A set of tools to help explain which variables are most important in a random forest. Various variable importance measures are calculated and visualized in different settings in order to get an idea of how their importance changes depending on our criteria (Hemant Ishwaran, Udaya B. Kogalur, Eiran Z. Gorodeski, Andy J. Minn and Michael S. Lauer (2010)
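A minimal sketch of that workflow, assuming the measure_importance()/plot_multi_way_importance() functions named in this package's documentation (the toy data and the localImp setting are assumptions):
  library(randomForest)
  library(randomForestExplainer)
  set.seed(1)
  dat <- data.frame(y = factor(sample(0:1, 200, replace = TRUE)),
                    matrix(rnorm(200 * 10), ncol = 10))
  rf <- randomForest(y ~ ., data = dat, localImp = TRUE)
  imp <- measure_importance(rf)   # several importance measures per variable
  plot_multi_way_importance(imp)  # compare how rankings change across criteria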
Mixed Effect Random Forests for Small Area Estimation
Mixed Effects Random Forests (MERFs) are a data-driven, nonparametric alternative to current methods of Small Area Estimation (SAE). 'SAEforest' provides functions for the estimation of regionally disaggregated linear and nonlinear indicators using survey sample data. Included procedures facilitate the estimation of domain-level economic and inequality metrics and assess associated uncertainty. Emphasis lies on straightforward interpretation and visualization of results. From a methodological perspective, the package builds on approaches discussed in Krennmair and Schmid (2022)