Multiple Imputation using Chained Random Forests
An R package for multiple imputation using chained random forests.
Implemented methods can handle missing data in mixed types of variables
using prediction-based or node-based conditional distributions constructed
from random forests. For prediction-based imputation, continuous variables
can be imputed either from the empirical distribution of out-of-bag
prediction errors of the forest or under a normality assumption on those
prediction errors, while categorical variables are imputed from predicted
probabilities. For node-based imputation, the conditional distribution can
be formed either from the predicting nodes of the forest or from its
proximity measures. More details of the statistical methods can be found
in Hong et al. (2020)
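The prediction-based, empirical-error approach described above can be illustrated outside the package. The following is a minimal base-R sketch of the idea only, not this package's API: fit a forest on the observed cases, collect out-of-bag prediction errors, then impute each missing value as the forest prediction plus a randomly drawn out-of-bag error.

```r
# Sketch of empirical OOB-error imputation for one continuous variable.
# Uses the 'randomForest' package; variable names here are illustrative.
library(randomForest)

set.seed(1)
n <- 200
x <- matrix(rnorm(n * 3), n, 3)        # predictors
y <- x[, 1] + rnorm(n)                 # target with artificial missingness
miss <- sample(n, 20)                  # indices treated as missing
obs  <- setdiff(seq_len(n), miss)

fit <- randomForest(x[obs, ], y[obs])
oob_err <- y[obs] - fit$predicted      # $predicted holds OOB predictions
pred <- predict(fit, x[miss, ])
imputed <- pred + sample(oob_err, length(miss), replace = TRUE)
```

Drawing errors from the out-of-bag residuals, rather than returning the bare prediction, preserves the residual variability that single-prediction imputation would otherwise understate.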
Causal Effect Random Forest of Interaction Trees
Fits a Causal Effect Random Forest of Interaction Trees (CERFIT), a modification of the Random Forest algorithm in which each split is chosen to maximize subgroup treatment heterogeneity. This allows the method to estimate an individualized treatment effect for each observation in either randomized controlled trial (RCT) or observational data. For more information see L. Li, R. A. Levine, and J. Fan (2022)
Random Forests Model Selection and Performance Evaluation
Utilities for Random Forest model selection, class balance correction, significance testing, cross-validation, and partial dependence plots.
Movement to Behaviour Inference using Random Forest
Predicts behaviour from movement characteristics, using direct observations and random forests, for the analysis of movement data in ecology. From movement information (speed, bearing...) the model predicts the observed behaviour (movement, foraging...) using a random forest; the model can then extrapolate behavioural information to movement data recorded without direct observation of behaviour. The specificity of this method lies in deriving multiple predictor variables from the movement data over a range of temporal windows. This procedure captures as much information as possible on the changes and variations of movement and lets the random forest algorithm work to its best capacity. The method is very generic and applicable to any dataset that pairs movement data with observations of behaviour.
Multiple Imputation Using MICE and Random Forest
Functions to impute using random forests under fully conditional specification (multivariate imputation by chained equations). The methods are described in Shah et al. (2014)
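Packages of this kind typically plug into the 'mice' framework by supplying `mice.impute.*` functions that are selected through the `method` argument. The sketch below assumes this package's `mice.impute.rfcont` method for continuous variables (the method name is taken on that assumption) and runs it on the all-numeric `nhanes` example data shipped with 'mice':

```r
# Sketch: random-forest imputation within the mice chained-equations loop.
library(mice)
library(CALIBERrfimpute)   # assumed to provide mice.impute.rfcont

data(nhanes)               # small mice example dataset with missing values
imp <- mice(nhanes, method = "rfcont", m = 5, printFlag = FALSE)
completed <- complete(imp, 1)   # first of the m completed datasets
```

Because the imputation model is a random forest rather than a parametric regression, nonlinearities and interactions among the predictors are handled without being specified explicitly.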
Oblique Decision Random Forest for Classification and Regression
The oblique decision tree (ODT) uses linear combinations of predictors as partitioning variables in a decision tree. The Oblique Decision Random Forest (ODRF) is an ensemble of multiple ODTs generated by feature bagging. The Oblique Decision Boosting Tree (ODBT) applies feature bagging during the training of ODT-based boosting trees and combines multiple such trees into an ensemble. All three methods can be used for classification and regression, and ODT and ODRF serve as supplements to the classical CART of Breiman (1984)
Fast Serializable Random Forests Based on 'ranger'
An updated implementation of the R package 'ranger' by Wright et al. (2017)
Approximate Bayesian Computation via Random Forests
Performs Approximate Bayesian Computation (ABC) model choice and parameter inference via random forests.
Pudlo P., Marin J.-M., Estoup A., Cornuet J.-M., Gautier M. and Robert C. P. (2016)
Stepwise Predictive Variable Selection for Random Forest
An introduction to several novel predictive variable selection methods for random forests. They combine stepwise algorithms with predictive accuracy and various variable importance measures: averaged variable importance (AVI) and its knowledge-informed extensions, KIAVI and KIAVI2. For details of the variable selection methods, please see: Li, J., Siwabessy, J., Huang, Z. and Nichol, S. (2019)
A Toolbox for Conditional Inference Trees and Random Forests
Additions to the 'party' and 'partykit' packages: tools for the interpretation of forests (surrogate trees, prototypes, etc.), feature selection (see Gregorutti et al. (2017)