Approximate False Positive Rate Control in Selection Frequency for Random Forest
Approximate false positive rate control in selection frequency for
random forest using the methods described by Ender Konukoglu and Melanie Ganz (2014).
Modeling and Map Production using Random Forest and Related Stochastic Models
Creates sophisticated models of training data and validates the models with an independent test set, cross-validation, or Out Of Bag (OOB) predictions on the training data. Creates graphs and tables of the model validation results. Applies these models to GIS .img files of predictors to create detailed prediction surfaces. Handles large predictor files for map making by reading the .img files in chunks and writing the prediction for each data chunk to the output .txt file before reading the next chunk of data.
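A minimal sketch of the chunked-prediction idea described above, using the generic randomForest package on a plain-text predictor table instead of ModelMap's own functions and GIS .img inputs; the file names, the 'response' column, and the chunk size are hypothetical:

    library(randomForest)

    ## Fit a model on the training data (a 'response' column plus predictors is assumed).
    train <- read.csv("training_data.csv")      # hypothetical training file
    rf <- randomForest(response ~ ., data = train)

    ## Predict a large predictor file in chunks so it never has to fit in memory at once.
    con <- file("predictors.csv", open = "r")   # hypothetical large predictor file
    header <- strsplit(readLines(con, n = 1), ",")[[1]]
    chunk_size <- 10000

    repeat {
      lines <- readLines(con, n = chunk_size)
      if (length(lines) == 0) break
      chunk <- read.csv(textConnection(lines), header = FALSE, col.names = header)
      preds <- predict(rf, newdata = chunk)
      ## Append this chunk's predictions to the output before reading the next chunk.
      write.table(preds, "predictions.txt", append = TRUE,
                  row.names = FALSE, col.names = FALSE)
    }
    close(con)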
Joint Random Forest (JRF) for the Simultaneous Estimation of Multiple Related Networks
Simultaneous estimation of multiple related networks.
Performing Monte Carlo Expectation Maximization Random Forest Imputation for Biological Data
Performs missing value imputation for biological data using the random forest algorithm; the imputation aims to keep the original mean and standard deviation consistent after imputation.
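This package's own interface is not shown here; as a rough, hedged illustration of random-forest-based imputation in general, the missForest package can stand in (the toy data and missingness rate are made up):

    library(missForest)

    set.seed(1)
    ## Toy data frame with 10% of values set to NA.
    dat     <- iris[, 1:4]
    dat_mis <- prodNA(dat, noNA = 0.1)   # prodNA() ships with missForest

    ## Iterative random forest imputation; the OOB error estimates imputation quality.
    imp <- missForest(dat_mis, ntree = 100)
    head(imp$ximp)       # imputed data
    imp$OOBerror         # out-of-bag imputation error (NRMSE for continuous data)

    ## Compare column means and standard deviations before and after imputation,
    ## since preserving these summaries is the stated aim above.
    rbind(original = colMeans(dat), imputed = colMeans(imp$ximp))
    rbind(original = apply(dat, 2, sd), imputed = apply(imp$ximp, 2, sd))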
Innovative Complex Split Procedures in Random Forests Through Candidate Split Sampling
Implementations of three diversity forest (DF) (Hornung, 2022,
R Interface for the 'H2O' Scalable Machine Learning Platform
R interface for 'H2O', the scalable open source machine learning platform that offers parallelized implementations of many supervised and unsupervised machine learning algorithms such as Generalized Linear Models (GLM), Gradient Boosting Machines (including XGBoost), Random Forests, Deep Neural Networks (Deep Learning), Stacked Ensembles, Naive Bayes, Generalized Additive Models (GAM), ANOVA GLM, Cox Proportional Hazards, K-Means, PCA, ModelSelection, Word2Vec, as well as a fully automatic machine learning algorithm (H2O AutoML).
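A minimal sketch of the R interface for one of the algorithms listed above (a distributed random forest); the local cluster, the iris data, and the split ratio are illustrative only:

    library(h2o)

    h2o.init()                         # start or connect to a local H2O cluster

    ## Move an R data frame into H2O and fit a distributed random forest.
    hf     <- as.h2o(iris)
    splits <- h2o.splitFrame(hf, ratios = 0.8, seed = 42)

    rf <- h2o.randomForest(x = 1:4, y = "Species",
                           training_frame   = splits[[1]],
                           validation_frame = splits[[2]],
                           ntrees = 100, seed = 42)

    h2o.performance(rf, valid = TRUE)  # validation metrics
    h2o.predict(rf, splits[[2]])       # predictions on the hold-out frame

    h2o.shutdown(prompt = FALSE)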
Nested Cross-Validation to Compare Cox-PH, Cox-Lasso, Survival Random Forests
Performs repeated nested cross-validation for Cox Proportional Hazards, Cox Lasso, Survival Random Forest, and their ensemble. Returns internally validated concordance index, time-dependent area under the curve, Brier score, calibration slope, and statistical testing of whether the non-linear ensemble outperforms the baseline Cox model. In this way, it helps researchers quantify the gain from using a more complex survival model, or justify its redundancy. Equally, it shows the performance value of the non-linear and interaction terms, and may highlight the need for further feature transformation. Further details can be found in Shamsutdinova, Stamate, Roberts, & Stahl (2022) "Combining Cox Model and Tree-Based Algorithms to Boost Performance and Preserve Interpretability for Health Outcomes".
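A stripped-down sketch of the comparison this package automates, reduced to a single train/test split (rather than repeated nested cross-validation) and Harrell's concordance index, using the survival and randomForestSRC packages in place of the package's own interface; the veteran data and covariates are illustrative only:

    library(survival)
    library(randomForestSRC)

    set.seed(1)
    data(veteran, package = "survival")
    idx   <- sample(nrow(veteran), floor(0.7 * nrow(veteran)))
    train <- veteran[idx, ]
    test  <- veteran[-idx, ]

    ## Baseline Cox proportional hazards model.
    cox <- coxph(Surv(time, status) ~ age + karno + celltype, data = train)
    test$cox_risk <- predict(cox, newdata = test, type = "lp")

    ## Survival random forest on the same predictors.
    srf <- rfsrc(Surv(time, status) ~ age + karno + celltype, data = train)
    test$srf_risk <- predict(srf, newdata = test)$predicted  # ensemble mortality (higher = riskier)

    ## Harrell's C-index on the test set for both risk scores.
    concordance(Surv(time, status) ~ cox_risk, data = test, reverse = TRUE)$concordance
    concordance(Surv(time, status) ~ srf_risk, data = test, reverse = TRUE)$concordance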
A Laboratory for Recursive Partytioning
A computational toolbox for recursive partitioning.
The core of the package is ctree(), an implementation of
conditional inference trees which embed tree-structured
regression models into a well-defined theory of conditional
inference procedures. This non-parametric class of regression
trees is applicable to all kinds of regression problems, including
nominal, ordinal, numeric, censored, and multivariate response
variables, with arbitrary measurement scales of the covariates.
Based on conditional inference trees, cforest() provides an
implementation of Breiman's random forests. The function mob()
implements an algorithm for recursive partitioning based on
parametric models (e.g. linear models, GLMs or survival
regression) employing parameter instability tests for split
selection. Extensible functionality for visualizing tree-structured
regression models is available. The methods are described in
Hothorn et al. (2006)
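A short sketch of the three entry points named above, ctree(), cforest(), and mob(), fitted to the built-in airquality data; the formulas and control settings are illustrative only:

    library(party)

    airq <- na.omit(airquality)

    ## Conditional inference tree for a numeric response.
    ct <- ctree(Ozone ~ ., data = airq)
    plot(ct)

    ## Random forest built from conditional inference trees.
    cf <- cforest(Ozone ~ ., data = airq,
                  controls = cforest_unbiased(ntree = 500, mtry = 2))
    head(predict(cf, OOB = TRUE))

    ## Model-based recursive partitioning: a linear model of Ozone on Wind,
    ## partitioned with respect to Temp and Solar.R.
    mb <- mob(Ozone ~ Wind | Temp + Solar.R, data = airq, model = linearModel)
    print(mb)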
Transformation Trees and Forests
Recursive partytioning of transformation models with
corresponding random forest for conditional transformation models
as described in 'Transformation Forests' (Hothorn and Zeileis, 2021,
RF Variable Importance for Arbitrary Measures
Computes the random forest variable importance (VIMP) for the conditional inference random forest (cforest) of the 'party' package. Includes a function (varImp) that computes the VIMP for arbitrary measures from the 'measures' package. For calculating the VIMP with respect to the measures accuracy and AUC, two extra functions exist (varImpACC and varImpAUC).
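A hedged sketch of the implied workflow: fit a party::cforest and pass it to varImp() with a measure from the 'measures' package, or use the dedicated helpers for accuracy and AUC. The binary response, the control settings, and the 'measure' argument name are assumptions for illustration:

    library(party)
    library(varImp)

    ## Binary classification task for a conditional inference forest.
    dat       <- na.omit(airquality)
    dat$high  <- factor(dat$Ozone > 60, labels = c("low", "high"))
    dat$Ozone <- NULL

    cf <- cforest(high ~ ., data = dat,
                  controls = cforest_unbiased(ntree = 200, mtry = 2))

    ## Permutation importance with a measure from the 'measures' package
    ## (the 'measure' argument name is an assumption).
    varImp(cf, measure = "multiclass.Brier")

    ## Dedicated helpers for accuracy- and AUC-based importance.
    varImpACC(cf)
    varImpAUC(cf)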