Co-Data Learning for Bayesian Additive Regression Trees
Estimate prior variable weights for Bayesian Additive Regression
Trees (BART). These weights correspond to the probabilities of the variables
being selected in the splitting rules of the sum-of-trees.
Weights are estimated using empirical Bayes and external information on
the explanatory variables (co-data).
BART models are fitted using the 'dbarts' R package.
See Goedhart and others (2023).
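For illustration, a minimal sketch of the intended workflow; the EBcoBART() entry point and its argument names (Y, X, CoData) are assumptions, not verified API, so consult the package documentation:

# Sketch only: EBcoBART() and its argument names are assumed.
library(EBcoBART)
set.seed(1)
n <- 100; p <- 10
X <- matrix(rnorm(n * p), n, p)
y <- X[, 1] - X[, 2] + rnorm(n)
# Co-data: one row of external information per explanatory variable,
# e.g. prior evidence scores or a variable grouping.
co_data <- matrix(rbinom(p, 1, 0.5), nrow = p)
fit <- EBcoBART(Y = y, X = X, CoData = co_data)  # assumed signature
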
Bayesian Additive Regression Trees using Bayesian Model Averaging
"BART-BMA Bayesian Additive Regression Trees using Bayesian Model Averaging" (Hernandez B, Raftery A.E., Parnell A.C. (2018)
Bayesian Additive Regression Trees with Stan-Sampled Parametric Extensions
Fits semiparametric linear and multilevel models with nonparametric Bayesian Additive Regression Tree (BART; Chipman, George, and McCulloch (2010)) components.
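A sketch of the mixed parametric/nonparametric interface, assuming a formula syntax in which bart() marks the BART component and lme4-style bars give random effects; verify the exact call against the package documentation:

# Sketch only: the bart() formula wrapper is an assumption here.
library(stan4bart)
set.seed(1)
df <- data.frame(
  x1 = rnorm(300), x2 = rnorm(300), z = rnorm(300),
  g = factor(sample(1:10, 300, replace = TRUE))
)
df$y <- sin(df$x1) + 0.5 * df$z + rnorm(300)
# Linear term for z, random intercept by g, BART term for x1 and x2.
fit <- stan4bart(y ~ bart(x1 + x2) + z + (1 | g), data = df)
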
Iterative Bayesian Additive Regression Trees Descriptor Selection Method
A statistical method based on Bayesian Additive Regression Trees with the
Global Standard Error Permutation Test (BART-G.SE) for descriptor selection
and symbolic regression. It finds a symbolic formula for the regression
function y = f(x), as described in Ye, Senftle, and Li (2023).
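A minimal sketch of the descriptor-selection workflow; the iBART() entry point and its arguments are assumptions based on the description above, not confirmed API:

# Sketch only: iBART() and every argument shown are assumptions.
library(iBART)
set.seed(1)
n <- 100; p <- 8
X <- matrix(rnorm(n * p), n, p)
colnames(X) <- paste0("x", 1:p)
# Response with a simple symbolic ground truth: y = 2*x1 / (1 + x2^2).
y <- 2 * X[, 1] / (1 + X[, 2]^2) + rnorm(n, sd = 0.1)
fit <- iBART(X = X, y = y)  # assumed signature
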
Fit Varying Coefficient Models with Bayesian Additive Regression Trees
Fits linear varying coefficient (VC) models, which assert a linear relationship between an outcome and several covariates but allow that relationship (i.e., the coefficients or slopes in the linear regression) to change as functions of additional variables known as effect modifiers. The coefficient functions are approximated with Bayesian Additive Regression Trees, and a Metropolis-within-Gibbs sampler simulates draws from the posterior over coefficient function evaluations. VC models with independent or repeated observations can be fit. For more details see Deshpande et al. (2024).
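A sketch of the VC setup under the assumption that the fitting function is VCBART() with separate inputs for the covariates X and the effect modifiers Z; all names below are assumptions:

# Sketch only: VCBART() and its argument names are assumptions.
library(VCBART)
set.seed(1)
n <- 250
X <- matrix(rnorm(n * 2), n, 2)   # covariates with varying slopes
Z <- matrix(runif(n * 3), n, 3)   # effect modifiers
# Slopes on X change smoothly with the modifiers Z.
y <- 1 + X[, 1] * sin(2 * pi * Z[, 1]) + X[, 2] * Z[, 2]^2 +
  rnorm(n, sd = 0.25)
fit <- VCBART(Y = y, X = X, Z = Z)  # assumed signature
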
Mixed GAM Computation Vehicle with Automatic Smoothness Estimation
Generalized additive (mixed) models, some of their extensions and
other generalized ridge regression with multiple smoothing
parameter estimation by (Restricted) Marginal Likelihood,
Generalized Cross Validation and similar, or using iterated nested Laplace
approximation for fully Bayesian inference. See Wood (2025).
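For illustration, a typical gam() call with smoothing parameters estimated by REML, using the simulator that ships with the package:

library(mgcv)
set.seed(1)
dat <- gamSim(1, n = 400, dist = "normal", scale = 2)  # built-in test data
# Smoothness selected by REML; method = "GCV.Cp" switches to GCV.
fit <- gam(y ~ s(x0) + s(x1) + s(x2) + s(x3), data = dat, method = "REML")
summary(fit)
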
Bayesian Applied Regression Modeling via Stan
Estimates previously compiled regression models using the 'rstan' package, which provides the R interface to the Stan C++ library for Bayesian estimation. Users specify models via the customary R syntax with a formula and data.frame plus some additional arguments for priors.
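For illustration, the customary formula-plus-data.frame call with explicit prior arguments:

library(rstanarm)
fit <- stan_glm(
  mpg ~ wt + cyl, data = mtcars, family = gaussian(),
  prior = normal(0, 2.5),            # prior on coefficients
  prior_intercept = normal(20, 10),  # prior on the intercept
  chains = 2, iter = 1000, refresh = 0
)
print(fit)
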
Bayesian Variable Selection and Model Choice for Generalized Additive Mixed Models
Bayesian variable selection, model choice, and regularized estimation for (spatial) generalized additive mixed regression models via stochastic search variable selection with spike-and-slab priors.
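A sketch assuming the main fitting function is spikeSlabGAM() with a formula/data interface; verify against the package documentation:

# Sketch only: the spikeSlabGAM() interface shown is assumed.
library(spikeSlabGAM)
set.seed(1)
d <- data.frame(x1 = runif(200), x2 = runif(200), x3 = runif(200))
d$y <- sin(2 * pi * d$x1) + d$x2 + rnorm(200, sd = 0.3)
# Stochastic search variable selection over linear and smooth terms.
fit <- spikeSlabGAM(y ~ x1 + x2 + x3, data = d)
summary(fit)
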
Bayesian Generalized Additive Model Selection
Generalized additive model selection via approximate Bayesian inference is provided. Bayesian mixed model-based penalized splines with spike-and-slab-type coefficient prior distributions are used to facilitate fitting and selection. The approximate Bayesian inference engine options are: (1) Markov chain Monte Carlo and (2) mean field variational Bayes. Markov chain Monte Carlo has better Bayesian inferential accuracy, but requires a longer run-time. Mean field variational Bayes is faster, but less accurate. The methodology is described in He and Wand (2024)
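A sketch based on the description above, assuming gamselBayes() takes the response plus separate matrices of purely linear and potentially nonlinear predictors, with a switch between the two inference engines; the argument and option names are assumptions:

# Sketch only: gamselBayes() argument names below are assumed.
library(gamselBayes)
set.seed(1)
n <- 300
Xlinear  <- matrix(rnorm(n * 2), n, 2)   # candidates for linear effects
Xgeneral <- matrix(runif(n * 3), n, 3)   # candidates for smooth effects
y <- Xlinear[, 1] + sin(2 * pi * Xgeneral[, 1]) + rnorm(n)
# "MCMC" for accuracy, "MFVB" for speed (assumed option names).
fit <- gamselBayes(y = y, Xlinear = Xlinear, Xgeneral = Xgeneral,
                   method = "MCMC")
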
Bayesian Regression Models using 'Stan'
Fit Bayesian generalized (non-)linear multivariate multilevel models
using 'Stan' for full Bayesian inference. A wide range of distributions
and link functions are supported, allowing users to fit -- among others --
linear, robust linear, count data, survival, response times, ordinal,
zero-inflated, hurdle, and even self-defined mixture models all in a
multilevel context. Further modeling options include both theory-driven and
data-driven non-linear terms, auto-correlation structures, censoring and
truncation, meta-analytic standard errors, and quite a few more.
In addition, all parameters of the response distribution can be predicted
in order to perform distributional regression. Prior specifications are
flexible and explicitly encourage users to apply prior distributions that
actually reflect their prior knowledge. Models can easily be evaluated and
compared using several methods assessing posterior or prior predictions.
References: Bürkner (2017)
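For illustration, a multilevel count model using the epilepsy data that ship with the package, with a weakly informative prior on the slopes:

library(brms)
fit <- brm(
  count ~ zAge + zBase * Trt + (1 | patient),  # random intercept per patient
  data = epilepsy, family = poisson(),
  prior = prior(normal(0, 1), class = "b"),
  chains = 2
)
summary(fit)
loo(fit)  # evaluate and compare models via posterior predictions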