Bayesian Penalized Quantile Regression
Bayesian regularized quantile regression utilizing sparse priors to
impose exact sparsity leads to efficient Bayesian shrinkage estimation, variable
selection and statistical inference. In this package, we have implemented robust
Bayesian variable selection with spike-and-slab priors under high-dimensional
linear regression models (Fan et al. (2024)).
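At the core of any quantile-regression model, Bayesian or not, is the check (pinball) loss. A minimal Python sketch of that loss, purely illustrative and unrelated to this package's interface:

```python
import numpy as np

def check_loss(u, tau):
    """Quantile check (pinball) loss: rho_tau(u) = u * (tau - 1{u < 0})."""
    u = np.asarray(u, dtype=float)
    return u * (tau - (u < 0))

# At tau = 0.5 the loss reduces to |u| / 2, so the median minimizes it;
# other tau values tilt the loss and target other quantiles.
u = np.array([-2.0, -1.0, 0.5, 3.0])
print(check_loss(u, 0.5))   # values: 1.0, 0.5, 0.25, 1.5
```

In the Bayesian formulation this loss corresponds to an asymmetric Laplace working likelihood, on top of which the spike-and-slab prior drives coefficients exactly to zero.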
Choice Item Response Theory
Jointly model the accuracy of cognitive responses and item choices
within a Bayesian hierarchical framework as described by Culpepper and
Balamuta (2015).
Model Selection in Multivariate Longitudinal Data Analysis
An efficient Gibbs sampling algorithm is developed for Bayesian multivariate longitudinal data analysis, with a focus on selecting important elements of the generalized autoregressive matrix. It provides posterior samples and parameter estimates. In addition, estimates of several information criteria, such as the Akaike information criterion (AIC), Bayesian information criterion (BIC), and deviance information criterion (DIC), and of prediction accuracy measures, such as the marginal predictive likelihood (MPL) and the mean squared prediction error (MSPE), are provided for model selection.
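The criteria listed above have standard closed forms given log-likelihood values. A hedged sketch of AIC, BIC, DIC, and MSPE (textbook formulas, not this package's code):

```python
import numpy as np

def aic(loglik, k):
    """Akaike information criterion: -2 log L + 2k (k = number of parameters)."""
    return -2.0 * loglik + 2.0 * k

def bic(loglik, k, n):
    """Bayesian information criterion: -2 log L + k log n (n = sample size)."""
    return -2.0 * loglik + k * np.log(n)

def dic(loglik_at_post_mean, mean_loglik):
    """Deviance information criterion with p_D = 2 (log L(theta_bar) - mean log L)."""
    p_d = 2.0 * (loglik_at_post_mean - mean_loglik)
    return -2.0 * loglik_at_post_mean + 2.0 * p_d

def mspe(y, y_pred):
    """Mean squared prediction error."""
    return np.mean((np.asarray(y) - np.asarray(y_pred)) ** 2)

print(aic(-100.0, 3))                  # 206.0
print(round(bic(-100.0, 3, 100), 3))   # 213.816
```

Lower values indicate a better trade-off between fit and complexity in each case.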
Utilising Normalisation Constant Optimisation via Edge Removal (UNCOVER)
Model data with a suspected clustering structure (either in
covariate space, regression space, or both) using a Bayesian product model
with a logistic regression likelihood. Observations are represented
graphically and clusters are formed through various edge removals or
additions. Cluster quality is assessed through the log Bayesian evidence of
the overall model, which is estimated using either a Sequential Monte Carlo
sampler or a suitable transformation of the Bayesian Information Criterion
as a fast approximation of the former. The internal Iterated Batch
Importance Sampling scheme follows Chopin (2002).
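The BIC transformation mentioned above is the Schwarz approximation to the log evidence. A minimal sketch of the general idea, assumed here rather than taken from UNCOVER's implementation:

```python
import numpy as np

def log_evidence_bic(max_loglik, k, n):
    """Schwarz approximation: log p(y | M) ~ max log-lik - (k / 2) log n."""
    return max_loglik - 0.5 * k * np.log(n)

# Compare two hypothetical clusterings by approximate log evidence
# (all numbers illustrative):
m1 = log_evidence_bic(max_loglik=-120.0, k=4, n=200)   # one cluster, 4 params
m2 = log_evidence_bic(max_loglik=-105.0, k=8, n=200)   # two clusters, 8 params
print(m2 - m1)   # > 0 here, so the richer clustering is favoured
```

The approximation is cheap because it needs only a maximized log-likelihood, which is why it can stand in for the Sequential Monte Carlo estimate.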
Contrast-Based Bayesian Network Meta Analysis
A function that facilitates fitting three types of models
for contrast-based Bayesian Network Meta-Analysis. The first is the model
described in Lu and Ades (2006).
Plotting for Bayesian Models
Plotting functions for posterior analysis, MCMC diagnostics,
prior and posterior predictive checks, and other visualizations
to support the applied Bayesian workflow advocated in
Gabry, Simpson, Vehtari, Betancourt, and Gelman (2019).
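A posterior predictive check of the kind these plots support can be sketched in a few lines of Python; the data and "posterior draws" below are synthetic stand-ins, and none of this is bayesplot's API:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 'observed' data and synthetic posterior draws for a normal model.
y = rng.normal(1.0, 2.0, size=200)
mu_draws = rng.normal(y.mean(), 0.1, size=500)
sd_draws = np.abs(rng.normal(y.std(), 0.1, size=500))

# One replicate dataset per posterior draw, then compare a test statistic
# T(y) = max(y) against its posterior predictive distribution.
y_rep = rng.normal(mu_draws[:, None], sd_draws[:, None], size=(500, 200))
ppp = np.mean(y_rep.max(axis=1) >= y.max())
print(ppp)   # a posterior predictive p-value; values near 0 or 1 flag misfit
```

Plotting the observed statistic against the histogram of replicated statistics is exactly the visual version of this check.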
Additive Model for Ordinal Data using Laplace P-Splines
Additive proportional odds model for ordinal data using Laplace P-splines. The combination of Laplace approximations and P-splines enables fast and flexible inference in a Bayesian framework. Specific approximations are proposed to account for the asymmetry in the marginal posterior distributions of non-penalized parameters. For more details, see Lambert and Gressani (2023).
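The P-spline machinery underlying this approach penalizes differences between adjacent B-spline coefficients. A generic numpy sketch of that penalty matrix, not the package's code:

```python
import numpy as np

def difference_penalty(K, order=2):
    """P-spline penalty P = D'D, where D takes order-th differences of the
    K spline coefficients; the penalty term is (lambda / 2) * theta' P theta."""
    D = np.diff(np.eye(K), n=order, axis=0)   # (K - order) x K difference matrix
    return D.T @ D

P = difference_penalty(5)
print(P.shape)          # (5, 5)
print(P.sum(axis=1))    # rows sum to 0: constant coefficient vectors go unpenalized
```

With a second-order penalty, linear trends in the coefficients are also unpenalized, which is what makes the fit flexible yet smooth.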
Multivariate (Dynamic) Generalized Additive Models
Fit Bayesian Dynamic Generalized Additive Models to multivariate observations. Users can build nonlinear State-Space models that can incorporate semiparametric effects in observation and process components, using a wide range of observation families. Estimation is performed using Markov Chain Monte Carlo with Hamiltonian Monte Carlo in the software 'Stan'. References: Clark & Wells (2023).
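A toy version of the latent structure such models assume: an AR(1) state evolving over time, observed as counts through a log link. This is a simulation sketch only, unrelated to mvgam's actual interface:

```python
import numpy as np

rng = np.random.default_rng(1)
T, phi, sigma = 100, 0.8, 0.3   # series length, AR coefficient, process noise sd

# Latent AR(1) trend (the process component), then counts observed
# through a log link (the observation component).
z = np.zeros(T)
for t in range(1, T):
    z[t] = phi * z[t - 1] + rng.normal(0.0, sigma)
y = rng.poisson(np.exp(0.5 + z))
print(y[:10])   # a simulated count series
```

In the full model the trend and any smooth covariate effects are estimated jointly by MCMC rather than fixed as here.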
Bayesian Distributed Lag Model Fitting for Binary and Count Response Data
Tools for fitting Bayesian Distributed Lag Models (DLMs) to longitudinal count or binary response data. Count data are fit using negative binomial regression and binary data using quantile regression. The contributions of the lags are fit via B-splines. In addition, predictor inclusion uncertainty is inferred. Multinomial models are not supported. Based on Dempsey and Wyse (2025).
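The lag structure described above can be sketched by writing the lag weights as a linear combination of a small basis; here a polynomial basis stands in for the B-splines the package uses, and all numbers are illustrative:

```python
import numpy as np

L = 6                                    # maximum lag (assumed for illustration)
lags = np.arange(L + 1) / L              # scaled lag index in [0, 1]
B = np.vander(lags, 3, increasing=True)  # (L+1) x 3 basis: 1, l, l^2
beta = np.array([0.5, -0.4, 0.1])        # illustrative basis coefficients
theta = B @ beta                         # smooth weight for each lag

# Contribution of one predictor's lagged values to the linear predictor:
x_lagged = np.ones(L + 1)                # x_t, x_{t-1}, ..., x_{t-L}
eta = x_lagged @ theta
print(theta.round(3))                    # lag weights decay smoothly
```

Fitting beta instead of one free weight per lag is what keeps the lag profile smooth and the parameter count small.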
A Shiny Application for End-to-End Bayesian Decision Network Analysis and Web-Deployment
A Shiny application for learning Bayesian Decision Networks from data. This package can be used for probabilistic reasoning (in the observational setting), causal inference (in the presence of interventions) and learning policy decisions (in the Decision Network setting). Functionalities include end-to-end implementations for data-preprocessing, structure-learning, exact inference, approximate inference, extending the learned structure to Decision Networks and policy optimization using statistically rigorous methods such as bootstraps, resampling, ensemble-averaging and cross-validation. In addition to Bayesian Decision Networks, it also features correlation networks, community-detection, graph visualizations, graph exports and web-deployment of the learned models as Shiny dashboards.