Found 2835 packages in 0.08 seconds

pqrBayes — by Cen Wu, 3 months ago

Bayesian Penalized Quantile Regression

Bayesian regularized quantile regression utilizing two major classes of shrinkage priors (spike-and-slab priors and the horseshoe family of priors) leads to efficient Bayesian shrinkage estimation, variable selection and valid statistical inference. In this package, we have implemented robust Bayesian variable selection with spike-and-slab priors under high-dimensional linear regression models (Fan et al. (2024); Ren et al. (2023)) and regularized quantile varying coefficient models (Zhou et al. (2023)). In particular, valid robust Bayesian inference under both models in the presence of heavy-tailed errors can be established on finite samples. Additional models with spike-and-slab priors include robust Bayesian group LASSO and robust binary Bayesian LASSO (Fan and Wu (2025)). In addition, robust sparse Bayesian regression with the horseshoe family of priors (horseshoe, horseshoe+ and regularized horseshoe) has also been implemented and yields valid inference results under heavy-tailed model errors (Fan et al. (2025)). The Markov chain Monte Carlo (MCMC) algorithms of the proposed and alternative models are implemented in C++.
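
A minimal usage sketch in R, assuming the main fitting function shares the package name and that the quantile level is set through a quant argument (both are assumptions; consult the reference manual for the actual signature):

library(pqrBayes)
set.seed(1)
n <- 100; p <- 50
x <- matrix(rnorm(n * p), n, p)            # high-dimensional predictors
y <- x[, 1] - 2 * x[, 2] + rt(n, df = 2)   # sparse signal with heavy-tailed errors
fit <- pqrBayes(x, y, quant = 0.5)         # hypothetical call: robust median regression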

BAS — by Merlise Clyde, 2 months ago

Bayesian Variable Selection and Model Averaging using Bayesian Adaptive Sampling

Package for Bayesian Variable Selection and Model Averaging in linear models and generalized linear models using stochastic or deterministic sampling without replacement from posterior distributions. Prior distributions on coefficients are from Zellner's g-prior or mixtures of g-priors corresponding to the Zellner-Siow Cauchy priors or the mixture of g-priors from Liang et al (2008) for linear models, or mixtures of g-priors from Li and Clyde (2019) in generalized linear models. Other model selection criteria include AIC, BIC and Empirical Bayes estimates of g. Sampling probabilities may be updated based on the sampled models using sampling without replacement or an efficient MCMC algorithm which samples models using a tree structure of the model space as an efficient hash table. See Clyde, Ghosh and Littman (2010) for details on the sampling algorithms. Uniform priors over all models or beta-binomial prior distributions on model size are allowed, and for large p, truncated priors on the model space may be used to enforce sampling models that are full rank. The user may force variables to always be included, in addition to imposing constraints that higher order interactions are included only if their parents are included in the model. This material is based upon work supported by the National Science Foundation under Division of Mathematical Sciences grant 1106891. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
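
A short sketch of the core workflow with bas.lm(), here using the Zellner-Siow prior, a uniform prior over models and MCMC sampling of the model space (the choices shown are illustrative; see ?bas.lm for the full set of priors and methods):

library(BAS)
fit <- bas.lm(mpg ~ ., data = mtcars,
              prior = "ZS-null",        # Zellner-Siow Cauchy prior on coefficients
              modelprior = uniform(),   # uniform prior over all models
              method = "MCMC")          # sample models rather than enumerate them
summary(fit)   # top models and marginal posterior inclusion probabilities
coef(fit)      # model-averaged coefficient estimates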

ordgam — by Philippe Lambert, 2 years ago

Additive Model for Ordinal Data using Laplace P-Splines

Additive proportional odds model for ordinal data using Laplace P-splines. The combination of Laplace approximations and P-splines enables fast and flexible inference in a Bayesian framework. Specific approximations are proposed to account for the asymmetry in the marginal posterior distributions of non-penalized parameters. For more details, see Lambert and Gressani (2023).
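
A hedged sketch of a call, assuming the exported fitting function is ordgam() and that smooth additive terms are declared with s() as in other GAM packages; the data frame survey_df and its variables are hypothetical:

library(ordgam)
# attitude: an ordered factor response; age: receives a P-spline additive effect
fit <- ordgam(attitude ~ gender + s(age), data = survey_df)
plot(fit)   # inspect the estimated additive terms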

mvgam — by Nicholas J Clark, 2 months ago

Multivariate (Dynamic) Generalized Additive Models

Fit Bayesian Dynamic Generalized Additive Models to multivariate observations. Users can build nonlinear State-Space models that can incorporate semiparametric effects in observation and process components, using a wide range of observation families. Estimation is performed using Markov Chain Monte Carlo with Hamiltonian Monte Carlo in the software 'Stan'. References: Clark & Wells (2023).
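
A sketch of a typical call, using the package's simulator sim_mvgam() to generate multivariate count series and a latent AR(1) process in the trend component (argument details may differ across versions; a working 'Stan' installation is required):

library(mvgam)
sim <- sim_mvgam(T = 100, n_series = 3, family = poisson())  # simulated train/test data
fit <- mvgam(y ~ s(season, bs = "cc"),   # cyclic seasonal smooth in the observation model
             trend_model = AR(),         # latent AR(1) dynamics in the process model
             data = sim$data_train,
             family = poisson())
summary(fit)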

wiseR — by Tavpritesh Sethi, 7 years ago

A Shiny Application for End-to-End Bayesian Decision Network Analysis and Web-Deployment

A Shiny application for learning Bayesian Decision Networks from data. This package can be used for probabilistic reasoning (in the observational setting), causal inference (in the presence of interventions) and learning policy decisions (in the Decision Network setting). Functionalities include end-to-end implementations for data-preprocessing, structure-learning, exact inference, approximate inference, extending the learned structure to Decision Networks and policy optimization using statistically rigorous methods such as bootstraps, resampling, ensemble-averaging and cross-validation. In addition to Bayesian Decision Networks, it also features correlation networks, community-detection, graph visualizations, graph exports and web-deployment of the learned models as Shiny dashboards.
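
Since the package is delivered as a Shiny application, usage amounts to a single launch call; the sketch below assumes the exported launcher shares the package name (check the package index if it differs):

library(wiseR)
wiseR()   # opens the dashboard in the browser for interactive learning and deployment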

DiscreteDLM — by Daniel Dempsey, a year ago

Bayesian Distributed Lag Model Fitting for Binary and Count Response Data

Tools for fitting Bayesian Distributed Lag Models (DLMs) to longitudinal count or binary response data. Count data are fit using negative binomial regression and binary data using quantile regression. The contribution of the lags is modeled via B-splines. In addition, the package infers predictor inclusion uncertainty. Multinomial models are not supported. Based on Dempsey and Wyse (2025).
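
To make the distributed-lag idea concrete: lagged copies of a predictor are projected onto a B-spline basis over the lag index, so the lag contribution becomes a smooth curve estimated with only a few coefficients. A generic base-R illustration of that construction (not the package's API):

library(splines)
set.seed(1)
x <- rnorm(120)             # exposure series
L <- 12                     # maximum lag
lagmat <- embed(x, L + 1)   # row t holds x_t, x_{t-1}, ..., x_{t-L}
B <- bs(0:L, df = 4)        # B-spline basis over the lag index 0..L
Z <- lagmat %*% B           # reduced design matrix: one column per basis function
# Z can now enter a negative binomial or quantile regression as ordinary covariates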

ra4bayesmeta — by Manuela Ott, 2 years ago

Reference Analysis for Bayesian Meta-Analysis

Functionality for performing a principled reference analysis in the Bayesian normal-normal hierarchical model used for Bayesian meta-analysis, as described in Ott, Plummer and Roos (2021). Computes a reference posterior induced by a minimally informative improper reference prior for the between-study (heterogeneity) standard deviation. Determines additional proper anti-conservative (and conservative) prior benchmarks. Includes functions for reference analyses at both the posterior and the prior level, which, given the data, quantify the informativeness of a heterogeneity prior of interest relative to the minimally informative reference prior and the proper prior benchmarks. The functions operate on data sets which are compatible with the 'bayesmeta' package.
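
The package operates on 'bayesmeta'-compatible inputs, i.e. study-level effect estimates with standard errors. A hedged sketch of that input format (the values are illustrative, and the reference-analysis functions themselves are deliberately not guessed here; see the package index for the exported names):

# study-level log odds ratios and their standard errors, as used by 'bayesmeta'
df <- data.frame(y     = c(-0.57, -0.35, -0.12),   # illustrative effect estimates
                 sigma = c(0.33, 0.18, 0.41))      # illustrative standard errors
# df$y and df$sigma are then passed to the package's reference-analysis functions
# to obtain the reference posterior and the prior/posterior benchmarks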

sparseGAM — by Ray Bai, 5 years ago

Sparse Generalized Additive Models

Fits sparse frequentist GAMs (SF-GAM) for continuous and discrete responses in the exponential dispersion family with the group lasso, group smoothly clipped absolute deviation (SCAD), and group minimax concave (MCP) penalties. Also fits sparse Bayesian generalized additive models (SB-GAM) with the spike-and-slab group lasso (SSGL) penalty of Bai et al. (2021). B-spline basis functions are used to model the sparse additive functions. Stand-alone functions for group-regularized negative binomial regression, group-regularized gamma regression, and group-regularized regression in the exponential dispersion family with the SSGL penalty are also provided.
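
A hedged sketch, assuming the Bayesian fitter is exported as SBGAM() as the SB-GAM acronym suggests (function name and signature are assumptions; verify against the reference manual):

library(sparseGAM)
set.seed(1)
n <- 200; p <- 20
X <- matrix(runif(n * p), n, p)
y <- 3 * sin(2 * pi * X[, 1]) + rnorm(n)          # only the first covariate matters
fit <- SBGAM(y = y, X = X, family = "gaussian")   # GAM with the SSGL penalty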

PLMIX — by Cristina Mollica, 8 months ago

Bayesian Analysis of Finite Mixture of Plackett-Luce Models

Fit finite mixtures of Plackett-Luce models for partial top rankings/orderings within the Bayesian framework. It provides MAP point estimates via the EM algorithm and posterior MCMC simulations via Gibbs sampling. It also provides the MLE as a special case of the noninformative Bayesian analysis with vague priors. In addition to inferential techniques, the package assists the other fundamental phases of a model-based analysis for partial rankings/orderings, by including functions for data manipulation, simulation, descriptive summary, model selection and goodness-of-fit evaluation. The main references on the methods are Mollica and Tardella (2017) and Mollica and Tardella (2014).
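
A hedged sketch of the two estimation routes described above, assuming the exported functions are mapPLMIX() for EM-based MAP estimation and gibbsPLMIX() for posterior simulation, with the ordering data passed as a matrix (names and arguments per my reading of the package; verify against the manual):

library(PLMIX)
set.seed(1)
# 50 rankers, each giving a full ordering of K = 4 items (one permutation per row)
ord <- t(replicate(50, sample(1:4)))
map_fit   <- mapPLMIX(pi_inv = ord, K = 4, G = 2)     # MAP estimate via EM, 2 components
gibbs_fit <- gibbsPLMIX(pi_inv = ord, K = 4, G = 2)   # posterior draws via Gibbs sampling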

DPCD — by Sam Morrissette, 3 months ago

Dirichlet Process Clustering with Dissimilarities

A Bayesian hierarchical model for clustering dissimilarity data using the Dirichlet process. The latent configuration of objects and the number of clusters are automatically inferred during the fitting process. The package supports multiple models which are available to detect clusters of various shapes and sizes using different covariance structures. Additional functions are included to ensure adequate model fits through prior and posterior predictive checks.
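
A hedged sketch of preparing the input: the package consumes dissimilarities, which can come from any source; here dist() builds a Euclidean dissimilarity matrix from numeric features (the fitting call itself is deliberately not guessed; see the package index for the exported function):

# build a dissimilarity matrix from standardized numeric features
D <- as.matrix(dist(scale(iris[, 1:4])))
# D is then passed to the package's Dirichlet-process clustering routine, which
# infers cluster assignments and the number of clusters automatically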