Methods for learning sparse Bayesian networks and other graphical models from high-dimensional data via sparse regularization. Designed to handle mixed experimental and observational data with thousands of variables and either continuous or discrete observations.
The workhorse behind `sparsebn` is the `sparsebnUtils` package, which provides various S3 classes and methods for representing and manipulating graphs. The basic algorithms are implemented in the `ccdrAlgorithm` and `discretecdAlgorithm` packages.
The main methods for learning graphical models are:

- `estimate.dag` for directed acyclic graphs (Bayesian networks).
- `estimate.precision` for undirected graphs (Markov random fields).
- `estimate.covariance` for covariance matrices.

Currently, estimation of precision and covariance matrices is limited to Gaussian data.
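As a quick sketch of how these methods fit together (assuming the usual `sparsebn` workflow, in which a plain data frame is first wrapped in a `sparsebnData` object; the simulated data frame here is purely illustrative):

```r
library(sparsebn)

# Wrap a plain data frame in a sparsebnData object, the input type
# expected by the estimation methods (here: simulated Gaussian data).
set.seed(1)
df <- data.frame(matrix(rnorm(100 * 10), nrow = 100))
dat <- sparsebnData(df, type = "continuous")

# Estimate a solution path of sparse DAGs (Bayesian networks).
dags <- estimate.dag(dat)

# Since the data are Gaussian, undirected graphs and covariance
# matrices can be estimated from the same object.
prec <- estimate.precision(dat)
covs <- estimate.covariance(dat)
```

Each call returns a solution path indexed by the regularization parameter rather than a single graph, so the output can be inspected at several sparsity levels.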
You can install:

- the latest CRAN version with

  ```r
  install.packages("sparsebn")
  ```

- the latest development version from GitHub with

  ```r
  devtools::install_github(c("itsrainingdata/sparsebn", "itsrainingdata/sparsebnUtils/dev", "itsrainingdata/ccdrAlgorithm/dev", "gujyjean/discretecdAlgorithm"))
  ```
1. Aragam, B. and Zhou, Q. (2015). Concave penalized estimation of sparse Gaussian Bayesian networks. Journal of Machine Learning Research, 16(Nov): 2273-2328.
2. Fu, F., Gu, J., and Zhou, Q. (2014). Adaptive penalized estimation of directed acyclic graphs from categorical data. arXiv: 1403.2310.
3. Aragam, B., Amini, A. A., and Zhou, Q. (2015). Learning directed acyclic graphs with penalized neighbourhood regression. arXiv: 1511.08963.
4. Fu, F. and Zhou, Q. (2013). Learning sparse causal Gaussian networks with experimental intervention: Regularization and coordinate descent. Journal of the American Statistical Association, 108: 288-300.
See the `NEWS.md` file to track changes to the package.
`estimate.dag` now takes an optional logical argument; if this argument is `TRUE`, an adaptive version of the CD algorithm is run for discrete data. This argument is ignored for continuous data.