
exact2x2 — by Michael P. Fay, a year ago

Exact Tests and Confidence Intervals for 2x2 Tables

Calculates conditional exact tests (Fisher's exact test, Blaker's exact test, or exact McNemar's test) and unconditional exact tests (including score-based tests on differences in proportions, ratios of proportions, and odds ratios, and Boschloo's test) with appropriate matching confidence intervals, and provides power and sample size calculations. Gives melded confidence intervals for the binomial case (Fay et al., 2015). Gives the boundary-optimized rejection region test (Gabriel et al., 2018), an unconditional exact test for the situation where the controls are all expected to fail. Gives confidence intervals compatible with exact McNemar's or sign tests (Fay and Lumbard, 2021). For a review of these kinds of exact tests see Fay and Hunsberger (2021).
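
A minimal sketch of the conditional-test interface, assuming the exact2x2() and mcnemar.exact() functions and hypothetical cell counts:

    library(exact2x2)

    # Hypothetical 2x2 table: rows = groups, columns = success/failure
    tab <- matrix(c(8, 2, 3, 7), nrow = 2, byrow = TRUE)

    # Blaker's exact test with its matching confidence interval;
    # tsmethod = "central" or "minlike" select other two-sided tests
    exact2x2(tab, tsmethod = "blaker")

    # Exact McNemar's test for paired binary data
    mcnemar.exact(matrix(c(20, 5, 12, 13), nrow = 2))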

confoundr — by John W. Jackson, 6 years ago

Diagnostics for Confounding of Time-Varying and Other Joint Exposures

Implements three covariate-balance diagnostics for time-varying confounding and selection bias in complex longitudinal data, as described in Jackson (2016) and Jackson (2019). Diagnostic 1 assesses measured confounding/selection bias, diagnostic 2 assesses exposure-covariate feedback, and diagnostic 3 assesses residual confounding/selection bias after inverse probability weighting or propensity score stratification. All diagnostics appropriately account for exposure history, can be adapted to assess a particular depth of covariate history, and can be implemented in right-censored data. Balance assessments can be obtained for all times, selected times, or averaged across person-time. The balance measures are reported as tables or plots. These diagnostics can be applied to the study of multivariate exposures including time-varying exposures, direct effects, interaction, and censoring.
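
The package's own interface is not shown here; as a conceptual base-R illustration of diagnostic 1 (balance of a time-varying covariate by current exposure, within strata of exposure history), with hypothetical long-format data:

    # Hypothetical long-format data: one row per person-time
    set.seed(1)
    d <- data.frame(
      id   = rep(1:200, each = 2),
      time = rep(0:1, 200),
      hist = sample(c("H0", "H1"), 400, replace = TRUE), # prior exposure history
      a    = rbinom(400, 1, 0.5),                        # current exposure
      l    = rnorm(400)                                  # time-varying covariate
    )

    # Standardized mean difference of l by current exposure,
    # within exposure-history strata
    smd <- function(x, g) {
      (mean(x[g == 1]) - mean(x[g == 0])) /
        sqrt((var(x[g == 1]) + var(x[g == 0])) / 2)
    }
    tapply(seq_len(nrow(d)), d$hist, function(i) smd(d$l[i], d$a[i]))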

CausalModels — by Joshua Anderson, 2 years ago

Causal Inference Modeling for Estimation of Causal Effects

Provides an array of statistical models common in causal inference, such as standardization, IP weighting, propensity matching, outcome regression, and doubly robust estimators. Estimates of the average treatment effects from each model are given with the standard error and a 95% Wald confidence interval (Hernan and Robins (2020) <https://www.hsph.harvard.edu/miguel-hernan/causal-inference-book/>).
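
As a hedged illustration of what the standardization estimator does (the parametric g-formula in base R, not the package's own interface), with simulated data:

    # Simulated data: confounder l, binary treatment a, outcome y
    set.seed(2)
    n <- 1000
    l <- rnorm(n)
    a <- rbinom(n, 1, plogis(0.5 * l))
    y <- 1 + a + 0.8 * l + rnorm(n)
    d <- data.frame(y, a, l)

    # Standardization: fit an outcome model, then average predictions
    # with treatment set to 1 versus 0 for everyone
    fit <- glm(y ~ a + l, data = d)
    mean(predict(fit, transform(d, a = 1))) -
      mean(predict(fit, transform(d, a = 0)))  # ATE estimate, close to the true value 1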

lrd — by Nicholas Maxwell, 3 years ago

A Package for Processing Lexical Response Data

'lrd' (lexical response data) is a package for processing cued-recall, free-recall, and sentence responses from memory experiments.
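
A conceptual sketch of the scoring task (plain base R, not the package's functions), using a hypothetical answer key and free-recall responses:

    # Proportion of answer-key words each participant recalled,
    # ignoring case and stray whitespace
    key <- c("apple", "chair", "river", "stone")
    responses <- list(p1 = c("Apple", "stone ", "cloud"),
                      p2 = c("river", "chair", "apple", "stone"))
    sapply(responses, function(r) mean(key %in% tolower(trimws(r))))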

kbal — by Borna Bateni, 25 days ago

Kernel Balancing

Provides a weighting approach that employs kernels to make one group have a similar distribution to another group on covariates. This method matches not only means or marginal distributions but also higher-order transformations implied by the choice of kernel. 'kbal' is applicable to both treatment effect estimation and survey reweighting problems. Based on Hazlett, C. (2020) "Kernel Balancing: A flexible non-parametric weighting procedure for estimating causal effects." Statistica Sinica. <https://www.researchgate.net/publication/299013953_Kernel_Balancing_A_flexible_non-parametric_weighting_procedure_for_estimating_causal_effects/stats>.
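
A minimal sketch, assuming the main kbal() function takes a covariate matrix via allx and a treatment indicator via treatment, and returns weights in a w component:

    library(kbal)

    # Hypothetical covariates and treatment indicator
    set.seed(3)
    X <- matrix(rnorm(500 * 3), ncol = 3)
    d <- rbinom(500, 1, 0.5)

    # Weight controls so their kernel-implied covariate distribution
    # matches the treated group
    kb <- kbal(allx = X, treatment = d)
    w  <- kb$w  # balancing weights for a weighted outcome analysis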

stdReg2 — by Michael C Sachs, a month ago

Regression Standardization for Causal Inference

Contains more modern tools for causal inference using regression standardization. Four general classes of models are implemented: generalized linear models, conditional generalized estimating equation models, Cox proportional hazards models, and shared frailty gamma-Weibull models. Methodological details are described in Sjölander, A. (2016). Also includes functionality for doubly robust estimation for generalized linear models in some special cases, and the ability to implement custom models.
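
A hedged sketch of the generalized linear model case, assuming a standardize_glm() function with family, values, contrasts, and reference arguments:

    library(stdReg2)

    # Simulated data: confounder l, binary exposure x, binary outcome y
    set.seed(4)
    d <- data.frame(l = rnorm(500))
    d$x <- rbinom(500, 1, plogis(d$l))
    d$y <- rbinom(500, 1, plogis(-1 + d$x + 0.5 * d$l))

    # Standardized outcome means under x = 0 and x = 1, and their difference
    standardize_glm(y ~ x + l, data = d, family = "binomial",
                    values = list(x = c(0, 1)),
                    contrasts = "difference", reference = 0)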

causaloptim — by Michael C Sachs, 6 months ago

An Interface to Specify Causal Graphs and Compute Bounds on Causal Effects

When causal quantities are not identifiable from the observed data, it still may be possible to bound these quantities using the observed data. We outline a class of problems for which the derivation of tight bounds is always a linear programming problem and can therefore, at least theoretically, be solved using a symbolic linear optimizer. We extend and generalize the approach of Balke and Pearl (1994) and provide a user-friendly graphical interface for setting up such problems via directed acyclic graphs (DAGs), which only allow for problems within this class to be depicted. The user can then define linear constraints to further refine their assumptions to meet their specific problem, and specify a causal query using a text interface. The program converts this user-defined DAG, query, and constraints, and returns tight bounds. The bounds can be converted to R functions to evaluate them for specific datasets, and to LaTeX code for publication. The methods and proofs of tightness and validity of the bounds are described in a paper by Sachs, Jonzon, Gabriel, and Sjölander (2022).
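
A sketch of the programmatic (non-Shiny) workflow for the classic Balke-Pearl instrumental variable problem, assuming the analyze_graph()/optimize_effect_2() functions and the vertex/edge attributes as recalled from the package documentation:

    library(causaloptim)
    library(igraph)

    # Binary instrument Z, exposure X, outcome Y, unmeasured confounder Ur
    b <- graph_from_literal(Z -+ X, X -+ Y, Ur -+ X, Ur -+ Y)
    V(b)$leftside <- c(1, 0, 0, 0)  # Z sits on the left side of the DAG
    V(b)$latent   <- c(0, 0, 0, 1)  # Ur is unobserved
    V(b)$nvals    <- c(2, 2, 2, 2)  # all variables binary
    E(b)$rlconnect     <- c(0, 0, 0, 0)
    E(b)$edge.monotone <- c(0, 0, 0, 0)

    # Bound the average treatment effect of X on Y
    obj <- analyze_graph(b, constraints = NULL,
                         effectt = "p{Y(X = 1) = 1} - p{Y(X = 0) = 1}")
    optimize_effect_2(obj)  # symbolic tight bounds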

eventglm — by Michael C Sachs, a month ago

Regression Models for Event History Outcomes

A user-friendly, easy-to-understand way of doing event history regression for marginal estimands of interest, including the cumulative incidence and the restricted mean survival, using the pseudo-observation framework for estimation. For a review of the methodology, see Andersen and Pohar Perme (2010) or Sachs and Gabriel (2022). The interface uses the well-known formulation of a generalized linear model and allows for features including plotting of residuals, the use of sampling weights, and corrected variance estimation.
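
A short sketch, assuming the cumincglm() function and the mgus2 data from the 'survival' package:

    library(eventglm)
    library(survival)

    # Model the cumulative incidence of death by 200 months directly,
    # via pseudo-observations, as a GLM-style regression
    fit <- cumincglm(Surv(futime, death) ~ sex + age,
                     time = 200, data = mgus2)
    summary(fit)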

nlpred — by David Benkeser, 5 years ago

Estimators of Non-Linear Cross-Validated Risks Optimized for Small Samples

Improved estimates of non-linear cross-validated risks are obtained using targeted minimum loss-based estimation, estimating equations, and one-step estimation (Benkeser, Petersen, van der Laan, 2019). Cross-validated area under the receiver operating characteristic curve (LeDell, Petersen, van der Laan, 2015) and other metrics are included.
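
A hedged sketch, assuming a cv_auc() function taking an outcome Y, predictors X, a fold count K, and a learner wrapper:

    library(nlpred)

    # Hypothetical binary outcome and predictors
    set.seed(5)
    X <- data.frame(x1 = rnorm(200), x2 = rnorm(200))
    y <- rbinom(200, 1, plogis(X$x1))

    # Small-sample corrected estimates of the cross-validated AUC
    # of a logistic regression learner
    cv_auc(Y = y, X = X, K = 5, learner = "glm_wrapper")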

palaeoverse — by Lewis A. Jones, 6 months ago

Prepare and Explore Data for Palaeobiological Analyses

Provides functionality to support data preparation and exploration for palaeobiological analyses, improving code reproducibility and accessibility. The wider aim of 'palaeoverse' is to bring the palaeobiological community together to establish agreed standards. The package currently includes functionality for data cleaning, binning (time and space), exploration, summarisation and visualisation. Reference datasets (e.g. Geological Time Scales <https://stratigraphy.org/chart>) and auxiliary functions are also provided. Details can be found in Jones et al. (2023).
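
A brief sketch of the time-binning workflow, assuming the time_bins() and bin_time() functions and the bundled 'tetrapods' occurrence dataset:

    library(palaeoverse)

    # Stage-level bins from the Geological Time Scale
    bins <- time_bins(interval = "Phanerozoic", rank = "stage")

    # Assign each fossil occurrence to a bin by the midpoint
    # of its age range
    occ <- bin_time(occdf = tetrapods, bins = bins, method = "mid")
    head(occ)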