One-to-One Feature Matching
Statistical methods to match feature vectors between multiple datasets in a one-to-one fashion. Given a fixed number of classes/distributions, for each unit, exactly one vector of each class is observed without label. The goal is to label the feature vectors using each label exactly once so as to produce the best match across datasets, e.g. by minimizing the variability within classes. Statistical solutions based on empirical loss functions and probabilistic modeling are provided. The 'Gurobi' software and its 'R' interface package are required for one of the package functions (match.2x()) and can be obtained at <https://www.gurobi.com/> (free academic license). For more details, refer to Degras (2022).
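A minimal sketch of the matching problem described above, using the match.2x() function named in the text. The input format (a 3-D array with dimensions feature x class x unit) is an assumption and should be checked against the package help page; the call also requires the 'gurobi' R package and a Gurobi license.

```r
library(matchFeat)
set.seed(1)
# 5 features, 3 classes, 10 units: one unlabeled vector per class per unit
x <- array(rnorm(5 * 3 * 10), dim = c(5, 3, 10))
result <- match.2x(x)  # hypothetical call; see ?match.2x for the exact arguments
```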
Graph/Network Visualization
Build graph/network structures using functions for stepwise addition and deletion of nodes and edges. Work with data available in tables for bulk addition of nodes, edges, and associated metadata. Use graph selections and traversals to apply changes to specific nodes or edges. A wide selection of graph algorithms allows for the analysis of graphs. Visualize the graphs and take advantage of any aesthetic properties assigned to nodes and edges.
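The stepwise node/edge construction described above matches the 'DiagrammeR' API (an assumption, since the entry does not name the package). A minimal sketch:

```r
library(DiagrammeR)

# Build a graph by stepwise addition of nodes and edges
graph <- create_graph() %>%
  add_node(label = "A") %>%
  add_node(label = "B") %>%
  add_edge(from = 1, to = 2)

render_graph(graph)  # visualize, using any assigned aesthetic properties
```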
Stratified Randomized Experiments
Estimate average treatment effects (ATEs) in stratified randomized experiments. 'sreg' supports a wide range of stratification designs, including matched pairs, n-tuple designs, and larger strata with many units, possibly of unequal size across strata. 'sreg' accommodates scenarios with multiple treatments and cluster-level treatment assignments, as well as optimal linear covariate adjustment based on baseline observable characteristics. 'sreg' computes estimators and standard errors based on Bugni, Canay, and Shaikh (2018).
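A sketch of a typical call, assuming the package's main function is also named sreg() and takes outcome, stratum, and treatment vectors; the argument names here are assumptions to be checked against the package documentation.

```r
library(sreg)
# Y: outcomes, S: stratum labels, D: treatment indicators (all length-n vectors);
# X: optional baseline covariates for linear covariate adjustment
fit <- sreg(Y = Y, S = S, D = D, X = X)  # hypothetical signature; see ?sreg
summary(fit)                             # ATE estimates and standard errors
```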
Comprehensive Research Synthesis Tools for Systematic Reviews and Meta-Analysis
Functionalities for facilitating systematic reviews, data
extractions, and meta-analyses. It includes a GUI (graphical user interface)
to help screen the abstracts and titles of bibliographic data; tools to assign
screening effort across multiple collaborators/reviewers and to assess inter-
reviewer reliability; tools to help automate the download and retrieval of
journal PDF articles from online databases; figure and image extractions
from PDFs; web scraping of citations; automated and manual data extraction
from scatter-plot and bar-plot images; PRISMA (Preferred Reporting Items for
Systematic Reviews and Meta-Analyses) flow diagrams; simple imputation tools
to fill gaps in incomplete or missing study parameters; generation of random
effect sizes for Hedges' d, log response ratio, odds ratio, and correlation
coefficients for Monte Carlo experiments; covariance equations for modelling
dependencies among multiple effect sizes (e.g., effect sizes with a common
control); and finally summaries that replicate analyses and outputs from
widely used but no longer updated meta-analysis software (i.e., metawin).
Funding for this package was supported by National Science Foundation (NSF)
grants DBI-1262545 and DEB-1451031. CITE: Lajeunesse, M.J. (2016)
Facilitating systematic reviews, data extraction and meta-analysis with the
metagear package for R. Methods in Ecology and Evolution 7, 323-330.
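A sketch of the abstract-screening workflow described above. effort_distribute() and abstract_screener() are real metagear functions, but the argument names shown are from memory and should be checked against the package help pages.

```r
library(metagear)

# A bibliographic table of candidate studies (normally imported from a database export)
refs <- data.frame(TITLE    = c("Study A", "Study B"),
                   ABSTRACT = c("...", "..."))

# Distribute screening effort across two collaborators/reviewers
refs <- effort_distribute(refs, reviewers = c("alice", "bob"))

# Each reviewer then screens their share in the GUI, e.g.:
# abstract_screener(file = "effort_alice.csv", aReviewer = "alice")
```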
Latent Transition Cognitive Diagnosis Model with Covariates
Implementation of the three-step approach to the (latent transition) cognitive diagnosis model (CDM) with covariates. This approach can be used for single time-point situations (cross-sectional data) and multiple time-point situations (longitudinal data) to investigate how the covariates are associated with attribute mastery. For multiple time-point situations, the three-step approach of latent transition CDM with covariates allows researchers to assess changes in attribute mastery status and to evaluate the covariate effects on both the initial states and transition probabilities over time using latent logistic regression. Because stepwise approaches often yield biased estimates, correction for classification error probabilities (CEPs) is considered in this approach. The three-step approach for latent transition CDM with covariates involves the following steps: (1) fitting a CDM to the response data without covariates at each time point separately, (2) assigning examinees to latent states at each time point and computing the associated CEPs, and (3) estimating the latent transition CDM with the known CEPs and computing the regression coefficients. The method was proposed in Liang et al. (2023).
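The three steps above can be sketched as follows. Step 1 uses the real 'GDINA' package as a stand-in CDM fitter; the objects resp_t1 and Q (a response matrix and Q-matrix) are assumed inputs, and step 3 is left as a comment because the text does not name the estimating function.

```r
library(GDINA)

# (1) Fit a CDM without covariates at each time point separately
fit_t1 <- GDINA(dat = resp_t1, Q = Q, model = "DINA")

# (2) Assign examinees to latent states (MAP classification) at each time point
states_t1 <- personparm(fit_t1, what = "MAP")

# (2, cont.) Compute classification error probabilities (CEPs) from the
# posterior class probabilities, then (3) estimate the latent transition CDM
# with the CEPs held fixed and obtain the latent logistic regression
# coefficients -- see Liang et al. (2023) for the estimator.
```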
Partial Identification of Causal Effects with Mostly Invalid Instruments
A tuneable and interpretable method for relaxing
the instrumental variables (IV) assumptions to infer treatment effects in the presence
of unobserved confounding.
For a treatment-associated covariate to be a valid IV, it must be (a) unconfounded with the outcome
and (b) have a causal effect on the outcome that is exclusively mediated by the exposure.
There is no general test of the validity of these IV assumptions for any particular pre-treatment
covariate.
However, if different pre-treatment covariates give differing causal effect estimates
when treated as IVs, then we know at least some of the covariates violate these assumptions.
'budgetIVr' exploits this fact by taking as input a minimum budget of pre-treatment covariates assumed
to be valid IVs and identifying the set of causal effects that are consistent with the user's data and budget assumption.
The following generalizations of this principle can be used in this package:
(1) a vector of multiple budgets can be assigned alongside corresponding thresholds that model degrees of IV invalidity;
(2) budgets and thresholds can be chosen using specialist knowledge or varied in a principled sensitivity analysis;
(3) treatment effects can be nonlinear and/or depend on multiple exposures (at a computational cost).
The methods in this package require only summary statistics.
Confidence sets are constructed under the "no measurement error" (NOME) assumption from the Mendelian randomization literature.
For further methodological details, please refer to Penn et al. (2024).
Easily Install and Load the 'Tidyverse'
The 'tidyverse' is a set of packages that work in harmony because they share common data representations and 'API' design. This package is designed to make it easy to install and load multiple 'tidyverse' packages in a single step. Learn more about the 'tidyverse' at <https://www.tidyverse.org>.
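The single-step install-and-load described above looks like this in practice:

```r
# install.packages("tidyverse")  # installs the core packages in one step
library(tidyverse)  # loads ggplot2, dplyr, tidyr, readr, purrr, tibble, and friends
```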
Lasso and Elastic-Net Regularized Generalized Linear Models
Extremely efficient procedures for fitting the entire lasso or elastic-net regularization path for linear regression, logistic and multinomial regression models, Poisson regression, the Cox model, multiple-response Gaussian models, and grouped multinomial regression.
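A minimal sketch of fitting an elastic-net path and cross-validating the penalty, using the package's standard glmnet() and cv.glmnet() interface (simulated data for illustration):

```r
library(glmnet)
set.seed(1)
x <- matrix(rnorm(100 * 20), 100, 20)  # 100 observations, 20 predictors
y <- rnorm(100)

fit   <- glmnet(x, y, alpha = 0.5)  # elastic net: alpha = 1 is lasso, 0 is ridge
cvfit <- cv.glmnet(x, y)            # cross-validate the regularization path
coef(cvfit, s = "lambda.min")       # coefficients at the CV-optimal lambda
```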
Convert Country Names and Country Codes
Standardize country names, convert them into one of 40 different coding schemes, convert between coding schemes, and assign region descriptors.
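The conversions described above use a single countrycode() call with origin and destination coding schemes:

```r
library(countrycode)

countrycode("DEU", origin = "iso3c", destination = "country.name")  # "Germany"

# Convert between schemes, or assign region descriptors
countrycode(c("Germany", "France"), origin = "country.name", destination = "iso2c")
countrycode("DEU", origin = "iso3c", destination = "region")
```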
Easy-to-Use Tools for Common Forms of Random Assignment and Sampling
Generates random assignments for common experimental designs and random samples for common sampling designs.
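A brief sketch of the package's assignment and sampling functions (complete_ra(), block_ra(), and complete_rs() are part of the randomizr interface):

```r
library(randomizr)

Z <- complete_ra(N = 100, m = 50)            # complete random assignment: 50 treated of 100
blocks <- rep(c("A", "B"), each = 50)
Zb <- block_ra(blocks = blocks, prob = 0.5)  # block (stratified) random assignment
S <- complete_rs(N = 1000, n = 100)          # complete random sampling: draw 100 of 1000
```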