EmpiricalCalibration — by Martijn Schuemie, a year ago

Routines for Performing Empirical Calibration of Observational Study Estimates

Routines for performing empirical calibration of observational study estimates. By using a set of negative control hypotheses we can estimate the empirical null distribution of a particular observational study setup. This empirical null distribution can be used to compute a calibrated p-value, which reflects the probability of observing an estimated effect size when the null hypothesis is true, taking both random and systematic error into account. A similar approach can be used to calibrate confidence intervals, using both negative and positive controls. For more details, see Schuemie et al. (2013) and Schuemie et al. (2018).
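
A minimal sketch of the calibration workflow, assuming the package's fitNull() and calibrateP() functions and made-up negative-control estimates (none of these names or numbers come from the description above):

  library(EmpiricalCalibration)

  # Hypothetical negative-control estimates: log relative risks and their standard errors
  set.seed(1)
  ncLogRr <- rnorm(50, mean = 0.1, sd = 0.2)
  ncSeLogRr <- runif(50, min = 0.05, max = 0.3)

  # Fit the empirical null distribution from the negative controls
  null <- fitNull(logRr = ncLogRr, seLogRr = ncSeLogRr)

  # Calibrated p-value for an estimate of interest, accounting for random and systematic error
  calibrateP(null, logRr = 0.5, seLogRr = 0.1)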

cols4all — by Martijn Tennekes, 3 months ago

Colors for all

Color palettes for all people, including those with color vision deficiency. Popular color palette series have been organized by type and scored on several properties, such as color-blind-friendliness and fairness (i.e., whether colors stand out equally). Users' own palettes can also be loaded and analysed. Besides the common palette types (categorical, sequential, and diverging), the package also includes cyclic and bivariate color palettes. Furthermore, a color for missing values is assigned to each palette.
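
A short usage sketch, assuming the c4a(), c4a_na(), and c4a_palettes() functions and the "brewer.set2" palette name (examples, not taken from the description above):

  library(cols4all)

  # List available categorical palettes
  head(c4a_palettes(type = "cat"))

  # Seven colors from an example categorical palette
  c4a("brewer.set2", n = 7)

  # The color assigned to missing values for that palette
  c4a_na("brewer.set2")

  # Interactive palette explorer (opens a dashboard in the browser)
  # c4a_gui()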

CirceR — by Chris Knoll, 2 years ago

Construct Cohort Inclusion and Restriction Criteria Expressions

Wraps the 'CIRCE' (<https://github.com/ohdsi/circe-be>) 'Java' library, allowing cohort definition expressions to be edited and converted to 'Markdown' or 'SQL'.
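
A sketch of the intended use, assuming the cohortExpressionFromJson(), createGenerateOptions(), and buildCohortQuery() functions and a cohort definition exported from ATLAS as JSON (all assumptions, not named in the description):

  library(CirceR)

  # Read a cohort definition in CIRCE JSON format, e.g. exported from ATLAS
  cohortJson <- readChar("cohort.json", file.size("cohort.json"))

  # Parse the JSON into a cohort expression object
  cohortExpression <- cohortExpressionFromJson(cohortJson)

  # Render the expression as OHDSI SQL
  options <- createGenerateOptions(generateStats = FALSE)
  sql <- buildCohortQuery(cohortExpression, options = options)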

qbinplots — by Edwin de Jonge, a year ago

Quantile Binned Plots

Create quantile binned and conditional plots for Exploratory Data Analysis. The package provides several plotting functions that are all based on quantile binning. The plots are created with 'ggplot2' and 'patchwork' and can be further adjusted.

zonebuilder — by Robin Lovelace, a year ago

Create and Explore Geographic Zoning Systems

Functions, documentation and example data to help divide geographic space into discrete polygons (zones). The package supports new zoning systems that are documented in the accompanying paper, "ClockBoard: A zoning system for urban analysis", by Lovelace et al. (2022). The functions are motivated by research into the merits of different zoning systems (Openshaw, 1977). A flexible ClockBoard zoning system is provided, which breaks up space by concentric rings and radial lines emanating from a central point. By default, the diameters of the rings grow according to the triangular number sequence (Ross & Knott, 2019), with the first 4 doughnuts (or annuli) measuring 1, 3, 6, and 10 km wide. These annuli are subdivided into equal segments (12 by default), creating the visual impression of a dartboard. Zones are labelled according to distance to the centre and angular distance from North, creating a simple geographic zoning and labelling system useful for visualising geographic phenomena with a clearly demarcated central location, such as cities.
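
A minimal sketch, assuming the zb_zone() function and the london_c() example centre point (both assumptions; check the package documentation for the exact interface):

  library(zonebuilder)
  library(sf)

  # Build a ClockBoard zoning system around central London with 4 concentric rings
  zones <- zb_zone(london_c(), n_circles = 4)

  # The zones are returned as an sf object with labels; plot them as a dartboard
  plot(zones)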

Eunomia — by Frank DeFalco, 4 months ago

Standard Dataset Manager for Observational Medical Outcomes Partnership Common Data Model Sample Datasets

Facilitates access to sample datasets from the 'EunomiaDatasets' repository (<https://github.com/ohdsi/EunomiaDatasets>).
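
A minimal sketch, assuming the getEunomiaConnectionDetails() function and the companion DatabaseConnector package (neither is named in the description above):

  library(Eunomia)

  # Connection details for the bundled SQLite sample database
  connectionDetails <- getEunomiaConnectionDetails()

  # Query the sample CDM via DatabaseConnector
  connection <- DatabaseConnector::connect(connectionDetails)
  DatabaseConnector::querySql(connection, "SELECT COUNT(*) AS n FROM person;")
  DatabaseConnector::disconnect(connection)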

ParallelLogger — by Martijn Schuemie, 3 months ago

Support for Parallel Computation, Logging, and Function Automation

Support for parallel computation with a progress bar and the option to stop or proceed on errors. Also provides logging to console and disk, and the logging persists in the parallel threads. Additional functions support function call automation with delayed execution (e.g. for executing functions in parallel).
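
A minimal sketch combining logging and parallel execution, assuming the addDefaultFileLogger(), logInfo(), makeCluster(), clusterApply(), and stopCluster() functions (names are assumptions, not taken from the description):

  library(ParallelLogger)

  # Log to the console (default) and additionally to a file; file logging persists in the threads
  addDefaultFileLogger("analysis.log")
  logInfo("Starting analysis")

  # Run a function across 3 worker threads; a progress bar is shown by default
  cluster <- makeCluster(numberOfThreads = 3)
  results <- clusterApply(cluster, 1:10, function(i) {
    ParallelLogger::logInfo("Processing item ", i)
    i^2
  })
  stopCluster(cluster)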

miceafter — by Martijn Heymans, 3 years ago

Data and Statistical Analyses after Multiple Imputation

Statistical analyses and pooling after multiple imputation. A large variety of repeated statistical analyses can be performed and then pooled. Available analyses include, among others, Levene's test, odds and risk ratios, one-sample proportions, differences between proportions, and linear and logistic regression models. Functions can also be used in combination with the pipe operator. More statistical analyses and pooling functions will be added over time. Heymans (2007). Eekhout (2017). Wiel (2009). Marshall (2009). Sidi (2021). Lott (2018). Grund (2021).

psfmi — by Martijn Heymans, 3 years ago

Prediction Model Pooling, Selection and Performance Evaluation Across Multiply Imputed Datasets

Pooling, backward and forward selection of linear, logistic and Cox regression models in multiply imputed datasets. Backward and forward selection can be done from the pooled model using Rubin's Rules (RR), the D1, D2, D3, D4 and the median p-values method. This is also possible for mixed models. The models can contain continuous, dichotomous, categorical and restricted cubic spline predictors, and interaction terms between all these types of predictors. The stability of the models can be evaluated using (cluster) bootstrapping. The package further contains functions to pool model performance measures such as ROC/AUC, reclassification, R-squared, the scaled Brier score, the Hosmer-Lemeshow test, and calibration plots for logistic regression models. Internal validation can be done across multiply imputed datasets with cross-validation or bootstrapping. The adjusted intercept after shrinkage of pooled regression coefficients can be obtained. Backward and forward selection as part of internal validation is possible. A function to externally validate logistic prediction models in multiply imputed datasets is available, as is a function to compare models. For Cox models a strata variable can be included. Eekhout (2017). Wiel (2009). Marshall (2009).
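
A sketch of pooled backward selection for a logistic model, assuming the psfmi_lr() function and the lbpmilr example data with its Impnr imputation indicator (all assumptions, not named in the description above):

  library(psfmi)

  # Pooled logistic regression with backward selection using Rubin's Rules (method D1)
  pool_lr <- psfmi_lr(
    data = lbpmilr, nimp = 5, impvar = "Impnr",
    formula = Chronic ~ Gender + Smoking + JobControl + Pain,
    p.crit = 0.05, direction = "BW", method = "D1"
  )

  # Pooled coefficients of the model(s) retained during selection
  pool_lr$RR_model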

SelfControlledCaseSeries — by Martijn Schuemie, 3 months ago

Self-Controlled Case Series

Execute the self-controlled case series (SCCS) design using observational data in the OMOP Common Data Model. Extracts all necessary data from the database and transforms it to the format required for SCCS. Age and season can be modeled using splines assuming constant hazard within calendar months. Event-dependent censoring of the observation period can be corrected for. Many exposures can be included at once (MSCCS), with regularization on all coefficients except for the exposure of interest. Includes diagnostics for all major assumptions of the SCCS.