Rendering Parameterized SQL and Translation to Dialects
A rendering tool for parameterized SQL that also translates into different SQL dialects. These dialects include 'Microsoft SQL Server', 'Oracle', 'PostgreSql', 'Amazon RedShift', 'Apache Impala', 'IBM Netezza', 'Google BigQuery', 'Microsoft PDW', 'Snowflake', 'Azure Synapse Analytics Dedicated', 'Apache Spark', 'SQLite', and 'InterSystems IRIS'.
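The core idea can be sketched with a toy renderer in Python (this is an illustration of the concept only, not the package's actual implementation; the `render` and `translate_to_postgresql` helpers and the single rewrite rule are hypothetical):

```python
import re

def render(sql: str, **params) -> str:
    """Substitute @name placeholders with supplied parameter values."""
    for name, value in params.items():
        sql = sql.replace(f"@{name}", str(value))
    return sql

def translate_to_postgresql(sql: str) -> str:
    """Toy dialect translation: rewrite one SQL Server idiom for PostgreSQL."""
    # SQL Server's GETDATE() becomes CURRENT_TIMESTAMP in PostgreSQL.
    return re.sub(r"GETDATE\(\)", "CURRENT_TIMESTAMP", sql)

sql = render("SELECT person_id FROM @schema.person WHERE ts < GETDATE();",
             schema="cdm")
print(sql)
print(translate_to_postgresql(sql))
```

A real translator works on a parsed representation with many such rules per dialect; the sketch only shows the two-step render-then-translate flow.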
Interpreting Time Series and Autocorrelated Data Using GAMMs
GAMM (Generalized Additive Mixed Modeling; Lin & Zhang, 1999), as implemented in the R package 'mgcv' (Wood, S.N., 2006; 2011), is a nonlinear regression method that is particularly useful for time-course data such as EEG, pupil dilation, gaze data (eye tracking), and articulography recordings, but also for behavioral data such as reaction times and response data. As time-course measures are sensitive to autocorrelation, GAMMs implement methods to reduce autocorrelation problems. This package includes functions for the evaluation of GAMM models (e.g., model comparisons, determining regions of significance, inspection of the autocorrelational structure in residuals) and for the interpretation of GAMMs (e.g., visualization of complex interactions, and contrasts).
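The autocorrelation problem mentioned above starts with the lag-1 autocorrelation of the model residuals, which AR(1)-based corrections take as input; a minimal sketch (the function name is hypothetical, and this only illustrates the diagnostic, not the model fitting):

```python
import random

def lag1_autocorrelation(x):
    """Sample lag-1 autocorrelation: corr(x[t], x[t+1])."""
    n = len(x)
    mean = sum(x) / n
    num = sum((x[t] - mean) * (x[t + 1] - mean) for t in range(n - 1))
    den = sum((v - mean) ** 2 for v in x)
    return num / den

# Simulate AR(1) residuals e[t] = 0.6*e[t-1] + noise, as often seen
# in densely sampled time-course data.
random.seed(1)
e = [0.0]
for _ in range(5000):
    e.append(0.6 * e[-1] + random.gauss(0, 1))
print(round(lag1_autocorrelation(e), 2))  # recovers a value near 0.6
```

When this value is clearly nonzero, standard errors from a model that assumes independent residuals are too small, which is why a correction is needed.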
Prediction Model Pooling, Selection and Performance Evaluation Across Multiply Imputed Datasets
Pooling, backward and forward selection of linear, logistic and Cox regression models in
multiply imputed datasets. Backward and forward selection can be done
from the pooled model using Rubin's Rules (RR), the D1, D2, D3, D4 and
the median p-values method. This is also possible for Mixed models.
The models can contain continuous, dichotomous, categorical and restricted
cubic spline predictors, and interaction terms between all these types of predictors.
The stability of the models can be evaluated using (cluster) bootstrapping. The package
further contains functions to pool model performance measures such as ROC/AUC, reclassification,
R-squared, scaled Brier score, the Hosmer-Lemeshow (H&L) test and calibration plots for logistic regression models.
Internal validation can be done across multiply imputed datasets with cross-validation or
bootstrapping. The adjusted intercept after shrinkage of pooled regression coefficients
can be obtained. Backward and forward selection as part of internal validation is possible.
A function to externally validate logistic prediction models in multiply imputed
datasets is available, as well as a function to compare models. For Cox models a strata variable
can be included.
Eekhout (2017)
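The pooling step referred to above is Rubin's rules; for a single coefficient across m imputed datasets it can be sketched as follows (a minimal illustration, not the package's code):

```python
import math

def rubins_rules(estimates, variances):
    """Pool one regression coefficient across m imputed datasets.

    Returns the pooled estimate and its total variance
    T = W + (1 + 1/m) * B, where W is the within-imputation
    variance and B the between-imputation variance.
    """
    m = len(estimates)
    qbar = sum(estimates) / m                # pooled estimate
    w = sum(variances) / m                   # within-imputation variance
    b = sum((q - qbar) ** 2 for q in estimates) / (m - 1)  # between
    total = w + (1 + 1 / m) * b
    return qbar, total

# Coefficient and its variance from three imputed datasets:
est, var = rubins_rules([1.0, 1.2, 0.8], [0.040, 0.050, 0.045])
print(est, math.sqrt(var))  # pooled coefficient and its standard error
```

The between-imputation term inflates the variance to reflect the uncertainty added by the missing data itself.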
Data and Statistical Analyses after Multiple Imputation
Statistical analyses and pooling after multiple imputation. A large variety
of repeated statistical analyses can be performed and finally pooled. Analyses
that are available include, among others, Levene's test, odds and risk ratios, one-sample
proportions, differences between proportions, and linear and logistic regression models.
Functions can also be used in combination with the pipe operator.
More statistical analyses and pooling functions will be added over time.
Heymans (2007)
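As one example of a per-dataset analysis that gets pooled, the odds ratio from a 2x2 table is estimated on the log scale together with its standard error (a minimal sketch with a hypothetical function name, not the package's code):

```python
import math

def odds_ratio(a, b, c, d):
    """Odds ratio from a 2x2 table with cells
    a (exposed, event),   b (exposed, no event),
    c (unexposed, event), d (unexposed, no event).

    Returns the OR and the standard error of log(OR), the scale
    on which estimates are combined across imputed datasets.
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return or_, se_log

or_, se = odds_ratio(20, 80, 10, 90)
print(or_, se)  # 2.25 and the log-scale standard error
```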
Parentage Assignment using Bi-Allelic Genetic Markers
Can be used for paternity and maternity assignment and outperforms
conventional methods where closely related individuals occur in the pool of
possible parents. The method compares the genotypes of offspring with any
combination of potential parents and scores the number of mismatches of these
individuals at bi-allelic genetic markers (e.g. Single Nucleotide Polymorphisms).
It elaborates on a prior exclusion method based on the Homozygous Opposite Test
(HOT; Huisman, 2017).
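The mismatch scoring can be illustrated with a minimal sketch (the genotype coding and function name are hypothetical): with genotypes coded as 0, 1 or 2 copies of one allele, a parent-offspring pair is incompatible at a bi-allelic marker when the two are opposite homozygotes, because a true parent must transmit one of its alleles.

```python
def opposing_homozygote_count(parent, offspring):
    """Count markers where parent and offspring are opposite homozygotes.

    Genotypes are coded 0/1/2 (copies of the alternate allele).
    A true parent cannot be homozygous 0 when the offspring is
    homozygous 2 at the same SNP, or vice versa (barring genotyping
    error or mutation).
    """
    return sum(1 for p, o in zip(parent, offspring) if abs(p - o) == 2)

true_parent = [0, 2, 1, 1, 0, 2]
candidate   = [2, 0, 1, 1, 0, 2]
child       = [0, 2, 1, 2, 0, 1]
print(opposing_homozygote_count(true_parent, child))  # 0 mismatches
print(opposing_homozygote_count(candidate, child))    # 2 mismatches
```

With many SNPs, true parents accumulate near-zero mismatch counts while unrelated candidates accumulate many, which is what makes the exclusion-style comparison work.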
Standard Dataset Manager for Observational Medical Outcomes Partnership Common Data Model Sample Datasets
Facilitates access to sample datasets from the 'EunomiaDatasets' repository (<https://github.com/ohdsi/EunomiaDatasets>).
Routines for Performing Empirical Calibration of Observational Study Estimates
Routines for performing empirical calibration of observational
study estimates. By using a set of negative control hypotheses we can
estimate the empirical null distribution of a particular observational
study setup. This empirical null distribution can be used to compute a
calibrated p-value, which reflects the probability of observing an
estimated effect size when the null hypothesis is true, taking both random
and systematic error into account. A similar approach can be used to
calibrate confidence intervals, using both negative and positive controls.
For more details, see Schuemie et al. (2013)
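The p-value calibration idea can be sketched as follows (a simplified illustration, not the package's method: here the empirical null is a plain normal fit to the negative-control estimates, and the sampling error of each individual estimate is ignored):

```python
from statistics import NormalDist, mean, stdev

def calibrated_p_value(estimate, negative_control_estimates):
    """Calibrated two-sided p-value against an empirical null.

    Fits a normal distribution to effect estimates (e.g. log relative
    risks) from negative controls, where the true effect is assumed
    null, then asks how extreme the new estimate is under that null.
    """
    null = NormalDist(mean(negative_control_estimates),
                      stdev(negative_control_estimates))
    z = abs(estimate - null.mean) / null.stdev
    return 2 * (1 - NormalDist().cdf(z))

# Negative controls whose estimates scatter around a small positive bias:
negatives = [0.10, -0.10, 0.30, 0.20, 0.00]
print(calibrated_p_value(0.5, negatives))
```

Because the null is centered on the negative controls' bias rather than on zero, systematic error shifts the reference distribution instead of being mistaken for a real effect.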
Synthesizing Causal Evidence in a Distributed Research Network
Routines for combining causal effect estimates and study diagnostics across multiple data sites in a distributed study, without sharing patient-level data. Allows for normal and non-normal approximations of the data-site likelihood of the effect parameter.
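Under the normal approximation, combining per-site estimates reduces to inverse-variance weighting; a minimal sketch (this is generic fixed-effect meta-analysis, not the package's likelihood-based routines):

```python
import math

def inverse_variance_pool(estimates, standard_errors):
    """Fixed-effect pooling of per-site effect estimates (e.g. log HRs).

    Each site contributes only its estimate and standard error, with
    weight 1/se^2; no patient-level data leaves the site.
    """
    weights = [1 / se ** 2 for se in standard_errors]
    pooled = sum(w * b for w, b in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled_se

pooled, se = inverse_variance_pool([0.10, 0.30], [0.10, 0.20])
print(pooled, se)
```

The non-normal approximations mentioned above matter when a site has few events and its likelihood is far from Gaussian, which this sketch does not handle.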
JAR Dependencies for the 'DatabaseConnector' Package
Provides external JAR dependencies for the 'DatabaseConnector' package.
Tools for Type S (Sign) and Type M (Magnitude) Errors
Provides tools for working with Type S (Sign) and
Type M (Magnitude) errors, as proposed in Gelman and Tuerlinckx (2000)
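Both quantities follow in closed form once the estimate is modeled as normal around the true effect; a minimal sketch under that assumption (an illustration of the definitions, not the package's code):

```python
from statistics import NormalDist

def type_s_and_m(true_effect, se, alpha_z=1.96):
    """Type S probability and Type M (exaggeration) ratio.

    Assumes the estimate is normal with mean `true_effect` and standard
    error `se`, and that only results with |estimate| > alpha_z * se
    are declared significant.
    """
    nd = NormalDist()
    z = true_effect / se
    p_hi = 1 - nd.cdf(alpha_z - z)   # significant with the correct sign
    p_lo = nd.cdf(-alpha_z - z)      # significant with the wrong sign
    type_s = p_lo / (p_hi + p_lo)
    # Expected |estimate| among significant results (truncated normal):
    num = (true_effect * p_hi + se * nd.pdf(alpha_z - z)
           - true_effect * p_lo + se * nd.pdf(alpha_z + z))
    type_m = num / (true_effect * (p_hi + p_lo))
    return type_s, type_m

# An underpowered setting: true effect equal to one standard error.
s, m = type_s_and_m(true_effect=0.1, se=0.1)
print(round(s, 3), round(m, 2))  # small Type S risk, effect exaggerated ~2.5x
```

The point of these diagnostics is that, under low power, statistically significant estimates are conditionally biased upward (Type M) and can even have the wrong sign (Type S).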