Fast Symbolic Multivariate Polynomials
Fast manipulation of symbolic multivariate polynomials
using the 'Map' class of the Standard Template Library. The package
uses print and coercion methods from the 'mpoly' package but
offers speed improvements. It is comparable in speed to the 'spray'
package for sparse arrays, but retains the symbolic benefits of
'mpoly'. To cite the package in publications, use Hankin (2022).
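The map-based representation described above is easy to sketch. The following is a Python analogue (my illustration, not the package's STL/C++ code): a sparse multivariate polynomial stored as a map from exponent vectors to coefficients, so multiplication adds exponents and multiplies coefficients.

```python
# Illustrative sketch of a map-backed sparse polynomial (not the mvp package's code).
from collections import defaultdict

def poly_mul(p, q):
    """Multiply sparse polynomials given as {exponent-tuple: coefficient} maps."""
    out = defaultdict(int)
    for ea, ca in p.items():
        for eb, cb in q.items():
            # exponent vectors add, coefficients multiply
            key = tuple(a + b for a, b in zip(ea, eb))
            out[key] += ca * cb
    return {e: c for e, c in out.items() if c != 0}  # drop cancelled terms

# (x + y)(x - y) = x^2 - y^2, with exponent tuples (power of x, power of y)
p = {(1, 0): 1, (0, 1): 1}
q = {(1, 0): 1, (0, 1): -1}
print(poly_mul(p, q))  # {(2, 0): 1, (0, 2): -1}
```

Only nonzero terms are ever stored or visited, which is where the speed advantage over dense-array representations comes from.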
The Symmetric Group: Permutations of a Finite Set
Manipulates invertible functions from a finite set to
itself. Can transform from word form to cycle form and
back. To cite the package in publications, please use
Hankin (2020), "Introducing the permutations R package",
SoftwareX, volume 11.
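The word-form/cycle-form conversion mentioned above can be sketched briefly. This Python analogue (my illustration, not the permutations package's code) treats word form as the list of images of 1..n and cycle form as a list of disjoint cycles with fixed points omitted:

```python
# Hedged sketch of word-form <-> cycle-form conversion (not the package's code).

def word_to_cycles(word):
    """word[i] is the image of i+1 (1-based); return disjoint cycles, fixed points omitted."""
    seen, cycles = set(), []
    for start in range(1, len(word) + 1):
        if start in seen:
            continue
        cycle, x = [], start
        while x not in seen:        # follow images until the cycle closes
            seen.add(x)
            cycle.append(x)
            x = word[x - 1]
        if len(cycle) > 1:          # drop fixed points
            cycles.append(tuple(cycle))
    return cycles

def cycles_to_word(cycles, n):
    word = list(range(1, n + 1))    # start from the identity
    for cycle in cycles:
        for i, x in enumerate(cycle):
            word[x - 1] = cycle[(i + 1) % len(cycle)]
    return word

w = [2, 3, 1, 5, 4]                              # 1->2, 2->3, 3->1, 4->5, 5->4
print(word_to_cycles(w))                         # [(1, 2, 3), (4, 5)]
print(cycles_to_word([(1, 2, 3), (4, 5)], 5))    # [2, 3, 1, 5, 4]
```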
Optimal Exact Tests for Multiple Binary Endpoints
Calculates exact hypothesis tests to compare a treatment and a reference group with respect to multiple binary endpoints. The tested null hypothesis is an identical multidimensional distribution of successes and failures in both groups. The alternative hypothesis is a larger success proportion in the treatment group in at least one endpoint. The tests are based on the multivariate permutation distribution of subjects between the two groups. For this permutation distribution, rejection regions are calculated that satisfy one of several possible optimization criteria. In particular, regions with maximal exhaustion of the nominal significance level, maximal power under a specified alternative, or a maximal number of elements can be found. Optimization is achieved by a branch-and-bound algorithm. By application of the closed testing principle, the global hypothesis tests are extended to multiple testing procedures.
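The permutation idea underlying such tests can be illustrated with a simple Monte Carlo sketch (this is only an approximation of the permutation distribution; the package itself constructs exact, optimized rejection regions via branch-and-bound). Group labels are permuted, and the observed maximum difference in success proportions across endpoints is compared against its permutation distribution:

```python
# Monte Carlo illustration of a multivariate permutation test (not the
# package's exact branch-and-bound construction).
import random

def perm_test_max_diff(treat, ref, n_perm=5000, seed=1):
    """treat, ref: lists of per-subject 0/1 tuples, one entry per endpoint."""
    k = len(treat[0])
    n_t = len(treat)
    pooled = list(treat) + list(ref)

    def max_diff(gt, gr):
        # largest difference in success proportions over the k endpoints
        return max(sum(s[j] for s in gt) / len(gt)
                   - sum(s[j] for s in gr) / len(gr) for j in range(k))

    observed = max_diff(treat, ref)
    rng = random.Random(seed)
    exceed = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)              # permute subjects between the groups
        if max_diff(pooled[:n_t], pooled[n_t:]) >= observed:
            exceed += 1
    return observed, (exceed + 1) / (n_perm + 1)

obs, p = perm_test_max_diff([(1, 1)] * 10, [(0, 0)] * 10)
print(obs, p)  # observed difference 1.0 with a very small p-value
```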
Confounder-Adjusted Survival Curves and Cumulative Incidence Functions
Estimate and plot confounder-adjusted survival curves using
either 'Direct Adjustment', 'Direct Adjustment with Pseudo-Values',
various forms of 'Inverse Probability of Treatment Weighting', two
forms of 'Augmented Inverse Probability of Treatment Weighting',
'Empirical Likelihood Estimation' or 'Targeted Maximum Likelihood Estimation'.
Also includes a significance test for the difference
between two adjusted survival curves and the calculation of adjusted
restricted mean survival times. Additionally enables the user to
estimate and plot cause-specific confounder-adjusted cumulative
incidence functions in the competing risks setting using the same
methods (with some exceptions).
For details, see Denz et al. (2023).
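The weighting step shared by the IPTW-based methods above can be sketched in isolation (a hedged illustration of the general IPTW idea, not the adjustedCurves implementation): each subject is weighted by the inverse of the estimated probability of the treatment they actually received.

```python
# Minimal IPTW weight computation from hypothetical propensity scores.

def iptw_weights(treated, propensity):
    """treated: 0/1 treatment indicators; propensity: estimated P(A=1 | covariates)."""
    return [1 / p if a == 1 else 1 / (1 - p)
            for a, p in zip(treated, propensity)]

a = [1, 1, 0, 0]
ps = [0.8, 0.5, 0.5, 0.2]           # hypothetical propensity scores
print(iptw_weights(a, ps))           # approximately [1.25, 2.0, 2.0, 1.25]
```

A weighted Kaplan-Meier (or weighted cumulative incidence) estimator computed with these weights then targets the confounder-adjusted curve.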
Calculate the Care Density or Fragmented Care Density Given a Patient-Sharing Network
Given a patient-sharing network, calculate either the classic care density as
proposed by Pollack et al. (2013) or the fragmented care density.
Datasets for "Statistics: Unlocking the Power of Data"
Datasets for the third edition of "Statistics: Unlocking the Power of Data" by Lock^5. Includes versions of datasets from earlier editions.
Open GenBank Files
Opens complete record(s) with .gb extension from the NCBI/GenBank Nucleotide database and returns a list containing shaped record(s). These kinds of files contain detailed records of DNA samples (locus, organism, type of sequence, source of the sequence, ...). An example of a record can be found at <https://www.ncbi.nlm.nih.gov/nuccore/HE799070>.
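The structure of such a flat-file record is simple to illustrate: fixed keywords at the start of a line introduce each annotation, and "//" terminates the record. A rough Python sketch (not the package's parser, and using a made-up record rather than a real GenBank entry):

```python
# A made-up minimal record (not a real GenBank entry) for illustration.
SAMPLE = """\
LOCUS       EXAMPLE01               24 bp    DNA     linear   UNA 01-JAN-2000
DEFINITION  Made-up minimal record for illustration.
  ORGANISM  Examplus fictus
ORIGIN
        1 atgcatgcat gcatgcatgc atgc
//
"""

def parse_fields(record, keywords=("LOCUS", "DEFINITION", "ORGANISM")):
    """Collect the one-line annotations introduced by the given keywords."""
    fields = {}
    for line in record.splitlines():
        if line.strip() == "//":         # end-of-record marker
            break
        parts = line.split(None, 1)
        if parts and parts[0] in keywords and len(parts) == 2:
            fields[parts[0]] = parts[1].strip()
    return fields

print(parse_fields(SAMPLE)["ORGANISM"])  # Examplus fictus
```

Real records add multi-line continuations, FEATURES tables, and the sequence block, which a full parser must handle.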
Imputation of High-Dimensional Count Data using Side Information
Analysis, imputation, and multiple imputation of count data using covariates. LORI uses a log-linear Poisson model in which main row and column effects, as well as effects of known covariates and interaction terms, can be fitted. The estimation procedure is based on convex optimization of the Poisson loss penalized by a Lasso-type penalty and a nuclear norm. LORI returns estimates of main effects, covariate effects, and interactions, as well as an imputed count table. The package also contains a multiple imputation procedure. The methods are described in Robin, Josse, Moulines and Sardy (2019).
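One building block of such penalized Poisson estimators can be sketched in a few lines (a hedged one-parameter illustration, not the lori package's optimizer): a proximal gradient step that combines the gradient of the Poisson negative log-likelihood with soft thresholding, the proximal operator of a Lasso-type penalty.

```python
# Toy proximal gradient step for a Poisson loss with an l1 penalty
# (single coefficient; illustrative only).
import math

def soft_threshold(x, lam):
    """Proximal operator of lam * |x|: shrink toward zero (the Lasso building block)."""
    return math.copysign(max(abs(x) - lam, 0.0), x)

def poisson_lasso_step(beta, xs, ys, lam, lr=0.1):
    """One proximal gradient step for y ~ Poisson(exp(beta * x))."""
    # gradient of the Poisson negative log-likelihood in beta
    grad = sum((math.exp(beta * x) - y) * x for x, y in zip(xs, ys)) / len(xs)
    return soft_threshold(beta - lr * grad, lr * lam)

beta = 0.0
for _ in range(300):                    # iterate to (near) convergence
    beta = poisson_lasso_step(beta, [1.0] * 4, [math.e] * 4, lam=0.0)
print(round(beta, 3))  # 1.0  (recovers beta with exp(beta) = e when lam = 0)
```

The nuclear-norm penalty in LORI plays the analogous role for the interaction matrix, with singular-value soft thresholding in place of the scalar operator.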
Distances on Directed Graphs
Distances on dual-weighted directed graphs using
priority-queue shortest paths (Padgham 2019).
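The dual-weighted idea can be sketched with a compact priority-queue Dijkstra (illustrative only; dodgr's C++ routing is more elaborate): each edge carries one weight used to choose the path and a second weight in which the distance is reported, e.g. preferring a fast road even when it is physically longer.

```python
# Hedged sketch of dual-weighted Dijkstra (not the dodgr implementation).
import heapq

def dijkstra_dual(graph, src):
    """graph: {u: [(v, w_route, w_dist), ...]} with directed edges.
    Paths are chosen by w_route but distances are reported in w_dist."""
    best = {src: (0, 0)}                 # node -> (routing cost, reported distance)
    heap = [(0, 0, src)]
    while heap:
        c, d, u = heapq.heappop(heap)
        if c > best.get(u, (float("inf"), 0))[0]:
            continue                     # stale queue entry
        for v, wr, wd in graph.get(u, []):
            nc, nd = c + wr, d + wd
            if nc < best.get(v, (float("inf"), 0))[0]:
                best[v] = (nc, nd)
                heapq.heappush(heap, (nc, nd, v))
    return {v: d for v, (_, d) in best.items()}

# a -> b -> c is preferred for routing (cost 2 < 3) despite being physically longer
g = {"a": [("b", 1, 5), ("c", 3, 2)], "b": [("c", 1, 5)], "c": []}
print(dijkstra_dual(g, "a"))  # {'a': 0, 'b': 5, 'c': 10}
```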
Variable Selection Using Random Forests
Three-step variable selection procedure based on random forests. Initially developed to handle high-dimensional data (for which the number of variables largely exceeds the number of observations), the package is very versatile and can treat most kinds of data, for regression and supervised classification problems. The first step eliminates irrelevant variables from the dataset. The second step selects all variables related to the response, for interpretation purposes. The third step refines the selection by eliminating redundancy in the set of variables selected by the second step, for prediction purposes. Genuer, R., Poggi, J.-M. and Tuleau-Malot, C. (2015) <https://journal.r-project.org/archive/2015-2/genuer-poggi-tuleaumalot.pdf>.
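The three-step logic can be sketched schematically with toy stand-ins for the random-forest machinery (importance scores and model errors are supplied as plain functions here; this is a structural illustration, not the package's algorithm or its thresholding rules):

```python
# Schematic three-step selection with toy importance/error functions.

def three_step_selection(variables, importance, error, tol=0.01):
    """importance: variable -> score; error: list of variables -> model error."""
    # Step 1: rank by importance and eliminate irrelevant (zero-score) variables.
    kept = [v for v in sorted(variables, key=importance, reverse=True)
            if importance(v) > 0]
    # Step 2 (interpretation): among nested models kept[:1], kept[:2], ...,
    # keep the smallest whose error is within tol of the best.
    errs = [error(kept[:k + 1]) for k in range(len(kept))]
    cutoff = min(errs) + tol
    interp = kept[:next(k for k, e in enumerate(errs) if e <= cutoff) + 1]
    # Step 3 (prediction): re-add variables one by one only if the error still
    # drops, which discards variables redundant with those already included.
    pred, current = [], float("inf")
    for v in interp:
        e = error(pred + [v])
        if e < current - tol:
            pred, current = pred + [v], e
    return interp, pred

# Toy stand-ins: x1 and x2 carry signal, x3 duplicates x1, x4 is pure noise.
imp = {"x1": 0.5, "x3": 0.45, "x2": 0.3, "x4": 0.0}

def toy_error(selected):
    e = 1.0 + 0.001 * len(selected)           # mild complexity penalty
    if "x1" in selected or "x3" in selected:  # x3 is redundant with x1
        e -= 0.4
    if "x2" in selected:
        e -= 0.3
    return e

print(three_step_selection(["x1", "x2", "x3", "x4"], imp.get, toy_error))
# (['x1', 'x3', 'x2'], ['x1', 'x2'])
```

Note how the redundant x3 survives the interpretation step (it is genuinely related to the response) but is dropped from the prediction set, exactly the distinction the description draws between steps two and three.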