Pstat — by Blondeau Da Silva Stephane, 7 years ago

Assessing Pst Statistics

The primary purpose of this package is to calculate Pst values in order to assess differentiation among populations from a set of quantitative traits. The bootstrap method provides confidence intervals and distribution histograms of Pst. Variation of Pst as a function of the parameter c/h^2 is also studied. Finally, the package offers several transformations, in particular to eliminate variation resulting from allometric growth (calculation of residuals from linear regressions, Reist standardizations, or the Aitchison transformation).
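
A minimal sketch of how such a calculation might look; the Pst() call and the expected data layout (first column holding population labels, remaining columns quantitative traits) are assumptions based on this description, not checked against the package manual:

library(Pstat)

set.seed(1)
# Hypothetical layout: population label in column 1, two quantitative traits
dat <- data.frame(Pop    = rep(c("A", "B", "C"), each = 20),
                  Trait1 = rnorm(60, mean = rep(c(10, 11, 12), each = 20)),
                  Trait2 = rnorm(60, mean = rep(c(5.0, 5.5, 6.0), each = 20)))

Pst(dat)  # assumed entry point: Pst value per trait, with c/h^2 = 1 by default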

EScvtmle — by Lauren Eyler Dang, 2 years ago

Experiment-Selector CV-TMLE for Integration of Observational and RCT Data

The experiment selector cross-validated targeted maximum likelihood estimator (ES-CVTMLE) aims to select the experiment that optimizes the bias-variance tradeoff for estimating a causal average treatment effect (ATE), where different experiments may include a randomized controlled trial (RCT) alone or an RCT combined with real-world data. Using cross-validation, the ES-CVTMLE separates the selection of the optimal experiment from the estimation of the ATE for the chosen experiment. The estimated bias term in the selector is a function of the difference in conditional mean outcome under control for the RCT compared to the combined experiment. In order to help include truly unbiased external data in the analysis, the estimated average treatment effect on a negative control outcome may be added to the bias term in the selector. For more details about this method, please see Dang et al. (2022).

fdapace — by Yidong Zhou, 7 months ago

Functional Data Analysis and Empirical Dynamics

A versatile package that implements various methods of Functional Data Analysis (FDA) and Empirical Dynamics. The core of this package is Functional Principal Component Analysis (FPCA), a key technique for functional data analysis, for sparsely or densely sampled random trajectories and time courses, via the Principal Analysis by Conditional Estimation (PACE) algorithm. This core algorithm yields covariance and mean functions, eigenfunctions, and principal component scores, for both functional data and derivatives, and for both dense (functional) and sparse (longitudinal) sampling designs. For sparse designs, it provides fitted continuous trajectories with confidence bands, even for subjects with very few longitudinal observations. PACE is a viable and flexible alternative to random effects modeling of longitudinal data. There is also a Matlab version (PACE) that contains some methods not available in fdapace, and vice versa. Updates to fdapace were supported by grants from NIH ECHO and NSF DMS-1712864 and DMS-2014626. Please cite the package if you use it (run citation("fdapace") to get the citation format and BibTeX entry). References: Wang, J.L., Chiou, J., Müller, H.G. (2016); Chen, K., Zhang, X., Petersen, A., Müller, H.G. (2017).
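
A short sketch of the sparse-design workflow described above, assuming FPCA() as the entry point (the package's documented core function); the simulated trajectories are illustrative only:

library(fdapace)

set.seed(1)
n <- 50
# Each subject is observed at only a few random time points on [0, 1]
Lt <- lapply(seq_len(n), function(i) sort(runif(sample(2:5, 1))))
Ly <- lapply(Lt, function(t) sin(2 * pi * t) + rnorm(length(t), sd = 0.2))

fit <- FPCA(Ly, Lt)   # PACE: mean and covariance functions, eigenfunctions, scores
plot(fit)             # diagnostic plots, including fitted continuous trajectories
head(fit$xiEst)       # estimated principal component scores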

iNEXT — by T. C. Hsieh, 10 months ago

Interpolation and Extrapolation for Species Diversity

Provides simple functions to compute and plot two types (sample-size-based and coverage-based) of rarefaction and extrapolation curves for species diversity (Hill numbers) based on individual-based abundance data or sampling-unit-based incidence data; see Chao and others (2014, Ecological Monographs) for pertinent theory and methodologies, and Hsieh, Ma and Chao (2016, Methods in Ecology and Evolution) for an introduction to the R package.
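
A minimal sketch of the abundance-based workflow, assuming iNEXT() and ggiNEXT() as the main user-facing functions; the toy counts below are made up:

library(iNEXT)

# Species abundance counts for two assemblages
abund <- list(SiteA = c(120, 80, 40, 20, 10, 5, 2, 1, 1),
              SiteB = c(60, 50, 30, 15, 8, 3, 1))

out <- iNEXT(abund, q = c(0, 1, 2), datatype = "abundance")  # Hill numbers of order 0, 1, 2
out$AsyEst              # asymptotic diversity estimates
ggiNEXT(out, type = 1)  # sample-size-based rarefaction/extrapolation curves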

SpadeR — by Anne Chao, 8 years ago

Species-Richness Prediction and Diversity Estimation with R

Estimation of various biodiversity indices and related (dis)similarity measures based on individual-based (abundance) data or sampling-unit-based (incidence) data taken from one or multiple communities/assemblages.
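As an illustration, a hedged sketch assuming ChaoSpecies() is the richness-estimation entry point for abundance data; the frequency counts are invented:

library(SpadeR)

# Species frequency counts for one assemblage
counts <- c(25, 18, 12, 9, 7, 5, 4, 3, 2, 2, 1, 1, 1, 1)

# Richness estimators (Chao1 and relatives) with 95% confidence intervals
ChaoSpecies(counts, datatype = "abundance", k = 10, conf = 0.95)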

armada — by Aurelie Gueudin, 6 years ago

A Statistical Methodology to Select Covariates in High-Dimensional Data under Dependence

A two-step variable selection procedure for high-dimensional dependent data with few observations. The first step eliminates dependence between variables (clustering of variables, followed by factor analysis inside each cluster). The second step performs variable selection by aggregating adapted methods. Bastien B., Chakir H., Gegout-Petit A., Muller-Gueudin A., Shi Y. A statistical methodology to select covariates in high-dimensional data under dependence. Application to the classification of genetic profiles associated with outcome of a non-small-cell lung cancer treatment. 2018. <https://hal.archives-ouvertes.fr/hal-01939694>.

epitrix — by Thibaut Jombart, 2 years ago

Small Helpers and Tricks for Epidemics Analysis

A collection of small functions useful for epidemics analysis and infectious disease modelling. This includes computation of basic reproduction numbers from growth rates, generation of hashed labels to anonymize data, and fitting discretized Gamma distributions.
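
A brief sketch touching the three tasks named above; the data are fabricated, and the calls (fit_disc_gamma(), r2R0(), hash_names()) are the package's documented helpers, used here with default settings assumed:

library(epitrix)

# 1. Fit a discretized Gamma distribution to observed delays (e.g. serial intervals, in days)
si  <- c(2, 3, 5, 5, 6, 7, 4, 3, 8, 6)
fit <- fit_disc_gamma(si)
c(fit$mu, fit$cv)

# 2. Convert a daily growth rate into a basic reproduction number,
#    using the fitted distribution as the generation-time distribution
r2R0(r = 0.05, w = fit$distribution)

# 3. Generate hashed labels to anonymize identifying data
hash_names(c("John", "Jane"), c("Doe", "Doe"))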

BayLum — by Anne Philippe, 7 months ago

Chronological Bayesian Models Integrating Optically Stimulated Luminescence and Radiocarbon Age Dating

Bayesian analysis of luminescence data and C-14 age estimates. Bayesian models are based on the following publications: Combes, B. & Philippe, A. (2017) and Combes et al. (2015). This includes, among others, data import and export, application of age models, and the palaeodose model.

frechet — by Yaqing Chen, a year ago

Statistical Analysis for Random Objects and Non-Euclidean Data

Provides implementation of statistical methods for random objects lying in various metric spaces, which are not necessarily linear spaces. The core of this package is Fréchet regression for random objects with Euclidean predictors, which allows one to perform regression analysis for non-Euclidean responses under some mild conditions. Examples include distributions in 2-Wasserstein space, covariance matrices endowed with the power metric (with the Frobenius metric as a special case) or the Cholesky and log-Cholesky metrics, and spherical data. Reference: Petersen, A., & Müller, H.-G. (2019).

ltmle — by Joshua Schwab, 2 years ago

Longitudinal Targeted Maximum Likelihood Estimation

Targeted Maximum Likelihood Estimation ('TMLE') of treatment/censoring-specific mean outcomes or marginal structural models for point-treatment and longitudinal data.
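
A compact sketch for the point-treatment case, assuming the standard ltmle() interface with data columns in time order; the simulated data are illustrative:

library(ltmle)

set.seed(1)
n <- 500
W <- rnorm(n)                                # baseline covariate
A <- rbinom(n, 1, plogis(0.3 * W))           # binary treatment
Y <- rbinom(n, 1, plogis(-1 + A + 0.5 * W))  # binary outcome

dat <- data.frame(W, A, Y)                   # columns must follow time ordering
fit <- ltmle(dat, Anodes = "A", Ynodes = "Y", abar = list(1, 0))  # contrast A = 1 vs A = 0
summary(fit)                                 # TMLE estimate of the average treatment effect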