Model-Based Boosting
Functional gradient descent algorithm
(boosting) for optimizing general risk functions, utilizing
component-wise (penalised) least squares estimates or regression
trees as base-learners for fitting generalized linear, additive,
and interaction models to potentially high-dimensional data.
Models and algorithms are described in
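The component-wise boosting idea can be sketched in a few lines. The following Python toy version of component-wise L2 boosting (squared-error risk with simple least-squares base-learners) is only an illustration of the technique, not the package's R implementation; the function name and defaults are invented for the example.

```python
import numpy as np

def componentwise_l2_boost(X, y, n_steps=100, nu=0.1):
    """Component-wise L2 boosting: at each step, fit a simple
    least-squares base-learner on every column of X against the
    current residuals, then update only the best-fitting coefficient
    by a small step nu (implicit variable selection + shrinkage)."""
    n, p = X.shape
    coef = np.zeros(p)
    intercept = y.mean()          # offset: start from the mean
    resid = y - intercept
    for _ in range(n_steps):
        # univariate least-squares slope for each column
        betas = X.T @ resid / (X ** 2).sum(axis=0)
        # residual sum of squares for each candidate base-learner
        sse = ((resid[:, None] - X * betas) ** 2).sum(axis=0)
        j = sse.argmin()          # best base-learner this round
        coef[j] += nu * betas[j]  # shrunken coefficient update
        resid -= nu * betas[j] * X[:, j]
    return intercept, coef
```

Because only one coefficient moves per step, early stopping yields sparse fits even when p is large, which is the appeal of this scheme for high-dimensional data.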
Fast Gaussian Process Computation Using Vecchia's Approximation
Functions for fitting and making predictions with
Gaussian process models using Vecchia's (1988) approximation.
The package also includes functions for reordering input locations,
finding ordered nearest neighbors (with help from the 'FNN' package),
grouping operations, and conditional simulations.
Covariance functions for spatial and spatial-temporal data
on Euclidean domains and spheres are provided. The original
approximation is due to Vecchia (1988)
<http://www.jstor.org/stable/2345768>, and the reordering and
grouping methods are from Guinness (2018)
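Vecchia's approximation replaces the joint Gaussian likelihood with a product of univariate conditional densities, each conditioning on only a few previously ordered neighbors. The Python sketch below illustrates that idea under an assumed zero-mean process and user-supplied covariance function; it is not the package's implementation, and the names are the example's own.

```python
import numpy as np
from scipy.stats import norm

def vecchia_loglik(y, locs, cov_fn, m=3):
    """Vecchia's approximation: the joint zero-mean Gaussian
    log-likelihood is replaced by a sum of univariate conditional
    log-densities, each conditioning on at most the m nearest
    previously ordered points (with m = n - 1 it is exact)."""
    n = len(y)
    ll = 0.0
    for i in range(n):
        if i == 0:
            mu, var = 0.0, cov_fn(locs[0], locs[0])
        else:
            # nearest m predecessors in the given ordering
            d = np.linalg.norm(locs[:i] - locs[i], axis=1)
            nb = np.argsort(d)[:m]
            K = np.array([[cov_fn(locs[a], locs[b]) for b in nb] for a in nb])
            k = np.array([cov_fn(locs[i], locs[b]) for b in nb])
            w = np.linalg.solve(K, k)          # kriging weights
            mu = w @ y[nb]                     # conditional mean
            var = cov_fn(locs[i], locs[i]) - w @ k  # conditional variance
        ll += norm.logpdf(y[i], loc=mu, scale=np.sqrt(var))
    return ll
```

Each term needs only an m-by-m solve, so the cost grows linearly in n instead of cubically, which is what makes the approximation fast.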
Optimally Robust Estimation
R infrastructure for optimally robust estimation in general smoothly
parameterized models using S4 classes and methods, as described in
Kohl, M., Ruckdeschel, P., and Rieder, H. (2010),
Optimally Robust Influence Curves and Estimators for Location and Scale
Functions for the determination of optimally robust influence curves and estimators in case of normal location and/or scale (see Chapter 8 in Kohl (2005) <https://epub.uni-bayreuth.de/839/2/DissMKohl.pdf>).
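The package computes optimally robust influence curves and estimators; as a simpler, related illustration, the Python sketch below implements a classical Huber M-estimator of normal location via iteratively reweighted means. The function name, tuning constant, and defaults are the example's own, not the package's interface.

```python
import numpy as np

def huber_location(x, k=1.345, tol=1e-8, max_iter=100):
    """Huber M-estimator of location via iteratively reweighted
    means, with scale fixed at the normalized MAD. Observations
    further than k scale units from the current estimate are
    down-weighted, bounding the influence of outliers."""
    mu = np.median(x)
    s = 1.4826 * np.median(np.abs(x - mu))  # MAD, consistent at the normal
    for _ in range(max_iter):
        r = (x - mu) / s
        w = np.where(np.abs(r) <= k, 1.0, k / np.abs(r))  # Huber weights
        mu_new = np.sum(w * x) / np.sum(w)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu
```

The bounded weights are what make the influence curve bounded, which is the defining property of robust estimators of this type.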
Interactive 'tourr' Using 'python'
Extends the functionality of the 'tourr' package with an interactive
graphical user interface. The interactivity allows users to
refine their 'tourr' results by manual intervention,
which allows for the integration of expert knowledge and aids the
interpretation of results. For more information on 'tourr' see
Wickham et al. (2011)
Metabolomics Data Analysis Functions
A collection of functions for processing and analyzing metabolite data.
The namesake function mrbin() converts 1D
or 2D Nuclear Magnetic Resonance data into a matrix of values suitable for further data analysis and
performs basic processing steps in a reproducible way. Negative values, a
common issue in such data, can be replaced by positive values (
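The binning step that turns a spectrum into a row of a data matrix can be illustrated with a toy Python sketch: equal-width binning of a 1D spectrum by summed intensity. The function name and default bin width are the example's own, not mrbin()'s interface.

```python
import numpy as np

def bin_spectrum(ppm, intensity, bin_width=0.01):
    """Equal-width binning of a 1D spectrum: sum the intensities
    falling into each chemical-shift bin, producing a fixed-length
    vector suitable as one row of a samples-by-bins data matrix."""
    edges = np.arange(ppm.min(), ppm.max() + bin_width, bin_width)
    idx = np.digitize(ppm, edges) - 1
    binned = np.zeros(len(edges) - 1)
    # accumulate intensities into their bins (clip keeps the right
    # edge point inside the last bin)
    np.add.at(binned, np.clip(idx, 0, len(binned) - 1), intensity)
    return binned
```

Binning makes spectra from different samples comparable on a common grid, which is what enables the downstream matrix-based analysis.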
Statistical Inference of Vine Copulas
Provides tools for the statistical analysis of regular vine copula
models; see Aas et al. (2009)
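Pair copulas are the building blocks of regular vine models. As a hedged illustration (not the package's R code), the Python sketch below samples from a bivariate Gaussian copula: draw correlated normals, then map each margin to (0, 1) through the standard normal CDF.

```python
import numpy as np
from scipy.stats import norm

def gaussian_copula_sample(rho, n, seed=0):
    """Sample n pairs from a bivariate Gaussian copula with
    correlation rho: correlated standard normals are pushed through
    the normal CDF, giving uniform margins with the dependence
    structure preserved."""
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
    return norm.cdf(z)  # uniform margins, Gaussian dependence
```

A vine decomposes a d-dimensional density into a cascade of such bivariate copulas, which is what makes high-dimensional dependence modelling tractable.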
Inferential Statistics
Computation of various confidence intervals (Altman et al. (2000), ISBN:978-0-727-91375-3; Hedderich and Sachs (2018), ISBN:978-3-662-56657-2) including bootstrapped versions (Davison and Hinkley (1997), ISBN:978-0-511-80284-3) as well as Hsu (Hedderich and Sachs (2018), ISBN:978-3-662-56657-2), permutation (Janssen (1997),
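The bootstrapped intervals mentioned above follow a simple recipe; the Python sketch below shows a percentile bootstrap confidence interval in the spirit of Davison and Hinkley (1997). The function name and defaults are the example's own.

```python
import numpy as np

def boot_ci(x, stat=np.mean, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval: resample the data
    with replacement, recompute the statistic on each resample, and
    take the alpha/2 and 1 - alpha/2 quantiles of the resulting
    bootstrap distribution."""
    rng = np.random.default_rng(seed)
    boots = np.array([stat(rng.choice(x, size=len(x), replace=True))
                      for _ in range(n_boot)])
    lo, hi = np.quantile(boots, [alpha / 2, 1 - alpha / 2])
    return lo, hi
```

The appeal of the percentile method is that it needs no analytic standard error, only the ability to recompute the statistic.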
Descriptive Statistics
Computation of standardized interquartile range (IQR), Huber-type skipped mean (Hampel (1985),
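The standardized IQR is the interquartile range rescaled so that it consistently estimates the standard deviation at the normal distribution. A minimal Python sketch of this idea (the function name is the example's own):

```python
import numpy as np
from scipy.stats import norm

def sd_iqr(x):
    """Standardized IQR: the interquartile range divided by
    2 * qnorm(0.75) (about 1.349), which makes it a consistent
    estimator of the standard deviation at the normal, while
    remaining robust to outliers in the tails."""
    q1, q3 = np.quantile(x, [0.25, 0.75])
    return (q3 - q1) / (2 * norm.ppf(0.75))
```

Unlike the sample standard deviation, this estimator ignores the most extreme quarter of the data on each side, so a few gross outliers barely move it.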
Tools for Statistical Disclosure Control in Research Data Centers
Tools for researchers to explicitly show that their results comply with the rules for statistical disclosure control imposed by research data centers. These tools help in checking descriptive statistics and models and in calculating extreme values that are not individual data. Also included is a simple function to create log files. The methods used here are described in the "Guidelines for the checking of output based on microdata research" by Bond, Brandt, and de Wolf (2015) <https://ec.europa.eu/eurostat/cros/system/files/dwb_standalone-document_output-checking-guidelines.pdf>.
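A typical output check of this kind is a minimum cell count rule for frequency tables. The Python sketch below illustrates the idea only; the function name and the threshold of 10 are the example's own assumptions, as actual thresholds are set by each research data center.

```python
def check_min_counts(table, min_count=10):
    """Minimal sketch of an output check: every non-empty cell of a
    frequency table must be based on at least min_count observations.
    Returns the offending cells so the researcher can suppress or
    aggregate them before release (empty cells are not disclosive)."""
    return {cell: n for cell, n in table.items() if 0 < n < min_count}
```

For example, a table with counts {'A': 25, 'B': 3, 'C': 0} would flag only cell 'B' under a threshold of 10.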