Found 1134 packages in 0.10 seconds
Distance Sampling Detection Function and Abundance Estimation
A simple way of fitting detection functions to distance sampling
data for both line and point transects. Adjustment term selection, left and
right truncation, as well as monotonicity constraints and binning, are
supported. Abundance and density estimates can also be calculated (via a
Horvitz-Thompson-like estimator) if survey area information is provided. See
Miller et al. (2019)
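A minimal sketch of fitting a detection function with the package's main function, ds(), assuming the 'Distance' package is installed; the example dataset and argument names follow its documentation:

```r
# A minimal sketch, assuming the 'Distance' package is installed and that
# the example dataset and argument names match current releases.
library(Distance)
data(book.tee.data)                         # example line-transect dataset
df <- book.tee.data$book.tee.dataframe
# Fit a half-normal detection function with cosine adjustments,
# right-truncating perpendicular distances at 4
fit <- ds(df, key = "hn", adjustment = "cos", truncation = 4)
summary(fit)                                # detection function and abundance summary
```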
Collection of Methods to Detect Dichotomous, Polytomous, and Continuous Differential Item Functioning (DIF)
Methods to detect differential item functioning (DIF) in dichotomous, polytomous,
and continuous items, using both classical and modern approaches. These include
Mantel-Haenszel procedures, logistic regression (including ordinal models), and
regularization-based methods such as LASSO. Uniform and non-uniform DIF effects
can be detected, and some methods support multiple focal groups. The package
also provides tools for anchor purification, rest score matching, effect size
estimation, and DIF simulation. See Magis, Beland, Tuerlinckx, and De Boeck
(2010, Behavior Research Methods, 42, 847–862).
Tidy Anomaly Detection
The 'anomalize' package enables a "tidy" workflow for detecting anomalies in data. The main functions are time_decompose(), anomalize(), and time_recompose(). Combined, they make it simple to decompose time series, detect anomalies, and create bands separating the "normal" data from the anomalous data at scale (i.e. for multiple time series). Time series decomposition, which removes trend and seasonal components, is performed by the time_decompose() function; available methods include seasonal decomposition of time series by Loess ("stl") and seasonal decomposition by piecewise medians ("twitter"). The anomalize() function implements two methods for detecting anomalies in the residuals: the interquartile range ("iqr") and the generalized extreme studentized deviate ("gesd"). These methods are based on those used in the 'forecast' package and the Twitter 'AnomalyDetection' package. Refer to the associated functions for specific references for these methods.
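The three-step workflow described above can be sketched as a pipeline, assuming 'anomalize' and 'dplyr' are installed; tidyverse_cran_downloads is an example dataset shipped with 'anomalize':

```r
# A minimal sketch, assuming 'anomalize' and 'dplyr' are installed.
library(dplyr)
library(anomalize)
tidyverse_cran_downloads %>%
  time_decompose(count, method = "stl") %>%   # remove trend and seasonality
  anomalize(remainder, method = "iqr") %>%    # flag anomalous residuals
  time_recompose()                            # rebuild bands on the original scale
```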
Multivariate Outlier Detection and Imputation for Incomplete Survey Data
Algorithms for multivariate outlier detection when missing values
occur. Algorithms are based on Mahalanobis distance or data depth.
Imputation is based on the multivariate normal model or uses nearest
neighbour donors. The algorithms take sample designs, in particular
weighting, into account. The methods are described in Bill and Hulliger
(2016)
Explainable Outlier Detection Through Decision Tree Conditioning
Outlier detection method that flags suspicious values within observations,
contrasting them against the normal values in a user-readable format and,
where possible, describing the conditions within the data that make a given
outlier rarer. The full procedure is described in Cortes (2020)
Open Source OCR Engine
Bindings to 'Tesseract': a powerful optical character recognition (OCR) engine that supports over 100 languages. The engine is highly configurable in order to tune the detection algorithms and obtain the best possible results.
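A minimal OCR sketch, assuming the 'tesseract' package is installed; "page.png" is a hypothetical image file you supply:

```r
# A minimal sketch, assuming the 'tesseract' package is installed and that
# "page.png" is a hypothetical image file you supply.
library(tesseract)
eng <- tesseract("eng")                  # load the English language data
text <- ocr("page.png", engine = eng)    # plain-text OCR output
cat(text)
# ocr_data() additionally returns word-level bounding boxes and confidences
```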
Sequential and Batch Change Detection Using Parametric and Nonparametric Methods
Sequential and batch change detection for univariate data streams, using the change point model framework. Functions are provided to allow nonparametric distribution-free change detection in the mean, variance, or general distribution of a given sequence of observations. Parametric change detection methods are also provided for Gaussian, Bernoulli and Exponential sequences. Both the batch (Phase I) and sequential (Phase II) settings are supported, and the sequences may contain either a single or multiple change points. A full description of this package is available in Ross, G.J. (2015) - "Parametric and nonparametric sequential change detection in R" available at <https://www.jstatsoft.org/article/view/v066i03>.
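The sequential (Phase II) setting described above can be sketched as follows, assuming the 'cpm' package is installed; function and argument names follow its documented interface:

```r
# A minimal sketch, assuming the 'cpm' package is installed; result fields
# may differ slightly across versions.
library(cpm)
set.seed(1)
x <- c(rnorm(100, mean = 0), rnorm(100, mean = 2))  # mean shift at t = 100
# Sequential detection of a single change point using the nonparametric
# Mann-Whitney statistic, with average run length to false alarm ARL0 = 500
res <- detectChangePoint(x, cpmType = "Mann-Whitney", ARL0 = 500)
res$changeDetected   # whether a change was flagged
res$detectionTime    # observation at which the change was detected
```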
Analyses of Text using Transformers Models from HuggingFace, Natural Language Processing and Machine Learning
Link R with Transformers from Hugging Face to transform text variables into word embeddings; the word embeddings are then used to statistically test mean differences between sets of texts, compute semantic similarity scores between texts, predict numerical variables, and visualize statistically significant words along various dimensions. For more information see <https://www.r-text.org>.
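A minimal sketch of embedding texts and scoring their similarity, assuming the 'text' package and its Python backend are installed; textEmbed() and textSimilarity() are its documented helpers, though the exact output structure may vary across versions:

```r
# A minimal sketch, assuming the 'text' package (and its Python backend)
# is installed; output structure may differ across versions.
library(text)
emb1 <- textEmbed("The cake was delicious.")
emb2 <- textEmbed("The dessert tasted great.")
# Cosine similarity between the two text embeddings
textSimilarity(emb1$texts$texts, emb2$texts$texts)
```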
Interactive Grammar of Graphics
An implementation of an interactive grammar of graphics, taking the best parts of 'ggplot2', combining them with the reactive framework of 'shiny' and drawing web graphics using 'vega'.
Google's Compact Language Detector 2
Bindings to Google's C++ library Compact Language Detector 2 (see <https://github.com/cld2owners/cld2#readme> for more information). Probabilistically detects over 80 languages in plain text or HTML. For mixed-language input it returns the top three detected languages and their approximate proportion of the total classified text bytes (e.g. 80% English and 20% French out of 1000 bytes). There is also a 'cld3' package on CRAN which uses a neural network model instead.
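The single-language and mixed-language behaviours described above can be sketched as follows, assuming the 'cld2' package is installed:

```r
# A minimal sketch, assuming the 'cld2' package is installed.
library(cld2)
# Single best guess as an ISO language code
detect_language("To be, or not to be, that is the question.")
# Top detected languages and their approximate byte proportions
detect_language_mixed("Le chat est noir. The cat is black. Il pleut.")
```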