Collection of Methods to Detect Dichotomous and Polytomous Differential Item Functioning (DIF)
Methods to detect differential item functioning (DIF) in dichotomous
and polytomous items, using both classical and modern approaches. These include
Mantel-Haenszel procedures, logistic regression (including ordinal models), and
regularization-based methods such as the LASSO. Uniform and non-uniform DIF effects
can be detected, and some methods support multiple focal groups. The package
also provides tools for anchor purification, rest score matching, effect size
estimation, and DIF simulation. See Magis, Beland, Tuerlinckx, and De Boeck
(2010, Behavior Research Methods, 42, 847–862).
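The classical Mantel-Haenszel approach mentioned above can be illustrated with base R alone: stratify examinees by a matching score, cross-tabulate group membership against item correctness within each stratum, and test whether the common odds ratio differs from 1 (uniform DIF). A minimal sketch with made-up counts (not taken from any package):

```r
## Hypothetical counts: 2 (group) x 2 (correct) x 3 (ability stratum).
## A common odds ratio far from 1 would suggest uniform DIF on this item.
item <- array(
  c(35, 25, 15, 25,   # low-ability stratum
    30, 20, 20, 30,   # middle stratum
    25, 15, 25, 35),  # high-ability stratum
  dim = c(2, 2, 3),
  dimnames = list(group   = c("reference", "focal"),
                  correct = c("yes", "no"),
                  stratum = c("low", "mid", "high"))
)
res <- mantelhaen.test(item)  # base R (stats), Mantel-Haenszel test
res$estimate                  # common odds ratio across strata
res$p.value
```

Dedicated DIF packages automate the stratification, purification, and effect-size steps that this sketch leaves to the user.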
'Arrow' Database Connectivity ('ADBC') Driver Manager
Provides a developer-facing interface to 'Arrow' Database Connectivity ('ADBC') for the purposes of driver development, driver testing, and building high-level database interfaces for users. 'ADBC' < https://arrow.apache.org/adbc/> is an API standard for database access libraries that uses 'Arrow' for result sets and query parameters.
Tidy Anomaly Detection
The 'anomalize' package enables a "tidy" workflow for detecting anomalies in data. The main functions are time_decompose(), anomalize(), and time_recompose(). When combined, it's quite simple to decompose time series, detect anomalies, and create bands separating the "normal" data from the anomalous data at scale (i.e. for multiple time series). Time series decomposition is used to remove trend and seasonal components via the time_decompose() function, and methods include seasonal decomposition of time series by Loess ("stl") and seasonal decomposition by piecewise medians ("twitter"). The anomalize() function implements two methods for anomaly detection of residuals: the interquartile range ("iqr") and the generalized extreme studentized deviate ("gesd"). These methods are based on those used in the 'forecast' package and the Twitter 'AnomalyDetection' package. Refer to the associated functions for specific references for these methods.
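The "iqr" idea underlying anomalize() can be sketched in base R (this is an illustration of the principle, not the package's API): values of the decomposition remainder far outside a multiple of the interquartile range are flagged as anomalous.

```r
## Base-R sketch of IQR-based anomaly flagging on a residual series.
## anomalize() applies this to decomposition remainders; here we apply
## it directly to a simulated series with one injected outlier.
set.seed(1)
x <- c(rnorm(100), 8)                 # the 101st point is the anomaly
q <- quantile(x, c(0.25, 0.75))
iqr <- q[2] - q[1]
limits <- c(q[1] - 3 * iqr, q[2] + 3 * iqr)  # wide fence; the multiplier
anomaly <- x < limits[1] | x > limits[2]     # plays the role of alpha
which(anomaly)
```

The package wraps this per-series logic in a grouped, tidy pipeline so many time series can be screened at once.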
Detect and Check for Separation and Infinite Maximum Likelihood Estimates
Provides pre-fit and post-fit methods for detecting separation and infinite maximum likelihood estimates in generalized linear models with categorical responses. The pre-fit methods apply to binomial-response generalized linear models such as logit, probit and cloglog regression, and can be directly supplied as fitting methods to the glm() function. They solve the linear programming problems for the detection of separation developed in Konis (2007, <https://ora.ox.ac.uk/objects/uuid:8f9ee0d0-d78e-4101-9ab4-f9cbceed2a2a>) using 'ROI' <https://cran.r-project.org/package=ROI> or 'lpSolveAPI' <https://cran.r-project.org/package=lpSolveAPI>. The post-fit methods apply to models with categorical responses, including binomial-response generalized linear models and multinomial-response models, such as baseline category logits and adjacent category logits models; for example, the models implemented in the 'brglm2' <https://cran.r-project.org/package=brglm2> package. The post-fit methods successively refit the model with an increasing number of iteratively reweighted least squares iterations, and monitor the ratio of the estimated standard error for each parameter to what it was in the first iteration. According to the results in Lesaffre & Albert (1989, <https://www.jstor.org/stable/2345845>), divergence of those ratios indicates data separation.
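The symptom these methods detect can be reproduced in base R: under perfect separation the binomial MLE is infinite, so glm() halts at its iteration limit with huge, unstable estimates and enormous standard errors. A minimal sketch (the toy data are invented; the dedicated pre-fit method would flag this before fitting):

```r
## Perfectly separated data: y is determined by sign(x), so the
## logistic MLE for the slope is +infinity.
x <- c(-2, -1, 1, 2)
y <- c( 0,  0, 1, 1)
fit <- suppressWarnings(glm(y ~ x, family = binomial))
coef(fit)["x"]                                 # very large slope estimate
summary(fit)$coefficients["x", "Std. Error"]   # huge standard error
```

The pre-fit approach decides separation exactly via linear programming rather than inferring it from diverging estimates, which is why it can be supplied as a fitting method to glm() and run before any IRLS iterations.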
Fast and Robust Hierarchical Clustering with Noise Points Detection
A retake on the Genie algorithm (Gagolewski, 2021).
Multivariate Outlier Detection and Imputation for Incomplete Survey Data
Algorithms for multivariate outlier detection when missing values
occur. Algorithms are based on Mahalanobis distance or data depth.
Imputation is based on the multivariate normal model or uses nearest
neighbour donors. The algorithms take sample designs, in particular
weighting, into account. The methods are described in Bill and Hulliger
(2016).
Explainable Outlier Detection Through Decision Tree Conditioning
Outlier detection method that flags suspicious values within observations,
contrasting them against the normal values in a user-readable format, potentially
describing conditions within the data that make a given outlier more rare.
The full procedure is described in Cortes (2020).
Nearest Neighbor Observation Imputation and Evaluation Tools
Performs nearest neighbor-based imputation using one or more alternative approaches to processing multivariate data. These include methods based on canonical correlation analysis, canonical correspondence analysis, and a multivariate adaptation of the random forest classification and regression techniques of Leo Breiman and Adele Cutler. Additional methods are also offered. The package includes functions for comparing the results from running alternative techniques, detecting imputation targets that are notably distant from reference observations, detecting and correcting for bias, bootstrapping and building ensemble imputations, and mapping results.
Distance Sampling Detection Function and Abundance Estimation
A simple way of fitting detection functions to distance sampling
data for both line and point transects. Adjustment term selection, left and
right truncation as well as monotonicity constraints and binning are
supported. Abundance and density estimates can also be calculated (via a
Horvitz-Thompson-like estimator) if survey area information is provided. See
Miller et al. (2019).
Open Source OCR Engine
Bindings to 'Tesseract': a powerful optical character recognition (OCR) engine that supports over 100 languages. The engine is highly configurable in order to tune the detection algorithms and obtain the best possible results.