
Found 121 packages in 0.01 seconds

PVplr — by Roger French, 2 years ago

Performance Loss Rate Analysis Pipeline

The pipeline contained in this package provides tools used in the Solar Durability and Lifetime Extension Center (SDLE) for the analysis of Performance Loss Rates (PLR) in real-world photovoltaic systems. Functions included allow for data cleaning, feature correction, power predictive modeling, PLR determination, and uncertainty bootstrapping through various methods. The vignette "Pipeline Walkthrough" gives an explicit run-through of typical package usage. This material is based upon work supported by the U.S. Department of Energy's Office of Energy Efficiency and Renewable Energy (EERE) under Solar Energy Technologies Office (SETO) Agreement Number DE-EE-0008172. This work made use of the High Performance Computing Resource in the Core Facility for Advanced Research Computing at Case Western Reserve University.
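
As a purely hypothetical outline of how such a pipeline might be driven from R (the function and argument names below are assumptions for illustration, not the package's documented API; follow the "Pipeline Walkthrough" vignette for the real interface):

    library(PVplr)

    # Hypothetical pipeline sketch -- names are placeholders, see the vignette
    # raw    <- read.csv("pv_timeseries.csv")         # system power + weather data
    # clean  <- plr_cleaning(raw, ...)                # data cleaning / filtering
    # model  <- plr_xbx_model(clean, ...)             # power predictive modeling
    # result <- plr_bootstrap_output(clean, ...)      # PLR with bootstrapped uncertainty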

klovan — by Roger H French, a year ago

Geostatistics Methods and Klovan Data

A comprehensive set of geostatistical, visual, and analytical methods, together with an expanded version of J. E. Klovan's acclaimed mining dataset, is included in 'klovan'. This makes the package an excellent learning resource for Principal Component Analysis (PCA), Factor Analysis (FA), kriging, and other geostatistical techniques. Originally published in the 1976 book 'Geological Factor Analysis', the included mining dataset was assembled by Professor J. E. Klovan of the University of Calgary. As one of the first applications of FA in the geosciences, the dataset has significant historical importance, and as a well-regarded, published dataset it is an excellent resource for demonstrating the capabilities of PCA, FA, kriging, and other geostatistical techniques in the geosciences. Note that some methods require the 'RGeostats' package; please refer to the README or Additional_repositories for installation instructions. This material is based upon research in the Materials Data Science for Stockpile Stewardship Center of Excellence (MDS3-COE), and supported by the Department of Energy's National Nuclear Security Administration under Award Number DE-NA0004104.
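
A minimal sketch of one of these workflows, using base-R PCA on the bundled mining data; the dataset object name "klovan_data" is a placeholder assumption, so check data(package = "klovan") for the actual name:

    library(klovan)

    # List bundled datasets; "klovan_data" below is an assumed placeholder name
    data(package = "klovan")

    dat <- get(data("klovan_data", package = "klovan"))
    num <- dat[sapply(dat, is.numeric)]

    # Principal component analysis on the scaled numeric measurement columns
    pc <- prcomp(num, scale. = TRUE)
    summary(pc)   # proportion of variance explained per component
    biplot(pc)    # scores and loadings in one display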

mmrm — by Daniel Sabanes Bove, 6 months ago

Mixed Models for Repeated Measures

Mixed models for repeated measures (MMRM) are a popular choice for analyzing longitudinal continuous outcomes in randomized clinical trials and beyond; see Cnaan, Laird and Slasor (1997) for a tutorial and Mallinckrodt, Lane, Schnell, Peng and Mancuso (2008) for a review. This package implements MMRM based on the marginal linear model without random effects, using Template Model Builder ('TMB') for fast and robust model fitting. Users can specify a variety of covariance matrices, weight observations, fit models with restricted or standard maximum likelihood inference, perform hypothesis testing with Satterthwaite or Kenward-Roger adjustment, and extract least-squares means estimates via 'emmeans'.
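
For orientation, a minimal fit on the 'fev_data' example dataset bundled with the package, with an unstructured covariance per subject and the default REML and Satterthwaite inference; this follows the package documentation but is only a sketch, not a complete analysis:

    library(mmrm)
    library(emmeans)

    # MMRM with unstructured covariance across visits within subject
    fit <- mmrm(
      FEV1 ~ RACE + SEX + ARMCD * AVISIT + us(AVISIT | USUBJID),
      data = fev_data
    )

    summary(fit)                    # coefficients with Satterthwaite df

    # Least-squares means by treatment arm and visit via 'emmeans'
    emmeans(fit, ~ ARMCD | AVISIT)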

gpclib — by Roger D. Peng, 5 years ago

General Polygon Clipping Library for R

General polygon clipping routines for R based on Alan Murta's C library.
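
A hedged sketch of typical use, assuming the 'gpc.poly' class and its coercion from a two-column vertex matrix as described in the package documentation:

    library(gpclib)

    # Two overlapping squares as gpc.poly objects (columns are x and y vertices)
    p1 <- as(cbind(c(0, 4, 4, 0), c(0, 0, 4, 4)), "gpc.poly")
    p2 <- as(cbind(c(2, 6, 6, 2), c(2, 2, 6, 6)), "gpc.poly")

    # General polygon clipping: intersection, union and difference
    intersect(p1, p2)
    union(p1, p2)
    setdiff(p1, p2)

    area.poly(intersect(p1, p2))   # area of the clipped region (here 4)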

BMconcor — by Fabio Ashtar Telarico, a year ago

CONCOR for Structural- And Regular-Equivalence Blockmodeling

The four functions svdcp() ('cp' for column partitioned), svdbip() and svdbip2() ('bip' for bipartitioned), and svdbips() ('s' for a simultaneous optimization of a set of 'r' solutions) correspond to a notion of singular value decomposition (SVD) by blocks, in which each block depends on relative subspaces rather than on two whole spaces as in the usual SVD. The other functions, based on this notion, operate on two column-partitioned data matrices x and y defining two sets of subsets x_i and y_j of variables, and estimate a link between x_i and y_j for each pair (x_i, y_j) relative to the links associated with all the other pairs. These methods were first presented in Lafosse, R. & Hanafi, M. (1997) <https://eudml.org/doc/106424> and Hanafi, M. & Lafosse, R. (2001) <https://eudml.org/doc/106494>.
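
A minimal sketch of the column-partitioned case; the argument names and order are assumptions made for illustration, so consult ?svdcp for the actual signature:

    library(BMconcor)

    set.seed(1)
    x <- matrix(rnorm(10 * 6), nrow = 10)   # 10 observations, 6 variables

    # SVD by blocks of a column-partitioned matrix: two blocks of 3 columns,
    # requesting 2 solutions (argument names assumed; see ?svdcp)
    res <- svdcp(x, H = c(3, 3), r = 2)
    str(res)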

BANAM — by Joris Mulder, 4 months ago

Bayesian Analysis of the Network Autocorrelation Model

The network autocorrelation model (NAM) can be used for studying the degree of social influence regarding an outcome variable based on one or more known networks. The degree of social influence is quantified via the network autocorrelation parameters. In the case of a single network, the Bayesian methods of Dittrich, Leenders, and Mulder (2017) and Dittrich, Leenders, and Mulder (2019) are implemented using a normal, flat, or independence Jeffreys prior for the network autocorrelation. In the case of multiple networks, the Bayesian methods of Dittrich, Leenders, and Mulder (2020) are implemented using a multivariate normal prior for the network autocorrelation parameters. Flat priors are used for estimating the coefficients. For Bayesian testing of equality- and order-constrained hypotheses, the default Bayes factor of Gu, Mulder, and Hoijtink (2018) is used, with the posterior mean and posterior covariance matrix of the NAM parameters under flat priors as input.
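
A hedged single-network sketch on simulated data; the fitting function is assumed here to be banam() taking an outcome vector, covariate matrix, and weight matrix, which should be checked against the reference manual:

    library(BANAM)

    set.seed(1)
    n <- 30
    W <- matrix(rbinom(n * n, 1, 0.2), n, n); diag(W) <- 0
    W <- W / pmax(rowSums(W), 1)     # row-normalized network weight matrix
    X <- cbind(1, rnorm(n))          # intercept plus one covariate
    y <- rnorm(n)                    # outcome variable

    # Bayesian NAM with a single network (function/argument names assumed)
    fit <- banam(y = y, X = X, W = W)
    fit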

rexpokit — by Nicholas J. Matzke, a year ago

R Wrappers for EXPOKIT; Other Matrix Functions

Wraps some of the matrix exponentiation utilities from EXPOKIT (<http://www.maths.uq.edu.au/expokit/>), a FORTRAN library that is widely recommended for matrix exponentiation (Sidje RB, 1998. "Expokit: A Software Package for Computing Matrix Exponentials." ACM Trans. Math. Softw. 24(1): 130-156). EXPOKIT includes functions for exponentiating both small, dense matrices and large, sparse matrices (in sparse matrices, most of the cells have value 0). Rapid matrix exponentiation is useful in phylogenetics when there is a large number of states (as when inferring the history of transitions between the possible geographic ranges of a species), but is likely useful in other settings as well. NOTE: In case FORTRAN checks temporarily get rexpokit archived on CRAN, archived binaries are available at GitHub in nmatzke/Matzke_R_binaries (binaries install without compilation of source code).
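
A small hedged example: exponentiating a 2x2 rate matrix with the dense-matrix Padé routine. The wrapper name expokit_dgpadm_Qmat() follows the package's naming, but verify it against the reference manual:

    library(rexpokit)

    # A simple 2-state transition rate (Q) matrix, rows summing to zero
    Q <- matrix(c(-1,  1,
                   2, -2), nrow = 2, byrow = TRUE)

    # exp(Q * t) for time/branch length t = 0.5 via EXPOKIT's dense
    # Pade routine (dgpadm); rows of the resulting matrix sum to 1
    P <- expokit_dgpadm_Qmat(Qmat = Q, t = 0.5)
    P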

jointMeanCov — by Michael Hornstein, 6 years ago

Joint Mean and Covariance Estimation for Matrix-Variate Data

Jointly estimates two-group means and covariances for matrix-variate data and calculates test statistics. This package implements the algorithms defined in Hornstein, Fan, Shedden, and Zhou (2018).

rchemo — by Marion Brandolini-Bunlon, 7 months ago

Dimension Reduction, Regression and Discrimination for Chemometrics

Data exploration and prediction with a focus on high-dimensional data and chemometrics. The package was initially designed around partial least squares regression and discrimination models and their variants, in particular locally weighted PLS models (LWPLS). It has since been expanded to many other methods for analyzing high-dimensional data. The name 'rchemo' reflects the package's orientation toward chemometrics, but most of the provided methods are fully generic to other domains. Functions such as transform(), predict(), coef() and summary() are available. Tuning the predictive models is facilitated by the generic functions gridscore() (validation dataset) and gridcv() (cross-validation). Faster versions are also available for models based on latent variables (LVs) (gridscorelv() and gridcvlv()) and ridge regularization (gridscorelb() and gridcvlb()).
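
A short hedged sketch on simulated data: fitting a PLS regression and predicting new observations. The fitting function is assumed here to be plskern(); the generic coef() and predict() calls follow the description above:

    library(rchemo)

    set.seed(1)
    n <- 50; p <- 20
    Xtrain <- matrix(rnorm(n * p), n, p)
    ytrain <- rnorm(n)
    Xtest  <- matrix(rnorm(10 * p), 10, p)

    # PLS regression with 5 latent variables (fitting function name assumed;
    # tune nlv with gridscorelv() or gridcvlv())
    fm <- plskern(Xtrain, ytrain, nlv = 5)

    coef(fm)              # regression coefficients
    predict(fm, Xtest)    # predictions for new observations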

rtiktoken — by David Zimmermann-Kollenda, 5 months ago

A Byte-Pair-Encoding (BPE) Tokenizer for OpenAI's Large Language Models

A thin wrapper around the tiktoken-rs crate that encodes text into Byte-Pair-Encoding (BPE) tokens and decodes tokens back to text. This is useful for understanding how Large Language Models (LLMs) perceive text.
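
A hedged round-trip sketch; the function names get_tokens(), get_token_count() and decode_tokens() are assumed from the package README and should be verified against the current documentation:

    library(rtiktoken)

    text <- "Large language models read tokens, not characters."

    # Encode to BPE token ids for a given model's tokenizer
    # (function names assumed from the README; verify locally)
    tokens <- get_tokens(text, model = "gpt-4o")
    tokens

    # Count tokens without keeping the ids, e.g. for prompt budgeting
    get_token_count(text, model = "gpt-4o")

    # Round-trip the token ids back to text
    decode_tokens(tokens, model = "gpt-4o")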