fastrerandomize — by Connor Jerzak, 5 months ago

Hardware-Accelerated Rerandomization for Improved Balance

Provides hardware-accelerated tools for performing rerandomization and randomization testing in experimental research. Using a 'JAX' backend, the package enables exact rerandomization inference even for large experiments with hundreds of billions of possible randomizations. Key functionalities include generating pools of acceptable rerandomizations based on covariate balance, conducting exact randomization tests, and performing pre-analysis evaluations to determine optimal rerandomization acceptance thresholds. The package supports several compute backends, including 'CPU', 'CUDA', and 'METAL', making it versatile across accelerated computing environments. This allows researchers to efficiently implement stringent rerandomization designs and conduct valid inference even with large sample sizes. The package is partly based on Jerzak and Goldstein (2023).
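
For intuition, here is a minimal base-R sketch of the rerandomization step the package accelerates: draw candidate assignments, score covariate balance with a Mahalanobis-type statistic, and keep only the best-balanced draws. This illustrates the idea only; it uses none of the package's actual functions, and the 5% acceptance threshold is an arbitrary choice for the example.

    # Simulate covariates and candidate treatment assignments
    set.seed(1)
    n <- 100
    X <- matrix(rnorm(n * 3), n, 3)   # three baseline covariates

    # Mahalanobis-type imbalance between treated and control means
    imbalance <- function(w, covars) {
      d <- colMeans(covars[w == 1, , drop = FALSE]) -
           colMeans(covars[w == 0, , drop = FALSE])
      as.numeric(t(d) %*% solve(cov(covars)) %*% d)
    }

    candidates <- replicate(10000, sample(rep(0:1, each = n / 2)))
    scores     <- apply(candidates, 2, function(w) imbalance(w, X))

    # Keep the best-balanced 5% as the pool of acceptable randomizations
    accepted <- candidates[, scores <= quantile(scores, 0.05)]
    ncol(accepted)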

geodl — by Aaron Maxwell, 5 months ago

Geospatial Semantic Segmentation with Torch and Terra

Provides tools for semantic segmentation of geospatial data using convolutional neural network-based deep learning. Utility functions allow for creating masks, image chips, data frames listing image chips in a directory, and DataSets for use within DataLoaders; additional functions serve as checks during data preparation and training. A UNet architecture can be defined with four blocks in the encoder, a bottleneck block, and four blocks in the decoder. The UNet accepts a variable number of input channels, and the user can set the number of feature maps produced in each encoder and decoder block and in the bottleneck. Users can also choose to (1) replace all rectified linear unit (ReLU) activation functions with leaky ReLU or swish, (2) implement attention gates along the skip connections, (3) implement squeeze-and-excitation modules within the encoder blocks, (4) add residual connections within all blocks, (5) replace the bottleneck with a modified atrous spatial pyramid pooling (ASPP) module, and/or (6) implement deep supervision using predictions generated at each stage of the decoder.

A unified focal loss framework is implemented after Yeung et al. (2022). Assessment metrics, including F1-score, recall, and precision, are implemented using the 'luz' package. Trained models can generate predictions for spatial data directly, without the need to create chips from larger spatial extents, and functions are available for performing accuracy assessment. The package relies on 'torch' for deep learning, which does not require the installation of a 'Python' environment; raster geospatial data are handled with 'terra'. Models can be trained using a Compute Unified Device Architecture (CUDA)-enabled graphics processing unit (GPU); however, multi-GPU training is not supported by 'torch' in 'R'.
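
To make the activation-swap option concrete, below is a small 'torch' sketch (written for this listing, not taken from geodl's source) of one encoder block in which ReLU can be replaced by leaky ReLU; geodl's actual constructors and argument names may differ.

    library(torch)

    # One UNet encoder block; `act` switches ReLU for leaky ReLU
    encoder_block <- nn_module(
      "encoder_block",
      initialize = function(in_ch, out_ch, act = c("relu", "leaky_relu")) {
        act <- match.arg(act)
        self$conv1 <- nn_conv2d(in_ch, out_ch, kernel_size = 3, padding = 1)
        self$conv2 <- nn_conv2d(out_ch, out_ch, kernel_size = 3, padding = 1)
        self$act   <- if (act == "relu") nn_relu() else nn_leaky_relu(0.01)
        self$pool  <- nn_max_pool2d(kernel_size = 2)
      },
      forward = function(x) {
        x <- self$act(self$conv1(x))
        x <- self$act(self$conv2(x))
        list(skip = x, down = self$pool(x))  # skip connection + downsampled map
      }
    )

    blk <- encoder_block(in_ch = 4, out_ch = 16, act = "leaky_relu")
    out <- blk(torch_randn(1, 4, 64, 64))    # e.g. a 4-band image chip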

rwig — by Fangzhou Xie, 24 days ago

Wasserstein Index Generation (WIG) Model

Efficient implementation of several optimal transport algorithms described in Fangzhou Xie (2025), and of the Wasserstein Index Generation (WIG) model introduced in Fangzhou Xie (2020).
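
As a rough illustration of the kind of computation involved (not rwig's API), here is a compact base-R Sinkhorn iteration for entropically regularized optimal transport between two discrete distributions:

    # Entropic OT via Sinkhorn scaling; lambda controls regularization
    sinkhorn <- function(a, b, C, lambda = 50, iters = 200) {
      K <- exp(-lambda * C)                  # Gibbs kernel from the cost matrix
      u <- rep(1, length(a))
      v <- rep(1, length(b))
      for (i in seq_len(iters)) {
        u <- a / as.vector(K %*% v)          # row scaling
        v <- b / as.vector(crossprod(K, u))  # column scaling
      }
      P <- diag(u) %*% K %*% diag(v)         # approximate transport plan
      sum(P * C)                             # transport cost
    }

    x <- seq(0, 1, length.out = 10)
    C <- abs(outer(x, x, "-"))               # 1-D ground cost
    a <- rep(1 / 10, 10)
    b <- (1:10) / sum(1:10)
    sinkhorn(a, b, C)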

dress.graph — by Eduar Castrillo Velilla, 10 days ago

DRESS - A Continuous Framework for Structural Graph Refinement

DRESS is a deterministic, parameter-free framework for continuous structural graph refinement. It iterates a nonlinear dynamical system on real-valued edge similarities and, once the iteration reaches a prescribed stopping criterion, produces a graph fingerprint as a sorted edge-value vector. The resulting fingerprint is self-contained, isomorphism-invariant by construction, reproducible across vertex labelings under the reference implementation, numerically robust in practice, and efficient to compute, with straightforward parallelization and distribution.
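
The description above implies a simple computational skeleton: iterate an update on a matrix of edge similarities until a stopping criterion holds, then sort the edge values. The base-R sketch below follows that skeleton only; the nonlinear update used here is a placeholder, since the listing does not specify DRESS's actual dynamics.

    # Skeleton of the refine-then-sort fingerprint construction
    fingerprint <- function(W, tol = 1e-8, max_iter = 1000) {
      for (i in seq_len(max_iter)) {
        W_new <- tanh(W %*% W)                 # placeholder nonlinear update
        W_new <- W_new / max(abs(W_new))       # keep values bounded
        converged <- max(abs(W_new - W)) < tol # prescribed stopping criterion
        W <- W_new
        if (converged) break
      }
      sort(W[upper.tri(W)])  # sorted edge values: invariant to vertex labeling
    }

    A <- matrix(runif(25), 5, 5)
    A <- (A + t(A)) / 2; diag(A) <- 0          # symmetric edge similarities
    fingerprint(A)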

rkeops — by Ghislain Durif, 2 years ago

Kernel Operations on GPU or CPU, with Autodiff, without Memory Overflows

The 'KeOps' library lets you compute generic reductions of very large arrays whose entries are given by a mathematical formula, with CPU and GPU computing support. It combines a tiled reduction scheme with an automatic differentiation engine. It is perfectly suited to the efficient computation of kernel dot products and the associated gradients, even when the full kernel matrix does not fit into GPU memory.
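
For example, a Gaussian kernel reduction can be written as a symbolic formula and evaluated without ever materializing the full kernel matrix. The snippet below is adapted from the pattern shown in the 'rkeops' documentation; exact argument syntax may vary across package versions.

    library(rkeops)

    # For each row x_i: sum over j of exp(-s * |x_i - y_j|^2) * b_j
    # (the trailing 0 means the output is indexed by i, reducing over j)
    op <- keops_kernel(
      formula = "Sum_Reduction(Exp(-s * SqNorm2(x - y)) * b, 0)",
      args    = c("x = Vi(3)",   # rows of X, dimension 3
                  "y = Vj(3)",   # rows of Y, dimension 3
                  "b = Vj(1)",   # weight attached to each row of Y
                  "s = Pm(1)")   # scalar bandwidth parameter
    )

    X <- matrix(runif(1000 * 3), 1000, 3)
    Y <- matrix(runif(1500 * 3), 1500, 3)
    b <- matrix(runif(1500), 1500, 1)
    res <- op(list(X, Y, b, 0.5))  # 1000 kernel dot products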

gcbd — by Dirk Eddelbuettel, 2 years ago

'GPU'/CPU Benchmarking in Debian-Based Systems

'GPU'/CPU benchmarking on Debian-package-based systems. This package benchmarks the performance of a few standard linear algebra operations (such as a matrix product and the QR, SVD, and LU decompositions) across a number of different 'BLAS' libraries as well as a 'GPU' implementation. To do so, it takes advantage of the ability to 'plug and play' different 'BLAS' implementations easily on a Debian and/or Ubuntu system. The current version supports:

- 'Reference BLAS' ('refblas'), which is un-accelerated, as a baseline
- 'Atlas', which is tuned but typically configured single-threaded
- 'Atlas39', which is tuned and configured for multi-threaded mode
- 'Goto BLAS', which is accelerated and multi-threaded
- 'Intel MKL', a commercial accelerated and multi-threaded version

For 'GPU' computing, the CRAN package 'gputools' is used. For 'Goto BLAS', the 'gotoblas2-helper' script from the ISM in Tokyo can be used; for 'Intel MKL', the Revolution R packages from Ubuntu 9.10 are used.
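
The kind of measurement the package automates can be sketched in a few lines of base R: time the same operations under whichever 'BLAS' is currently selected (sessionInfo() reports the active library), then swap libraries and re-run. The LU timing below uses the 'Matrix' package.

    n <- 1000
    M <- matrix(rnorm(n * n), n, n)

    # Elapsed time for the benchmarked operations under the active BLAS
    timings <- c(
      matmul = unname(system.time(M %*% M)["elapsed"]),
      qr     = unname(system.time(qr(M))["elapsed"]),
      svd    = unname(system.time(svd(M))["elapsed"]),
      lu     = unname(system.time(Matrix::lu(M))["elapsed"])
    )
    timings   # compare after switching 'BLAS' implementations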