

cuda.ml — by Daniel Falbel, 3 years ago

R Interface for the RAPIDS cuML Suite of Libraries

R interface for RAPIDS cuML (<https://github.com/rapidsai/cuml>), a suite of GPU-accelerated machine learning libraries powered by CUDA (<https://en.wikipedia.org/wiki/CUDA>).
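
As a rough illustration of what the interface looks like, the sketch below fits a k-means model on the GPU. It assumes the cuda_ml_kmeans() entry point and a CUDA-enabled installation of the package; check the package documentation for the exact signature.

```r
# Hedged sketch: k-means on the GPU via cuda.ml, assuming the cuda_ml_kmeans()
# interface. Requires a CUDA-enabled build of the package and an NVIDIA GPU.
library(cuda.ml)

fit <- cuda_ml_kmeans(iris[, 1:4], k = 3)  # cluster the numeric iris columns
str(fit)                                   # inspect centroids and cluster labels
```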

cuml4r — by Yitao Li, 3 years ago

R Interface for the RAPIDS cuML Suite of Libraries

The purpose of 'cuml4r' is to provide a simple and intuitive R interface for cuML (<https://github.com/rapidsai/cuml>), a suite of GPU-accelerated machine learning libraries powered by CUDA (<https://en.wikipedia.org/wiki/CUDA>).

RViennaCL — by Charles Determan Jr, 5 years ago

'ViennaCL' C++ Header Files

'ViennaCL' is a free, open-source linear algebra library for computations on many-core architectures (GPUs, MIC) and multi-core CPUs. The library is written in C++ and supports 'CUDA', 'OpenCL', and 'OpenMP' (including switching between them at runtime). I have placed these libraries in this package to provide a more efficient distribution system for CRAN: you can write a package that depends on the 'ViennaCL' library without needing to distribute a copy of this code with your package.
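
Since RViennaCL only ships headers, a downstream package consumes it at compile time. A minimal sketch of that arrangement, assuming the usual header-only convention (headers under the package's include/ directory, referenced via LinkingTo), is:

```r
# Locate the ViennaCL headers bundled by RViennaCL (assuming they are installed
# under the package's include/ directory, as is usual for LinkingTo packages).
inc <- system.file("include", package = "RViennaCL")
print(inc)

# A downstream package would add "LinkingTo: RViennaCL" to its DESCRIPTION and
# then simply #include <viennacl/matrix.hpp> in its C++ sources, without
# shipping a copy of ViennaCL itself.
```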

DesignCTPB — by Yitao Lu, 3 years ago

Design Clinical Trials with Potential Biomarker Effect

Uses 'CUDA' GPUs via 'Numba' for optimal clinical design. It allows the user to work in a 'reticulate' 'Python' environment and run intensive Monte Carlo simulations to obtain the optimal cutoff for a clinical design with a potential biomarker effect, which can guide realistic clinical trials.

Rsomoclu — by Shichao Gao, 2 years ago

Somoclu

Somoclu is a massively parallel implementation of self-organizing maps. It exploits multicore CPUs and can be accelerated by CUDA. The topology of the map can be planar or toroid and the grid of neurons can be rectangular or hexagonal. For details, see Peter Wittek et al. (2017).
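
For orientation, a rough sketch of training a map with Rsomoclu is shown below. The Rsomoclu.train() name and its arguments mirror the upstream somoclu API; treat the exact signature, and kernelType selecting the CUDA kernel, as assumptions to verify against the package manual.

```r
# Hedged sketch: train a 20x20 planar SOM on the numeric iris columns.
# Argument names follow the upstream somoclu API and are assumptions.
library(Rsomoclu)

input_data <- as.matrix(iris[, 1:4])
res <- Rsomoclu.train(input_data,
                      nEpoch = 10,
                      nSomX = 20, nSomY = 20,
                      radius0 = 0, radiusN = 0, radiusCooling = "linear",
                      scale0 = 0, scaleN = 0.01, scaleCooling = "linear",
                      kernelType = 0,        # 0 = CPU; 1 = CUDA (assumption)
                      mapType = "planar")
str(res)                                     # codebook and best-matching units
```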

MDFS — by Radosław Piliszek, 2 years ago

MultiDimensional Feature Selection

Functions for MultiDimensional Feature Selection (MDFS): calculating multidimensional information gains, scoring variables, finding important variables, and plotting selection results. This package includes an optional CUDA implementation that speeds up information gain calculation using NVIDIA GPGPUs. See R. Piliszek et al. (2019).
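
A hedged sketch of running the CUDA-enabled analysis follows; the MDFS() entry point, the use.CUDA flag, the bundled madelon data, and RelevantVariables() are recollections of the package documentation, so confirm them against ?MDFS.

```r
# Hedged sketch: 2-dimensional MDFS analysis with the optional CUDA backend.
# All names below are assumptions to check against the installed package.
library(MDFS)

data(madelon)
res <- MDFS(madelon$data, madelon$decision,
            dimensions = 2,          # 2-dimensional information gains
            divisions = 1,
            use.CUDA = TRUE)         # requires a CUDA-enabled build and GPU
RelevantVariables(res)               # indices of variables deemed relevant
```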

TCHazaRds — by Julian O'Grady, 2 months ago

Tropical Cyclone (Hurricane, Typhoon) Spatial Hazard Modelling

Methods for generating modelled parametric Tropical Cyclone (TC) spatial hazard fields and time-series output at point locations from TC tracks. R's ability to call fast 'cpp' code via the 'Rcpp' package, together with the wide range of spatial analysis tools in the 'terra' package, makes it an attractive open-source environment to study 'TCs'. This package estimates TC vortex wind and pressure fields using parametric equations originally coded in 'python' by 'TCRM' (<https://github.com/GeoscienceAustralia/tcrm>) and later coded in 'Cuda' 'cpp' by 'TCwindgen' (<https://github.com/CyprienBosserelle/TCwindgen>).

diffeqr — by Christopher Rackauckas, 8 months ago

Solving Differential Equations (ODEs, SDEs, DDEs, DAEs)

An interface to 'DifferentialEquations.jl' (<https://diffeq.sciml.ai/dev/>) from the R programming language. It has unique high-performance methods for solving ordinary differential equations (ODE), stochastic differential equations (SDE), delay differential equations (DDE), differential-algebraic equations (DAE), and more. Much of the functionality, including features like adaptive time stepping in SDEs, is unique and allows for multiple orders of magnitude speedup over more common methods. GPUs are supported, including CUDA (NVIDIA), AMD GPUs, Intel oneAPI GPUs, and Apple's Metal (M-series chip GPUs). 'diffeqr' attaches an R interface onto the package, allowing seamless use of this tooling by R users. For more information, see Rackauckas and Nie (2017).
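
The basic workflow, following the package README, is to initialize the Julia backend once and then build and solve problems through the returned environment; GPU ensemble solves go through a separate setup step (diffeqr::diffeqgpu_setup(), not shown). A minimal sketch:

```r
# Minimal sketch after the diffeqr README: solve du/dt = 1.01 u on [0, 1].
# The first diffeq_setup() call installs/initializes the Julia dependencies.
de <- diffeqr::diffeq_setup()

f <- function(u, p, t) 1.01 * u
prob <- de$ODEProblem(f, 1 / 2, c(0.0, 1.0))
sol <- de$solve(prob)

plot(sol$t, sol$u, type = "l")   # exponential growth from u(0) = 0.5
```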

geodl — by Aaron Maxwell, 3 months ago

Geospatial Semantic Segmentation with Torch and Terra

Provides tools for semantic segmentation of geospatial data using convolutional neural network-based deep learning. Utility functions allow for creating masks, image chips, data frames listing image chips in a directory, and DataSets for use within DataLoaders. Additional functions are provided to serve as checks during the data preparation and training process. A UNet architecture can be defined with 4 blocks in the encoder, a bottleneck block, and 4 blocks in the decoder. The UNet can accept a variable number of input channels, and the user can define the number of feature maps produced in each encoder and decoder block and the bottleneck. Users can also choose to (1) replace all rectified linear unit (ReLU) activation functions with leaky ReLU or swish, (2) implement attention gates along the skip connections, (3) implement squeeze and excitation modules within the encoder blocks, (4) add residual connections within all blocks, (5) replace the bottleneck with a modified atrous spatial pyramid pooling (ASPP) module, and/or (6) implement deep supervision using predictions generated at each stage in the decoder. A unified focal loss framework is implemented after Yeung et al. (2022). We have also implemented assessment metrics using the 'luz' package, including F1-score, recall, and precision. Trained models can be used to predict on spatial data without the need to generate chips from larger spatial extents. Functions are available for performing accuracy assessment. The package relies on 'torch' for implementing deep learning, which does not require the installation of a 'Python' environment. Raster geospatial data are handled with 'terra'. Models can be trained using a Compute Unified Device Architecture (CUDA)-enabled graphics processing unit (GPU); however, multi-GPU training is not supported by 'torch' in 'R'.
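
The sketch below is not geodl's own API; it only illustrates the underlying 'torch' mechanics the description alludes to, i.e. detecting a CUDA device and moving a (stand-in) module onto it, single-GPU only.

```r
# Hedged sketch of the underlying 'torch' CUDA handling (not geodl's API):
# pick the GPU if one is visible, otherwise fall back to the CPU.
library(torch)

device <- if (cuda_is_available()) torch_device("cuda") else torch_device("cpu")

net <- nn_linear(4, 2)        # stand-in for a segmentation model
net$to(device = device)       # move parameters onto the selected device
```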

rkeops — by Ghislain Durif, 9 months ago

Kernel Operations on GPU or CPU, with Autodiff, without Memory Overflows

The 'KeOps' library lets you compute generic reductions of very large arrays whose entries are given by a mathematical formula, with CPU and GPU computing support. It combines a tiled reduction scheme with an automatic differentiation engine. It is perfectly suited to the efficient computation of kernel dot products and the associated gradients, even when the full kernel matrix does not fit into the GPU memory.
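
A hedged sketch of a Gaussian-kernel reduction is shown below. The keops_kernel() formula and argument syntax follow the package vignettes, but the calling convention has changed across releases, so verify it against the installed version.

```r
# Hedged sketch: row-wise Gaussian kernel reduction with rkeops.
# Formula and argument declarations ("Vi"/"Vj"/"Pm") follow the vignettes;
# treat them as assumptions for the installed version.
library(rkeops)

op <- keops_kernel(
  formula = "Sum_Reduction(Exp(-s * SqNorm2(x - y)) * b, 0)",
  args = c("x = Vi(3)", "y = Vj(3)", "b = Vj(1)", "s = Pm(1)")
)

n <- 100; m <- 150
x <- matrix(runif(n * 3), n, 3)
y <- matrix(runif(m * 3), m, 3)
b <- matrix(runif(m), m, 1)
s <- 0.5

res <- op(list(x, y, b, s))   # n x 1 result, on GPU if one is configured
```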