
starschemar — by Jose Samos, a year ago

Obtaining Stars from Flat Tables

Data in multidimensional systems are obtained from operational systems and transformed to fit the new structure. Frequently, the operations to be performed aim to turn a flat table into a star schema. The transformation can be carried out with professional extract, transform and load (ETL) tools or with data transformation tools intended for end users, but either way it requires considerable work. The main objective of this package is to define transformations that make it easy to obtain star schemas from flat tables. In addition, it includes basic data cleaning, dimension enrichment, incremental data refresh and query operations adapted to this context.
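
As a rough illustration of the flat-table-to-star idea described above, the hedged 'dplyr' sketch below splits an invented flat sales table into a dimension table and a fact table; the table, columns, and surrogate key are assumptions for illustration and are not starschemar's own API.

```r
# Generic sketch of turning a flat table into a star schema (not the
# starschemar API): derive a dimension table and a fact table keyed by a
# surrogate dimension key.
library(dplyr)

flat <- tibble::tibble(
  city    = c("Granada", "Granada", "Sevilla"),
  product = c("A", "B", "A"),
  amount  = c(10, 5, 7)
)

# Dimension: one row per distinct city, with a surrogate key.
dim_city <- flat |>
  distinct(city) |>
  mutate(city_key = row_number())

# Fact: measures plus foreign keys to the dimensions.
fact_sales <- flat |>
  left_join(dim_city, by = "city") |>
  group_by(city_key, product) |>
  summarise(amount = sum(amount), .groups = "drop")
```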

gen5helper — by Yanxian Lin, 6 years ago

Processing 'Gen5' 2.06 Exported Data

A collection of functions for processing 'Gen5' 2.06 exported data. 'Gen5' is data analysis software for BioTek plate readers <https://www.biotek.com/products/software-robotics-software/gen5-microplate-reader-and-imager-software/>. This package contains functions for data cleaning, modeling and plotting using data exported from 'Gen5' version 2.06. It exports technically correct data, as defined in Edwin de Jonge and Mark van der Loo (2013) <https://cran.r-project.org/doc/contrib/de_Jonge+van_der_Loo-Introduction_to_data_cleaning_with_R.pdf>, for customized analysis, and it contains Boltzmann fitting for general kinetic analysis. See <https://www.github.com/yanxianUCSB/gen5helper> for more information, documentation and examples.
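
The Boltzmann fitting mentioned above can be illustrated with base R's nls(); the simulated data and parameter names below are assumptions for illustration and do not use gen5helper's own functions.

```r
# Generic Boltzmann (sigmoid) fit with base R nls(); this shows the kind of
# kinetic fitting the description mentions, not the gen5helper API.
set.seed(1)
x <- seq(0, 10, by = 0.25)
# Simulated kinetic signal: lower/upper plateaus A1, A2, midpoint x0, slope dx.
y <- 0.1 + (1 - 0.1) / (1 + exp((5 - x) / 0.8)) + rnorm(length(x), sd = 0.02)

fit <- nls(
  y ~ A1 + (A2 - A1) / (1 + exp((x0 - x) / dx)),
  start = list(A1 = 0, A2 = 1, x0 = 4, dx = 1)
)
coef(fit)  # estimated plateaus, midpoint, and slope
```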

wordpredictor — by Nadir Latif, 8 months ago

Develop Text Prediction Models Based on N-Grams

A framework for developing n-gram models for text prediction. It provides data cleaning, data sampling, token extraction from text, model generation, model evaluation and word prediction. For information on how n-gram models work, we referred to "Speech and Language Processing" <https://web.archive.org/web/20240919222934/https%3A%2F%2Fweb.stanford.edu%2F~jurafsky%2Fslp3%2F3.pdf>. For optimizing R code and using R6 classes, we referred to "Advanced R" <https://adv-r.hadley.nz/r6.html>. For writing R extensions, we referred to "R Packages" <https://r-pkgs.org/index.html>.
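
To make the n-gram idea concrete, here is a toy bigram predictor in base R; the corpus and helper function are invented for illustration and are not part of the wordpredictor API.

```r
# Toy bigram model in base R illustrating how n-gram word prediction works;
# a conceptual sketch, not the wordpredictor API.
corpus <- c("the cat sat on the mat", "the cat ran")
tokens <- unlist(strsplit(tolower(corpus), "\\s+"))

# Count bigrams: the next word is predicted from bigram frequencies.
bigrams <- paste(head(tokens, -1), tail(tokens, -1))
counts  <- table(bigrams)

predict_next <- function(word) {
  cand <- counts[startsWith(names(counts), paste0(word, " "))]
  if (length(cand) == 0) return(NA_character_)
  sub("^\\S+ ", "", names(which.max(cand)))
}

predict_next("the")  # most frequent continuation of "the" -> "cat"
```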

LLMAgentR — by Kwadwo Daddy Nyame Owusu Boakye, 25 days ago

Language Model Agents in R for AI Workflows and Research

Provides modular, graph-based agents powered by large language models (LLMs) for intelligent task execution in R. Supports structured workflows for tasks such as forecasting, data visualization, feature engineering, data wrangling, data cleaning, 'SQL', code generation, weather reporting, and research-driven question answering. Each agent performs iterative reasoning: recommending steps, generating R code, executing, debugging, and explaining results. Includes built-in support for packages such as 'tidymodels', 'modeltime', 'plotly', 'ggplot2', and 'prophet'. Designed for analysts, developers, and teams building intelligent, reproducible AI workflows in R. Compatible with LLM providers such as 'OpenAI', 'Anthropic', 'Groq', and 'Ollama'. Inspired by the Python package 'langagent'.
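
The iterative recommend/generate/execute/debug loop described above might look roughly like the sketch below; ask_llm() is a hypothetical placeholder for a call to an LLM provider, and none of these names come from the LLMAgentR API.

```r
# Conceptual sketch of a "generate code, execute, debug" agent loop.
# ask_llm() is a hypothetical stand-in and must be wired to a real provider.
ask_llm <- function(prompt) stop("placeholder: connect an LLM provider here")

run_agent <- function(task, max_iter = 3) {
  prompt <- paste("Write R code to:", task)
  for (i in seq_len(max_iter)) {
    code   <- ask_llm(prompt)
    result <- tryCatch(eval(parse(text = code)), error = identity)
    if (!inherits(result, "error")) return(result)
    # Feed the error back so the next attempt can debug the previous code.
    prompt <- paste(prompt, "Previous attempt failed with:",
                    conditionMessage(result))
  }
  stop("agent did not produce working code within max_iter attempts")
}
```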

contact — by Trevor Farthing, 4 years ago

Creating Contact and Social Networks

Process spatially and temporally discrete data into contact and social networks, and facilitate network analysis by randomizing individuals' movement paths and/or related categorical variables. To use this package, users need only a dataset containing spatial data (i.e., latitude/longitude or planar x and y coordinates), individual IDs relating spatial data to specific individuals, and date/time information relating spatial locations to temporal locations. The functionality of this package ranges from data "cleaning" via multiple filtration functions, to spatial and temporal data interpolation, to network creation and analysis. Functions within this package are not limited to describing interpersonal contacts; they can also identify and quantify "contacts" between individuals and fixed areas (e.g., home ranges, water bodies, buildings, etc.). As such, this package facilitates epidemiological, ecological, ethological and sociological research.
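
A minimal base-R sketch of the underlying idea, flagging a "contact" whenever two individuals fall within a distance threshold at the same timestep; the data frame, threshold, and edge-list format are invented for illustration and are not the contact package's interface.

```r
# Generic contact detection from planar coordinates (not the contact API):
# pairs of individuals within `threshold` distance at the same time are edges.
pts <- data.frame(
  id   = c("a", "b", "c", "a", "b", "c"),
  time = c(1, 1, 1, 2, 2, 2),
  x    = c(0, 1, 10, 0, 5, 10),
  y    = c(0, 0, 0, 0, 0, 0)
)
threshold <- 2  # contact distance, in the same units as x/y

contacts <- do.call(rbind, lapply(split(pts, pts$time), function(d) {
  dmat  <- as.matrix(dist(d[, c("x", "y")]))
  pairs <- which(dmat <= threshold & upper.tri(dmat), arr.ind = TRUE)
  if (nrow(pairs) == 0) return(NULL)
  data.frame(time = d$time[1], from = d$id[pairs[, 1]], to = d$id[pairs[, 2]])
}))
contacts  # edge list usable for building a contact network
```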

PVplr — by Roger French, 2 years ago

Performance Loss Rate Analysis Pipeline

The pipeline contained in this package provides tools used in the Solar Durability and Lifetime Extension Center (SDLE) for the analysis of Performance Loss Rates (PLR) in real-world photovoltaic systems. The included functions allow for data cleaning, feature correction, power predictive modeling, PLR determination, and uncertainty bootstrapping through various methods. The vignette "Pipeline Walkthrough" gives an explicit run-through of typical package usage. This material is based upon work supported by the U.S. Department of Energy's Office of Energy Efficiency and Renewable Energy (EERE) under Solar Energy Technologies Office (SETO) Agreement Number DE-EE-0008172. This work made use of the High Performance Computing Resource in the Core Facility for Advanced Research Computing at Case Western Reserve University.
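
As a generic illustration (not the PVplr pipeline), a performance loss rate can be approximated by regressing normalized power on time and expressing the yearly slope as a percentage of the intercept; the simulated data below are an assumption for illustration.

```r
# Generic PLR estimate: yearly slope of normalized power, as a percentage.
# Simulated data; this is not the PVplr pipeline.
set.seed(42)
years <- seq(0, 5, by = 1 / 52)                               # weekly points over 5 years
power <- 1 - 0.008 * years + rnorm(length(years), sd = 0.01)  # ~ -0.8 %/year degradation

fit <- lm(power ~ years)
plr <- 100 * coef(fit)["years"] / coef(fit)["(Intercept)"]
plr  # estimated performance loss rate in %/year (negative = degradation)
```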

lubridate — by Vitalie Spinu, 6 months ago

Make Dealing with Dates a Little Easier

Functions to work with date-times and time-spans: fast and user friendly parsing of date-time data, extraction and updating of components of a date-time (years, months, days, hours, minutes, and seconds), algebraic manipulation on date-time and time-span objects. The 'lubridate' package has a consistent and memorable syntax that makes working with dates easy and fun.
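
A few calls that exercise the features named in the description: parsing, component extraction and updating, and arithmetic with time spans.

```r
library(lubridate)

d <- ymd_hms("2023-07-14 10:30:00")        # fast, format-guessing parser
month(d)                                   # extract a component -> 7
day(d) <- 1                                # update a component in place
d + months(2) + days(3)                    # algebra with time spans
interval(ymd("2023-01-01"), d) / days(1)   # length of an interval in days
```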

readr — by Jennifer Bryan, a year ago

Read Rectangular Text Data

The goal of 'readr' is to provide a fast and friendly way to read rectangular data (like 'csv', 'tsv', and 'fwf'). It is designed to flexibly parse many types of data found in the wild, while still cleanly failing when data unexpectedly changes.
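
Typical readr usage, reading a small inline CSV with explicit column types so that an unexpected value is reported rather than silently coerced; the example data are invented.

```r
library(readr)

csv <- "id,date,score\n1,2023-01-01,3.5\n2,2023-01-02,oops\n"
df <- read_csv(I(csv), col_types = cols(
  id    = col_integer(),
  date  = col_date(),
  score = col_double()
))
problems(df)  # reports the row where "oops" could not be parsed as a double
```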

eodhdR2 — by Marcelo S. Perlin, 9 months ago

Official R API for Fetching Data from 'EODHD'

Second and backward-incompatible version of R package 'eodhd' <https://eodhd.com/>, extended with a cache and quota system and offering functions for cleaning and aggregating the financial data.
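
A hedged sketch of the general caching idea mentioned above (store each API response on disk and reuse it on repeated calls); the function, arguments, and use of 'digest' and 'httr' are assumptions for illustration, not the eodhdR2 API.

```r
# Generic on-disk cache wrapper around an HTTP fetch (not the eodhdR2 API).
cached_fetch <- function(url, cache_dir = tempdir()) {
  key <- file.path(cache_dir, paste0(digest::digest(url), ".rds"))
  if (file.exists(key)) return(readRDS(key))   # cache hit: skip the request
  res <- httr::content(httr::GET(url))         # cache miss: fetch and store
  saveRDS(res, key)
  res
}
```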

DBERlibR — by Changsoo Song, 3 years ago

Automated Assessment Data Analysis for Discipline-Based Education Research

Discipline-Based Education Research scientists repeatedly analyze assessment data to ensure question items' reliability and examine the efficacy of a new educational intervention. Analyzing assessment data comprises multiple steps and statistical techniques that consume much of researchers' time and are error-prone. While education research continues to grow across many disciplines of science, technology, engineering, and mathematics (STEM), the discipline-based education research community lacks tools to streamline education research data analysis. 'DBERlibR', an 'R' package to streamline and automate assessment data processing and analysis, fills this gap. The package reads user-provided assessment data, cleans them, merges multiple datasets (as necessary), checks assumption(s) for specific statistical techniques (as necessary), applies various statistical tests (e.g., one-way analysis of covariance, one-way repeated-measures analysis of variance), and presents and interprets the results all at once. By providing the most frequently used analytic techniques, this package will contribute to education research by facilitating the creation and widespread use of evidence-based knowledge and practices. The outputs contain a sample interpretation of the results for users' convenience. User inputs are minimal; users only need to prepare the data files as instructed and type a function in the 'R' console to conduct a specific data analysis. For descriptions of the statistical methods employed in the package, refer to the Encyclopedia of Research Design, edited by Salkind, N. (2010).
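
As a generic illustration of one technique the package automates, the sketch below runs a one-way analysis of covariance on simulated pre/post assessment scores with base R's aov(); it does not use DBERlibR's own interface, which instead reads prepared data files.

```r
# One-way ANCOVA on simulated assessment data (not the DBERlibR interface).
set.seed(123)
d <- data.frame(
  group = factor(rep(c("control", "treatment"), each = 30)),
  pre   = rnorm(60, mean = 50, sd = 10)
)
d$post <- d$pre + ifelse(d$group == "treatment", 5, 0) + rnorm(60, sd = 5)

# Post-test scores compared across groups, adjusting for the pre-test covariate.
summary(aov(post ~ pre + group, data = d))
```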