
LLMAgentR — by Kwadwo Daddy Nyame Owusu Boakye, 6 months ago

Language Model Agents in R for AI Workflows and Research

Provides modular, graph-based agents powered by large language models (LLMs) for intelligent task execution in R. Supports structured workflows for tasks such as forecasting, data visualization, feature engineering, data wrangling, data cleaning, 'SQL', code generation, weather reporting, and research-driven question answering. Each agent performs iterative reasoning: recommending steps, generating R code, executing, debugging, and explaining results. Includes built-in support for packages such as 'tidymodels', 'modeltime', 'plotly', 'ggplot2', and 'prophet'. Designed for analysts, developers, and teams building intelligent, reproducible AI workflows in R. Compatible with LLM providers such as 'OpenAI', 'Anthropic', 'Groq', and 'Ollama'. Inspired by the Python package 'langagent'.

tidyrhrv — by Steven Lawrence, 4 months ago

Read, Iteratively Filter, and Analyze Multiple ECG Datasets

Allows users to quickly load multiple patients' electrocardiographic (ECG) data at once and conduct relevant time analysis of heart rate variability (HRV) without manual edits from a physician or data cleaning specialist. The package provides the unique ability to iteratively filter, plot, and store time analysis results in a data frame while writing plots to a predefined folder. This streamlines the workflow for HRV analysis across multiple datasets. Methods are based on Rodríguez-Liñares et al. (2011). Examples of applications using this package include Kwon et al. (2022) and Lawrence et al. (2023).

contact — by Trevor Farthing, 5 years ago

Creating Contact and Social Networks

Process spatially- and temporally-discrete data into contact and social networks, and facilitate network analysis by randomizing individuals' movement paths and/or related categorical variables. To use this package, users need only have a dataset containing spatial data (i.e., latitude/longitude, or planar x & y coordinates), individual IDs relating spatial data to specific individuals, and date/time information relating spatial locations to temporal locations. The package's functionality ranges from data "cleaning" via multiple filtration functions, through spatial and temporal data interpolation, to network creation and analysis. Functions within this package are not limited to describing interpersonal contacts. Package functions can also identify and quantify "contacts" between individuals and fixed areas (e.g., home ranges, water bodies, buildings, etc.). As such, this package is a useful resource for facilitating epidemiological, ecological, ethological, and sociological research.

PVplr — by Roger French, 3 years ago

Performance Loss Rate Analysis Pipeline

The pipeline contained in this package provides tools used in the Solar Durability and Lifetime Extension Center (SDLE) for the analysis of Performance Loss Rates (PLR) in real-world photovoltaic systems. Functions included allow for data cleaning, feature correction, power predictive modeling, PLR determination, and uncertainty bootstrapping through various methods. The vignette "Pipeline Walkthrough" gives an explicit run through of typical package usage. This material is based upon work supported by the U.S. Department of Energy's Office of Energy Efficiency and Renewable Energy (EERE) under Solar Energy Technologies Office (SETO) Agreement Number DE-EE-0008172. This work made use of the High Performance Computing Resource in the Core Facility for Advanced Research Computing at Case Western Reserve University.

lubridate — by Vitalie Spinu, a year ago

Make Dealing with Dates a Little Easier

Functions to work with date-times and time-spans: fast and user-friendly parsing of date-time data, extraction and updating of components of a date-time (years, months, days, hours, minutes, and seconds), and algebraic manipulation of date-time and time-span objects. The 'lubridate' package has a consistent and memorable syntax that makes working with dates easy and fun.
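A minimal sketch of the parsing, component extraction, and date-time arithmetic described above (the date string is illustrative):

```r
library(lubridate)

# Fast, forgiving parsing: ymd_hms() infers the separators
x <- ymd_hms("2017-11-28 14:02:00")

# Extract components of the date-time
year(x)   # 2017
month(x)  # 11
day(x)    # 28

# Algebraic manipulation with time-spans
x + days(3) + hours(2)  # "2017-12-01 16:02:00 UTC"
```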

furniture — by Tyson S. Barrett, 2 years ago

Furniture for Quantitative Scientists

Contains four main functions (i.e., four pieces of furniture): table1() which produces a well-formatted table of descriptive statistics common as Table 1 in research articles, tableC() which produces a well-formatted table of correlations, tableF() which provides frequency counts, and washer() which is helpful in cleaning up the data. These furniture-themed functions are designed to simplify common tasks in quantitative analysis. Other data summary and cleaning tools are also available.
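A minimal sketch of the four functions, run on the built-in mtcars data; the washer() call assumes its default behavior of replacing the listed values with NA:

```r
library(furniture)

# Table 1-style descriptive statistics, split by transmission type
table1(mtcars, mpg, wt, splitby = ~am)

# Well-formatted correlation table
tableC(mtcars, mpg, wt, hp)

# Frequency counts
tableF(mtcars, cyl)

# Clean up a variable by replacing a value (here 8) with NA
washer(mtcars$cyl, 8)
```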

readr — by Jennifer Bryan, 15 days ago

Read Rectangular Text Data

The goal of 'readr' is to provide a fast and friendly way to read rectangular data (like 'csv', 'tsv', and 'fwf'). It is designed to flexibly parse many types of data found in the wild, while still cleanly failing when data unexpectedly changes.
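A minimal sketch of 'readr' usage, assuming readr >= 2.0 (where literal text must be wrapped in I()); the explicit column specification is what lets parsing fail cleanly when the data change:

```r
library(readr)

# Parse inline csv text; col_types declares the expected schema
df <- read_csv(I("id,value\n1,3.5\n2,4.2\n"),
               col_types = cols(id = col_integer(), value = col_double()))

df            # a 2-row tibble
problems(df)  # rows that failed to parse (empty here)
```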

eodhdR2 — by Marcelo S. Perlin, 4 months ago

Official R API for Fetching Data from 'EODHD'

Second, backward-incompatible version of the R package 'eodhd' <https://eodhd.com/>, extended with a cache and quota system and offering functions for cleaning and aggregating financial data.

DBERlibR — by Changsoo Song, 3 years ago

Automated Assessment Data Analysis for Discipline-Based Education Research

Discipline-Based Education Research scientists repeatedly analyze assessment data to ensure question items' reliability and examine the efficacy of a new educational intervention. Analyzing assessment data comprises multiple steps and statistical techniques that consume much of researchers' time and are error-prone. While education research continues to grow across many disciplines of science, technology, engineering, and mathematics (STEM), the discipline-based education research community lacks tools to streamline education research data analysis. 'DBERlibR'—an 'R' package to streamline and automate assessment data processing and analysis—fills this gap. The package reads user-provided assessment data, cleans them, merges multiple datasets (as necessary), checks assumption(s) for specific statistical techniques (as necessary), applies various statistical tests (e.g., one-way analysis of covariance, one-way repeated-measures analysis of variance), and presents and interprets the results all at once. By providing the most frequently used analytic techniques, this package will contribute to education research by facilitating the creation and widespread use of evidence-based knowledge and practices. The outputs contain a sample interpretation of the results for users' convenience. User inputs are minimal; users only need to prepare the data files as instructed and type a function in the 'R' console to conduct a specific data analysis. For descriptions of the statistical methods employed in the package, refer to the Encyclopedia of Research Design, edited by Salkind, N. (2010).

CleaningValidation — by Xiande Yang, 2 years ago

Cleaning Validation Functions for Pharmaceutical Cleaning Process

Provides essential Cleaning Validation functions for complying with pharmaceutical cleaning process regulatory standards. The package includes non-parametric methods to analyze drug active-ingredient residue (DAR), cleaning agent residue (CAR), and microbial colonies (Mic) for non-Poisson distributions. Additionally, Poisson methods are provided for Mic analysis when Mic data follow a Poisson distribution.