OpenAI Gym is an open-source Python toolkit for developing and comparing reinforcement learning algorithms. This R package is a wrapper for the OpenAI Gym API, and enables access to an ever-growing variety of environments. For more details on OpenAI Gym, see <https://github.com/openai/gym>. For more details on the OpenAI Gym API specification, see <https://github.com/openai/gym-http-api>.
You can install the latest released version from CRAN with:

``` r
install.packages("gym")
```

or the latest development version from GitHub with:

``` r
# Install devtools first if the installed version is too old
if (packageVersion("devtools") < 1.6) {
  install.packages("devtools")
}
devtools::install_github("paulhendricks/gym-R", subdir = "gym")
```
If you encounter a clear bug, please file a minimal reproducible example on GitHub.
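The demo below talks to a running Gym HTTP API server. As a minimal sketch, assuming the reference server from gym-http-api is installed and listening on its default local address (http://127.0.0.1:5000), you can verify connectivity before running the full example:

``` r
library(gym)

# Assumption: a Gym HTTP API server (e.g. the reference implementation
# from https://github.com/openai/gym-http-api) is already running
# locally; adjust remote_base if your server uses a different address.
remote_base <- "http://127.0.0.1:5000"
client <- create_GymClient(remote_base)

# Listing the server's environments is a cheap way to confirm the
# connection works before starting a longer experiment.
print(env_list_all(client))
```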
``` r
library(gym)

remote_base <- "http://127.0.0.1:5000"
client <- create_GymClient(remote_base)
print(client)

# Create environment
env_id <- "CartPole-v0"
instance_id <- env_create(client, env_id)
print(instance_id)

# List all environments
all_envs <- env_list_all(client)
print(all_envs)

# Set up agent
action_space_info <- env_action_space_info(client, instance_id)
print(action_space_info)
agent <- random_discrete_agent(action_space_info[["n"]])

# Run experiment, with monitor
outdir <- "/tmp/random-agent-results"
env_monitor_start(client, instance_id, outdir, force = TRUE, resume = FALSE)

episode_count <- 100
max_steps <- 200
reward <- 0
done <- FALSE

for (i in 1:episode_count) {
  ob <- env_reset(client, instance_id)
  for (j in 1:max_steps) {  # distinct index so the episode counter is not shadowed
    action <- env_action_space_sample(client, instance_id)
    results <- env_step(client, instance_id, action, render = TRUE)
    if (results[["done"]]) break
  }
}

# Dump result info to disk
env_monitor_close(client, instance_id)
```
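If you want the score of a single rollout rather than monitor output on disk, the sketch below accumulates reward over one episode. It assumes the same local server as above and that the list returned by `env_step()` carries the Gym HTTP API step fields (observation, reward, done, info); only `done` is exercised in the demo above, so treat the `reward` field as an assumption to check against your server version.

``` r
library(gym)

# A minimal single-episode rollout that accumulates reward.
# Assumption: env_step() returns a list with the Gym HTTP API step
# fields, i.e. results[["reward"]] and results[["done"]] are available.
client <- create_GymClient("http://127.0.0.1:5000")
instance_id <- env_create(client, "CartPole-v0")

ob <- env_reset(client, instance_id)
total_reward <- 0
done <- FALSE
while (!done) {
  action <- env_action_space_sample(client, instance_id)
  results <- env_step(client, instance_id, action)
  total_reward <- total_reward + results[["reward"]]
  done <- results[["done"]]
}
print(total_reward)
```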
The original author and lead maintainer of gym is Paul Hendricks.