
Open-source project name:

cpnota/autonomous-learning-library

Open-source project URL:

https://github.com/cpnota/autonomous-learning-library

Open-source language:

Python 99.9%

Open-source project introduction:

The Autonomous Learning Library: A PyTorch Library for Building Reinforcement Learning Agents

The autonomous-learning-library is an object-oriented deep reinforcement learning (DRL) library for PyTorch. The goal of the library is to provide the necessary components for quickly building and evaluating novel reinforcement learning agents, as well as providing high-quality reference implementations of modern DRL algorithms. The full documentation can be found at the following URL: https://autonomous-learning-library.readthedocs.io.

Tools for Building New Agents

The primary goal of the autonomous-learning-library is to facilitate the rapid development of new reinforcement learning agents by providing common tools for building and evaluating agents, such as:

  • A flexible function Approximation API that integrates features such as target networks, gradient clipping, learning rate schedules, model checkpointing, multi-headed networks, loss scaling, logging, and more.
  • Various memory buffers, including prioritized experience replay (PER), generalized advantage estimation (GAE), and more.
  • A torch-based Environment interface that simplifies agent implementations by cutting out the numpy middleman.
  • Common wrappers and agent enhancements for replicating standard benchmarks.
  • Slurm integration for running large-scale experiments.
  • Plotting and logging utilities including tensorboard integration and utilities for generating common plots.

See the documentation guide for a full description of the functionality provided by the autonomous-learning-library. Additionally, we provide an example project which demonstrates the best practices for building new agents.
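
To make the role of the Approximation API more concrete, here is a minimal conceptual sketch in plain PyTorch of how a function approximator can bundle an optimizer, gradient clipping, and a target network behind one object. This is an illustration only, not the library's actual all.approximation classes; the name SimpleApproximation and its exact methods are hypothetical.

import copy
import torch
from torch import nn

class SimpleApproximation:
    # Conceptual stand-in for an Approximation-style wrapper (hypothetical; not the ALL API).
    def __init__(self, model, optimizer, clip_grad=0.5, target_update_frequency=1000):
        self.model = model
        self.optimizer = optimizer
        self.clip_grad = clip_grad
        self.target_update_frequency = target_update_frequency
        self.target_model = copy.deepcopy(model)  # frozen copy used for stable bootstrap targets
        self._updates = 0

    def __call__(self, *inputs):
        return self.model(*inputs)

    def target(self, *inputs):
        # Evaluate the target network without tracking gradients.
        with torch.no_grad():
            return self.target_model(*inputs)

    def reinforce(self, loss):
        # One optimization step: backprop, clip gradients, step, and periodically sync the target network.
        self.optimizer.zero_grad()
        loss.backward()
        nn.utils.clip_grad_norm_(self.model.parameters(), self.clip_grad)
        self.optimizer.step()
        self._updates += 1
        if self._updates % self.target_update_frequency == 0:
            self.target_model.load_state_dict(self.model.state_dict())

An agent written against such a wrapper never manages the optimizer, clipping, or target updates itself; the real Approximation API layers checkpointing, learning rate schedules, loss scaling, and logging on top of the same idea.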

High-Quality Reference Implementations

The autonomous-learning-library separates reinforcement learning agents into two modules: all.agents, which provides flexible, high-level implementations of many common algorithms that can be adapted to new problems and environments, and all.presets, which provides specific instantiations of these agents tuned for particular sets of environments, including Atari games, classic control tasks, and PyBullet robotics simulations. Benchmark results on par with published results are shown below:

[Benchmark plots: Atari and PyBullet results]
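
To illustrate the separation between all.agents and all.presets, the following is a hypothetical sketch in plain PyTorch; the names TinyQAgent and cartpole_preset are invented for illustration and are not the library's constructors. The agent encodes the learning rule against abstract components, while the preset fixes the model architecture, optimizer, and hyperparameters for one family of environments.

import torch
from torch import nn, optim

class TinyQAgent:
    # Agent: the learning rule, written against whatever components it is handed.
    def __init__(self, model, optimizer, discount_factor=0.99):
        self.model = model
        self.optimizer = optimizer
        self.gamma = discount_factor

    def act(self, state):
        # Greedy action with respect to the current value estimates.
        with torch.no_grad():
            return int(self.model(state).argmax())

def cartpole_preset(device="cpu"):
    # Preset: fixes architecture and hyperparameters for one environment family.
    model = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2)).to(device)
    optimizer = optim.Adam(model.parameters(), lr=1e-3)
    return TinyQAgent(model, optimizer, discount_factor=0.99)

agent = cartpole_preset()
print(agent.act(torch.zeros(4)))  # greedy action for a dummy CartPole-like observation

In the real library, the presets for Atari, classic control, and PyBullet play the role of cartpole_preset above, wiring tuned models and hyperparameters into the generic agents from all.agents.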

As of today, all contains implementations of the following deep RL algorithms:

  • Advantage Actor-Critic (A2C)
  • Categorical DQN (C51)
  • Deep Deterministic Policy Gradient (DDPG)
  • Deep Q-Learning (DQN) + extensions
  • Proximal Policy Optimization (PPO)
  • Rainbow (Rainbow)
  • Soft Actor-Critic (SAC)

It also contains implementations of the following "vanilla" agents, which provide useful baselines and perform better than you may expect (a minimal tabular sketch follows the list):

  • Vanilla Actor-Critic
  • Vanilla Policy Gradient
  • Vanilla Q-Learning
  • Vanilla Sarsa
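
As promised above, here is a minimal tabular Q-learning sketch on a toy five-state chain, written in plain Python. It is illustrative only and is not the library's implementation, but it shows how little machinery a "vanilla" value-based agent actually needs.

import random

N_STATES, ACTIONS = 5, [0, 1]           # action 0 = left, action 1 = right
ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1  # step size, discount, exploration rate

def step(state, action):
    # Move along the chain; reaching the right end yields reward 1 and ends the episode.
    next_state = max(0, state - 1) if action == 0 else state + 1
    if next_state == N_STATES - 1:
        return next_state, 1.0, True
    return next_state, 0.0, False

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for _ in range(500):  # episodes
    state, done = 0, False
    while not done:
        # Epsilon-greedy action selection.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state, reward, done = step(state, action)
        # Q-learning update: bootstrap from the greedy value of the next state.
        bootstrap = 0.0 if done else GAMMA * max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + bootstrap - q[(state, action)])
        state = next_state

print([round(max(q[(s, a)] for a in ACTIONS), 2) for s in range(N_STATES)])  # learned greedy state values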

Installation

First, you will need a recent version of PyTorch (>1.3), as well as Tensorboard. Then, you can install the core autonomous-learning-library from PyPI:

pip install autonomous-learning-library

You can also install all of the extras (such as Gym environments) using:

pip install autonomous-learning-library[all]

Finally, you can install directly from this repository including the dev dependencies using:

git clone https://github.com/cpnota/autonomous-learning-library.git
cd autonomous-learning-library
pip install -e .[dev]
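
As a quick sanity check after any of these installation routes (assuming the install succeeded; the library's top-level package is named all, as used throughout this README):

python -c "import torch, all; print(torch.__version__)"

The printed PyTorch version should be greater than 1.3.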

Running the Presets

If you just want to test out some cool agents, the library includes several scripts for doing so:

all-atari Breakout a2c

You can watch the training progress using:

tensorboard --logdir runs

and opening your browser to http://localhost:6006. Once the model is fully trained, you can watch the trained model play using:

all-watch-atari Breakout "runs/a2c_[id]/preset.pt"

where id is the ID of your particular run. You should be able to find it using tab completion or by looking in the runs directory. The autonomous-learning-library also contains presets and scripts for classic control and PyBullet environments.

If you want to test out your own agents, you will need to define your own scripts. Some examples can be found in the examples folder. See the docs for information on building your own agents!
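
As a rough sketch of what such a script can look like (the exact module paths, preset builders, and run_experiment signature vary between library versions, so treat the names below as assumptions and defer to the examples folder and the docs):

from all.environments import GymEnvironment
from all.experiments import run_experiment
from all.presets import classic_control

# Train a DQN preset on CartPole and log results under runs/ (version-dependent API; see the docs).
run_experiment(
    [classic_control.dqn()],
    [GymEnvironment("CartPole-v0", device="cpu")],
    100000,  # number of environment frames to train for
)

Progress is then viewable in tensorboard exactly as with the bundled preset scripts above.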

Note

This library was built in the Autonomous Learning Laboratory (ALL) at the University of Massachusetts, Amherst. It was written and is currently maintained by Chris Nota (@cpnota). The views expressed or implied in this repository do not necessarily reflect the views of the ALL.

Citing the Autonomous Learning Library

We recommend the following citation:

@misc{nota2020autonomous,
  author = {Nota, Chris},
  title = {The Autonomous Learning Library},
  year = {2020},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/cpnota/autonomous-learning-library}},
}


