# SAC+AE implementation in PyTorch
This is a PyTorch implementation of SAC+AE from
**Improving Sample Efficiency in Model-Free Reinforcement Learning from Images** by
[Denis Yarats](https://cs.nyu.edu/~dy1042/), [Amy Zhang](https://mila.quebec/en/person/amy-zhang/), [Ilya Kostrikov](https://github.com/ikostrikov), [Brandon Amos](http://bamos.github.io/), [Joelle Pineau](https://www.cs.mcgill.ca/~jpineau/), [Rob Fergus](https://cs.nyu.edu/~fergus/pmwiki/pmwiki.php).
[[Paper]](https://arxiv.org/abs/1910.01741) [[Webpage]](https://sites.google.com/view/sac-ae/home)
## Citation
If you use this repo in your research, please consider citing the paper as follows:
```
@article{yarats2019improving,
    title={Improving Sample Efficiency in Model-Free Reinforcement Learning from Images},
    author={Denis Yarats and Amy Zhang and Ilya Kostrikov and Brandon Amos and Joelle Pineau and Rob Fergus},
    year={2019},
    eprint={1910.01741},
    archivePrefix={arXiv}
}
```
## Requirements
We assume you have access to a GPU that can run CUDA 9.2. Then, the simplest way to install all required dependencies is to create an Anaconda environment by running:
```
conda env create -f conda_env.yml
```
After the installation ends, you can activate your environment with:
```
source activate pytorch_sac_ae
```
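Optionally, you can verify that the CUDA-enabled PyTorch build sees your GPU before training. This sanity check is not part of the original instructions, just a quick way to catch a broken install:
```
python -c "import torch; print(torch.version.cuda, torch.cuda.is_available())"
```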
## Instructions
To train an SAC+AE agent on the `cheetah run` task from image-based observations, run:
```
python train.py \
--domain_name cheetah \
--task_name run \
--encoder_type pixel \
--decoder_type pixel \
--action_repeat 4 \
--save_video \
--save_tb \
--work_dir ./log \
--seed 1
```
This will produce a `log` folder, where all of the outputs are stored, including train/eval logs, TensorBoard blobs, and evaluation episode videos. One can attach TensorBoard to monitor training by running:
```
tensorboard --logdir log
```
and opening up TensorBoard in your browser.
The console output is also available in the following form:
```
| train | E: 1 | S: 1000 | D: 0.8 s | R: 0.0000 | BR: 0.0000 | ALOSS: 0.0000 | CLOSS: 0.0000 | RLOSS: 0.0000
```
A training entry decodes as:
```
train - training episode
E - total number of episodes
S - total number of environment steps
D - duration in seconds to train 1 episode
R - episode reward
BR - average reward of sampled batch
ALOSS - average loss of actor
CLOSS - average loss of critic
RLOSS - average reconstruction loss (only present when training from pixels with a decoder)
```
while an evaluation entry looks like:
```
| eval | S: 0 | ER: 21.1676
```
which reports the expected reward `ER` of the current policy after `S` environment steps. Note that `ER` is the average evaluation performance over `num_eval_episodes` episodes (usually 10).
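If you want to post-process these console entries (for quick plotting, for example), a minimal parsing sketch for the pipe-delimited format shown above could look like the following; the `parse_line` helper is hypothetical and not part of this repo:
```
import re

def parse_line(line):
    # Split a console entry such as
    # "| train | E: 1 | S: 1000 | D: 0.8 s | R: 0.0000 | ..."
    # into a dict like {'kind': 'train', 'E': 1.0, 'S': 1000.0, ...}.
    fields = [f.strip() for f in line.strip().strip('|').split('|')]
    entry = {'kind': fields[0]}
    for field in fields[1:]:
        key, value = field.split(':', 1)
        # Keep only the number, dropping units such as the "s" in "D: 0.8 s".
        number = re.findall(r'-?\d+\.?\d*', value)[0]
        entry[key.strip()] = float(number)
    return entry

print(parse_line('| eval | S: 0 | ER: 21.1676'))
# {'kind': 'eval', 'S': 0.0, 'ER': 21.1676}
```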
## Results
Our method demonstrates significantly improved performance over the baseline SAC:pixel. It matches the state-of-the-art performance of model-based algorithms, such as PlaNet (Hafner et al., 2018) and SLAC (Lee et al., 2019), as well as the model-free algorithm D4PG (Barth-Maron et al., 2018), which also learns from raw images. Our algorithm exhibits stable learning across ten random seeds and is extremely easy to implement.
![Results](results/graph.png)