745H1N committed
Commit: d625e0a
Parent(s): a9718bb

Upload best PPO CarRacing-v0 agent (tuned with Optuna).

.gitattributes CHANGED
@@ -25,3 +25,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zstandard filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+*.mp4 filter=lfs diff=lfs merge=lfs -text
CarRacing-v0-PPO-optuna.zip ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b3c9566f4b3ed70c186e3fd2ef771b28cce5e4e95a3118bdb0bb59c3df6dbe75
+size 43334047
CarRacing-v0-PPO-optuna/_stable_baselines3_version ADDED
@@ -0,0 +1 @@
+1.5.0
CarRacing-v0-PPO-optuna/data ADDED
The diff for this file is too large to render. See raw diff
CarRacing-v0-PPO-optuna/policy.optimizer.pth ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5ed6264e4f5683ba629ceb5fdcac94292f4524f0e7a7729a7b1ec488adbb453e
+size 28388247
CarRacing-v0-PPO-optuna/policy.pth ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8f72599f19894a229857f3453bbe44f281f370b8ce6335e64832f734a25cf253
+size 14194942
CarRacing-v0-PPO-optuna/pytorch_variables.pth ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d030ad8db708280fcae77d87e973102039acd23a11bdecc3db8eb6c0ac940ee1
+size 431
CarRacing-v0-PPO-optuna/system_info.txt ADDED
@@ -0,0 +1,7 @@
+OS: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic #1 SMP Sun Apr 24 10:03:06 PDT 2022
+Python: 3.7.13
+Stable-Baselines3: 1.5.0
+PyTorch: 1.11.0+cu113
+GPU Enabled: True
+Numpy: 1.21.6
+Gym: 0.21.0
README.md ADDED
@@ -0,0 +1,36 @@
+---
+library_name: stable-baselines3
+tags:
+- CarRacing-v0
+- deep-reinforcement-learning
+- reinforcement-learning
+- stable-baselines3
+model-index:
+- name: PPO
+  results:
+  - metrics:
+    - type: mean_reward
+      value: -45.62 +/- 37.23
+      name: mean_reward
+    task:
+      type: reinforcement-learning
+      name: reinforcement-learning
+    dataset:
+      name: CarRacing-v0
+      type: CarRacing-v0
+---
+
+# **PPO** Agent playing **CarRacing-v0**
+This is a trained model of a **PPO** agent playing **CarRacing-v0**
+using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
+
+## Usage (with Stable-baselines3)
+TODO: Add your code
+
+
+```python
+from stable_baselines3 import ...
+from huggingface_sb3 import load_from_hub
+
+...
+```
config.json ADDED
The diff for this file is too large to render. See raw diff
replay.mp4 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:17a76a900a1ac089ea649ff4e88152b20134d51c1a654673750d2c3de7bd8bbf
+size 448414
results.json ADDED
@@ -0,0 +1 @@
+{"mean_reward": -45.61550070494413, "std_reward": 37.22652284086403, "is_deterministic": true, "n_eval_episodes": 10, "eval_datetime": "2022-06-28T16:41:48.984680"}