Juicy1122 committed
Commit 64d60ce
1 Parent(s): f72601c

upload agent_walker

README.md CHANGED
@@ -1,202 +1,35 @@
  ---
- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
- # Doc / guide: https://huggingface.co/docs/hub/model-cards
- {}
  ---

- # Model Card for Model ID

- <!-- Provide a quick summary of what the model is/does. -->

- This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).

- ## Model Details
-
- ### Model Description
-
- <!-- Provide a longer summary of what this model is. -->
-
-
-
- - **Developed by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]
-
- ### Model Sources [optional]
-
- <!-- Provide the basic links for the model. -->
-
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]
-
- ## Uses
-
- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
-
- ### Direct Use
-
- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
-
- [More Information Needed]
-
- ### Downstream Use [optional]
-
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
-
- [More Information Needed]
-
- ### Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
- [More Information Needed]
-
- ## Bias, Risks, and Limitations
-
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]
-
- ### Recommendations
-
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
-
- ## How to Get Started with the Model
-
- Use the code below to get started with the model.
-
- [More Information Needed]
-
- ## Training Details
-
- ### Training Data
-
- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
-
- [More Information Needed]
-
- ### Training Procedure
-
- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
-
- #### Preprocessing [optional]
-
- [More Information Needed]
-
-
- #### Training Hyperparameters
-
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
-
- #### Speeds, Sizes, Times [optional]
-
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
-
- [More Information Needed]
-
- ## Evaluation
-
- <!-- This section describes the evaluation protocols and provides the results. -->
-
- ### Testing Data, Factors & Metrics
-
- #### Testing Data
-
- <!-- This should link to a Dataset Card if possible. -->
-
- [More Information Needed]
-
- #### Factors
-
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
- [More Information Needed]
-
- #### Metrics
-
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
- [More Information Needed]
-
- ### Results
-
- [More Information Needed]
-
- #### Summary
-
-
-
- ## Model Examination [optional]
-
- <!-- Relevant interpretability work for the model goes here -->
-
- [More Information Needed]
-
- ## Environmental Impact
-
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
-
- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]
-
- ## Technical Specifications [optional]
-
- ### Model Architecture and Objective
-
- [More Information Needed]
-
- ### Compute Infrastructure
-
- [More Information Needed]
-
- #### Hardware
-
- [More Information Needed]
-
- #### Software
-
- [More Information Needed]
-
- ## Citation [optional]
-
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
-
- **BibTeX:**
-
- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]
-
- ## Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
-
- [More Information Needed]
-
- ## More Information [optional]
-
- [More Information Needed]
-
- ## Model Card Authors [optional]
-
- [More Information Needed]
-
- ## Model Card Contact
-
- [More Information Needed]

  ---
+ library_name: ml-agents
+ tags:
+ - Walker
+ - deep-reinforcement-learning
+ - reinforcement-learning
+ - ML-Agents-Walker
  ---

+ # **ppo** Agent playing **Walker**
+ This is a trained model of a **ppo** agent playing **Walker**
+ using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

+ ## Usage (with ML-Agents)
+ The documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

+ We wrote a complete tutorial to learn how to train your first agent using ML-Agents and publish it to the Hub:
+ - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
+ browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
+ - A *longer tutorial* to understand how ML-Agents works:
+ https://huggingface.co/learn/deep-rl-course/unit5/introduction

+ ### Resume the training
+ ```bash
+ mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
+ ```
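For this specific run, the resume command can be made concrete: run_logs/timers.json (added below) records `--run-id=peoeoe` and a config at `config/ppo/Walker.yaml` inside an ml-agents-release_21 checkout. A minimal sketch, assuming you run it from that checkout:

```bash
# Sketch: resume this exact run, reconstructed from the command line
# logged in run_logs/timers.json (run id "peoeoe", release_21 Walker config).
# The working directory is an assumption; adjust the config path as needed.
mlagents-learn config/ppo/Walker.yaml --run-id=peoeoe --resume
```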

+ ### Watch your Agent play
+ You can watch your agent **playing directly in your browser**:

+ 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
+ 2. Find your model_id: Juicy1122/1124RL
+ 3. Select your *.nn /*.onnx file
+ 4. Click on Watch the agent play 👀
+
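To run a checkpoint locally instead, the files added in this commit can be fetched individually. A minimal sketch using the `huggingface-cli` that ships with `huggingface_hub` (the chosen file is the final checkpoint added below):

```bash
# Sketch: download the final ONNX checkpoint from this repository.
# Requires: pip install -U "huggingface_hub[cli]"
huggingface-cli download Juicy1122/1124RL Walker/Walker-30000516.onnx --local-dir .
```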
Walker/Walker-28499616.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:839a7e12053728c167a4fb78f8dc61e7cfb7779de1d4a332d8a37f0122bab4f7
+ size 824597
Walker/Walker-28499616.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:29c7d5dbf052cefc3a761df948832db0ac9f6c102616b1aba1f0dd8f1a9468b9
+ size 4811915
Walker/Walker-28999814.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5b461c3682dc5aefe41e77a11e53f01bedca931c99b4b4169fb8f46e3f0813f0
+ size 824597
Walker/Walker-28999814.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ae431de927c4ee5ead93d98fdccef9ddfd0770c624b88f65784120e2c820e592
+ size 4811915
Walker/Walker-29499968.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:940b3b8d232adc479018f3a5fb2cdf8a23072d6f0b0a9a5e41eb89dcbc076caf
+ size 824597
Walker/Walker-29499968.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3c08c6b489e0a5036825090173614fe8fb8f3a9bae53dde89823fe2dc0a3030a
+ size 4811915
Walker/Walker-29999516.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:19dd8c4bfb960499f69ab35506967bde7777ff417dd477efeb46a194fe4583b5
+ size 824597
Walker/Walker-29999516.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:df2bb9e9a9453a9c463dcf541618fe028ed0ff993d136a83a9b5f279de79de3d
+ size 4811915
Walker/Walker-30000516.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:10f7086270ec9e07dcb343da2b8cb6467693b4de72c0f41319a2d27db54b0a0b
+ size 824597
Walker/Walker-30000516.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:29396599e42b16acc71bd3a8b3a22ad37047157edd2e48af5e8d318e8bd2aba2
+ size 4811915
Walker/checkpoint.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f2b7d9fdd325444318847a229f95dcc63819bf88f235507fb2ea57fb940b27e0
+ size 4811490
Walker/events.out.tfevents.1700726288.Scar17SE_BK.47376.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:73223c257469da3fc6622daab62877064ba9caaf625eefd838892b31e938ef64
+ size 10190188
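Note that the blocks above are Git LFS pointer files, not the network weights themselves: each stores only the LFS spec version, a sha256 object id, and the payload size. A plain clone without LFS support leaves these three-line stubs on disk; a sketch of fetching the real binaries:

```bash
# Sketch: materialize the LFS-tracked checkpoints after cloning.
git clone https://huggingface.co/Juicy1122/1124RL
cd 1124RL
git lfs pull --include="Walker/*.onnx"   # drop --include to pull every LFS file
```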
config.json ADDED
@@ -0,0 +1 @@
+ {"default_settings": null, "behaviors": {"Walker": {"trainer_type": "ppo", "hyperparameters": {"batch_size": 2048, "buffer_size": 20480, "learning_rate": 0.0003, "beta": 0.005, "epsilon": 0.2, "lambd": 0.95, "num_epoch": 3, "shared_critic": false, "learning_rate_schedule": "linear", "beta_schedule": "linear", "epsilon_schedule": "linear"}, "checkpoint_interval": 500000, "network_settings": {"normalize": true, "hidden_units": 256, "num_layers": 3, "vis_encode_type": "simple", "memory": null, "goal_conditioning_type": "hyper", "deterministic": false}, "reward_signals": {"extrinsic": {"gamma": 0.995, "strength": 1.0, "network_settings": {"normalize": false, "hidden_units": 128, "num_layers": 2, "vis_encode_type": "simple", "memory": null, "goal_conditioning_type": "hyper", "deterministic": false}}}, "init_path": null, "keep_checkpoints": 5, "even_checkpoints": false, "max_steps": 30000000, "time_horizon": 1000, "summary_freq": 30000, "threaded": false, "self_play": null, "behavioral_cloning": null}}, "env_settings": {"env_path": null, "env_args": null, "base_port": 5005, "num_envs": 1, "num_areas": 1, "timeout_wait": 60, "seed": -1, "max_lifetime_restarts": 10, "restarts_rate_limit_n": 1, "restarts_rate_limit_period_s": 60}, "engine_settings": {"width": 84, "height": 84, "quality_level": 5, "time_scale": 20, "target_frame_rate": -1, "capture_frame_rate": 60, "no_graphics": false}, "environment_parameters": null, "checkpoint_settings": {"run_id": "peoeoe", "initialize_from": null, "load_model": false, "resume": false, "force": false, "train_model": false, "inference": false, "results_dir": "results"}, "torch_settings": {"device": null}, "debug": false}
configuration.yaml ADDED
@@ -0,0 +1,78 @@
+ default_settings: null
+ behaviors:
+   Walker:
+     trainer_type: ppo
+     hyperparameters:
+       batch_size: 2048
+       buffer_size: 20480
+       learning_rate: 0.0003
+       beta: 0.005
+       epsilon: 0.2
+       lambd: 0.95
+       num_epoch: 3
+       shared_critic: false
+       learning_rate_schedule: linear
+       beta_schedule: linear
+       epsilon_schedule: linear
+     checkpoint_interval: 500000
+     network_settings:
+       normalize: true
+       hidden_units: 256
+       num_layers: 3
+       vis_encode_type: simple
+       memory: null
+       goal_conditioning_type: hyper
+       deterministic: false
+     reward_signals:
+       extrinsic:
+         gamma: 0.995
+         strength: 1.0
+         network_settings:
+           normalize: false
+           hidden_units: 128
+           num_layers: 2
+           vis_encode_type: simple
+           memory: null
+           goal_conditioning_type: hyper
+           deterministic: false
+     init_path: null
+     keep_checkpoints: 5
+     even_checkpoints: false
+     max_steps: 30000000
+     time_horizon: 1000
+     summary_freq: 30000
+     threaded: false
+     self_play: null
+     behavioral_cloning: null
+ env_settings:
+   env_path: null
+   env_args: null
+   base_port: 5005
+   num_envs: 1
+   num_areas: 1
+   timeout_wait: 60
+   seed: -1
+   max_lifetime_restarts: 10
+   restarts_rate_limit_n: 1
+   restarts_rate_limit_period_s: 60
+ engine_settings:
+   width: 84
+   height: 84
+   quality_level: 5
+   time_scale: 20
+   target_frame_rate: -1
+   capture_frame_rate: 60
+   no_graphics: false
+ environment_parameters: null
+ checkpoint_settings:
+   run_id: peoeoe
+   initialize_from: null
+   load_model: false
+   resume: false
+   force: false
+   train_model: false
+   inference: false
+   results_dir: results
+ torch_settings:
+   device: null
+ debug: false
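configuration.yaml is the settings dump that the trainer writes into its results directory, and mlagents-learn should accept such a dump back as a config file. A sketch of reproducing this run from scratch under that assumption (`walker_repro` is a hypothetical run id):

```bash
# Sketch: retrain from the exported settings dump (assumes mlagents-learn
# accepts the full RunOptions dump, which mirrors config.json above).
mlagents-learn configuration.yaml --run-id=walker_repro --force
```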
run_logs/timers.json ADDED
@@ -0,0 +1,320 @@
+ {
+   "name": "root",
+   "gauges": {
+     "Walker.Policy.Entropy.mean": {
+       "value": 0.7576263546943665,
+       "min": 0.7576207518577576,
+       "max": 1.4197757244110107,
+       "count": 1000
+     },
+     "Walker.Policy.Entropy.sum": {
+       "value": 24675.890625,
+       "min": 19736.384765625,
+       "max": 42716.0859375,
+       "count": 1000
+     },
+     "Walker.Environment.EpisodeLength.mean": {
+       "value": 614.3061224489796,
+       "min": 11.339777869189634,
+       "max": 751.475,
+       "count": 1000
+     },
+     "Walker.Environment.EpisodeLength.sum": {
+       "value": 30101.0,
+       "min": 27567.0,
+       "max": 30811.0,
+       "count": 1000
+     },
+     "Walker.Step.mean": {
+       "value": 29999516.0,
+       "min": 29997.0,
+       "max": 29999516.0,
+       "count": 1000
+     },
+     "Walker.Step.sum": {
+       "value": 29999516.0,
+       "min": 29997.0,
+       "max": 29999516.0,
+       "count": 1000
+     },
+     "Walker.Policy.ExtrinsicValueEstimate.mean": {
+       "value": 230.66744995117188,
+       "min": -0.6896133422851562,
+       "max": 324.305419921875,
+       "count": 1000
+     },
+     "Walker.Policy.ExtrinsicValueEstimate.sum": {
+       "value": 11302.705078125,
+       "min": -1605.419921875,
+       "max": 22368.21875,
+       "count": 1000
+     },
+     "Walker.Environment.CumulativeReward.mean": {
+       "value": 909.9295683319167,
+       "min": -0.6570962896811855,
+       "max": 1362.2951894673433,
+       "count": 1000
+     },
+     "Walker.Environment.CumulativeReward.sum": {
+       "value": 44586.54884826392,
+       "min": -1536.948221564293,
+       "max": 62378.6633939147,
+       "count": 1000
+     },
+     "Walker.Policy.ExtrinsicReward.mean": {
+       "value": 909.9295683319167,
+       "min": -0.6570962896811855,
+       "max": 1362.2951894673433,
+       "count": 1000
+     },
+     "Walker.Policy.ExtrinsicReward.sum": {
+       "value": 44586.54884826392,
+       "min": -1536.948221564293,
+       "max": 62378.6633939147,
+       "count": 1000
+     },
+     "Walker.Losses.PolicyLoss.mean": {
+       "value": 0.014960785885341466,
+       "min": 0.010199054366482111,
+       "max": 0.025043683033436535,
+       "count": 1000
+     },
+     "Walker.Losses.PolicyLoss.sum": {
+       "value": 0.014960785885341466,
+       "min": 0.010199054366482111,
+       "max": 0.0446107293808988,
+       "count": 1000
+     },
+     "Walker.Losses.ValueLoss.mean": {
+       "value": 738.7845825195312,
+       "min": 0.08481643547614416,
+       "max": 1760.7361653645833,
+       "count": 1000
+     },
+     "Walker.Losses.ValueLoss.sum": {
+       "value": 738.7845825195312,
+       "min": 0.08481643547614416,
+       "max": 3228.9439615885417,
+       "count": 1000
+     },
+     "Walker.Policy.LearningRate.mean": {
+       "value": 1.4884995041667527e-07,
+       "min": 1.4884995041667527e-07,
+       "max": 0.0002997951200682934,
+       "count": 1000
+     },
+     "Walker.Policy.LearningRate.sum": {
+       "value": 1.4884995041667527e-07,
+       "min": 1.4884995041667527e-07,
+       "max": 0.0005985658504780501,
+       "count": 1000
+     },
+     "Walker.Policy.Epsilon.mean": {
+       "value": 0.10004958333333337,
+       "min": 0.10004958333333337,
+       "max": 0.1999317066666667,
+       "count": 1000
+     },
+     "Walker.Policy.Epsilon.sum": {
+       "value": 0.10004958333333337,
+       "min": 0.10004958333333337,
+       "max": 0.3995219500000001,
+       "count": 1000
+     },
+     "Walker.Policy.Beta.mean": {
+       "value": 1.247420833333348e-05,
+       "min": 1.247420833333348e-05,
+       "max": 0.0049965921626666686,
+       "count": 1000
+     },
+     "Walker.Policy.Beta.sum": {
+       "value": 1.247420833333348e-05,
+       "min": 1.247420833333348e-05,
+       "max": 0.009976145305,
+       "count": 1000
+     },
+     "Walker.IsTraining.mean": {
+       "value": 1.0,
+       "min": 1.0,
+       "max": 1.0,
+       "count": 1000
+     },
+     "Walker.IsTraining.sum": {
+       "value": 1.0,
+       "min": 1.0,
+       "max": 1.0,
+       "count": 1000
+     }
+   },
+   "metadata": {
+     "timer_format_version": "0.1.0",
+     "start_time_seconds": "1700726264",
+     "python_version": "3.10.12 | packaged by Anaconda, Inc. | (main, Jul 5 2023, 19:09:20) [MSC v.1916 64 bit (AMD64)]",
+     "command_line_arguments": "\\\\?\\C:\\Users\\brian\\anaconda3\\envs\\mlagents\\Scripts\\mlagents-learn C:\\Users\\brian\\Downloads\\ml-agents-release_21\\ml-agents-release_21\\config\\ppo\\Walker.yaml --run-id=peoeoe",
+     "mlagents_version": "1.0.0",
+     "mlagents_envs_version": "1.0.0",
+     "communication_protocol_version": "1.5.0",
+     "pytorch_version": "2.1.1",
+     "numpy_version": "1.23.1",
+     "end_time_seconds": "1700778062"
+   },
+   "total": 51797.8043054,
+   "count": 1,
+   "self": 0.015706799997133203,
+   "children": {
+     "run_training.setup": {
+       "total": 0.2039580999989994,
+       "count": 1,
+       "self": 0.2039580999989994
+     },
+     "TrainerController.start_learning": {
+       "total": 51797.584640500005,
+       "count": 1,
+       "self": 70.29768369962403,
+       "children": {
+         "TrainerController._reset_env": {
+           "total": 23.856557399994927,
+           "count": 1,
+           "self": 23.856557399994927
+         },
+         "TrainerController.advance": {
+           "total": 51703.395899900395,
+           "count": 3267459,
+           "self": 68.43259382342512,
+           "children": {
+             "env_step": {
+               "total": 40749.21110778533,
+               "count": 3267459,
+               "self": 35325.32746058762,
+               "children": {
+                 "SubprocessEnvManager._take_step": {
+                   "total": 5379.4323795005475,
+                   "count": 3267459,
+                   "self": 248.4734907105012,
+                   "children": {
+                     "TorchPolicy.evaluate": {
+                       "total": 5130.958888790046,
+                       "count": 3000521,
+                       "self": 5130.958888790046
+                     }
+                   }
+                 },
+                 "workers": {
+                   "total": 44.45126769716444,
+                   "count": 3267459,
+                   "self": 0.0,
+                   "children": {
+                     "worker_root": {
+                       "total": 51683.09217708981,
+                       "count": 3267459,
+                       "is_parallel": true,
+                       "self": 20887.179955797023,
+                       "children": {
+                         "steps_from_proto": {
+                           "total": 0.0011389999999664724,
+                           "count": 1,
+                           "is_parallel": true,
+                           "self": 0.00026250000519212335,
+                           "children": {
+                             "_process_rank_one_or_two_observation": {
+                               "total": 0.000876499994774349,
+                               "count": 2,
+                               "is_parallel": true,
+                               "self": 0.000876499994774349
+                             }
+                           }
+                         },
+                         "UnityEnvironment.step": {
+                           "total": 30795.91108229279,
+                           "count": 3267459,
+                           "is_parallel": true,
+                           "self": 512.4209040938294,
+                           "children": {
+                             "UnityEnvironment._generate_step_input": {
+                               "total": 871.5271096933284,
+                               "count": 3267459,
+                               "is_parallel": true,
+                               "self": 871.5271096933284
+                             },
+                             "communicator.exchange": {
+                               "total": 28046.3752482016,
+                               "count": 3267459,
+                               "is_parallel": true,
+                               "self": 28046.3752482016
+                             },
+                             "steps_from_proto": {
+                               "total": 1365.5878203040338,
+                               "count": 3267459,
+                               "is_parallel": true,
+                               "self": 335.55627301402274,
+                               "children": {
+                                 "_process_rank_one_or_two_observation": {
+                                   "total": 1030.031547290011,
+                                   "count": 6534918,
+                                   "is_parallel": true,
+                                   "self": 1030.031547290011
+                                 }
+                               }
+                             }
+                           }
+                         }
+                       }
+                     }
+                   }
+                 }
+               }
+             },
+             "trainer_advance": {
+               "total": 10885.752198291637,
+               "count": 3267459,
+               "self": 94.68912797140365,
+               "children": {
+                 "process_trajectory": {
+                   "total": 2920.225314320385,
+                   "count": 3267459,
+                   "self": 2916.5946360203816,
+                   "children": {
+                     "RLTrainer._checkpoint": {
+                       "total": 3.6306783000036376,
+                       "count": 60,
+                       "self": 3.6306783000036376
+                     }
+                   }
+                 },
+                 "_update_policy": {
+                   "total": 7870.837755999848,
+                   "count": 1448,
+                   "self": 3846.254959000711,
+                   "children": {
+                     "TorchPPOOptimizer.update": {
+                       "total": 4024.582796999137,
+                       "count": 43440,
+                       "self": 4024.582796999137
+                     }
+                   }
+                 }
+               }
+             }
+           }
+         },
+         "trainer_threads": {
+           "total": 7.999915396794677e-07,
+           "count": 1,
+           "self": 7.999915396794677e-07
+         },
+         "TrainerController._save_models": {
+           "total": 0.03449869999894872,
+           "count": 1,
+           "self": 0.001767699999618344,
+           "children": {
+             "RLTrainer._checkpoint": {
+               "total": 0.03273099999933038,
+               "count": 1,
+               "self": 0.03273099999933038
+             }
+           }
+         }
+       }
+     }
+   }
+ }
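timers.json combines training gauges (value/min/max/count per summary period) with a hierarchical wall-clock profile; here roughly 28,046 s of the 51,798 s total went to communicator.exchange, i.e. stepping the Unity simulation. A sketch of pulling a single gauge out, assuming jq is installed:

```bash
# Sketch: read the cumulative-reward gauge from the profiler log.
jq '.gauges["Walker.Environment.CumulativeReward.mean"]' run_logs/timers.json
```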
run_logs/training_status.json ADDED
@@ -0,0 +1,65 @@
+ {
+   "Walker": {
+     "checkpoints": [
+       {
+         "steps": 28499616,
+         "file_path": "results\\peoeoe\\Walker\\Walker-28499616.onnx",
+         "reward": 1258.0341087488027,
+         "creation_time": 1700776273.5707734,
+         "auxillary_file_paths": [
+           "results\\peoeoe\\Walker\\Walker-28499616.pt"
+         ]
+       },
+       {
+         "steps": 28999814,
+         "file_path": "results\\peoeoe\\Walker\\Walker-28999814.onnx",
+         "reward": 1046.5179156541824,
+         "creation_time": 1700776823.5884752,
+         "auxillary_file_paths": [
+           "results\\peoeoe\\Walker\\Walker-28999814.pt"
+         ]
+       },
+       {
+         "steps": 29499968,
+         "file_path": "results\\peoeoe\\Walker\\Walker-29499968.onnx",
+         "reward": 1329.8401733066726,
+         "creation_time": 1700777473.6491804,
+         "auxillary_file_paths": [
+           "results\\peoeoe\\Walker\\Walker-29499968.pt"
+         ]
+       },
+       {
+         "steps": 29999516,
+         "file_path": "results\\peoeoe\\Walker\\Walker-29999516.onnx",
+         "reward": 773.562527270019,
+         "creation_time": 1700778062.6647894,
+         "auxillary_file_paths": [
+           "results\\peoeoe\\Walker\\Walker-29999516.pt"
+         ]
+       },
+       {
+         "steps": 30000516,
+         "file_path": "results\\peoeoe\\Walker\\Walker-30000516.onnx",
+         "reward": 824.4981669964126,
+         "creation_time": 1700778062.7271342,
+         "auxillary_file_paths": [
+           "results\\peoeoe\\Walker\\Walker-30000516.pt"
+         ]
+       }
+     ],
+     "final_checkpoint": {
+       "steps": 30000516,
+       "file_path": "results\\peoeoe\\Walker.onnx",
+       "reward": 824.4981669964126,
+       "creation_time": 1700778062.7271342,
+       "auxillary_file_paths": [
+         "results\\peoeoe\\Walker\\Walker-30000516.pt"
+       ]
+     }
+   },
+   "metadata": {
+     "stats_format_version": "0.3.0",
+     "mlagents_version": "1.0.0",
+     "torch_version": "2.1.1"
+   }
+ }
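training_status.json tracks the keep_checkpoints=5 most recent checkpoints with their mean rewards; note that the best reward (1329.84 at step 29,499,968) belongs to an intermediate checkpoint, not the final one. A sketch of selecting the highest-reward checkpoint, again assuming jq:

```bash
# Sketch: pick the checkpoint with the highest recorded mean reward.
jq '.Walker.checkpoints | max_by(.reward) | .file_path' run_logs/training_status.json
```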